+
+### [Version 1.19.26](https://github.com/lobehub/lobe-chat/compare/v1.19.25...v1.19.26)
+
+Released on **2024-09-24**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix url config import after user state init.
+
+#### 💄 Styles
+
+- **misc**: Add function call support for 360AI; the left sidebar now shows only assistants.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix url config import after user state init, closes [#4072](https://github.com/lobehub/lobe-chat/issues/4072) ([18a240c](https://github.com/lobehub/lobe-chat/commit/18a240c))
+
+#### Styles
+
+- **misc**: Add function call support for 360AI, closes [#4099](https://github.com/lobehub/lobe-chat/issues/4099) ([536696b](https://github.com/lobehub/lobe-chat/commit/536696b))
+- **misc**: The left sidebar now shows only assistants, closes [#4108](https://github.com/lobehub/lobe-chat/issues/4108) ([db1f81c](https://github.com/lobehub/lobe-chat/commit/db1f81c))
+
+
+
+
+
+### [Version 1.19.9](https://github.com/lobehub/lobe-chat/compare/v1.19.8...v1.19.9)
+
+Released on **2024-09-20**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix a bug with server agent config when the user does not exist.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix a bug with server agent config when the user does not exist, closes [#4034](https://github.com/lobehub/lobe-chat/issues/4034) ([f6a232b](https://github.com/lobehub/lobe-chat/commit/f6a232b))
+
+
+
+
+
+### [Version 1.16.8](https://github.com/lobehub/lobe-chat/compare/v1.16.7...v1.16.8)
+
+Released on **2024-09-12**
+
+#### 💄 Styles
+
+- **misc**: Improve models and add more info for providers and models.
+
+
+
+
+Improvements and Fixes
+
+#### Styles
+
+- **misc**: Improve models and add more info for providers and models, closes [#3911](https://github.com/lobehub/lobe-chat/issues/3911) ([8a8fc6a](https://github.com/lobehub/lobe-chat/commit/8a8fc6a))
+
+
+
+
+
+### [Version 1.15.21](https://github.com/lobehub/lobe-chat/compare/v1.15.20...v1.15.21)
+
+Released on **2024-09-08**
+
+#### ♻ Code Refactoring
+
+- **misc**: Temperature range from 0 to 2.
+
+
+
+
+Improvements and Fixes
+
+#### Code refactoring
+
+- **misc**: Temperature range from 0 to 2, closes [#3355](https://github.com/lobehub/lobe-chat/issues/3355) ([4a9114e](https://github.com/lobehub/lobe-chat/commit/4a9114e))
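The refactor widens the accepted sampling temperature to the OpenAI-style 0–2 range. A minimal sketch of the corresponding clamping logic, assuming a hypothetical `normalizeTemperature` helper (not taken from the lobe-chat codebase):

```typescript
// Clamp a user-supplied sampling temperature into the supported [0, 2] range.
// `normalizeTemperature` is an illustrative helper, not the actual implementation.
const normalizeTemperature = (value: number): number =>
  Math.min(2, Math.max(0, value));
```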
+
+
+
+
+
+### [Version 1.14.5](https://github.com/lobehub/lobe-chat/compare/v1.14.4...v1.14.5)
+
+Released on **2024-08-28**
+
+#### 🐛 Bug Fixes
+
+- **misc**: No user name shown when using Cloudflare Zero Trust with onetimepin.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: No user name shown when using Cloudflare Zero Trust with onetimepin, closes [#3649](https://github.com/lobehub/lobe-chat/issues/3649) ([5bfee5a](https://github.com/lobehub/lobe-chat/commit/5bfee5a))
+
+
+
+
+
+### [Version 1.12.11](https://github.com/lobehub/lobe-chat/compare/v1.12.10...v1.12.11)
+
+Released on **2024-08-23**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Remove orphan chunks if there is no related file.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Remove orphan chunks if there is no related file, closes [#3578](https://github.com/lobehub/lobe-chat/issues/3578) ([36bcaf3](https://github.com/lobehub/lobe-chat/commit/36bcaf3))
+
+
+
+
+
+### [Version 1.12.7](https://github.com/lobehub/lobe-chat/compare/v1.12.6...v1.12.7)
+
+Released on **2024-08-22**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Logout button not shown on mobile view when using nextauth.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Logout button not shown on mobile view when using nextauth, closes [#3561](https://github.com/lobehub/lobe-chat/issues/3561) ([0c4efe4](https://github.com/lobehub/lobe-chat/commit/0c4efe4))
+
+
+
+
+
+## [Version 1.8.0](https://github.com/lobehub/lobe-chat/compare/v1.7.10...v1.8.0)
+
+Released on **2024-08-02**
+
+#### ✨ Features
+
+- **misc**: Add NextAuth as authentication service in server database.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Add NextAuth as authentication service in server database, closes [#2935](https://github.com/lobehub/lobe-chat/issues/2935) ([5a0b972](https://github.com/lobehub/lobe-chat/commit/5a0b972))
+
+
+
+
+
+### [Version 1.6.13](https://github.com/lobehub/lobe-chat/compare/v1.6.12...v1.6.13)
+
+Released on **2024-07-25**
+
+#### 💄 Styles
+
+- **misc**: Updated Groq model list to include llama-3.1 and llama3-Groq.
+
+
+
+
+Improvements and Fixes
+
+#### Styles
+
+- **misc**: Updated Groq model list to include llama-3.1 and llama3-Groq, closes [#3313](https://github.com/lobehub/lobe-chat/issues/3313) ([a9cfad6](https://github.com/lobehub/lobe-chat/commit/a9cfad6))
+
+
+
+
+
+## [Version 1.6.0](https://github.com/lobehub/lobe-chat/compare/v1.5.5...v1.6.0)
+
+Released on **2024-07-19**
+
+#### ✨ Features
+
+- **misc**: Add `gpt-4o-mini` in OpenAI Provider and set it as the default model.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Add `gpt-4o-mini` in OpenAI Provider and set it as the default model, closes [#3256](https://github.com/lobehub/lobe-chat/issues/3256) ([a84d807](https://github.com/lobehub/lobe-chat/commit/a84d807))
+
+
+
+
+
+### [Version 1.3.3](https://github.com/lobehub/lobe-chat/compare/v1.3.2...v1.3.3)
+
+Released on **2024-07-09**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Allow user to use their own WebRTC signaling.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Allow user to use their own WebRTC signaling, closes [#3182](https://github.com/lobehub/lobe-chat/issues/3182) ([c7f8f38](https://github.com/lobehub/lobe-chat/commit/c7f8f38))
+
+
+
+
+
+### [Version 1.2.14](https://github.com/lobehub/lobe-chat/compare/v1.2.13...v1.2.14)
+
+Released on **2024-07-08**
+
+#### 💄 Styles
+
+- **misc**: Provider changes with model in model settings.
+
+
+
+
+Improvements and Fixes
+
+#### Styles
+
+- **misc**: Provider changes with model in model settings, closes [#3146](https://github.com/lobehub/lobe-chat/issues/3146) ([e53bb5a](https://github.com/lobehub/lobe-chat/commit/e53bb5a))
+
+
+
+
+
+### [Version 0.162.21](https://github.com/lobehub/lobe-chat/compare/v0.162.20...v0.162.21)
+
+Released on **2024-06-09**
+
+#### 💄 Styles
+
+- **misc**: Do not show noDescription in new session.
+
+
+
+
+Improvements and Fixes
+
+#### Styles
+
+- **misc**: Do not show noDescription in new session, closes [#2749](https://github.com/lobehub/lobe-chat/issues/2749) ([30b00aa](https://github.com/lobehub/lobe-chat/commit/30b00aa))
+
+
+
+
+
+### [Version 0.162.6](https://github.com/lobehub/lobe-chat/compare/v0.162.5...v0.162.6)
+
+Released on **2024-05-28**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix the default agent not working correctly on a new device.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix the default agent not working correctly on a new device, closes [#2699](https://github.com/lobehub/lobe-chat/issues/2699) ([e4c7536](https://github.com/lobehub/lobe-chat/commit/e4c7536))
+
+
+
+
+
+### [Version 0.162.1](https://github.com/lobehub/lobe-chat/compare/v0.162.0...v0.162.1)
+
+Released on **2024-05-27**
+
+#### 💄 Styles
+
+- **misc**: Improve the display effect of plug-in API name and description.
+
+
+
+
+Improvements and Fixes
+
+#### Styles
+
+- **misc**: Improve the display effect of plug-in API name and description, closes [#2678](https://github.com/lobehub/lobe-chat/issues/2678) ([19cd0b9](https://github.com/lobehub/lobe-chat/commit/19cd0b9))
+
+
+
+
+
+### [Version 0.161.24](https://github.com/lobehub/lobe-chat/compare/v0.161.23...v0.161.24)
+
+Released on **2024-05-27**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix the missing user id in chat completion and fix removing unstarred topics not working.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix the missing user id in chat completion and fix removing unstarred topics not working, closes [#2677](https://github.com/lobehub/lobe-chat/issues/2677) ([c9fb2de](https://github.com/lobehub/lobe-chat/commit/c9fb2de))
+
+
+
+
+
+### [Version 0.161.10](https://github.com/lobehub/lobe-chat/compare/v0.161.9...v0.161.10)
+
+Released on **2024-05-23**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Refactor user store and fix custom model list form.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Refactor user store and fix custom model list form, closes [#2620](https://github.com/lobehub/lobe-chat/issues/2620) ([81ea886](https://github.com/lobehub/lobe-chat/commit/81ea886))
+
+
+
+
+
+## [Version 0.161.0](https://github.com/lobehub/lobe-chat/compare/v0.160.8...v0.161.0)
+
+Released on **2024-05-21**
+
+#### ✨ Features
+
+- **misc**: Add system agent to select another model provider for translation.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Add system agent to select another model provider for translation, closes [#1902](https://github.com/lobehub/lobe-chat/issues/1902) ([3945387](https://github.com/lobehub/lobe-chat/commit/3945387))
+
+
+
+
+
+## [Version 0.159.0](https://github.com/lobehub/lobe-chat/compare/v0.158.2...v0.159.0)
+
+Released on **2024-05-14**
+
+#### ✨ Features
+
+- **misc**: Support DeepSeek as new model provider.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Support DeepSeek as new model provider, closes [#2446](https://github.com/lobehub/lobe-chat/issues/2446) ([18028f3](https://github.com/lobehub/lobe-chat/commit/18028f3))
+
+
+
+
+
+## [Version 0.151.0](https://github.com/lobehub/lobe-chat/compare/v0.150.10...v0.151.0)
+
+Released on **2024-04-29**
+
+#### ✨ Features
+
+- **misc**: Support minimax as a new provider.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Support minimax as a new provider, closes [#2087](https://github.com/lobehub/lobe-chat/issues/2087) ([00abd82](https://github.com/lobehub/lobe-chat/commit/00abd82))
+
+
+
+
+
+### [Version 0.150.4](https://github.com/lobehub/lobe-chat/compare/v0.150.3...v0.150.4)
+
+Released on **2024-04-27**
+
+#### 💄 Styles
+
+- **misc**: Hide default model tag and show ollama provider by default.
+
+
+
+
+Improvements and Fixes
+
+#### Styles
+
+- **misc**: Hide default model tag and show ollama provider by default, closes [#2238](https://github.com/lobehub/lobe-chat/issues/2238) ([baa4780](https://github.com/lobehub/lobe-chat/commit/baa4780))
+
+
+
+
+
+### [Version 0.148.5](https://github.com/lobehub/lobe-chat/compare/v0.148.4...v0.148.5)
+
+Released on **2024-04-22**
+
+#### 💄 Styles
+
+- **misc**: Support together ai to fetch model list.
+
+
+
+
+Improvements and Fixes
+
+#### Styles
+
+- **misc**: Support together ai to fetch model list, closes [#2138](https://github.com/lobehub/lobe-chat/issues/2138) ([e6d3e4a](https://github.com/lobehub/lobe-chat/commit/e6d3e4a))
+
+
+
+
+
+### [Version 0.148.3](https://github.com/lobehub/lobe-chat/compare/v0.148.2...v0.148.3)
+
+Released on **2024-04-21**
+
+#### 💄 Styles
+
+- **ollama**: Show size info while downloading, support canceling downloads, and optimize speed calculation.
+
+
+
+
+Improvements and Fixes
+
+#### Styles
+
+- **ollama**: Show size info while downloading, support canceling downloads, and optimize speed calculation, closes [#1664](https://github.com/lobehub/lobe-chat/issues/1664) ([9b18f47](https://github.com/lobehub/lobe-chat/commit/9b18f47))
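The speed optimization amounts to deriving remaining bytes and average throughput from the pull progress. A hypothetical sketch (the helper and field names are illustrative, not from the lobe-chat codebase):

```typescript
// Derive remaining size and average download speed from pull progress.
// `downloadStats` is an illustrative helper, not the actual implementation.
const downloadStats = (
  completedBytes: number,
  totalBytes: number,
  elapsedSeconds: number,
) => ({
  remainingBytes: totalBytes - completedBytes,
  // Average bytes per second since the download started; 0 before any time passes.
  speedBytesPerSecond: elapsedSeconds > 0 ? completedBytes / elapsedSeconds : 0,
});
```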
+
+
+
+
+
+### [Version 0.147.19](https://github.com/lobehub/lobe-chat/compare/v0.147.18...v0.147.19)
+
+Released on **2024-04-18**
+
+#### 💄 Styles
+
+- **misc**: Add M and B suffix support for max tokens in ModelInfoTags.
+
+
+
+
+Improvements and Fixes
+
+#### Styles
+
+- **misc**: Add M and B suffix support for max tokens in ModelInfoTags, closes [#2073](https://github.com/lobehub/lobe-chat/issues/2073) ([a985d8f](https://github.com/lobehub/lobe-chat/commit/a985d8f))
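In practice this means a context window such as 2,000,000 tokens can render as `2M` instead of `2000K`. A sketch of such a formatter, assuming a hypothetical `formatTokenCount` helper rather than the actual ModelInfoTags implementation:

```typescript
// Format a max-token count with K / M / B suffixes.
// Illustrative sketch only; the real component's rounding rules may differ.
const formatTokenCount = (tokens: number): string => {
  if (tokens >= 1_000_000_000) return `${tokens / 1_000_000_000}B`;
  if (tokens >= 1_000_000) return `${tokens / 1_000_000}M`;
  if (tokens >= 1_000) return `${tokens / 1_000}K`;
  return String(tokens);
};
```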
+
+
+
+
+
+### [Version 0.147.11](https://github.com/lobehub/lobe-chat/compare/v0.147.10...v0.147.11)
+
+Released on **2024-04-14**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Support drag or copy to upload files based on model ability.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Support drag or copy to upload files based on model ability, closes [#2016](https://github.com/lobehub/lobe-chat/issues/2016) ([2abe37e](https://github.com/lobehub/lobe-chat/commit/2abe37e))
+
+
+
+
+
+### [Version 0.147.6](https://github.com/lobehub/lobe-chat/compare/v0.147.5...v0.147.6)
+
+Released on **2024-04-11**
+
+#### 💄 Styles
+
+- **misc**: Add the GPT-4-turbo and 2024-04-09 Turbo Vision models, and new Mistral model names.
+
+
+
+
+Improvements and Fixes
+
+#### Styles
+
+- **misc**: Add the GPT-4-turbo and 2024-04-09 Turbo Vision models, and new Mistral model names, closes [#1984](https://github.com/lobehub/lobe-chat/issues/1984) ([f1795b1](https://github.com/lobehub/lobe-chat/commit/f1795b1))
+
+
+
+
+
+## [Version 0.147.0](https://github.com/lobehub/lobe-chat/compare/v0.146.2...v0.147.0)
+
+Released on **2024-04-10**
+
+#### ♻ Code Refactoring
+
+- **misc**: Add db migration, add migrations from v3 to v4, clean openai azure code, refactor agent runtime with openai compatible factory, refactor api key form locale, refactor openAI to openai and azure, refactor the hidden to enabled, refactor the key, refactor the model config selector, refactor the route auth as a middleware, refactor the server config to migrate model provider env, rename the key to enabledModels.
+
+#### ✨ Features
+
+- **misc**: Refactor to support azure openai provider, support close openai, support display model list, support model config modal, support model list with model providers, support open router auto model list, support openai model fetcher, support update model config, support user config model.
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix db migration, fix db migration.
+
+#### 💄 Styles
+
+- **misc**: Fix i18n of model list fetcher, improve detail design, improve logo style, update locale.
+
+
+
+
+Improvements and Fixes
+
+#### Code refactoring
+
+- **misc**: Add db migration ([6ceb818](https://github.com/lobehub/lobe-chat/commit/6ceb818))
+- **misc**: Add migrations from v3 to v4 ([199ded2](https://github.com/lobehub/lobe-chat/commit/199ded2))
+- **misc**: Clean openai azure code ([be4bcca](https://github.com/lobehub/lobe-chat/commit/be4bcca))
+- **misc**: Refactor agent runtime with openai compatible factory ([89adf9d](https://github.com/lobehub/lobe-chat/commit/89adf9d))
+- **misc**: Refactor api key form locale ([a069169](https://github.com/lobehub/lobe-chat/commit/a069169))
+- **misc**: Refactor openAI to openai and azure ([2190a95](https://github.com/lobehub/lobe-chat/commit/2190a95))
+- **misc**: Refactor the hidden to enabled ([78a1aac](https://github.com/lobehub/lobe-chat/commit/78a1aac))
+- **misc**: Refactor the key ([d5c82f6](https://github.com/lobehub/lobe-chat/commit/d5c82f6))
+- **misc**: Refactor the model config selector ([d865ca1](https://github.com/lobehub/lobe-chat/commit/d865ca1))
+- **misc**: Refactor the route auth as a middleware ([ef5ee2a](https://github.com/lobehub/lobe-chat/commit/ef5ee2a))
+- **misc**: Refactor the server config to migrate model provider env ([e4f110e](https://github.com/lobehub/lobe-chat/commit/e4f110e))
+- **misc**: Refactor the server config to migrate model provider env ([c398063](https://github.com/lobehub/lobe-chat/commit/c398063))
+- **misc**: Rename the key to enabledModels ([ebfa0aa](https://github.com/lobehub/lobe-chat/commit/ebfa0aa))
+
+#### What's improved
+
+- **misc**: Refactor to support azure openai provider ([d737afe](https://github.com/lobehub/lobe-chat/commit/d737afe))
+- **misc**: Support close openai ([1ff1aef](https://github.com/lobehub/lobe-chat/commit/1ff1aef))
+- **misc**: Support display model list ([e59635f](https://github.com/lobehub/lobe-chat/commit/e59635f))
+- **misc**: Support model config modal ([62d6bb7](https://github.com/lobehub/lobe-chat/commit/62d6bb7))
+- **misc**: Support model list with model providers, closes [#1916](https://github.com/lobehub/lobe-chat/issues/1916) ([0895dd2](https://github.com/lobehub/lobe-chat/commit/0895dd2))
+- **misc**: Support open router auto model list ([1ba90d3](https://github.com/lobehub/lobe-chat/commit/1ba90d3))
+- **misc**: Support openai model fetcher ([56032e6](https://github.com/lobehub/lobe-chat/commit/56032e6))
+- **misc**: Support update model config ([e8ed847](https://github.com/lobehub/lobe-chat/commit/e8ed847))
+- **misc**: Support user config model ([72fd873](https://github.com/lobehub/lobe-chat/commit/72fd873))
+
+#### What's fixed
+
+- **misc**: Fix db migration ([4e75074](https://github.com/lobehub/lobe-chat/commit/4e75074))
+- **misc**: Fix db migration ([571b6dd](https://github.com/lobehub/lobe-chat/commit/571b6dd))
+
+#### Styles
+
+- **misc**: Fix i18n of model list fetcher ([67ed8c2](https://github.com/lobehub/lobe-chat/commit/67ed8c2))
+- **misc**: Improve detail design ([adcce07](https://github.com/lobehub/lobe-chat/commit/adcce07))
+- **misc**: Improve logo style ([c5826ce](https://github.com/lobehub/lobe-chat/commit/c5826ce))
+- **misc**: Update locale ([021bf91](https://github.com/lobehub/lobe-chat/commit/021bf91))
+
+
+
+
+
+### [Version 0.145.7](https://github.com/lobehub/lobe-chat/compare/v0.145.6...v0.145.7)
+
+Released on **2024-04-02**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix DraggablePanel bar interfering with scrollbar operation.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix DraggablePanel bar interfering with scrollbar operation, closes [#1775](https://github.com/lobehub/lobe-chat/issues/1775) ([4b7b243](https://github.com/lobehub/lobe-chat/commit/4b7b243))
+
+
+
+
+
+### [Version 0.145.1](https://github.com/lobehub/lobe-chat/compare/v0.145.0...v0.145.1)
+
+Released on **2024-03-29**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix Google Gemini Pro 1.5 and system role not taking effect.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix Google Gemini Pro 1.5 and system role not taking effect, closes [#1801](https://github.com/lobehub/lobe-chat/issues/1801) ([0a3e3f7](https://github.com/lobehub/lobe-chat/commit/0a3e3f7))
+
+
+
+
+
+## [Version 0.145.0](https://github.com/lobehub/lobe-chat/compare/v0.144.1...v0.145.0)
+
+Released on **2024-03-29**
+
+#### ✨ Features
+
+- **misc**: Support TogetherAI as new model provider.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Support TogetherAI as new model provider, closes [#1709](https://github.com/lobehub/lobe-chat/issues/1709) ([d6921ef](https://github.com/lobehub/lobe-chat/commit/d6921ef))
+
+
+
+
+
+### [Version 0.142.8](https://github.com/lobehub/lobe-chat/compare/v0.142.7...v0.142.8)
+
+Released on **2024-03-28**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix Gemini 1.5 Pro model id to support new Gemini models.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix Gemini 1.5 Pro model id to support new Gemini models, closes [#1776](https://github.com/lobehub/lobe-chat/issues/1776) ([591dcb3](https://github.com/lobehub/lobe-chat/commit/591dcb3))
+
+
+
+
+
+## [Version 0.142.0](https://github.com/lobehub/lobe-chat/compare/v0.141.2...v0.142.0)
+
+Released on **2024-03-25**
+
+#### ✨ Features
+
+- **misc**: Support 01.AI as a new provider.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Support 01.AI as a new provider, closes [#1627](https://github.com/lobehub/lobe-chat/issues/1627) ([08342fd](https://github.com/lobehub/lobe-chat/commit/08342fd))
+
+
+
+
+
+## [Version 0.141.0](https://github.com/lobehub/lobe-chat/compare/v0.140.1...v0.141.0)
+
+Released on **2024-03-22**
+
+#### ✨ Features
+
+- **misc**: Using YJS and WebRTC to support sync data between different devices.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Using YJS and WebRTC to support sync data between different devices, closes [#1525](https://github.com/lobehub/lobe-chat/issues/1525) ([60d9186](https://github.com/lobehub/lobe-chat/commit/60d9186))
+
+
+
+
+
+## [Version 0.139.0](https://github.com/lobehub/lobe-chat/compare/v0.138.2...v0.139.0)
+
+Released on **2024-03-16**
+
+#### ✨ Features
+
+- **misc**: Support openrouter as a new model provider.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Support openrouter as a new model provider, closes [#1572](https://github.com/lobehub/lobe-chat/issues/1572) ([780b1a2](https://github.com/lobehub/lobe-chat/commit/780b1a2))
+
+
+
+
+
+## [Version 0.138.0](https://github.com/lobehub/lobe-chat/compare/v0.137.0...v0.138.0)
+
+Released on **2024-03-15**
+
+#### ✨ Features
+
+- **misc**: Support groq as a model provider.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Support groq as a model provider, closes [#1569](https://github.com/lobehub/lobe-chat/issues/1569) [#1562](https://github.com/lobehub/lobe-chat/issues/1562) [#1570](https://github.com/lobehub/lobe-chat/issues/1570) ([a04c364](https://github.com/lobehub/lobe-chat/commit/a04c364))
+
+
+
+
+
+## [Version 0.137.0](https://github.com/lobehub/lobe-chat/compare/v0.136.0...v0.137.0)
+
+Released on **2024-03-15**
+
+#### ✨ Features
+
+- **ollama**: Improve connection check method and provide selector for user to control model options.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **ollama**: Improve connection check method and provide selector for user to control model options, closes [#1397](https://github.com/lobehub/lobe-chat/issues/1397) ([675902f](https://github.com/lobehub/lobe-chat/commit/675902f))
+
+
+
+
+
+## [Version 0.136.0](https://github.com/lobehub/lobe-chat/compare/v0.135.4...v0.136.0)
+
+Released on **2024-03-15**
+
+#### ✨ Features
+
+- **misc**: Support azure-ad as a new sso provider.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Support azure-ad as a new sso provider, closes [#1456](https://github.com/lobehub/lobe-chat/issues/1456) ([6649cd1](https://github.com/lobehub/lobe-chat/commit/6649cd1))
+
+
+
+
+
+### [Version 0.133.2](https://github.com/lobehub/lobe-chat/compare/v0.133.1...v0.133.2)
+
+Released on **2024-03-10**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix qwen model id and improve anthropic logo text color.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix qwen model id and improve anthropic logo text color, closes [#1524](https://github.com/lobehub/lobe-chat/issues/1524) ([c68f5da](https://github.com/lobehub/lobe-chat/commit/c68f5da))
+
+
+
+
+
+### [Version 0.130.3](https://github.com/lobehub/lobe-chat/compare/v0.130.2...v0.130.3)
+
+Released on **2024-02-29**
+
+#### ♻ Code Refactoring
+
+- **misc**: Refactor the google api route and add more tests for chat route.
+
+
+
+
+ Improvements and Fixes
+
+#### Code refactoring
+
+- **misc**: Refactor the google api route and add more tests for chat route, closes [#1424](https://github.com/lobehub/lobe-chat/issues/1424) ([063a4d5](https://github.com/lobehub/lobe-chat/commit/063a4d5))
+
+
+
+
+
+## [Version 0.127.0](https://github.com/lobehub/lobe-chat/compare/v0.126.5...v0.127.0)
+
+Released on **2024-02-13**
+
+#### ✨ Features
+
+- **llm**: Support Ollama AI Provider for local LLM.
+
+
+
+
+ Improvements and Fixes
+
+#### What's improved
+
+- **llm**: Support Ollama AI Provider for local LLM ([3b6f249](https://github.com/lobehub/lobe-chat/commit/3b6f249))
+
+
+
+
+
+## [Version 0.126.0](https://github.com/lobehub/lobe-chat/compare/v0.125.0...v0.126.0)
+
+Released on **2024-02-09**
+
+#### ✨ Features
+
+- **misc**: Support umami analytics.
+
+#### 🐛 Bug Fixes
+
+- **misc**: The back button on the chat setting page can correctly return to the configured Agent chat page.
+
+
+
+
+ Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Support umami analytics, closes [#1267](https://github.com/lobehub/lobe-chat/issues/1267) ([da7beba](https://github.com/lobehub/lobe-chat/commit/da7beba))
+
+#### What's fixed
+
+- **misc**: The back button on the chat setting page can correctly return to the configured Agent chat page, closes [#1272](https://github.com/lobehub/lobe-chat/issues/1272) ([4cc1ad5](https://github.com/lobehub/lobe-chat/commit/4cc1ad5))
+
+
+
+
+
+### [Version 0.122.6](https://github.com/lobehub/lobe-chat/compare/v0.122.5...v0.122.6)
+
+Released on **2024-01-31**
+
+#### 🐛 Bug Fixes
+
+- **check**: The state of connectivity can only be singular.
+
+
+
+
+ Improvements and Fixes
+
+#### What's fixed
+
+- **check**: The state of connectivity can only be singular, closes [#1201](https://github.com/lobehub/lobe-chat/issues/1201) ([c412baf](https://github.com/lobehub/lobe-chat/commit/c412baf))
+
+
+
+
+
+### [Version 0.119.12](https://github.com/lobehub/lobe-chat/compare/v0.119.11...v0.119.12)
+
+Released on **2024-01-09**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix new line after sending messages with enter key.
+
+
+
+
+ Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix new line after sending messages with enter key, closes [#990](https://github.com/lobehub/lobe-chat/issues/990) ([e6ab019](https://github.com/lobehub/lobe-chat/commit/e6ab019))
+
+
+
+
+
+### [Version 0.118.8](https://github.com/lobehub/lobe-chat/compare/v0.118.7...v0.118.8)
+
+Released on **2024-01-03**
+
+#### 💄 Styles
+
+- **misc**: Add Vietnamese files and add the vi-VN option in the General Settings.
+
+
+
+
+ Improvements and Fixes
+
+#### Styles
+
+- **misc**: Add Vietnamese files and add the vi-VN option in the General Settings, closes [#860](https://github.com/lobehub/lobe-chat/issues/860) ([c2e5606](https://github.com/lobehub/lobe-chat/commit/c2e5606))
+
+
+
+
+
+### [Version 0.115.11](https://github.com/lobehub/lobe-chat/compare/v0.115.10...v0.115.11)
+
+Released on **2023-12-25**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix agent system role modal scrolling when content is too long.
+
+
+
+
+ Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix agent system role modal scrolling when content is too long, closes [#801](https://github.com/lobehub/lobe-chat/issues/801) ([f482a80](https://github.com/lobehub/lobe-chat/commit/f482a80))
+
+
+
+
+
+### [Version 0.114.4](https://github.com/lobehub/lobe-chat/compare/v0.114.3...v0.114.4)
+
+Released on **2023-12-19**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix agent system role modal scrolling when content is too long.
+
+
+
+
+ Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix agent system role modal scrolling when content is too long, closes [#716](https://github.com/lobehub/lobe-chat/issues/716) ([c3e36d1](https://github.com/lobehub/lobe-chat/commit/c3e36d1))
+
+
+
+
+
+## [Version 0.108.0](https://github.com/lobehub/lobe-chat/compare/v0.107.16...v0.108.0)
+
+Released on **2023-12-03**
+
+#### ✨ Features
+
+- **misc**: Hide the password form item in the settings when there is no `ACCESS_CODE` env.
+
+
+
+
+ Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Hide the password form item in the settings when there is no `ACCESS_CODE` env, closes [#568](https://github.com/lobehub/lobe-chat/issues/568) ([3b5f8b2](https://github.com/lobehub/lobe-chat/commit/3b5f8b2))
+
+
+
+
+
+## [Version 0.105.0](https://github.com/lobehub/lobe-chat/compare/v0.104.0...v0.105.0)
+
+Released on **2023-11-22**
+
+#### ✨ Features
+
+- **misc**: Standalone plugin can get more arguments on init.
+
+
+
+
+ Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Standalone plugin can get more arguments on init, closes [#498](https://github.com/lobehub/lobe-chat/issues/498) ([a7624f5](https://github.com/lobehub/lobe-chat/commit/a7624f5))
+
+
+
+
+
+## [Version 0.104.0](https://github.com/lobehub/lobe-chat/compare/v0.103.1...v0.104.0)
+
+Released on **2023-11-21**
+
+#### ✨ Features
+
+- **misc**: Support using env variable to set regions for OpenAI Edge Functions.
+
+
+
+
+ Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Support using env variable to set regions for OpenAI Edge Functions, closes [#473](https://github.com/lobehub/lobe-chat/issues/473) ([de6b79e](https://github.com/lobehub/lobe-chat/commit/de6b79e))
+
+
+
+
+
+### [Version 0.99.1](https://github.com/lobehub/lobe-chat/compare/v0.99.0...v0.99.1)
+
+Released on **2023-11-08**
+
+#### 💄 Styles
+
+- **misc**: Add max height to model menu in chat input area.
+
+
+
+
+ Improvements and Fixes
+
+#### Styles
+
+- **misc**: Add max height to model menu in chat input area, closes [#430](https://github.com/lobehub/lobe-chat/issues/430) ([c9a86f3](https://github.com/lobehub/lobe-chat/commit/c9a86f3))
+
+
+
+
+
+## [Version 0.99.0](https://github.com/lobehub/lobe-chat/compare/v0.98.3...v0.99.0)
+
+Released on **2023-11-08**
+
+#### ✨ Features
+
+- **misc**: Add Environment Variable for custom model name when deploying.
+
+
+
+
+ Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Add Environment Variable for custom model name when deploying, closes [#429](https://github.com/lobehub/lobe-chat/issues/429) ([15f9fa2](https://github.com/lobehub/lobe-chat/commit/15f9fa2))
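A custom model list of this kind is typically a comma-separated string where a leading `+` adds a model and `-` hides one. The syntax and the parser below are assumptions for illustration, not the deployed implementation:

```typescript
// Parse a comma-separated custom model list such as "+qwen-7b-chat,-gpt-3.5-turbo".
// Assumed syntax for illustration: leading "+" (or no prefix) enables a model,
// leading "-" hides it. Not taken from the lobe-chat codebase.
interface CustomModelRule {
  id: string;
  enabled: boolean;
}

const parseCustomModels = (value: string): CustomModelRule[] =>
  value
    .split(',')
    .map((item) => item.trim())
    .filter(Boolean)
    .map((item) =>
      item.startsWith('-')
        ? { id: item.slice(1), enabled: false }
        : { id: item.replace(/^\+/, ''), enabled: true },
    );
```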
+
+
+
+
+
+### [Version 0.98.3](https://github.com/lobehub/lobe-chat/compare/v0.98.2...v0.98.3)
+
+Released on **2023-11-07**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix redirect to welcome problem when there are topics in inbox.
+
+
+
+
+ Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix redirect to welcome problem when there are topics in inbox, closes [#422](https://github.com/lobehub/lobe-chat/issues/422) ([3d2588a](https://github.com/lobehub/lobe-chat/commit/3d2588a))
+
+
+
+
+
+## [Version 0.97.0](https://github.com/lobehub/lobe-chat/compare/v0.96.9...v0.97.0)
+
+Released on **2023-11-05**
+
+#### ✨ Features
+
+- **misc**: Add open new topic when open a topic.
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix toggle back to default topic when clearing topic.
+
+
+
+
+ Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Add open new topic when open a topic ([4df6384](https://github.com/lobehub/lobe-chat/commit/4df6384))
+
+#### What's fixed
+
+- **misc**: Fix toggle back to default topic when clearing topic ([6fe0a5c](https://github.com/lobehub/lobe-chat/commit/6fe0a5c))
+
+
+
+
+
+### [Version 0.96.7](https://github.com/lobehub/lobe-chat/compare/v0.96.6...v0.96.7)
+
+Released on **2023-10-31**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fix a bug where clicking inbox does not switch back to the chat page.
+
+
+
+
+ Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fix a bug where clicking inbox does not switch back to the chat page ([31f6d29](https://github.com/lobehub/lobe-chat/commit/31f6d29))
+
+
+
+
+
+### [Version 0.96.2](https://github.com/lobehub/lobe-chat/compare/v0.96.1...v0.96.2)
+
+Released on **2023-10-28**
+
+#### 💄 Styles
+
+- **misc**: Fix some styles and make updates to various files.
+
+
+
+
+ Improvements and Fixes
+
+#### Styles
+
+- **misc**: Fix some styles and make updates to various files ([44a5f0a](https://github.com/lobehub/lobe-chat/commit/44a5f0a))
+
+
+
+
+
+### [Version 0.94.5](https://github.com/lobehub/lobe-chat/compare/v0.94.4...v0.94.5)
+
+Released on **2023-10-22**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Fall back agent market index to en when the correct locale is not found.
+
+
+
+
+ Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Fall back agent market index to en when the correct locale is not found, closes [#355](https://github.com/lobehub/lobe-chat/issues/355) ([7a45ab4](https://github.com/lobehub/lobe-chat/commit/7a45ab4))
+
+
+
+
+
+### [Version 0.85.2](https://github.com/lobehub/lobe-chat/compare/v0.85.1...v0.85.2)
+
+Released on **2023-10-10**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Add API key form when there is no default API key in env.
+
+
+
+
+ Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Add API key form when there is no default API key in env, closes [#290](https://github.com/lobehub/lobe-chat/issues/290) ([2c907e9](https://github.com/lobehub/lobe-chat/commit/2c907e9))
+
+
+
+
+
+## [Version 0.84.0](https://github.com/lobehub/lobe-chat/compare/v0.83.10...v0.84.0)
+
+Released on **2023-10-10**
+
+#### ✨ Features
+
+- **misc**: Support detecting new versions and the upgrade action.
+
+
+
+
+ Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Support detect new version and upgrade action, closes [#282](https://github.com/lobehub/lobe-chat/issues/282) ([5da19b2](https://github.com/lobehub/lobe-chat/commit/5da19b2))
+
+
+
+
+
+### [Version 0.72.4](https://github.com/lobehub/lobe-chat/compare/v0.72.3...v0.72.4)
+
+Released on **2023-09-10**
+
+#### 🐛 Bug Fixes
+
+- **misc**: Use en-US when no suit lang with plugin index.
+
+
+
+
+Improvements and Fixes
+
+#### What's fixed
+
+- **misc**: Use en-US when no suit lang with plugin index ([4e9668d](https://github.com/lobehub/lobe-chat/commit/4e9668d))
+
+
+
+
+
+## [Version 0.56.0](https://github.com/lobehub/lobe-chat/compare/v0.55.1...v0.56.0)
+
+Released on **2023-08-24**
+
+#### ✨ Features
+
+- **misc**: Use new plugin manifest to support plugin’s multi api.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Use new plugin manifest to support plugin’s multi api, closes [#101](https://github.com/lobehub/lobe-chat/issues/101) ([4534598](https://github.com/lobehub/lobe-chat/commit/4534598))
+
+
+
+
+
+## [Version 0.54.0](https://github.com/lobehub/lobe-chat/compare/v0.53.0...v0.54.0)
+
+Released on **2023-08-15**
+
+#### ✨ Features
+
+- **misc**: Add new features and improve user interface and functionality.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Add new features and improve user interface and functionality ([1543bd1](https://github.com/lobehub/lobe-chat/commit/1543bd1))
+
+
+
+
+
+## [Version 0.49.0](https://github.com/lobehub/lobe-chat/compare/v0.48.0...v0.49.0)
+
+Released on **2023-08-15**
+
+#### ✨ Features
+
+- **misc**: Add `BackToBottom` to conversation, Update icons and text in various components.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Add `BackToBottom` to conversation ([1433aa9](https://github.com/lobehub/lobe-chat/commit/1433aa9))
+- **misc**: Update icons and text in various components ([0e7a683](https://github.com/lobehub/lobe-chat/commit/0e7a683))
+
+
+
+
+
+## [Version 0.35.0](https://github.com/lobehub/lobe-chat/compare/v0.34.0...v0.35.0)
+
+Released on **2023-07-31**
+
+#### ✨ Features
+
+- **misc**: Add agent settings functionality, new components, and features for AgentMeta, Add and modify translations for various keys in JSON code files.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Add agent settings functionality, new components, and features for AgentMeta ([b1e5ff9](https://github.com/lobehub/lobe-chat/commit/b1e5ff9))
+- **misc**: Add and modify translations for various keys in JSON code files ([503adb4](https://github.com/lobehub/lobe-chat/commit/503adb4))
+
+
+
+
+
+## [Version 0.34.0](https://github.com/lobehub/lobe-chat/compare/v0.33.0...v0.34.0)
+
+Released on **2023-07-31**
+
+#### ✨ Features
+
+- **misc**: Add agent settings functionality, Add new components and features for AgentMeta, Improve organization and functionality of settings and configuration features.
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Add agent settings functionality ([b0aaeed](https://github.com/lobehub/lobe-chat/commit/b0aaeed))
+- **misc**: Add new components and features for AgentMeta ([1232d95](https://github.com/lobehub/lobe-chat/commit/1232d95))
+- **misc**: Improve organization and functionality of settings and configuration features ([badde35](https://github.com/lobehub/lobe-chat/commit/badde35))
+
+
+
+
+
+## [Version 0.15.0](https://github.com/lobehub/lobe-chat/compare/v0.14.0...v0.15.0)
+
+Released on **2023-07-24**
+
+#### ✨ Features
+
+- **misc**: Add new features and improve user experience, Import and use constants from "meta.ts" instead of "agentConfig".
+
+
+
+
+Improvements and Fixes
+
+#### What's improved
+
+- **misc**: Add new features and improve user experience ([64c8782](https://github.com/lobehub/lobe-chat/commit/64c8782))
+- **misc**: Import and use constants from "meta.ts" instead of "agentConfig" ([1eb6a17](https://github.com/lobehub/lobe-chat/commit/1eb6a17))
+
+
+
+
diff --git a/DigitalHumanWeb/CODE_OF_CONDUCT.md b/DigitalHumanWeb/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000..83832d4
--- /dev/null
+++ b/DigitalHumanWeb/CODE_OF_CONDUCT.md
@@ -0,0 +1,128 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+We as members, contributors, and leaders pledge to make participation in our
+community a harassment-free experience for everyone, regardless of age, body
+size, visible or invisible disability, ethnicity, sex characteristics, gender
+identity and expression, level of experience, education, socio-economic status,
+nationality, personal appearance, race, religion, or sexual identity
+and orientation.
+
+We pledge to act and interact in ways that contribute to an open, welcoming,
+diverse, inclusive, and healthy community.
+
+## Our Standards
+
+Examples of behavior that contributes to a positive environment for our
+community include:
+
+- Demonstrating empathy and kindness toward other people
+- Being respectful of differing opinions, viewpoints, and experiences
+- Giving and gracefully accepting constructive feedback
+- Accepting responsibility and apologizing to those affected by our mistakes,
+ and learning from the experience
+- Focusing on what is best not just for us as individuals, but for the
+ overall community
+
+Examples of unacceptable behavior include:
+
+- The use of sexualized language or imagery, and sexual attention or
+ advances of any kind
+- Trolling, insulting or derogatory comments, and personal or political attacks
+- Public or private harassment
+- Publishing others' private information, such as a physical or email
+ address, without their explicit permission
+- Other conduct that could reasonably be considered inappropriate in a
+ professional setting
+
+## Enforcement Responsibilities
+
+Community leaders are responsible for clarifying and enforcing our standards of
+acceptable behavior and will take appropriate and fair corrective action in
+response to any behavior that they deem inappropriate, threatening, offensive,
+or harmful.
+
+Community leaders have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions that are
+not aligned to this Code of Conduct, and will communicate reasons for moderation
+decisions when appropriate.
+
+## Scope
+
+This Code of Conduct applies within all community spaces and also applies when
+an individual is officially representing the community in public spaces.
+Examples of representing our community include using an official e-mail address,
+posting via an official social media account, or acting as an appointed
+representative at an online or offline event.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported to the community leaders responsible for enforcement at
+.
+All complaints will be reviewed and investigated promptly and fairly.
+
+All community leaders are obligated to respect the privacy and security of the
+reporter of any incident.
+
+## Enforcement Guidelines
+
+Community leaders will follow these Community Impact Guidelines in determining
+the consequences for any action they deem in violation of this Code of Conduct:
+
+### 1. Correction
+
+**Community Impact**: Use of inappropriate language or other behavior deemed
+unprofessional or unwelcome in the community.
+
+**Consequence**: A private, written warning from community leaders, providing
+clarity around the nature of the violation and an explanation of why the
+behavior was inappropriate. A public apology may be requested.
+
+### 2. Warning
+
+**Community Impact**: A violation through a single incident or series
+of actions.
+
+**Consequence**: A warning with consequences for continued behavior. No
+interaction with the people involved, including unsolicited interaction with
+those enforcing the Code of Conduct, for a specified time. This
+includes avoiding interactions in community spaces as well as external channels
+like social media. Violating these terms may lead to a temporary or
+permanent ban.
+
+### 3. Temporary Ban
+
+**Community Impact**: A serious violation of community standards, including
+sustained inappropriate behavior.
+
+**Consequence**: A temporary ban from any sort of interaction or public
+communication with the community for a specified time. No public or
+private interaction with the people involved, including unsolicited interaction
+with those enforcing the Code of Conduct, is allowed during this period.
+Violating these terms may lead to a permanent ban.
+
+### 4. Permanent Ban
+
+**Community Impact**: Demonstrating a pattern of violation of community
+standards, including sustained inappropriate behavior, harassment of an
+individual, or aggression toward or disparagement of classes of individuals.
+
+**Consequence**: A permanent ban from any sort of public interaction within
+the community.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 2.0, available at
+.
+
+Community Impact Guidelines were inspired by [Mozilla's code of conduct
+enforcement ladder](https://github.com/mozilla/diversity).
+
+For answers to common questions about this code of conduct, see the FAQ at
+. Translations are available at
+.
+
+[homepage]: https://www.contributor-covenant.org
diff --git a/DigitalHumanWeb/CONTRIBUTING.md b/DigitalHumanWeb/CONTRIBUTING.md
new file mode 100644
index 0000000..7056223
--- /dev/null
+++ b/DigitalHumanWeb/CONTRIBUTING.md
@@ -0,0 +1,88 @@
+# Lobe Chat - Contributing Guide 🌟
+
+We're thrilled that you want to contribute to Lobe Chat, the future of communication! 😄
+
+Lobe Chat is an open-source project, and we welcome your collaboration. Before you jump in, let's make sure you're all set to contribute effectively and have loads of fun along the way!
+
+## Table of Contents
+
+- [Fork the Repository](#fork-the-repository)
+- [Clone Your Fork](#clone-your-fork)
+- [Create a New Branch](#create-a-new-branch)
+- [Code Like a Wizard](#code-like-a-wizard)
+- [Committing Your Work](#committing-your-work)
+- [Sync with Upstream](#sync-with-upstream)
+- [Open a Pull Request](#open-a-pull-request)
+- [Review and Collaboration](#review-and-collaboration)
+- [Celebrate 🎉](#celebrate-)
+
+## Fork the Repository
+
+🍴 Fork this repository to your GitHub account by clicking the "Fork" button at the top right. This creates a personal copy of the project you can work on.
+
+## Clone Your Fork
+
+📦 Clone your forked repository to your local machine using the `git clone` command:
+
+```bash
+git clone https://github.com/YourUsername/lobe-chat.git
+```
+
+## Create a New Branch
+
+🌿 Create a new branch for your contribution. This helps keep your work organized and separate from the main codebase.
+
+```bash
+git checkout -b your-branch-name
+```
+
+Choose a meaningful branch name related to your work. It makes collaboration easier!
+
+## Code Like a Wizard
+
+🧙‍♀️ Time to work your magic! Write your code, fix bugs, or add new features. Be sure to follow our project's coding style. You can check if your code adheres to our style using:
+
+```bash
+pnpm lint
+```
+
+This adds a bit of enchantment to your coding experience! ✨
+
+## Committing Your Work
+
+📝 Ready to save your progress? Commit your changes to your branch.
+
+```bash
+git add .
+git commit -m "Your meaningful commit message"
+```
+
+Please keep your commits focused and clear, and be kind to your fellow contributors by keeping your commit messages concise.
+
+## Sync with Upstream
+
+⚙️ Periodically, sync your forked repository with the original (upstream) repository to stay up-to-date with the latest changes.
+
+```bash
+git remote add upstream https://github.com/lobehub/lobe-chat.git
+git fetch upstream
+git merge upstream/main
+```
+
+This ensures you're working on the most current version of Lobe Chat. Stay fresh! 💨
+
+## Open a Pull Request
+
+🚀 Time to share your contribution! Head over to the original Lobe Chat repository and open a Pull Request (PR). Our maintainers will review your work.
+
+## Review and Collaboration
+
+👓 Your PR will undergo thorough review and testing. The maintainers will provide feedback, and you can collaborate to make your contribution even better. We value teamwork!
+
+## Celebrate 🎉
+
+🎈 Congratulations! Your contribution is now part of Lobe Chat. 🥳
+
+Thank you for making Lobe Chat even more magical. We can't wait to see what you create! 🌠
+
+Happy Coding! 🚀🦄
diff --git a/DigitalHumanWeb/Dockerfile b/DigitalHumanWeb/Dockerfile
new file mode 100644
index 0000000..ae9603a
--- /dev/null
+++ b/DigitalHumanWeb/Dockerfile
@@ -0,0 +1,209 @@
+## Base image for all the stages
+FROM node:20-slim AS base
+
+ARG USE_CN_MIRROR
+
+ENV DEBIAN_FRONTEND="noninteractive"
+
+RUN \
+ # If you want to build docker in China, build with --build-arg USE_CN_MIRROR=true
+ if [ "${USE_CN_MIRROR:-false}" = "true" ]; then \
+ sed -i "s/deb.debian.org/mirrors.ustc.edu.cn/g" "/etc/apt/sources.list.d/debian.sources"; \
+ fi \
+ # Add required package & update base package
+ && apt update \
+ && apt install busybox proxychains-ng -qy \
+ && apt full-upgrade -qy \
+ && apt autoremove -qy --purge \
+ && apt clean -qy \
+ # Configure BusyBox
+ && busybox --install -s \
+ # Add nextjs:nodejs to run the app
+ && addgroup --system --gid 1001 nodejs \
+  && adduser --system --home "/app" --gid 1001 --uid 1001 nextjs \
+ # Set permission for nextjs:nodejs
+ && chown -R nextjs:nodejs "/etc/proxychains4.conf" \
+ # Cleanup temp files
+ && rm -rf /tmp/* /var/lib/apt/lists/* /var/tmp/*
+
+## Builder image, install all the dependencies and build the app
+FROM base AS builder
+
+ARG USE_CN_MIRROR
+
+ENV NEXT_PUBLIC_BASE_PATH=""
+
+# Sentry
+ENV NEXT_PUBLIC_SENTRY_DSN="" \
+ SENTRY_ORG="" \
+ SENTRY_PROJECT=""
+
+# Posthog
+ENV NEXT_PUBLIC_ANALYTICS_POSTHOG="" \
+ NEXT_PUBLIC_POSTHOG_HOST="" \
+ NEXT_PUBLIC_POSTHOG_KEY=""
+
+# Umami
+ENV NEXT_PUBLIC_ANALYTICS_UMAMI="" \
+ NEXT_PUBLIC_UMAMI_SCRIPT_URL="" \
+ NEXT_PUBLIC_UMAMI_WEBSITE_ID=""
+
+# Node
+ENV NODE_OPTIONS="--max-old-space-size=8192"
+
+WORKDIR /app
+
+COPY package.json ./
+COPY .npmrc ./
+
+RUN \
+ # If you want to build docker in China, build with --build-arg USE_CN_MIRROR=true
+ if [ "${USE_CN_MIRROR:-false}" = "true" ]; then \
+ export SENTRYCLI_CDNURL="https://npmmirror.com/mirrors/sentry-cli"; \
+ npm config set registry "https://registry.npmmirror.com/"; \
+ fi \
+ # Set the registry for corepack
+ && export COREPACK_NPM_REGISTRY=$(npm config get registry | sed 's/\/$//') \
+ # Enable corepack
+ && corepack enable \
+ # Use pnpm for corepack
+ && corepack use pnpm \
+ # Install the dependencies
+ && pnpm i \
+ # Add sharp dependencies
+ && mkdir -p /deps \
+ && pnpm add sharp --prefix /deps
+
+COPY . .
+
+# run build standalone for docker version
+RUN npm run build:docker
+
+## Application image, copy all the files for production
+FROM scratch AS app
+
+COPY --from=builder /app/public /app/public
+
+# Automatically leverage output traces to reduce image size
+# https://nextjs.org/docs/advanced-features/output-file-tracing
+COPY --from=builder /app/.next/standalone /app/
+COPY --from=builder /app/.next/static /app/.next/static
+COPY --from=builder /deps/node_modules/.pnpm /app/node_modules/.pnpm
+
+## Production image, copy all the files and run next
+FROM base
+
+# Copy all the files from app, set the correct permission for prerender cache
+COPY --from=app --chown=nextjs:nodejs /app /app
+
+ENV NODE_ENV="production" \
+ NODE_TLS_REJECT_UNAUTHORIZED=""
+
+# Listen on all network interfaces
+ENV HOSTNAME="0.0.0.0" \
+ PORT="3210"
+
+# General Variables
+ENV ACCESS_CODE="" \
+ API_KEY_SELECT_MODE="" \
+ DEFAULT_AGENT_CONFIG="" \
+ SYSTEM_AGENT="" \
+ FEATURE_FLAGS="" \
+ PROXY_URL=""
+
+# Model Variables
+ENV \
+ # AI21
+ AI21_API_KEY="" \
+ # Ai360
+ AI360_API_KEY="" \
+ # Anthropic
+ ANTHROPIC_API_KEY="" ANTHROPIC_PROXY_URL="" \
+ # Amazon Bedrock
+ AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" AWS_BEDROCK_MODEL_LIST="" \
+ # Azure OpenAI
+ AZURE_API_KEY="" AZURE_API_VERSION="" AZURE_ENDPOINT="" AZURE_MODEL_LIST="" \
+ # Baichuan
+ BAICHUAN_API_KEY="" \
+ # DeepSeek
+ DEEPSEEK_API_KEY="" \
+ # Fireworks AI
+ FIREWORKSAI_API_KEY="" FIREWORKSAI_MODEL_LIST="" \
+ # GitHub
+ GITHUB_TOKEN="" GITHUB_MODEL_LIST="" \
+ # Google
+ GOOGLE_API_KEY="" GOOGLE_PROXY_URL="" \
+ # Groq
+ GROQ_API_KEY="" GROQ_MODEL_LIST="" GROQ_PROXY_URL="" \
+ # Minimax
+ MINIMAX_API_KEY="" \
+ # Mistral
+ MISTRAL_API_KEY="" \
+ # Moonshot
+ MOONSHOT_API_KEY="" MOONSHOT_PROXY_URL="" \
+ # Novita
+ NOVITA_API_KEY="" NOVITA_MODEL_LIST="" \
+ # Ollama
+ OLLAMA_MODEL_LIST="" OLLAMA_PROXY_URL="" \
+ # OpenAI
+ OPENAI_API_KEY="" OPENAI_MODEL_LIST="" OPENAI_PROXY_URL="" \
+ # OpenRouter
+ OPENROUTER_API_KEY="" OPENROUTER_MODEL_LIST="" \
+ # Perplexity
+ PERPLEXITY_API_KEY="" PERPLEXITY_PROXY_URL="" \
+ # Qwen
+ QWEN_API_KEY="" QWEN_MODEL_LIST="" \
+ # SiliconCloud
+ SILICONCLOUD_API_KEY="" SILICONCLOUD_MODEL_LIST="" SILICONCLOUD_PROXY_URL="" \
+ # Spark
+ SPARK_API_KEY="" \
+ # Stepfun
+ STEPFUN_API_KEY="" \
+ # Taichu
+ TAICHU_API_KEY="" \
+ # TogetherAI
+ TOGETHERAI_API_KEY="" TOGETHERAI_MODEL_LIST="" \
+ # Upstage
+ UPSTAGE_API_KEY="" \
+ # 01.AI
+ ZEROONE_API_KEY="" ZEROONE_MODEL_LIST="" \
+ # Zhipu
+ ZHIPU_API_KEY="" ZHIPU_MODEL_LIST=""
+
+USER nextjs
+
+EXPOSE 3210/tcp
+
+CMD \
+ if [ -n "$PROXY_URL" ]; then \
+ # Set regex for IPv4
+ IP_REGEX="^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}$"; \
+ # Set proxychains command
+ PROXYCHAINS="proxychains -q"; \
+ # Parse the proxy URL
+ host_with_port="${PROXY_URL#*//}"; \
+ host="${host_with_port%%:*}"; \
+ port="${PROXY_URL##*:}"; \
+ protocol="${PROXY_URL%%://*}"; \
+ # Resolve to IP address if the host is a domain
+    if ! echo "$host" | grep -qE "$IP_REGEX"; then \
+ nslookup=$(nslookup -q="A" "$host" | tail -n +3 | grep 'Address:'); \
+ if [ -n "$nslookup" ]; then \
+ host=$(echo "$nslookup" | tail -n 1 | awk '{print $2}'); \
+ fi; \
+ fi; \
+ # Generate proxychains configuration file
+ printf "%s\n" \
+ 'localnet 127.0.0.0/255.0.0.0' \
+ 'localnet ::1/128' \
+ 'proxy_dns' \
+ 'remote_dns_subnet 224' \
+ 'strict_chain' \
+ 'tcp_connect_time_out 8000' \
+ 'tcp_read_time_out 15000' \
+ '[ProxyList]' \
+ "$protocol $host $port" \
+ > "/etc/proxychains4.conf"; \
+ fi; \
+ # Run the server
+ ${PROXYCHAINS} node "/app/server.js";
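The proxy-URL handling in the `CMD` above relies on POSIX parameter expansion to split the scheme, host, and port. A standalone sketch of that parsing, using a hypothetical `PROXY_URL` value (the variable names mirror the Dockerfile; the URL is an example only):

```bash
# Hypothetical value; in the container this comes from the PROXY_URL env var.
PROXY_URL="http://proxy.example.com:8080"

host_with_port="${PROXY_URL#*//}"   # strip the scheme  -> "proxy.example.com:8080"
host="${host_with_port%%:*}"        # before first ":"  -> "proxy.example.com"
port="${PROXY_URL##*:}"             # after last ":"    -> "8080"
protocol="${PROXY_URL%%://*}"       # before "://"      -> "http"

echo "$protocol $host $port"        # -> "http proxy.example.com 8080"
```

Note that `${PROXY_URL##*:}` assumes the URL carries an explicit port; a scheme-only URL such as `http://proxy` would not yield a usable port with this expansion.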
diff --git a/DigitalHumanWeb/Dockerfile.database b/DigitalHumanWeb/Dockerfile.database
new file mode 100644
index 0000000..3f11381
--- /dev/null
+++ b/DigitalHumanWeb/Dockerfile.database
@@ -0,0 +1,245 @@
+## Base image for all the stages
+FROM node:20-slim AS base
+
+ARG USE_CN_MIRROR
+
+ENV DEBIAN_FRONTEND="noninteractive"
+
+RUN \
+ # If you want to build docker in China, build with --build-arg USE_CN_MIRROR=true
+ if [ "${USE_CN_MIRROR:-false}" = "true" ]; then \
+ sed -i "s/deb.debian.org/mirrors.ustc.edu.cn/g" "/etc/apt/sources.list.d/debian.sources"; \
+ fi \
+ # Add required package & update base package
+ && apt update \
+ && apt install busybox proxychains-ng -qy \
+ && apt full-upgrade -qy \
+ && apt autoremove -qy --purge \
+ && apt clean -qy \
+ # Configure BusyBox
+ && busybox --install -s \
+ # Add nextjs:nodejs to run the app
+ && addgroup --system --gid 1001 nodejs \
+  && adduser --system --home "/app" --gid 1001 --uid 1001 nextjs \
+ # Set permission for nextjs:nodejs
+ && chown -R nextjs:nodejs "/etc/proxychains4.conf" \
+ # Cleanup temp files
+ && rm -rf /tmp/* /var/lib/apt/lists/* /var/tmp/*
+
+## Builder image, install all the dependencies and build the app
+FROM base AS builder
+
+ARG USE_CN_MIRROR
+
+ENV NEXT_PUBLIC_SERVICE_MODE="server" \
+ APP_URL="http://app.com" \
+ DATABASE_DRIVER="node" \
+ DATABASE_URL="postgres://postgres:password@localhost:5432/postgres" \
+ KEY_VAULTS_SECRET="use-for-build"
+
+# Sentry
+ENV NEXT_PUBLIC_SENTRY_DSN="" \
+ SENTRY_ORG="" \
+ SENTRY_PROJECT=""
+
+# Posthog
+ENV NEXT_PUBLIC_ANALYTICS_POSTHOG="" \
+ NEXT_PUBLIC_POSTHOG_HOST="" \
+ NEXT_PUBLIC_POSTHOG_KEY=""
+
+# Umami
+ENV NEXT_PUBLIC_ANALYTICS_UMAMI="" \
+ NEXT_PUBLIC_UMAMI_SCRIPT_URL="" \
+ NEXT_PUBLIC_UMAMI_WEBSITE_ID=""
+
+# Node
+ENV NODE_OPTIONS="--max-old-space-size=8192"
+
+WORKDIR /app
+
+COPY package.json ./
+COPY .npmrc ./
+
+RUN \
+ # If you want to build docker in China, build with --build-arg USE_CN_MIRROR=true
+ if [ "${USE_CN_MIRROR:-false}" = "true" ]; then \
+ export SENTRYCLI_CDNURL="https://npmmirror.com/mirrors/sentry-cli"; \
+ npm config set registry "https://registry.npmmirror.com/"; \
+ fi \
+ # Set the registry for corepack
+ && export COREPACK_NPM_REGISTRY=$(npm config get registry | sed 's/\/$//') \
+ # Enable corepack
+ && corepack enable \
+ # Use pnpm for corepack
+ && corepack use pnpm \
+ # Install the dependencies
+ && pnpm i \
+ # Add sharp and db migration dependencies
+ && mkdir -p /deps \
+ && pnpm add sharp pg drizzle-orm --prefix /deps
+
+COPY . .
+
+# run build standalone for docker version
+RUN npm run build:docker
+
+## Application image, copy all the files for production
+FROM scratch AS app
+
+COPY --from=builder /app/public /app/public
+
+# Automatically leverage output traces to reduce image size
+# https://nextjs.org/docs/advanced-features/output-file-tracing
+COPY --from=builder /app/.next/standalone /app/
+COPY --from=builder /app/.next/static /app/.next/static
+
+# copy dependencies
+COPY --from=builder /deps/node_modules/.pnpm /app/node_modules/.pnpm
+COPY --from=builder /deps/node_modules/pg /app/node_modules/pg
+COPY --from=builder /deps/node_modules/drizzle-orm /app/node_modules/drizzle-orm
+
+# Copy database migrations
+COPY --from=builder /app/src/database/server/migrations /app/migrations
+COPY --from=builder /app/scripts/migrateServerDB/docker.cjs /app/docker.cjs
+COPY --from=builder /app/scripts/migrateServerDB/errorHint.js /app/errorHint.js
+
+## Production image, copy all the files and run next
+FROM base
+
+# Copy all the files from app, set the correct permission for prerender cache
+COPY --from=app --chown=nextjs:nodejs /app /app
+
+ENV NODE_ENV="production" \
+ NODE_TLS_REJECT_UNAUTHORIZED=""
+
+# Listen on all network interfaces
+ENV HOSTNAME="0.0.0.0" \
+ PORT="3210"
+
+# General Variables
+ENV ACCESS_CODE="" \
+ APP_URL="" \
+ API_KEY_SELECT_MODE="" \
+ DEFAULT_AGENT_CONFIG="" \
+ SYSTEM_AGENT="" \
+ FEATURE_FLAGS="" \
+ PROXY_URL=""
+
+# Database
+ENV KEY_VAULTS_SECRET="" \
+ DATABASE_DRIVER="node" \
+ DATABASE_URL=""
+
+# Next Auth
+ENV NEXT_AUTH_SECRET="" \
+ NEXT_AUTH_SSO_PROVIDERS="" \
+ NEXTAUTH_URL=""
+
+# S3
+ENV NEXT_PUBLIC_S3_DOMAIN="" \
+ S3_PUBLIC_DOMAIN="" \
+ S3_ACCESS_KEY_ID="" \
+ S3_BUCKET="" \
+ S3_ENDPOINT="" \
+ S3_SECRET_ACCESS_KEY=""
+
+# Model Variables
+ENV \
+ # AI21
+ AI21_API_KEY="" \
+ # Ai360
+ AI360_API_KEY="" \
+ # Anthropic
+ ANTHROPIC_API_KEY="" ANTHROPIC_PROXY_URL="" \
+ # Amazon Bedrock
+ AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" AWS_BEDROCK_MODEL_LIST="" \
+ # Azure OpenAI
+ AZURE_API_KEY="" AZURE_API_VERSION="" AZURE_ENDPOINT="" AZURE_MODEL_LIST="" \
+ # Baichuan
+ BAICHUAN_API_KEY="" \
+ # DeepSeek
+ DEEPSEEK_API_KEY="" \
+ # Fireworks AI
+ FIREWORKSAI_API_KEY="" FIREWORKSAI_MODEL_LIST="" \
+ # GitHub
+ GITHUB_TOKEN="" GITHUB_MODEL_LIST="" \
+ # Google
+ GOOGLE_API_KEY="" GOOGLE_PROXY_URL="" \
+ # Groq
+ GROQ_API_KEY="" GROQ_MODEL_LIST="" GROQ_PROXY_URL="" \
+ # Minimax
+ MINIMAX_API_KEY="" \
+ # Mistral
+ MISTRAL_API_KEY="" \
+ # Moonshot
+ MOONSHOT_API_KEY="" MOONSHOT_PROXY_URL="" \
+ # Novita
+ NOVITA_API_KEY="" NOVITA_MODEL_LIST="" \
+ # Ollama
+ OLLAMA_MODEL_LIST="" OLLAMA_PROXY_URL="" \
+ # OpenAI
+ OPENAI_API_KEY="" OPENAI_MODEL_LIST="" OPENAI_PROXY_URL="" \
+ # OpenRouter
+ OPENROUTER_API_KEY="" OPENROUTER_MODEL_LIST="" \
+ # Perplexity
+ PERPLEXITY_API_KEY="" PERPLEXITY_PROXY_URL="" \
+ # Qwen
+ QWEN_API_KEY="" QWEN_MODEL_LIST="" \
+ # SiliconCloud
+ SILICONCLOUD_API_KEY="" SILICONCLOUD_MODEL_LIST="" SILICONCLOUD_PROXY_URL="" \
+ # Spark
+ SPARK_API_KEY="" \
+ # Stepfun
+ STEPFUN_API_KEY="" \
+ # Taichu
+ TAICHU_API_KEY="" \
+ # TogetherAI
+ TOGETHERAI_API_KEY="" TOGETHERAI_MODEL_LIST="" \
+ # Upstage
+ UPSTAGE_API_KEY="" \
+ # 01.AI
+ ZEROONE_API_KEY="" ZEROONE_MODEL_LIST="" \
+ # Zhipu
+ ZHIPU_API_KEY=""
+
+USER nextjs
+
+EXPOSE 3210/tcp
+
+CMD \
+ if [ -n "$PROXY_URL" ]; then \
+ # Set regex for IPv4
+ IP_REGEX="^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}$"; \
+ # Set proxychains command
+ PROXYCHAINS="proxychains -q"; \
+ # Parse the proxy URL
+ host_with_port="${PROXY_URL#*//}"; \
+ host="${host_with_port%%:*}"; \
+ port="${PROXY_URL##*:}"; \
+ protocol="${PROXY_URL%%://*}"; \
+ # Resolve to IP address if the host is a domain
+    if ! echo "$host" | grep -qE "$IP_REGEX"; then \
+ nslookup=$(nslookup -q="A" "$host" | tail -n +3 | grep 'Address:'); \
+ if [ -n "$nslookup" ]; then \
+ host=$(echo "$nslookup" | tail -n 1 | awk '{print $2}'); \
+ fi; \
+ fi; \
+ # Generate proxychains configuration file
+ printf "%s\n" \
+ 'localnet 127.0.0.0/255.0.0.0' \
+ 'localnet ::1/128' \
+ 'proxy_dns' \
+ 'remote_dns_subnet 224' \
+ 'strict_chain' \
+ 'tcp_connect_time_out 8000' \
+ 'tcp_read_time_out 15000' \
+ '[ProxyList]' \
+ "$protocol $host $port" \
+ > "/etc/proxychains4.conf"; \
+ fi; \
+ # Run migration
+ node "/app/docker.cjs"; \
+ if [ "$?" -eq "0" ]; then \
+ # Run the server
+ ${PROXYCHAINS} node "/app/server.js"; \
+ fi;
diff --git a/DigitalHumanWeb/LICENSE b/DigitalHumanWeb/LICENSE
new file mode 100644
index 0000000..4704b86
--- /dev/null
+++ b/DigitalHumanWeb/LICENSE
@@ -0,0 +1,38 @@
+Apache License Version 2.0
+
+Copyright (c) 2024/06/17 - current LobeHub LLC. All rights reserved.
+
+----------
+
+From 1.0, LobeChat is licensed under the Apache License 2.0, with the following additional conditions:
+
+1. The commercial usage of LobeChat:
+
+ a. LobeChat may be utilized commercially, including as a frontend and backend service without modifying the source code.
+
+ b. a commercial license must be obtained from the producer if you want to develop and distribute a derivative work based on LobeChat.
+
+Please contact hello@lobehub.com by email to inquire about licensing matters.
+
+
+2. As a contributor, you should agree that:
+
+ a. The producer can adjust the open-source agreement to be more strict or relaxed as deemed necessary.
+
+ b. Your contributed code may be used for commercial purposes, including but not limited to its cloud edition.
+
+Apart from the specific conditions mentioned above, all other rights and restrictions follow the Apache License 2.0. Detailed information about the Apache License 2.0 can be found at http://www.apache.org/licenses/LICENSE-2.0.
+
+----------
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
diff --git a/DigitalHumanWeb/README.ja-JP.md b/DigitalHumanWeb/README.ja-JP.md
new file mode 100644
index 0000000..d69681f
--- /dev/null
+++ b/DigitalHumanWeb/README.ja-JP.md
@@ -0,0 +1,809 @@
+
+
+[![][image-banner]][vercel-link]
+
+# Lobe Chat
+
+An open-source, modern-design ChatGPT/LLMs UI/Framework.
+Supports speech synthesis, multi-modal interaction, and an extensible ([function call][docs-functionc-call]) plugin system.
+One-click **FREE** deployment of your private OpenAI ChatGPT/Claude/Gemini/Groq/Ollama chat application.
+
+**English** · [简体中文](./README.zh-CN.md) · [日本語](./README.ja-JP.md) · [Official Site][official-site] · [Changelog](./CHANGELOG.md) · [Documents][docs] · [Blog][blog] · [Feedback][github-issues-link]
+
+
+
+[![][github-release-shield]][github-release-link]
+[![][docker-release-shield]][docker-release-link]
+[![][vercel-shield]][vercel-link]
+[![][discord-shield]][discord-link]
+[![][codecov-shield]][codecov-link]
+[![][github-action-test-shield]][github-action-test-link]
+[![][github-action-release-shield]][github-action-release-link]
+[![][github-releasedate-shield]][github-releasedate-link]
+[![][github-contributors-shield]][github-contributors-link]
+[![][github-forks-shield]][github-forks-link]
+[![][github-stars-shield]][github-stars-link]
+[![][github-issues-shield]][github-issues-link]
+[![][github-license-shield]][github-license-link]
+[![][sponsor-shield]][sponsor-link]
+
+**Share LobeChat Repository**
+
+[![][share-x-shield]][share-x-link]
+[![][share-telegram-shield]][share-telegram-link]
+[![][share-whatsapp-shield]][share-whatsapp-link]
+[![][share-reddit-shield]][share-reddit-link]
+[![][share-weibo-shield]][share-weibo-link]
+[![][share-mastodon-shield]][share-mastodon-link]
+[![][share-linkedin-shield]][share-linkedin-link]
+
+Pioneering the new age of thinking and creating. Built for you, the Super Individual.
+
+[![][github-trending-shield]][github-trending-url]
+
+[![][image-overview]][vercel-link]
+
+
+
+
+Table of contents
+
+#### TOC
+
+- [👋🏻 Getting Started & Join Our Community](#-getting-started--join-our-community)
+- [✨ Features](#-features)
+ - [`1` File Upload/Knowledge Base](#1-file-uploadknowledge-base)
+ - [`2` Multi-Model Service Provider Support](#2-multi-model-service-provider-support)
+ - [`3` Local Large Language Model (LLM) Support](#3-local-large-language-model-llm-support)
+ - [`4` Model Visual Recognition](#4-model-visual-recognition)
+ - [`5` TTS & STT Voice Conversation](#5-tts--stt-voice-conversation)
+ - [`6` Text to Image Generation](#6-text-to-image-generation)
+ - [`7` Plugin System (Function Calling)](#7-plugin-system-function-calling)
+ - [`8` Agent Market (GPTs)](#8-agent-market-gpts)
+ - [`9` Support Local / Remote Database](#9-support-local--remote-database)
+ - [`10` Support Multi-User Management](#10-support-multi-user-management)
+ - [`11` Progressive Web App (PWA)](#11-progressive-web-app-pwa)
+ - [`12` Mobile Device Adaptation](#12-mobile-device-adaptation)
+ - [`13` Custom Themes](#13-custom-themes)
+ - [`*` What's more](#-whats-more)
+- [⚡️ Performance](#️-performance)
+- [🛳 Self Hosting](#-self-hosting)
+ - [`A` Deploying with Vercel, Zeabur or Sealos](#a-deploying-with-vercel-zeabur-or-sealos)
+ - [`B` Deploying with Docker](#b-deploying-with-docker)
+ - [Environment Variable](#environment-variable)
+- [📦 Ecosystem](#-ecosystem)
+- [🧩 Plugins](#-plugins)
+- [⌨️ Local Development](#️-local-development)
+- [🤝 Contributing](#-contributing)
+- [❤️ Sponsor](#️-sponsor)
+- [🔗 More Products](#-more-products)
+
+####
+
+
+
+
+
+## 👋🏻 Getting Started & Join Our Community
+
+We are a group of e/acc design-engineers, hoping to provide modern design components and tools for AIGC.
+By adopting the Bootstrapping approach, we aim to provide developers and users with a more open, transparent, and user-friendly product ecosystem.
+
+Whether for users or professional developers, LobeHub will be your AI Agent playground. Please be aware that LobeChat is currently under active development, and feedback is welcome for any [issues][issues-link] encountered.
+
+| [![][vercel-shield-badge]][vercel-link] | No installation or registration necessary! Visit our website to experience it firsthand. |
+| :---------------------------------------- | :----------------------------------------------------------------------------------------------------------------- |
+| [![][discord-shield-badge]][discord-link] | Join our Discord community! This is where you can connect with developers and other enthusiastic users of LobeHub. |
+
+> \[!IMPORTANT]
+>
+> **Star Us**, You will receive all release notifications from GitHub without any delay \~ ⭐️
+
+[![][image-star]][github-stars-link]
+
+
+ Star History
+
+
+
+
+
+
+## ✨ Features
+
+[![][image-feat-knowledgebase]][docs-feat-knowledgebase]
+
+### `1` [File Upload/Knowledge Base][docs-feat-knowledgebase]
+
+LobeChat supports file upload and knowledge base functionality. You can upload various types of files including documents, images, audio, and video, as well as create knowledge bases, making it convenient for users to manage and search for files. Additionally, you can utilize files and knowledge base features during conversations, enabling a richer dialogue experience.
+
+
+
+> \[!TIP]
+>
+> Learn more on [📘 LobeChat Knowledge Base Launch — From Now On, Every Step Counts](https://lobehub.com/blog/knowledge-base)
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-privoder]][docs-feat-provider]
+
+### `2` [Multi-Model Service Provider Support][docs-feat-provider]
+
+In the continuous development of LobeChat, we deeply understand the importance of diversity in model service providers for meeting the needs of the community when providing AI conversation services. Therefore, we have expanded our support to multiple model service providers, rather than being limited to a single one, in order to offer users a more diverse and rich selection of conversations.
+
+In this way, LobeChat can more flexibly adapt to the needs of different users, while also providing developers with a wider range of choices.
+
+#### Supported Model Service Providers
+
+We have implemented support for the following model service providers:
+
+- **AWS Bedrock**: Integrated with AWS Bedrock service, supporting models such as **Claude / LLama2**, providing powerful natural language processing capabilities. [Learn more](https://aws.amazon.com/cn/bedrock)
+- **Anthropic (Claude)**: Accessed Anthropic's **Claude** series models, including Claude 3 and Claude 2, with breakthroughs in multi-modal capabilities and extended context, setting a new industry benchmark. [Learn more](https://www.anthropic.com/claude)
+- **Google AI (Gemini Pro, Gemini Vision)**: Access to Google's **Gemini** series models, including Gemini and Gemini Pro, to support advanced language understanding and generation. [Learn more](https://deepmind.google/technologies/gemini/)
+- **Groq**: Accessed Groq's AI models, efficiently processing message sequences and generating responses, capable of multi-turn dialogues and single-interaction tasks. [Learn more](https://groq.com/)
+- **OpenRouter**: Supports routing of models including **Claude 3**, **Gemma**, **Mistral**, **Llama2** and **Cohere**, with intelligent routing optimization to improve usage efficiency, open and flexible. [Learn more](https://openrouter.ai/)
+- **01.AI (Yi Model)**: Integrated the 01.AI models, whose APIs offer fast inference speeds that shorten processing time while maintaining excellent model performance. [Learn more](https://01.ai/)
+- **Together.ai**: Over 100 leading open-source Chat, Language, Image, Code, and Embedding models are available through the Together Inference API. For these models you pay just for what you use. [Learn more](https://www.together.ai/)
+- **ChatGLM**: Added the **ChatGLM** series models from Zhipuai (GLM-4/GLM-4-vision/GLM-3-turbo), providing users with another efficient conversation model choice. [Learn more](https://www.zhipuai.cn/)
+- **Moonshot AI (Dark Side of the Moon)**: Integrated with the Moonshot series models, an innovative AI startup from China, aiming to provide deeper conversation understanding. [Learn more](https://www.moonshot.cn/)
+- **Minimax**: Integrated the Minimax models, including the MoE model **abab6**, offering a broader range of choices. [Learn more](https://www.minimaxi.com/)
+- **DeepSeek**: Integrated the DeepSeek series models from an innovative AI startup in China. The product is designed to balance performance with price. [Learn more](https://www.deepseek.com/)
+- **Qwen**: Integrated the Qwen series models, including the latest **qwen-turbo**, **qwen-plus** and **qwen-max**. [Learn more](https://help.aliyun.com/zh/dashscope/developer-reference/model-introduction)
+- **Novita AI**: Access **Llama**, **Mistral**, and other leading open-source models at the lowest prices. Engage in uncensored role-play, spark creative discussions, and foster unrestricted innovation. **Pay For What You Use.** [Learn more](https://novita.ai/llm-api?utm_source=lobechat&utm_medium=ch&utm_campaign=api)
+
+At the same time, we are also planning to support more model service providers, such as Replicate and Perplexity, to further enrich our service provider library. If you would like LobeChat to support your favorite service provider, feel free to join our [community discussion](https://github.com/lobehub/lobe-chat/discussions/1284).
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-local]][docs-feat-local]
+
+### `3` [Local Large Language Model (LLM) Support][docs-feat-local]
+
+To meet the specific needs of users, LobeChat also supports the use of local models based on [Ollama](https://ollama.ai), allowing users to flexibly use their own or third-party models.
+
+> \[!TIP]
+>
+> Learn more about [📘 Using Ollama in LobeChat][docs-usage-ollama] by checking it out.
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-vision]][docs-feat-vision]
+
+### `4` [Model Visual Recognition][docs-feat-vision]
+
+LobeChat now supports OpenAI's latest [`gpt-4-vision`](https://platform.openai.com/docs/guides/vision) model with visual recognition capabilities,
+a multimodal intelligence that can perceive visuals. Users can easily upload or drag and drop images into the dialogue box,
+and the agent will be able to recognize the content of the images and engage in intelligent conversation based on this,
+creating smarter and more diversified chat scenarios.
+
+This feature opens up new interactive methods, allowing communication to transcend text and include a wealth of visual elements.
+Whether it's sharing images in daily use or interpreting images within specific industries, the agent provides an outstanding conversational experience.
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-tts]][docs-feat-tts]
+
+### `5` [TTS & STT Voice Conversation][docs-feat-tts]
+
+LobeChat supports Text-to-Speech (TTS) and Speech-to-Text (STT) technologies, enabling our application to convert text messages into clear voice outputs,
+allowing users to interact with our conversational agent as if they were talking to a real person. Users can choose from a variety of voices to pair with the agent.
+
+Moreover, TTS offers an excellent solution for those who prefer auditory learning or desire to receive information while busy.
+In LobeChat, we have meticulously selected a range of high-quality voice options (OpenAI Audio, Microsoft Edge Speech) to meet the needs of users from different regions and cultural backgrounds.
+Users can choose the voice that suits their personal preferences or specific scenarios, resulting in a personalized communication experience.
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-t2i]][docs-feat-t2i]
+
+### `6` [Text to Image Generation][docs-feat-t2i]
+
+With support for the latest text-to-image generation technology, LobeChat now allows users to invoke image creation tools directly within conversations with the agent. By leveraging the capabilities of AI tools such as [`DALL-E 3`](https://openai.com/dall-e-3), [`MidJourney`](https://www.midjourney.com/), and [`Pollinations`](https://pollinations.ai/), the agents are now equipped to transform your ideas into images.
+
+This enables a more private and immersive creative process, allowing for the seamless integration of visual storytelling into your personal dialogue with the agent.
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-plugin]][docs-feat-plugin]
+
+### `7` [Plugin System (Function Calling)][docs-feat-plugin]
+
+The plugin ecosystem of LobeChat is an important extension of its core functionality, greatly enhancing the practicality and flexibility of the LobeChat assistant.
+
+
+
+By utilizing plugins, LobeChat assistants can obtain and process real-time information, such as searching for web information and providing users with instant and relevant news.
+
+In addition, these plugins are not limited to news aggregation, but can also extend to other practical functions, such as quickly searching documents, generating images, obtaining data from various platforms like Bilibili, Steam, and interacting with various third-party services.
+
+> \[!TIP]
+>
+> Learn more about [📘 Plugin Usage][docs-usage-plugin] by checking it out.
+
+
+
+| Recent Submits | Description |
+| ---------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
+| [Tongyi wanxiang Image Generator](https://chat-preview.lobehub.com/settings/agent) By **YoungTx** on **2024-08-09** | This plugin uses Alibaba's Tongyi Wanxiang model to generate images based on text prompts. `image` `tongyi` `wanxiang` |
+| [Shopping tools](https://chat-preview.lobehub.com/settings/agent) By **shoppingtools** on **2024-07-19** | Search for products on eBay & AliExpress, find eBay events & coupons. Get prompt examples. `shopping` `e-bay` `ali-express` `coupons` |
+| [Savvy Trader AI](https://chat-preview.lobehub.com/settings/agent) By **savvytrader** on **2024-06-27** | Realtime stock, crypto and other investment data. `stock` `analyze` |
+| [Search1API](https://chat-preview.lobehub.com/settings/agent) By **fatwang2** on **2024-05-06** | Search aggregation service, specifically designed for LLMs `web` `search` |
+
+> 📊 Total plugins: [**50**](https://github.com/lobehub/lobe-chat-plugins)
+
+
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-agent]][docs-feat-agent]
+
+### `8` [Agent Market (GPTs)][docs-feat-agent]
+
+In the LobeChat Agent Marketplace, creators can discover a vibrant and innovative community that brings together a multitude of well-designed agents,
+which not only play an important role in work scenarios but also offer great convenience in learning processes.
+Our marketplace is not just a showcase platform but also a collaborative space. Here, everyone can contribute their wisdom and share the agents they have developed.
+
+> \[!TIP]
+>
+> Via [🤖/🏪 Submit Agents][submit-agents-link], you can easily submit your agent creations to our platform.
+> Importantly, LobeChat has established a sophisticated automated internationalization (i18n) workflow,
+> capable of seamlessly translating your agent into multiple language versions.
+> This means that no matter what language your users speak, they can experience your agent without barriers.
+
+> \[!IMPORTANT]
+>
+> We welcome all users to join this growing ecosystem and participate in the iteration and optimization of agents.
+> Together, we can create more interesting, practical, and innovative agents, further enriching the diversity and practicality of the agent offerings.
+
+
+
+| Recent Submits | Description |
+| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [Contract Clause Refiner v1.0](https://chat-preview.lobehub.com/market?agent=business-contract) By **[houhoufm](https://github.com/houhoufm)** on **2024-09-24** | Output: {Optimize contract clauses for professional and concise expression} `contract-optimization` `legal-consultation` `copywriting` `terminology` `project-management` |
+| [Meeting Assistant v1.0](https://chat-preview.lobehub.com/market?agent=meeting) By **[houhoufm](https://github.com/houhoufm)** on **2024-09-24** | Professional meeting report assistant, distilling meeting key points into report sentences `meeting-reports` `writing` `communication` `workflow` `professional-skills` |
+| [Stable Album Cover Prompter](https://chat-preview.lobehub.com/market?agent=title-bpm-stimmung) By **[MellowTrixX](https://github.com/MellowTrixX)** on **2024-09-24** | Professional graphic designer for front cover design specializing in creating visual concepts and designs for melodic techno music albums. `album-cover` `prompt` `stable-diffusion` `cover-design` `cover-prompts` |
+| [Advertising Copywriting Master](https://chat-preview.lobehub.com/market?agent=advertising-copywriting-master) By **[leter](https://github.com/leter)** on **2024-09-23** | Specializing in product function analysis and advertising copywriting that resonates with user values `advertising-copy` `user-values` `marketing-strategy` |
+
+> 📊 Total agents: [**392**](https://github.com/lobehub/lobe-chat-agents)
+
+
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-database]][docs-feat-database]
+
+### `9` [Support Local / Remote Database][docs-feat-database]
+
+LobeChat supports the use of both server-side and local databases. Depending on your needs, you can choose the appropriate deployment solution:
+
+- **Local database**: suitable for users who want more control over their data and privacy protection. LobeChat uses CRDT (Conflict-Free Replicated Data Type) technology to achieve multi-device synchronization. This is an experimental feature aimed at providing a seamless data synchronization experience.
+- **Server-side database**: suitable for users who want a more convenient user experience. LobeChat supports PostgreSQL as a server-side database. For detailed documentation on how to configure the server-side database, please visit [Configure Server-side Database](https://lobehub.com/docs/self-hosting/advanced/server-database).
+
+Regardless of which database you choose, LobeChat can provide you with an excellent user experience.
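+As a minimal sketch of the server-side option, the deployment needs a Postgres connection string. The exact variable names are documented in the server-side database guide linked above — verify them there, and treat every value below as a placeholder:
+
+```fish
+$ docker run -d -p 3210:3210 \
+  -e DATABASE_URL=postgresql://user:password@db-host:5432/lobechat \
+  -e KEY_VAULTS_SECRET=$(openssl rand -base64 32) \
+  --name lobe-chat-database \
+  lobehub/lobe-chat-database
+```
+
+The local (client-side) database needs no such configuration; its data lives entirely in the browser.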
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-auth]][docs-feat-auth]
+
+### `10` [Support Multi-User Management][docs-feat-auth]
+
+LobeChat supports multi-user management and provides two main user authentication and management solutions to meet different needs:
+
+- **next-auth**: LobeChat integrates `next-auth`, a flexible and powerful identity verification library that supports multiple authentication methods, including OAuth, email login, credential login, etc. With `next-auth`, you can easily implement user registration, login, session management, social login, and other functions to ensure the security and privacy of user data.
+
+- [**Clerk**](https://go.clerk.com/exgqLG0): For users who need more advanced user management features, LobeChat also supports `Clerk`, a modern user management platform. `Clerk` provides richer functions, such as multi-factor authentication (MFA), user profile management, login activity monitoring, etc. With `Clerk`, you can get higher security and flexibility, and easily cope with complex user management needs.
+
+Regardless of which user management solution you choose, LobeChat can provide you with an excellent user experience and powerful functional support.
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-pwa]][docs-feat-pwa]
+
+### `11` [Progressive Web App (PWA)][docs-feat-pwa]
+
+We deeply understand the importance of providing a seamless experience for users in today's multi-device environment.
+Therefore, we have adopted Progressive Web Application ([PWA](https://support.google.com/chrome/answer/9658361)) technology,
+a modern web technology that elevates web applications to an experience close to that of native apps.
+
+Through PWA, LobeChat can offer a highly optimized user experience on both desktop and mobile devices while maintaining its lightweight and high-performance characteristics.
+In terms of both look and feel, we have meticulously designed the interface to ensure it is indistinguishable from native apps,
+providing smooth animations, responsive layouts, and adapting to different device screen resolutions.
+
+> \[!NOTE]
+>
+> If you are unfamiliar with the installation process of PWA, you can add LobeChat as your desktop application (also applicable to mobile devices) by following these steps:
+>
+> - Launch the Chrome or Edge browser on your computer.
+> - Visit the LobeChat webpage.
+> - In the upper right corner of the address bar, click on the Install icon.
+> - Follow the instructions on the screen to complete the PWA Installation.
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-mobile]][docs-feat-mobile]
+
+### `12` [Mobile Device Adaptation][docs-feat-mobile]
+
+We have carried out a series of optimization designs for mobile devices to enhance the user's mobile experience. Currently, we are iterating on the mobile user experience to achieve smoother and more intuitive interactions. If you have any suggestions or ideas, we welcome you to provide feedback through GitHub Issues or Pull Requests.
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+[![][image-feat-theme]][docs-feat-theme]
+
+### `13` [Custom Themes][docs-feat-theme]
+
+As a design-engineering-oriented application, LobeChat places great emphasis on users' personalized experiences,
+hence introducing flexible and diverse theme modes, including a light mode for daytime and a dark mode for nighttime.
+Beyond switching theme modes, a range of color customization options allow users to adjust the application's theme colors according to their preferences.
+Whether it's a desire for a sober dark blue, a lively peach pink, or a professional gray-white, users can find their style of color choices in LobeChat.
+
+> \[!TIP]
+>
+> The default configuration can intelligently recognize the user's system color mode and automatically switch themes to ensure a consistent visual experience with the operating system.
+> For users who like to manually control details, LobeChat also offers intuitive setting options and a choice between chat bubble mode and document mode for conversation scenarios.
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+### `*` What's more
+
+Beyond these features, LobeChat is also built on a solid technical foundation:
+
+- [x] 💨 **Quick Deployment**: Using the Vercel platform or Docker image, you can deploy with just one click and complete the process within 1 minute without any complex configuration.
+- [x] 🌐 **Custom Domain**: If users have their own domain, they can bind it to the platform for quick access to the dialogue agent from anywhere.
+- [x] 🔒 **Privacy Protection**: All data is stored locally in the user's browser, ensuring user privacy.
+- [x] 💎 **Exquisite UI Design**: With a carefully designed interface, it offers an elegant appearance and smooth interaction. It supports light and dark themes and is mobile-friendly. PWA support provides a more native-like experience.
+- [x] 🗣️ **Smooth Conversation Experience**: Fluid responses ensure a smooth conversation experience. It fully supports Markdown rendering, including code highlighting, LaTeX formulas, Mermaid flowcharts, and more.
+
+> ✨ More features will be added as LobeChat evolves.
+
+---
+
+> \[!NOTE]
+>
+> You can find our upcoming [Roadmap][github-project-link] plans in the Projects section.
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+## ⚡️ Performance
+
+> \[!NOTE]
+>
+> The complete list of reports can be found in the [📘 Lighthouse Reports][docs-lighthouse]
+
+| Desktop | Mobile |
+| :-----------------------------------------: | :----------------------------------------: |
+| ![][chat-desktop] | ![][chat-mobile] |
+| [📑 Lighthouse Report][chat-desktop-report] | [📑 Lighthouse Report][chat-mobile-report] |
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+## 🛳 Self Hosting
+
+LobeChat provides a self-hosted version via Vercel and the [Docker image][docker-release-link]. This allows you to deploy your own chatbot within a few minutes without any prior knowledge.
+
+> \[!TIP]
+>
+> Learn more about [📘 Build your own LobeChat][docs-self-hosting] by checking it out.
+
+### `A` Deploying with Vercel, Zeabur or Sealos
+
+If you want to deploy this service yourself on either Vercel or Zeabur, you can follow these steps:
+
+- Prepare your [OpenAI API Key](https://platform.openai.com/account/api-keys).
+- Click the button below to start deployment: log in directly with your GitHub account, and remember to fill in `OPENAI_API_KEY` (required) and `ACCESS_CODE` (recommended) in the environment variables section.
+- After deployment, you can start using it.
+- Bind a custom domain (optional): the DNS of the domain assigned by Vercel is polluted in some regions; binding a custom domain allows a direct connection.
+
+
+
+| Deploy with Vercel | Deploy with Zeabur | Deploy with Sealos | Deploy with RepoCloud |
+| :-------------------------------------: | :---------------------------------------------------------: | :---------------------------------------------------------: | :---------------------------------------------------------------: |
+| [![][deploy-button-image]][deploy-link] | [![][deploy-on-zeabur-button-image]][deploy-on-zeabur-link] | [![][deploy-on-sealos-button-image]][deploy-on-sealos-link] | [![][deploy-on-repocloud-button-image]][deploy-on-repocloud-link] |
+
+
+
+#### After Fork
+
+After forking, retain only the upstream sync action and disable all other actions in your repository on GitHub.
+
+#### Keep Updated
+
+If you have deployed your own project following the one-click deployment steps in the README, you might encounter constant prompts indicating "updates available." This is because Vercel defaults to creating a new project instead of forking this one, resulting in an inability to detect updates accurately.
+
+> \[!TIP]
+>
+> We suggest you redeploy using the following steps, [📘 Auto Sync With Latest][docs-upstream-sync]
+
+
+
+### `B` Deploying with Docker
+
+[![][docker-release-shield]][docker-release-link]
+[![][docker-size-shield]][docker-size-link]
+[![][docker-pulls-shield]][docker-pulls-link]
+
+We provide a Docker image for deploying the LobeChat service on your own private device. Use the following command to start the LobeChat service:
+
+```fish
+$ docker run -d -p 3210:3210 \
+ -e OPENAI_API_KEY=sk-xxxx \
+ -e ACCESS_CODE=lobe66 \
+ --name lobe-chat \
+ lobehub/lobe-chat
+```
+
+> \[!TIP]
+>
+> If you need to use the OpenAI service through a proxy, you can configure the proxy address using the `OPENAI_PROXY_URL` environment variable:
+
+```fish
+$ docker run -d -p 3210:3210 \
+ -e OPENAI_API_KEY=sk-xxxx \
+ -e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
+ -e ACCESS_CODE=lobe66 \
+ --name lobe-chat \
+ lobehub/lobe-chat
+```
+
+> \[!NOTE]
+>
+> For detailed instructions on deploying with Docker, please refer to the [📘 Docker Deployment Guide][docs-docker]
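+Once the container is running, a quick sanity check (assuming the default port mapping from the commands above) is to confirm the container status and that the web UI answers over HTTP:
+
+```fish
+$ docker ps --filter name=lobe-chat --format '{{.Status}}'
+$ curl -fsS -o /dev/null -w '%{http_code}\n' http://localhost:3210
+```
+
+A `200` status code indicates the service is up; otherwise, `docker logs lobe-chat` is the first place to look.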
+
+
+
+### Environment Variable
+
+This project provides some additional configuration items set with environment variables:
+
+| Environment Variable | Required | Description | Example |
+| -------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- |
+| `OPENAI_API_KEY` | Yes | This is the API key you apply for on the OpenAI account page | `sk-xxxxxx...xxxxxx` |
+| `OPENAI_PROXY_URL` | No | If you manually configure the OpenAI interface proxy, you can use this configuration item to override the default OpenAI API request base URL | `https://api.chatanywhere.cn` or `https://aihubmix.com/v1` The default value is `https://api.openai.com/v1` |
+| `ACCESS_CODE` | No | Add a password to access this service; set a long password to avoid it being guessed. If this value contains commas, it is treated as an array of passwords. | `awCTe)re_r74` or `rtrt_ewee3@09!` or `code1,code2,code3` |
+| `OPENAI_MODEL_LIST` | No | Used to control the model list. Use `+` to add a model, `-` to hide a model, and `model_name=display_name` to customize the display name of a model, separated by commas. | `qwen-7b-chat,+glm-6b,-gpt-3.5-turbo` |
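+As a sketch, several of the variables above can be combined in a single `docker run` invocation. All values below are placeholders, and the `OPENAI_MODEL_LIST` entry illustrates the `+` / `-` / `=` syntax described in the table:
+
+```fish
+$ docker run -d -p 3210:3210 \
+  -e OPENAI_API_KEY=sk-xxxx \
+  -e ACCESS_CODE=code1,code2,code3 \
+  -e OPENAI_MODEL_LIST='-gpt-3.5-turbo,+glm-4=ChatGLM 4' \
+  --name lobe-chat \
+  lobehub/lobe-chat
+```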
+
+> \[!NOTE]
+>
+> The complete list of environment variables can be found in the [📘 Environment Variables][docs-env-var]
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+## 📦 Ecosystem
+
+| NPM | Repository | Description | Version |
+| --------------------------------- | --------------------------------------- | ----------------------------------------------------------------------------------------------------- | ----------------------------------------- |
+| [@lobehub/ui][lobe-ui-link] | [lobehub/lobe-ui][lobe-ui-github] | Open-source UI component library dedicated to building AIGC web applications. | [![][lobe-ui-shield]][lobe-ui-link] |
+| [@lobehub/icons][lobe-icons-link] | [lobehub/lobe-icons][lobe-icons-github] | Popular AI / LLM Model Brand SVG Logo and Icon Collection. | [![][lobe-icons-shield]][lobe-icons-link] |
+| [@lobehub/tts][lobe-tts-link] | [lobehub/lobe-tts][lobe-tts-github] | High-quality & reliable TTS/STT React Hooks library | [![][lobe-tts-shield]][lobe-tts-link] |
+| [@lobehub/lint][lobe-lint-link] | [lobehub/lobe-lint][lobe-lint-github] | Configurations for ESlint, Stylelint, Commitlint, Prettier, Remark, and Semantic Release for LobeHub. | [![][lobe-lint-shield]][lobe-lint-link] |
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+## 🧩 Plugins
+
+Plugins provide a means to extend the [Function Calling][docs-functionc-call] capabilities of LobeChat. They can be used to introduce new function calls and even new ways to render message results. If you are interested in plugin development, please refer to our [📘 Plugin Development Guide][docs-plugin-dev] in the Wiki.
+
+- [lobe-chat-plugins][lobe-chat-plugins]: This is the plugin index for LobeChat. It accesses index.json from this repository to display a list of available plugins for LobeChat to the user.
+- [chat-plugin-template][chat-plugin-template]: This is the plugin template for LobeChat plugin development.
+- [@lobehub/chat-plugin-sdk][chat-plugin-sdk]: The LobeChat Plugin SDK assists you in creating exceptional chat plugins for Lobe Chat.
+- [@lobehub/chat-plugins-gateway][chat-plugins-gateway]: The LobeChat Plugins Gateway is a backend service that provides a gateway for LobeChat plugins. We deploy this service using Vercel. The primary API, `POST /api/v1/runner`, is deployed as an Edge Function.
+
+> \[!NOTE]
+>
+> The plugin system is currently undergoing major development. You can learn more in the following issues:
+>
+> - [x] [**Plugin Phase 1**](https://github.com/lobehub/lobe-chat/issues/73): Implement separation of the plugin from the main body, split the plugin into an independent repository for maintenance, and realize dynamic loading of the plugin.
+> - [x] [**Plugin Phase 2**](https://github.com/lobehub/lobe-chat/issues/97): Security and stability in plugin usage, more accurate presentation of abnormal states, maintainability of the plugin architecture, and developer friendliness.
+> - [x] [**Plugin Phase 3**](https://github.com/lobehub/lobe-chat/issues/149): Higher-level and more comprehensive customization capabilities, support for plugin authentication, and examples.
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+## ⌨️ Local Development
+
+You can use GitHub Codespaces for online development:
+
+[![][codespaces-shield]][codespaces-link]
+
+Or clone it for local development:
+
+```fish
+$ git clone https://github.com/lobehub/lobe-chat.git
+$ cd lobe-chat
+$ pnpm install
+$ pnpm dev
+```
+
+If you would like to learn more details, please feel free to look at our [📘 Development Guide][docs-dev-guide].
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+## 🤝 Contributing
+
+Contributions of all types are more than welcome; if you are interested in contributing code, feel free to check out our GitHub [Issues][github-issues-link] and [Projects][github-project-link] and show us what you're made of.
+
+> \[!TIP]
+>
+> We are creating a technology-driven forum, fostering knowledge interaction and the exchange of ideas that may culminate in mutual inspiration and collaborative innovation.
+>
+> Help us make LobeChat better. Feel free to share product design feedback and user experience discussions with us directly.
+>
+> **Principal Maintainers:** [@arvinxx](https://github.com/arvinxx) [@canisminor1990](https://github.com/canisminor1990)
+
+[![][pr-welcome-shield]][pr-welcome-link]
+[![][submit-agents-shield]][submit-agents-link]
+[![][submit-plugin-shield]][submit-plugin-link]
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+## ❤️ Sponsor
+
+Every bit counts and your one-time donation sparkles in our galaxy of support! You're a shooting star, making a swift and bright impact on our journey. Thank you for believing in us – your generosity guides us toward our mission, one brilliant flash at a time.
+
+
+
+
+
+
+
+
+
+
+[![][back-to-top]](#readme-top)
+
+
+
+## 🔗 More Products
+
+- **[🅰️ Lobe SD Theme][lobe-theme]:** Modern theme for Stable Diffusion WebUI, exquisite interface design, highly customizable UI, and efficiency-boosting features.
+- **[⛵️ Lobe Midjourney WebUI][lobe-midjourney-webui]:** WebUI for Midjourney, leverages AI to quickly generate a wide array of rich and diverse images from text prompts, sparking creativity and enhancing conversations.
+- **[🌏 Lobe i18n][lobe-i18n] :** Lobe i18n is an automation tool for the i18n (internationalization) translation process, powered by ChatGPT. It supports features such as automatic splitting of large files, incremental updates, and customization options for the OpenAI model, API proxy, and temperature.
+- **[💌 Lobe Commit][lobe-commit]:** Lobe Commit is a CLI tool that leverages Langchain/ChatGPT to generate Gitmoji-based commit messages.
+
+
+
+LobeChat is an open-source, extensible ([Function Calling][fc-url]), high-performance chatbot framework. It supports one-click free deployment of your private ChatGPT/LLM web application.
+
+[Usage Documents](https://lobehub.com/docs) | [使用指南](https://lobehub.com/docs)
+
+
+
+We provide a [Docker image][docker-release-link] for deploying the LobeChat service on your private device.
+
+
+
+### Install Docker Container Environment
+
+(Skip this step if already installed)
+
+
+
+
+
+On Ubuntu / Debian:
+
+```fish
+$ apt install docker.io
+```
+
+
+
+
+
+On CentOS / RHEL:
+
+```fish
+$ yum install docker
+```
+
+
+
+
+
+### Run Docker Compose Deployment Command
+
+When using `docker-compose`, the configuration file is as follows:
+
+```yml
+version: '3.8'
+
+services:
+ lobe-chat:
+ image: lobehub/lobe-chat
+ container_name: lobe-chat
+ restart: always
+ ports:
+ - '3210:3210'
+ environment:
+ OPENAI_API_KEY: sk-xxxx
+ OPENAI_PROXY_URL: https://api-proxy.com/v1
+ ACCESS_CODE: lobe66
+```
+
+Run the following command to start the Lobe Chat service:
+
+```bash
+$ docker-compose up -d
+```
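+To confirm the stack came up cleanly (service name taken from the compose file above):
+
+```bash
+$ docker-compose ps
+$ docker-compose logs -f lobe-chat
+```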
+
+### Crontab Automatic Update Script (Optional)
+
+Similarly, you can use the following script to automatically update Lobe Chat. When using `Docker Compose`, no additional configuration of environment variables is required.
+
+```bash
+#!/bin/bash
+# auto-update-lobe-chat.sh
+
+# Set proxy (optional)
+export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
+
+# Pull the latest image and store the output in a variable
+output=$(docker pull lobehub/lobe-chat:latest 2>&1)
+
+# Check if the pull command was executed successfully
+if [ $? -ne 0 ]; then
+ exit 1
+fi
+
+# Check if the output contains a specific string
+echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"
+
+# If the image is already up to date, do nothing
+if [ $? -eq 0 ]; then
+ exit 0
+fi
+
+echo "Detected Lobe-Chat update"
+
+# Remove the old container
+echo "Removed: $(docker rm -f Lobe-Chat)"
+
+# You may need to navigate to the directory where `docker-compose.yml` is located first
+# cd /path/to/docker-compose-folder
+
+# Run the new container
+echo "Started: $(docker-compose up)"
+
+# Print the update time and version
+echo "Update time: $(date)"
+echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"
+
+# Clean up unused images
+docker images | grep 'lobehub/lobe-chat' | grep -v 'lobehub/lobe-chat-database' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
+echo "Removed old images."
+```
+
+This script can also be used in Crontab, but ensure that your Crontab can find the correct Docker command. It is recommended to use absolute paths.
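Cron jobs run with a minimal `PATH`, so a quick way to find the absolute paths to hard-code into the script is shown below. This is a sketch, not part of the official docs; the `/usr/bin` fallbacks are common defaults, not guaranteed on every system.

```shell
# Resolve the absolute paths of the binaries the update script calls,
# so the cron entry does not depend on the cron daemon's minimal PATH.
# Fall back to a common default when a binary is not on the current PATH.
for bin in docker docker-compose grep awk xargs; do
  path="$(command -v "$bin" || echo "/usr/bin/$bin")"
  echo "$bin -> $path"
done
```

If your cron environment cannot find these commands, substitute the printed absolute paths into `auto-update-lobe-chat.sh`.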
+
+Configure Crontab to execute the script every 5 minutes:
+
+```bash
+*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
+```
+
+
+
+[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat
+[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat?color=45cc11&labelColor=black&style=flat-square
+[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat
+[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
+[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat
+[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat?color=369eff&labelColor=black&style=flat-square
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/docker-compose.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/platform/docker-compose.zh-CN.mdx
new file mode 100644
index 0000000..ac28171
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/docker-compose.zh-CN.mdx
@@ -0,0 +1,133 @@
+---
+title: Deploy LobeChat with Docker Compose
+description: Learn how to deploy the LobeChat service with Docker Compose, including installing the Docker container environment and setting up the automatic update script.
+tags:
+ - Docker Compose
+ - LobeChat
+ - Docker Container
+ - Automatic Update Script
+ - Deployment Guide
+---
+
+# Docker Compose Deployment Guide
+
+
+
+We provide a [Docker image][docker-release-link] for you to deploy the LobeChat service on your private device.
+
+
+ ### Install Docker Container Environment
+
+(If already installed, skip this step)
+
+
+
+ ```fish
+ $ apt install docker.io
+ ```
+
+
+
+
+ ```fish
+ $ yum install docker
+ ```
+
+
+
+
+
+### Docker Command Deployment
+
+Use the following command to start the LobeChat service with one click:
+
+```fish
+$ docker run -d -p 3210:3210 \
+ -e OPENAI_API_KEY=sk-xxxx \
+ -e ACCESS_CODE=lobe66 \
+ --name lobe-chat \
+ lobehub/lobe-chat
+```
+
+Command explanation:
+
+- The default port mapping is `3210`, please ensure it is not occupied or manually change the port mapping.
+
+- Replace `sk-xxxx` in the above command with your OpenAI API Key.
+
+- For the complete list of environment variables supported by LobeChat, please refer to the [Environment Variables](/docs/self-hosting/environment-variables) section.
+
+
+ Since the official Docker image build takes about half an hour, if you see the "update available"
+ prompt after deployment, you can wait for the image to finish building before deploying again.
+
+
+
+ The official Docker image does not have a password set. It is strongly recommended to add a
+ password to enhance security, otherwise you may encounter situations like [My API Key was
+ stolen!!!](https://github.com/lobehub/lobe-chat/issues/1123).
+
+
+
+ Note that when the **deployment architecture is inconsistent with the image**, you need to
+ cross-compile **Sharp**, see [Sharp
+ Cross-Compilation](https://sharp.pixelplumbing.com/install#cross-platform) for details.
+
+
+#### Using a Proxy Address
+
+If you need to use the OpenAI service through a proxy, you can configure the proxy address using the `OPENAI_PROXY_URL` environment variable:
+
+```fish
+$ docker run -d -p 3210:3210 \
+ -e OPENAI_API_KEY=sk-xxxx \
+ -e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
+ -e ACCESS_CODE=lobe66 \
+ --name lobe-chat \
+ lobehub/lobe-chat
+```
+
+### Crontab Automatic Update Script (Optional)
+
+If you want to automatically obtain the latest image, you can follow these steps.
+
+First, create a `lobe.env` configuration file with various environment variables, for example:
+
+```env
+OPENAI_API_KEY=sk-xxxx
+OPENAI_PROXY_URL=https://api-proxy.com/v1
+ACCESS_CODE=arthals2333
+OPENAI_MODEL_LIST=-gpt-4,-gpt-4-32k,-gpt-3.5-turbo-16k,gpt-3.5-turbo-1106=gpt-3.5-turbo-16k,gpt-4-0125-preview=gpt-4-turbo,gpt-4-vision-preview=gpt-4-vision
+```
+
+Then, you can use the following script to automate the update:
+
+```bash
+#!/bin/bash
+# auto-update-lobe-chat.sh
+
+# Set up proxy (optional)
+export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
+
+# Pull the latest image and store the output in a variable
+output=$(docker pull lobehub/lobe-chat:latest 2>&1)
+
+# Check if the pull command was executed successfully
+if [ $? -ne 0 ]; then
+ exit 1
+fi
+
+# Check if the output contains a specific string
+echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"
+
+# If the image is already up to date, do nothing
+if [ $? -eq 0 ]; then
+ exit 0
+fi
+
+echo "Detected Lobe-Chat update"
+
+# Remove the old container
+echo "Removed: $(docker rm -f Lobe-Chat)"
+
+# Run the new container
+echo "Started: $(docker run -d --network=host --env-file /path/to/lobe.env --name=Lobe-Chat --restart=always lobehub/lobe-chat)"
+
+# Print the update time and version
+echo "Update time: $(date)"
+echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"
+
+# Clean up unused images
+docker images | grep 'lobehub/lobe-chat' | grep -v 'lobehub/lobe-chat-database' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
+echo "Removed old images."
+```
+
+This script can be used in Crontab, but please ensure that your Crontab can find the correct Docker command. It is recommended to use absolute paths.
+
+Configure Crontab to execute the script every 5 minutes:
+
+```bash
+*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
+```
+
+
+
+[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat
+[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat?color=45cc11&labelColor=black&style=flat-square
+[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat
+[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
+[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat
+[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat?color=369eff&labelColor=black&style=flat-square
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/docker.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/platform/docker.zh-CN.mdx
new file mode 100644
index 0000000..4c6e3f0
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/docker.zh-CN.mdx
@@ -0,0 +1,227 @@
+---
+title: Deploy LobeChat with Docker
+description: Learn how to deploy the LobeChat service with Docker, including installing the Docker container environment and starting the service with a single command. It also explains how to configure environment variables and use a proxy address.
+tags:
+ - Docker
+ - LobeChat
+ - Deployment Guide
+ - Environment Variables
+ - Proxy Address
+ - Automatic Update Script
+---
+
+# Docker Deployment Guide
+
+
+
+We provide a [Docker image][docker-release-link] for deploying the LobeChat service on your private device.
+
+## Deployment Guide
+
+
+ ### Install Docker Container Environment
+
+(If already installed, skip this step)
+
+
+
+ ```fish
+ $ apt install docker.io
+ ```
+
+
+
+
+ ```fish
+ $ yum install docker
+ ```
+
+
+
+
+
+### Docker Command Deployment
+
+Use the following command to start the LobeChat service with one click:
+
+```fish
+$ docker run -d -p 3210:3210 \
+ -e OPENAI_API_KEY=sk-xxxx \
+ -e ACCESS_CODE=lobe66 \
+ --name lobe-chat \
+ lobehub/lobe-chat
+```
+
+Command explanation:
+
+- The default port mapping is `3210`; make sure it is not occupied, or change the port mapping manually.
+- Replace `sk-xxxx` in the above command with your OpenAI API Key; see the last section for how to obtain one.
+
+
+ For the complete list of environment variables supported by LobeChat, please refer to the [📘 Environment Variables](/zh/docs/self-hosting/environment-variables) section.
+
+
+
+ Since the official Docker image takes about half an hour to build, if you see an "update
+ available" prompt after deployment, wait for the image build to finish and then deploy again.
+
+
+
+ The official Docker image does not have a password set. It is strongly recommended to add a
+ password to enhance security; otherwise you may encounter situations like [My API Key was
+ stolen!!!](https://github.com/lobehub/lobe-chat/issues/1123).
+
+
+
+ Note that when the **deployment architecture is inconsistent with the image**, you need to
+ cross-compile **Sharp**; see [Sharp
+ Cross-Compilation](https://sharp.pixelplumbing.com/install#cross-platform) for details.
+
+
+#### Using a Proxy Address
+
+If you need to use the OpenAI service through a proxy, you can configure the proxy address with the `OPENAI_PROXY_URL` environment variable:
+
+```fish
+$ docker run -d -p 3210:3210 \
+ -e OPENAI_API_KEY=sk-xxxx \
+ -e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
+ -e ACCESS_CODE=lobe66 \
+ --name lobe-chat \
+ lobehub/lobe-chat
+```
+
+### Crontab Automatic Update Script (Optional)
+
+If you want to automatically obtain the latest image, you can follow these steps.
+
+First, create a `lobe.env` configuration file containing your environment variables, for example:
+
+```env
+OPENAI_API_KEY=sk-xxxx
+OPENAI_PROXY_URL=https://api-proxy.com/v1
+ACCESS_CODE=arthals2333
+OPENAI_MODEL_LIST=-gpt-4,-gpt-4-32k,-gpt-3.5-turbo-16k,gpt-3.5-turbo-1106=gpt-3.5-turbo-16k,gpt-4-0125-preview=gpt-4-turbo,gpt-4-vision-preview=gpt-4-vision
+```
+
+Then, you can use the following script to automate the update:
+
+```bash
+#!/bin/bash
+# auto-update-lobe-chat.sh
+
+# Set up proxy (optional)
+export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
+
+# Pull the latest image and store the output in a variable
+output=$(docker pull lobehub/lobe-chat:latest 2>&1)
+
+# Check if the pull command was executed successfully
+if [ $? -ne 0 ]; then
+ exit 1
+fi
+
+# Check if the output contains a specific string
+echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"
+
+# If the image is already up to date, do nothing
+if [ $? -eq 0 ]; then
+ exit 0
+fi
+
+echo "Detected Lobe-Chat update"
+
+# Remove the old container
+echo "Removed: $(docker rm -f Lobe-Chat)"
+
+# Run the new container
+echo "Started: $(docker run -d --network=host --env-file /path/to/lobe.env --name=Lobe-Chat --restart=always lobehub/lobe-chat)"
+
+# Print the update time and version
+echo "Update time: $(date)"
+echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"
+
+# Clean up unused images
+docker images | grep 'lobehub/lobe-chat' | grep -v 'lobehub/lobe-chat-database' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
+echo "Removed old images."
+```
+
+This script can be used in Crontab, but please make sure your Crontab can find the correct Docker command; using absolute paths is recommended.
+
+Configure Crontab to execute the script every 5 minutes:
+
+```bash
+*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
+```
+
+
+
+## Obtaining an OpenAI API Key
+
+An API Key is required for large language model conversations in LobeChat. This section uses the OpenAI model provider as an example to briefly describe how to obtain one.
+
+### `A` Through the Official OpenAI Channel
+
+- Register an [OpenAI account](https://platform.openai.com/signup); you will need an international phone number and a non-mainland-China email address.
+- After registering, go to the [API Keys](https://platform.openai.com/api-keys) page and click `Create new secret key` to create a new API Key:
+
+
+
+#### Step 1: Open the Creation Window
+
+
+
+#### Step 2: Create the API Key
+
+
+
+#### Step 3: Obtain the API Key
+
+
+
+
+
+Enter this API Key in LobeChat's API Key configuration to start using it.
+
+
+ After registering, an account generally has a free quota of USD 5, valid for only three months. If
+ you want to use your API Key long-term, you need to bind a credit card for payment. Since OpenAI
+ only supports foreign-currency credit cards, you will need to find a suitable payment channel; we
+ will not go into detail here.
+
+
+### `B` Through a Third-Party OpenAI Proxy Provider
+
+If you find registering an OpenAI account or binding a foreign-currency credit card troublesome, you can consider using a well-known third-party OpenAI proxy provider to obtain an API Key, which effectively lowers the barrier to entry. At the same time, once you use a third-party service, you may also bear potential risks, so please decide based on your own situation. Below is a list of common third-party model proxy providers for your reference:
+
+| Logo | Provider | Notes | Proxy Address | Link |
+| --- | --- | --- | --- | --- |
+| | **AiHubMix** | Uses the OpenAI enterprise API; all models on the site are priced at **86% of the official rate** (including GPT-4, Claude 3.5, etc.) | `https://aihubmix.com/v1` | [Get](https://lobe.li/CnsM6fH) |
+
+
+ **Disclaimer**: The OpenAI API Keys recommended here are provided by third-party proxy providers,
+ so we are not responsible for the **validity** or **security** of the keys; please bear the risk
+ of purchasing and using an API Key yourself.
+
+
+[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat
+[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat?color=45cc11&labelColor=black&style=flat-square
+[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat
+[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
+[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat
+[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat?color=369eff&labelColor=black&style=flat-square
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/netlify.mdx b/DigitalHumanWeb/docs/self-hosting/platform/netlify.mdx
new file mode 100644
index 0000000..f20084b
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/netlify.mdx
@@ -0,0 +1,154 @@
+---
+title: Deploy LobeChat with Netlify - Step-by-Step Guide
+description: >-
+ Learn how to deploy LobeChat on Netlify with detailed instructions on forking
+ the repository, preparing your OpenAI API Key, importing to Netlify workspace,
+ configuring site name and environment variables, and monitoring deployment
+ progress.
+tags:
+ - Deploy LobeChat
+ - Netlify Deployment
+ - OpenAI API Key
+ - Environment Variables
+ - Custom Domain Setup
+---
+
+# Deploy LobeChat with Netlify
+
+If you want to deploy LobeChat on Netlify, you can follow these steps:
+
+## Deploy LobeChat with Netlify
+
+
+ ### Fork the LobeChat Repository
+
+Click the Fork button to fork the LobeChat repository to your GitHub account.
+
+### Prepare your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to obtain your OpenAI API Key.
+
+### Import to Netlify Workspace
+
+
+ After testing, a one-click deployment button similar to Vercel/Zeabur is currently not
+ supported, for unknown reasons, so a manual import is required.
+
+
+Click "Import from git"
+
+
+
+Then click "Deploy with Github" and authorize Netlify to access your GitHub account.
+
+
+
+Next, select the LobeChat project:
+
+
+
+### Configure Site Name and Environment Variables
+
+In this step, you need to configure your site, including the site name, build command, and publish directory. Fill in your site name in the "Site Name" field. If there are no special requirements, you do not need to modify the remaining configurations as we have already set the default configurations.
+
+
+
+Click the "Add environment variables" button to add site environment variables:
+
+
+
+Taking OpenAI as an example, the environment variables you need to add are as follows:
+
+| Environment Variable | Type | Description | Example |
+| --- | --- | --- | --- |
+| `OPENAI_API_KEY` | Required | This is the API key you applied for on the OpenAI account page | `sk-xxxxxx...xxxxxx` |
+| `ACCESS_CODE` | Required | Add a password to access this service. You can set a long password to prevent brute force attacks. When this value is separated by commas, it becomes an array of passwords | `awCT74` or `e3@09!` or `code1,code2,code3` |
+| `OPENAI_PROXY_URL` | Optional | If you manually configure the OpenAI interface proxy, you can use this configuration to override the default OpenAI API request base URL | `https://aihubmix.com/v1`, default value: `https://api.openai.com/v1` |
+
+
+ For a complete list of environment variables supported by LobeChat, please refer to the [📘
+ Environment Variables](/docs/self-hosting/environment-variables)
+
+
+After adding the variables, click "Deploy lobe-chat" to enter the deployment phase.
+
+
+
+### Wait for Deployment to Complete
+
+After clicking deploy, you will land on the site details page, where you can click the blue "Deploying your site" text or the yellow "Building" tag to view the deployment progress.
+
+
+
+Upon entering the deployment details, you will see the following interface, indicating that your LobeChat is currently being deployed. Simply wait for the deployment to complete.
+
+
+
+During the deployment and build process:
+
+
+
+### Deployment Successful, Start Using
+
+If your Deploy Log in the interface looks like the following, it means your LobeChat has been successfully deployed.
+
+
+
+At this point, you can click on "Open production deploy" to access your LobeChat site.
+
+
+
+
+## Set up Custom Domain (Optional)
+
+You can use the subdomain provided by Netlify, or choose to bind a custom domain. Currently, the domain provided by Netlify has not been contaminated, and can be accessed directly in most regions.
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/netlify.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/platform/netlify.zh-CN.mdx
new file mode 100644
index 0000000..60db352
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/netlify.zh-CN.mdx
@@ -0,0 +1,148 @@
+---
+title: Deploy LobeChat on Netlify
+description: >-
+  Learn how to deploy LobeChat on Netlify, including forking the repository,
+  preparing your OpenAI API Key, importing into the Netlify workspace, and
+  configuring the site name and environment variables.
+tags:
+ - Netlify
+ - LobeChat
+ - Deployment Tutorial
+ - OpenAI API Key
+ - Environment Configuration
+---
+
+# Deploy with Netlify
+
+If you want to deploy LobeChat on Netlify, you can follow these steps:
+
+## Deploy LobeChat on Netlify
+
+
+ ### Fork the LobeChat Repository
+
+Click the Fork button to fork the LobeChat repository to your GitHub account.
+
+### Prepare Your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to obtain your OpenAI API Key.
+
+### Import in the Netlify Workspace
+
+After testing, a one-click deployment button similar to Vercel/Zeabur is currently not supported, for unknown reasons, so a manual import is required.
+
+Click "Import from git".
+
+
+
+Then click "Deploy with Github" and authorize Netlify to access your GitHub account.
+
+
+
+Then select the LobeChat project:
+
+
+
+### Configure the Site Name and Environment Variables
+
+In this step you need to configure your site, including the site name, build command, and publish directory. Fill in your site name in the "Site Name" field. Unless you have special requirements, you do not need to modify the remaining settings; the defaults are already in place.
+
+
+
+Click the "Add environment variables" button to add site environment variables:
+
+
+
+Taking OpenAI as an example, the environment variables you need to add are as follows:
+
+| Environment Variable | Type | Description | Example |
+| --- | --- | --- | --- |
+| `OPENAI_API_KEY` | Required | The API key you applied for on the OpenAI account page | `sk-xxxxxx...xxxxxx` |
+| `ACCESS_CODE` | Required | A password for accessing this service; set a long one to prevent brute-force attacks. When the value contains commas, it becomes an array of passwords | `awCT74`, `e3@09!`, or `code1,code2,code3` |
+| `OPENAI_PROXY_URL` | Optional | If you manually configured an OpenAI API proxy, this option overrides the default OpenAI API request base URL | `https://aihubmix.com/v1`, default: `https://api.openai.com/v1` |
+
+
+ For the complete list of environment variables supported by LobeChat, please refer to the [📘 Environment Variables](/zh/docs/self-hosting/environment-variables) section.
+
+
+After adding the variables, click "Deploy lobe-chat" to enter the deployment phase.
+
+
+
+### Wait for the Deployment to Complete
+
+After clicking deploy, you will land on the site details page, where you can click the blue "Deploying your site" text or the yellow "Building" tag to view the deployment progress.
+
+
+
+Once in the deployment details, you will see the following interface, which means your LobeChat is being deployed; just wait for the deployment to complete.
+
+
+
+During the deployment and build process:
+
+
+
+### Deployment Successful, Start Using
+
+If the Deploy Log in your interface looks like the following, your LobeChat has been deployed successfully.
+
+
+
+At this point, you can click "Open production deploy" to access your LobeChat site.
+
+
+
+
+## Bind a Custom Domain (Optional)
+
+You can use the subdomain provided by Netlify, or choose to bind a custom domain. Currently, the domains provided by Netlify have not been polluted, and most regions can connect directly.
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/railway.mdx b/DigitalHumanWeb/docs/self-hosting/platform/railway.mdx
new file mode 100644
index 0000000..f312329
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/railway.mdx
@@ -0,0 +1,35 @@
+---
+title: Deploy LobeChat with Railway
+description: >-
+ Learn how to deploy LobeChat on Railway and follow the step-by-step process.
+ Get your OpenAI API Key, deploy with a click, and start using it. Optionally,
+ bind a custom domain for your deployment.
+tags:
+ - Deploy LobeChat
+ - Railway Deployment
+ - OpenAI API Key
+ - Custom Domain Binding
+---
+
+# Deploy LobeChat with Railway
+
+If you want to deploy LobeChat on Railway, you can follow the steps below:
+
+## Railway Deployment Process
+
+
+ ### Prepare your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to get your OpenAI API Key.
+
+### Click the button below to deploy
+
+[](https://railway.app/template/FB6HrV?referralCode=9bD9mT)
+
+### Once deployed, you can start using it
+
+### Bind a custom domain (optional)
+
+You can use the subdomain provided by Railway, or choose to bind a custom domain. Currently, the domains provided by Railway have not been contaminated, and most regions can connect directly.
+
+
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/railway.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/platform/railway.zh-CN.mdx
new file mode 100644
index 0000000..57fb29a
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/railway.zh-CN.mdx
@@ -0,0 +1,34 @@
+---
+title: Deploy LobeChat on Railway
+description: Learn how to deploy the LobeChat app on Railway, including preparing your OpenAI API Key, clicking the deploy button, and binding a custom domain.
+tags:
+ - Railway
+ - Deployment
+ - LobeChat
+ - OpenAI
+ - API Key
+ - Custom Domain
+---
+
+# Deploy with Railway
+
+If you want to deploy LobeChat on Railway, you can follow these steps:
+
+## Railway Deployment Process
+
+
+ ### Prepare Your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to obtain your OpenAI API Key.
+
+### Click the Button Below to Deploy
+
+[](https://railway.app/template/FB6HrV?referralCode=9bD9mT)
+
+### Once Deployed, You Can Start Using It
+
+### Bind a Custom Domain (Optional)
+
+You can use the subdomain provided by Railway, or choose to bind a custom domain. Currently, the domains provided by Railway have not been polluted, and most regions can connect directly.
+
+
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/repocloud.mdx b/DigitalHumanWeb/docs/self-hosting/platform/repocloud.mdx
new file mode 100644
index 0000000..2b07870
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/repocloud.mdx
@@ -0,0 +1,38 @@
+---
+title: Deploy LobeChat on RepoCloud
+description: >-
+ Learn how to deploy LobeChat on RepoCloud with ease. Follow these steps to
+ prepare your OpenAI API Key, deploy the application, and start using it.
+ Optional: Bind a custom domain for a personalized touch.
+tags:
+ - Deploy LobeChat
+ - RepoCloud Deployment
+ - OpenAI API Key
+ - Custom Domain Binding
+---
+
+# Deploy LobeChat with RepoCloud
+
+If you want to deploy LobeChat on RepoCloud, you can follow the steps below:
+
+## RepoCloud Deployment Process
+
+
+ ### Prepare your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to get your OpenAI API Key.
+
+### One-click to deploy
+
+[![][deploy-button-image]][deploy-link]
+
+### Once deployed, you can start using it
+
+### Bind a custom domain (optional)
+
+You can use the subdomain provided by RepoCloud, or choose to bind a custom domain. Currently, the domains provided by RepoCloud have not been contaminated, and most regions can connect directly.
+
+
+
+[deploy-button-image]: https://d16t0pc4846x52.cloudfront.net/deploy.svg
+[deploy-link]: https://repocloud.io/details/?app_id=248
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/repocloud.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/platform/repocloud.zh-CN.mdx
new file mode 100644
index 0000000..0bdaabc
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/repocloud.zh-CN.mdx
@@ -0,0 +1,36 @@
+---
+title: Deploy LobeChat on RepoCloud
+description: Learn how to deploy the LobeChat app on RepoCloud, including preparing your OpenAI API Key, clicking the deploy button, and binding a custom domain.
+tags:
+ - RepoCloud
+ - LobeChat
+ - Deployment Process
+ - OpenAI API Key
+ - Custom Domain
+---
+
+# Deploy with RepoCloud
+
+If you want to deploy LobeChat on RepoCloud, you can follow these steps:
+
+## RepoCloud Deployment Process
+
+
+ ### Prepare Your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to obtain your OpenAI API Key.
+
+### Click the Button Below to Deploy
+
+[![][deploy-button-image]][deploy-link]
+
+### Once Deployed, You Can Start Using It
+
+### Bind a Custom Domain (Optional)
+
+You can use the subdomain provided by RepoCloud, or choose to bind a custom domain. Currently, the domains provided by RepoCloud have not been polluted, and most regions can connect directly.
+
+
+
+[deploy-button-image]: https://d16t0pc4846x52.cloudfront.net/deploy.svg
+[deploy-link]: https://repocloud.io/details/?app_id=248
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/sealos.mdx b/DigitalHumanWeb/docs/self-hosting/platform/sealos.mdx
new file mode 100644
index 0000000..440510e
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/sealos.mdx
@@ -0,0 +1,37 @@
+---
+title: Deploy LobeChat on SealOS
+description: >-
+ Learn how to deploy LobeChat on SealOS with ease. Follow the provided steps to
+ set up LobeChat and start using it efficiently.
+tags:
+ - Deploy LobeChat
+ - SealOS Deployment
+ - OpenAI API Key
+ - Custom Domain Binding
+---
+
+# Deploy LobeChat with SealOS
+
+If you want to deploy LobeChat on SealOS, you can follow the steps below:
+
+## SealOS Deployment Process
+
+
+ ### Prepare your OpenAI API Key
+
+Go to [OpenAI](https://platform.openai.com/account/api-keys) to get your OpenAI API Key.
+
+### Click the button below to deploy
+
+[![][deploy-button-image]][deploy-link]
+
+### After deployment, you can start using it
+
+### Bind a custom domain (optional)
+
+You can use the subdomain provided by SealOS, or choose to bind a custom domain. Currently, the domains provided by SealOS have not been contaminated, and can be directly accessed in most regions.
+
+
+
+[deploy-button-image]: https://raw.githubusercontent.com/labring-actions/templates/main/Deploy-on-Sealos.svg
+[deploy-link]: https://cloud.sealos.io/?openapp=system-template%3FtemplateName%3Dlobe-chat
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/sealos.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/platform/sealos.zh-CN.mdx
new file mode 100644
index 0000000..b0bf457
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/sealos.zh-CN.mdx
@@ -0,0 +1,36 @@
+---
+title: Deploy LobeChat on SealOS
+description: Learn how to deploy LobeChat on SealOS, including preparing your OpenAI API Key, clicking the deploy button, and binding a custom domain.
+tags:
+ - SealOS
+ - LobeChat
+ - OpenAI API Key
+ - Deployment Process
+ - Custom Domain
+---
+
+# Deploy with SealOS
+
+If you want to deploy LobeChat on SealOS, you can follow these steps:
+
+## SealOS Deployment Process
+
+
+ ### Prepare Your OpenAI API Key
+
+Go to [OpenAI](https://platform.openai.com/account/api-keys) to obtain your OpenAI API Key.
+
+### Click the Button Below to Deploy
+
+[![][deploy-button-image]][deploy-link]
+
+### Once Deployed, You Can Start Using It
+
+### Bind a Custom Domain (Optional)
+
+You can use the subdomain provided by SealOS, or choose to bind a custom domain. Currently, the domains provided by SealOS have not been polluted, and most regions can connect directly.
+
+
+
+[deploy-button-image]: https://raw.githubusercontent.com/labring-actions/templates/main/Deploy-on-Sealos.svg
+[deploy-link]: https://cloud.sealos.io/?openapp=system-template%3FtemplateName%3Dlobe-chat
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/vercel.mdx b/DigitalHumanWeb/docs/self-hosting/platform/vercel.mdx
new file mode 100644
index 0000000..a521cf7
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/vercel.mdx
@@ -0,0 +1,46 @@
+---
+title: Deploy LobeChat with Vercel
+description: >-
+ Learn how to deploy LobeChat on Vercel with ease. Follow the provided steps to
+ prepare your OpenAI API Key, deploy the project, and start using it
+ efficiently.
+tags:
+ - Deploy LobeChat
+ - Vercel Deployment
+ - OpenAI API Key
+ - Custom Domain Binding
+---
+
+# Deploy LobeChat with Vercel
+
+If you want to deploy LobeChat on Vercel, you can follow the steps below:
+
+## Vercel Deployment Process
+
+
+ ### Prepare your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to get your OpenAI API Key.
+
+### Click the button below to deploy
+
+[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Flobehub%2Flobe-chat&env=OPENAI_API_KEY,ACCESS_CODE&envDescription=Find%20your%20OpenAI%20API%20Key%20by%20click%20the%20right%20Learn%20More%20button.%20%7C%20Access%20Code%20can%20protect%20your%20website&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys&project-name=lobe-chat&repository-name=lobe-chat)
+
+Simply log in with your GitHub account, and remember to fill in `OPENAI_API_KEY` (required) and `ACCESS_CODE` (recommended) on the environment variables page.
+
+### After deployment, you can start using it
+
+### Bind a custom domain (optional)
+
+Vercel's assigned domain DNS may be polluted in some regions, so binding a custom domain can establish a direct connection.
+
+
+
+## Automatic Synchronization of Updates
+
+If you have deployed your project using the one-click deployment steps mentioned above, you may find that you are always prompted with "updates available." This is because Vercel creates a new project for you by default instead of forking this project, which causes the inability to accurately detect updates.
+
+
+ We recommend following the [Self-Hosting Upstream Sync](/docs/self-hosting/advanced/upstream-sync)
+ steps to Redeploy.
+
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/vercel.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/platform/vercel.zh-CN.mdx
new file mode 100644
index 0000000..60fc6d0
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/vercel.zh-CN.mdx
@@ -0,0 +1,48 @@
+---
+title: Deploy LobeChat on Vercel
+description: Learn how to deploy LobeChat on Vercel with one click, including preparing your OpenAI API Key, clicking the deploy button, binding a custom domain, and keeping updates in sync.
+tags:
+ - Vercel
+ - Deployment Guide
+ - LobeChat
+ - OpenAI API Key
+ - Custom Domain
+ - Automatic Update Sync
+---
+
+# Vercel Deployment Guide
+
+If you want to deploy LobeChat on Vercel, you can follow these steps:
+
+## Vercel Deployment Process
+
+
+ ### Prepare Your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to obtain your OpenAI API Key.
+
+### Click the Button Below to Deploy
+
+[![][deploy-button-image]][deploy-link]
+
+Simply log in with your GitHub account, and remember to fill in `OPENAI_API_KEY` (required) and `ACCESS_CODE` (recommended) on the environment variables page.
+
+### Once Deployed, You Can Start Using It
+
+### Bind a Custom Domain (Optional)
+
+The DNS of Vercel-assigned domains is polluted in some regions; binding a custom domain allows a direct connection.
+
+
+
+## Automatic Update Sync
+
+If you deployed your project following the one-click steps above, you may find that you are constantly prompted that "updates are available". This is because Vercel creates a new project for you by default instead of forking this repository, which prevents updates from being detected accurately.
+
+
+ We recommend redeploying by following the [📘 Upstream Sync](/zh/docs/self-hosting/advanced/upstream-sync)
+ steps.
+
+
+[deploy-button-image]: https://vercel.com/button
+[deploy-link]: https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Flobehub%2Flobe-chat&env=OPENAI_API_KEY,ACCESS_CODE&envDescription=Find%20your%20OpenAI%20API%20Key%20by%20click%20the%20right%20Learn%20More%20button.%20%7C%20Access%20Code%20can%20protect%20your%20website&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys&project-name=lobe-chat&repository-name=lobe-chat
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/zeabur.mdx b/DigitalHumanWeb/docs/self-hosting/platform/zeabur.mdx
new file mode 100644
index 0000000..d6a4705
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/zeabur.mdx
@@ -0,0 +1,84 @@
+---
+title: Deploy LobeChat on Zeabur
+description: >-
+ Learn how to deploy LobeChat on Zeabur with ease. Follow the provided steps to
+ set up your chat application seamlessly.
+tags:
+ - Deploy LobeChat
+ - Zeabur Deployment
+ - OpenAI API Key
+ - Custom Domain Binding
+---
+
+# Deploy LobeChat with Zeabur
+
+If you want to deploy LobeChat on Zeabur, you can follow the steps below:
+
+## Zeabur Deployment Process
+
+
+ ### Prepare your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to get your OpenAI API Key.
+
+### Click the button below to deploy
+
+[![][deploy-button-image]][deploy-link]
+
+### Once deployed, you can start using it
+
+### Bind a custom domain (optional)
+
+You can use the subdomain provided by Zeabur, or choose to bind a custom domain. Currently, the domains provided by Zeabur have not been contaminated, and most regions can connect directly.
+
+
+
+[deploy-button-image]: https://zeabur.com/button.svg
+[deploy-link]: https://zeabur.com/templates/VZGGTI
+
+# Deploy LobeChat with Zeabur as serverless function
+
+> Note: There are still issues with [middlewares and rewrites of Next.js on Zeabur](https://github.com/lobehub/lobe-chat/pull/2775?notification_referrer_id=NT_kwDOAdi2DrQxMDkyODQ4MDc2NTozMDk3OTU5OA#issuecomment-2146713899); use at your own risk!
+
+Since Zeabur does not officially support container deployments for free users, you may wish to deploy LobeChat as a serverless function instead. To do so, you can follow the steps below:
+
+## Zeabur Deployment Process
+
+
+
+### Fork LobeChat
+
+### Add Zeabur pack config file
+
+Add a `zbpack.json` configuration file with the following content to the root dir of your fork:
+
+```json
+{
+ "ignore_dockerfile": true,
+ "serverless": true
+}
+```
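If you prefer working from a terminal, one way to create this file is a shell heredoc run from the root of your fork. This is a sketch; the JSON content is exactly the snippet above, and the follow-up commit (not shown) is assumed to be a standard `git add` / `git commit`.

```shell
# Write zbpack.json in the repository root with the settings shown above
cat > zbpack.json <<'EOF'
{
  "ignore_dockerfile": true,
  "serverless": true
}
EOF

# Confirm the file contents before committing
cat zbpack.json
```

Remember to commit and push the file so it is present in the branch that Zeabur builds.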
+
+### Prepare your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to get your OpenAI API Key.
+
+### Login to your [Zeabur dashboard](https://dash.zeabur.com)
+
+If you do not already have an account, you will need to register one.
+
+### Create a project and service
+
+Create a project, then create a service under this project.
+
+### Link your fork of LobeChat to the newly created Zeabur service
+
+When adding the service, choose GitHub. This may trigger an OAuth flow, depending on factors such as how you log in to Zeabur and whether you have already authorized Zeabur to access all of your repositories.
+
+### Bind a custom domain (optional)
+
+You can create a subdomain provided by Zeabur, or choose to bind a custom domain. Currently, the domains provided by Zeabur have not been contaminated, and most regions can connect directly.
+
+### Zeabur will start the build automatically, and after a while you should be able to access the app at the domain of your choice.
+
+
diff --git a/DigitalHumanWeb/docs/self-hosting/platform/zeabur.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/platform/zeabur.zh-CN.mdx
new file mode 100644
index 0000000..00ee9aa
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/platform/zeabur.zh-CN.mdx
@@ -0,0 +1,83 @@
+---
+title: Deploy LobeChat on Zeabur
+description: Follow this guide to prepare your OpenAI API Key and click the button to deploy. Once deployment completes, you can start using LobeChat and optionally bind a custom domain.
+tags:
+ - Zeabur
+ - LobeChat
+ - OpenAI API Key
+ - Deployment Process
+ - Custom Domain
+---
+
+# Deploy Using Zeabur
+
+If you want to deploy LobeChat on Zeabur, you can follow the steps below:
+
+## Zeabur Deployment Process
+
+
+### Prepare your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to get your OpenAI API Key
+
+### Click the button below to deploy
+
+[![][deploy-button-image]][deploy-link]
+
+### Start using it once deployment completes
+
+### Bind a custom domain (optional)
+
+You can use the subdomain provided by Zeabur, or choose to bind a custom domain. Currently, the domains provided by Zeabur are not DNS-polluted, and most regions can connect to them directly.
+
+
+
+[deploy-button-image]: https://zeabur.com/button.svg
+[deploy-link]: https://zeabur.com/templates/VZGGTI
+
+# Deploy LobeChat on Zeabur as a Serverless Function
+
+> **Note:** There are still issues with [Next.js middlewares and rewrites on Zeabur](https://github.com/lobehub/lobe-chat/pull/2775?notification_referrer_id=NT_kwDOAdi2DrQxMDkyODQ4MDc2NTozMDk3OTU5OA#issuecomment-2146713899); use at your own risk!
+
+Since Zeabur does not officially support containerized service deployments for free-tier users, you may wish to deploy LobeChat as a serverless function instead. To deploy LobeChat as a serverless function on Zeabur, follow the steps below:
+
+## Zeabur Deployment Process
+
+
+
+### Fork LobeChat
+
+### Add the Zeabur pack config file
+
+Add a `zbpack.json` configuration file with the following content to the root directory of your fork:
+
+```json
+{
+ "ignore_dockerfile": true,
+ "serverless": true
+}
+```
+
+### Prepare your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to get your OpenAI API Key.
+
+### Log in to your [Zeabur dashboard](https://dash.zeabur.com)
+
+If you do not already have an account, you will need to register one.
+
+### Create a project and service
+
+Create a project, then create a service under this project.
+
+### Link your fork of LobeChat to the newly created Zeabur service
+
+When adding the service, choose GitHub. This may trigger an OAuth flow, depending on factors such as how you log in to Zeabur and whether you have already authorized Zeabur to access all your repositories.
+
+### Bind a custom domain (optional)
+
+You can use a subdomain provided by Zeabur, or choose to bind a custom domain. Currently, the domains provided by Zeabur are not DNS-polluted, and most regions can connect to them directly.
+
+### Zeabur will start an automatic build; after a short while, you should be able to access LobeChat at the domain of your choice.
+
+
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database.mdx b/DigitalHumanWeb/docs/self-hosting/server-database.mdx
new file mode 100644
index 0000000..0e7d476
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database.mdx
@@ -0,0 +1,140 @@
+---
+title: Deploying Server-Side Database for LobeChat
+description: Learn how to deploy LobeChat's server-side database using Postgres.
+tags:
+ - LobeChat
+ - Server-Side Database
+ - Postgres
+ - Deployment Guide
+---
+# Deploying Server-Side Database
+
+LobeChat defaults to using a client-side database (IndexedDB) but also supports deploying a server-side database. LobeChat uses Postgres as the backend storage database.
+
+
+ PostgreSQL is a powerful open-source relational database management system with high scalability and standard SQL support. It provides rich data types, concurrency control, data integrity, security, and programmability, making it suitable for complex applications and large-scale data management.
+
+
+This guide will introduce the process and principles of deploying the server-side database version of LobeChat on any platform from a framework perspective, so you can understand both the what and the why, and then deploy according to your specific needs.
+
+If you are already familiar with the complete principles, you can quickly get started by checking the deployment guides for each platform:
+
+
+
+---
+
+For the server-side database version of LobeChat, a normal deployment process typically involves configuring three modules:
+
+1. Database configuration;
+2. Authentication service configuration;
+3. S3 storage service configuration.
+
+## Configure the Database
+
+Before deployment, make sure you have a Postgres database instance ready. You can choose from the following instances:
+
+- `A.` Use Serverless Postgres instances like Vercel/Neon;
+- `B.` Use self-deployed Postgres instances like Docker/Railway/Zeabur, collectively referred to as Node Postgres instances;
+
+There is a slight difference in the way they are configured in terms of environment variables.
+
+Since we support file-based and knowledge-base conversations, we need to install the `pgvector` plugin for Postgres. This plugin provides vector search capabilities and is a key component of LobeChat's RAG implementation.
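As a minimal sketch (assuming you have `psql` access to your prepared instance; the connection URL below is hypothetical), enabling the plugin amounts to a single SQL statement — note that the extension pgvector installs is named `vector`:

```shell
# The SQL that enables pgvector; the extension it installs is named "vector".
SQL='CREATE EXTENSION IF NOT EXISTS vector;'

# Run it against your prepared database (hypothetical connection URL):
#   psql "postgres://username:password@host:port/database" -c "$SQL"
echo "$SQL"
```

Managed platforms such as Neon ship `pgvector` prebuilt, so only the `CREATE EXTENSION` step is needed; for self-hosted Postgres, use an image that bundles it (such as `pgvector/pgvector`).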
+
+
+### `NEXT_PUBLIC_SERVICE_MODE`
+
+LobeChat supports both client-side and server-side databases, so we provide an environment variable for switching modes, which is `NEXT_PUBLIC_SERVICE_MODE`, with a default value of `client`.
+
+For server-side database deployment scenarios, you need to set `NEXT_PUBLIC_SERVICE_MODE` to `server`.
+
+
+In the official `lobe-chat-database` Docker image, this environment variable is already set to `server` by default. Therefore, if you deploy using the Docker image, you do not need to configure this environment variable again.
+
+
+
+Since environment variables starting with `NEXT_PUBLIC` take effect in the front-end code, they cannot be modified through container runtime injection. (Refer to the `next.js` documentation [Configuring: Environment Variables | Next.js (nextjs.org)](https://nextjs.org/docs/pages/building-your-application/configuring/environment-variables)). This is why we chose to create a separate DB version image.
+
+If you need to modify variables with the `NEXT_PUBLIC` prefix in a Docker deployment, you must build the image yourself and inject your own `NEXT_PUBLIC` prefixed environment variables during the build.
+
+
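As a rough sketch of that build-your-own-image workflow — the `--build-arg` name here is an assumption, so check the repository's Dockerfile for the `ARG` declarations it actually accepts — the command would look like:

```shell
# Assemble the build command as a string so each piece is visible.
# NEXT_PUBLIC_* values must be supplied at build time because Next.js
# inlines them into the front-end bundle; the --build-arg name is an
# assumption to verify against the project's Dockerfile.
build_cmd="docker build --build-arg NEXT_PUBLIC_SERVICE_MODE=server -t my-lobe-chat-db ."
echo "$build_cmd"
```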
+### `DATABASE_URL`
+
+The core of configuring the database is to add the `DATABASE_URL` environment variable and fill in the Postgres database connection URL you have prepared. The typical format of the database connection URL is `postgres://username:password@host:port/database`.
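For example, with a hypothetical set of credentials, you can sanity-check that your URL follows this format by pulling the pieces apart with plain shell parameter expansion:

```shell
# Hypothetical connection URL in the postgres://username:password@host:port/database format
DATABASE_URL="postgres://myuser:mypassword@db.example.com:5432/lobechat"

# Extract the host and database name to verify the URL before deploying
host_port_db="${DATABASE_URL#*@}"   # db.example.com:5432/lobechat
host="${host_port_db%%:*}"          # db.example.com
database="${DATABASE_URL##*/}"      # lobechat
echo "$host $database"
```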
+
+
+If you want to enable SSL when connecting to the database, please refer to the [documentation](https://stackoverflow.com/questions/14021998/using-psql-to-connect-to-postgresql-in-ssl-mode) for setup instructions.
+
+
+### `DATABASE_DRIVER`
+
+The `DATABASE_DRIVER` environment variable is used to distinguish between the two types of Postgres database instances, with values of `node` or `neon`.
+
+To streamline deployment, we have set default values based on the characteristics of different platforms:
+
+- On the Vercel platform, `DATABASE_DRIVER` defaults to `neon`;
+- In our provided Docker image `lobe-chat-database`, `DATABASE_DRIVER` defaults to `node`.
+
+Therefore, if you follow the standard deployment methods below, you do not need to manually configure the `DATABASE_DRIVER` environment variable:
+
+- Vercel + Serverless Postgres
+- Docker image + Node Postgres
+
+### `KEY_VAULTS_SECRET`
+
+Considering that users will store sensitive information such as their API Key and baseURL in the database, we need a key to encrypt this information to prevent leakage in case of a database breach. Hence, the `KEY_VAULTS_SECRET` environment variable is used to encrypt sensitive information like user-stored API keys.
+
+
+You can generate a random base64-encoded 32-byte secret to use as the value of `KEY_VAULTS_SECRET` with `openssl rand -base64 32`.
+
+
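For example (this assumes `openssl` is available on your machine), generating the secret and formatting it as an environment-variable line:

```shell
# Generate a base64-encoded secret from 32 random bytes;
# 32 raw bytes always encode to 44 base64 characters.
KEY_VAULTS_SECRET="$(openssl rand -base64 32)"
echo "KEY_VAULTS_SECRET=${KEY_VAULTS_SECRET}"
```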
+
+
+## Configuring Authentication Services
+
+In server-side database mode, we need an authentication service to distinguish the identities of different users. There are many mature authentication solutions in the open-source community. We have integrated two different authentication services to meet the demands of different scenarios: Clerk and NextAuth.
+
+### Clerk
+
+[Clerk](https://clerk.com?utm_source=lobehub&utm_medium=docs) is an authentication SaaS service that provides out-of-the-box authentication capabilities with high productization, low integration costs, and a great user experience. For those who offer SaaS products, Clerk is a good choice. Our official [LobeChat Cloud](https://lobechat.com) uses Clerk as the authentication service.
+
+The integration of Clerk is relatively simple, requiring only the configuration of the `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY`, `CLERK_SECRET_KEY`, and `CLERK_WEBHOOK_SECRET` environment variables, which can be obtained from the Clerk console.
+
+
+In Vercel deployment mode, we recommend using Clerk as the authentication service for a better user experience.
+
+
+However, this type of authentication relies on Clerk's official service, so there may be some limitations in certain scenarios:
+
+- For example, when using Clerk in China, it may be affected by the network environment.
+- Clerk is not suitable for scenarios that require complete private deployment.
+- It relies on `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY`, which may not be readily usable with public Docker images.
+
+Therefore, for the above scenarios, we also provide NextAuth as an alternative solution.
+
+### NextAuth
+
+NextAuth is an open-source authentication library that supports multiple identity providers, including Auth0, Cognito, GitHub, Google, Facebook, Apple, Twitter, and more. NextAuth itself provides a complete authentication solution, including user registration, login, password recovery, integration with various identity providers, and more.
+
+For information on configuring NextAuth, you can refer to the [Authentication](/docs/self-hosting/advanced/authentication) documentation.
+
+
+In the official Docker image `lobe-chat-database`, we recommend using NextAuth as the authentication service.
+
+
+## Configuring S3 Storage Service
+
+LobeChat has supported multimodal AI conversations [for a long time](https://x.com/lobehub/status/1724289575672291782), including uploading images to large models. In the client-side database solution, image files are stored as binary data directly in the browser's IndexedDB database. However, this solution is not feasible with a server-side database: storing file-type binary data directly in Postgres greatly wastes valuable database storage space and slows down computational performance.
+
+The best practice in this area is to use a file storage service (S3) to store image files, which is also the storage solution relied upon for subsequent file uploads/knowledge base functions.
+
+
+In this documentation, S3 refers to any S3-compatible storage solution, i.e., an object storage system that supports the Amazon S3 API. Common examples include Cloudflare R2, Alibaba Cloud OSS, and the self-hostable MinIO, all of which support the S3-compatible API.
+
+
+For detailed configuration guidelines on S3, please refer to [S3 Object Storage](/docs/self-hosting/advanced/s3) for more information.
+
+## Getting Started with Deployment
+
+The above is a detailed explanation of configuring LobeChat with a server-side database. You can configure it according to your actual situation and then choose a deployment platform that suits you to start deployment:
+
+
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/server-database.zh-CN.mdx
new file mode 100644
index 0000000..8f0c966
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database.zh-CN.mdx
@@ -0,0 +1,148 @@
+---
+title: Deploy with a Server-Side Database - Configuring the Database, Authentication Service, and S3 Storage
+description: This article introduces the deployment approach for the server-side database version of LobeChat and explains how to configure the database, authentication service, and S3 storage service.
+tags:
+ - Server-Side Database
+ - Postgres
+ - S3 Storage Service
+ - Database Configuration
+ - Authentication Service
+ - Environment Variable Configuration
+---
+
+# Deploy with a Server-Side Database
+
+LobeChat defaults to using a client-side database (IndexedDB) but also supports using a server-side database (hereinafter referred to as the DB version). LobeChat uses Postgres as the backend storage database.
+
+
+ PostgreSQL is a powerful open-source relational database management system with high scalability and standard SQL support. It provides rich data types, concurrency control, data integrity, security, and programmability, making it suitable for complex applications and large-scale data management.
+
+
+This article introduces the process and principles of deploying the DB version of LobeChat on any platform from a framework perspective, so that you understand both the what and the why, and can then deploy according to your specific needs.
+
+If you are already familiar with the complete principles, you can quickly get started by checking the deployment guides for each platform:
+
+
+
+---
+
+For the DB version of LobeChat, a normal deployment process typically involves configuring three modules:
+
+1. Database configuration;
+2. Authentication service configuration;
+3. S3 storage service configuration.
+
+## Configure the Database
+
+Before deployment, make sure you have a Postgres database instance ready. You can choose either of the following:
+
+- `A.` Use a Serverless Postgres instance such as Vercel / Neon;
+- `B.` Use a self-deployed Postgres instance such as Docker / Railway / Zeabur, collectively referred to below as Node Postgres instances;
+
+The two differ slightly in how their environment variables are configured; they are otherwise the same.
+
+At the same time, since we support file-based and knowledge-base conversations, we need to install the `pgvector` plugin for Postgres. This plugin provides vector search capabilities and is a key component of LobeChat's RAG implementation.
+
+
+
+### `NEXT_PUBLIC_SERVICE_MODE`
+
+LobeChat supports both client-side and server-side databases, so we provide an environment variable for switching modes: `NEXT_PUBLIC_SERVICE_MODE`, which defaults to `client`.
+
+For server-side database deployment scenarios, you need to set `NEXT_PUBLIC_SERVICE_MODE` to `server`.
+
+
+ In the official `lobe-chat-database` Docker image, this environment variable is already set to `server` by default. Therefore, if you deploy using the Docker image, you do not need to configure this environment variable again.
+
+
+
+ Since environment variables starting with `NEXT_PUBLIC` take effect in the front-end code, they cannot be modified through container runtime injection. (See the `next.js` documentation [Configuring: Environment Variables | Next.js (nextjs.org)](https://nextjs.org/docs/pages/building-your-application/configuring/environment-variables).) This is why we chose to build a separate DB version image.
+
+If you need to modify variables with the `NEXT_PUBLIC` prefix in a Docker deployment, you must build the image yourself and inject your own `NEXT_PUBLIC` prefixed environment variables at build time.
+
+
+
+### `DATABASE_URL`
+
+The core of configuring the database is adding the `DATABASE_URL` environment variable and filling in the Postgres database connection URL you have prepared. The typical format of the database connection URL is `postgres://username:password@host:port/database`.
+
+
+ If you want to enable SSL when connecting to the database, please refer to the [documentation](https://stackoverflow.com/questions/14021998/using-psql-to-connect-to-postgresql-in-ssl-mode) for setup instructions.
+
+
+### `DATABASE_DRIVER`
+
+The `DATABASE_DRIVER` environment variable is used to distinguish between the two types of Postgres database instances; its value is `node` or `neon`.
+
+To streamline deployment, we have set default values based on the characteristics of different platforms:
+
+- On the Vercel platform, `DATABASE_DRIVER` defaults to `neon`;
+- In our provided Docker image `lobe-chat-database`, `DATABASE_DRIVER` defaults to `node`.
+
+Therefore, if you follow one of the standard deployment methods below, you do not need to manually configure the `DATABASE_DRIVER` environment variable:
+
+- Vercel + Serverless Postgres
+- Docker image + Node Postgres
+
+### `KEY_VAULTS_SECRET`
+
+Considering that users will store sensitive information such as their API Key and baseURL in the database, we need a key to encrypt this information so that it is not leaked if the database is breached or dumped. Hence the `KEY_VAULTS_SECRET` environment variable, used to encrypt sensitive information such as user-stored API keys.
+
+
+ You can generate a random base64-encoded 32-byte secret to use as the value of `KEY_VAULTS_SECRET` with `openssl rand -base64 32`.
+
+
+
+
+## Configuring Authentication Services
+
+In server-side database mode, we need an authentication service to distinguish the identities of different users. There are many mature authentication solutions in the open-source community. We have integrated two different authentication services to meet the demands of different scenarios: Clerk and NextAuth.
+
+### Clerk
+
+[Clerk](https://clerk.com?utm_source=lobehub&utm_medium=docs) is an authentication SaaS service that provides out-of-the-box authentication capabilities with a high degree of productization, low integration cost, and a great user experience. For those offering SaaS products, Clerk is a good choice. Our official [LobeChat Cloud](https://lobechat.com) uses Clerk as its authentication service.
+
+The integration of Clerk is relatively simple, requiring only the configuration of the `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY`, `CLERK_SECRET_KEY`, and `CLERK_WEBHOOK_SECRET` environment variables, which can be obtained from the Clerk console.
+
+
+ In Vercel deployment mode, we recommend using Clerk as the authentication service for a better user experience.
+
+
+However, this type of authentication relies on Clerk's official service, so there may be some limitations in certain scenarios:
+
+- For example, when using Clerk in China, it may be affected by the network environment;
+- Clerk is not suitable for scenarios that require fully private deployment;
+- It relies on `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY`, so it cannot work out of the box with the public Docker image;
+
+Therefore, for the above scenarios, we also provide NextAuth as an alternative solution.
+
+### NextAuth
+
+NextAuth is an open-source authentication library that supports multiple identity providers, including Auth0, Cognito, GitHub, Google, Facebook, Apple, Twitter, and more. NextAuth itself provides a complete authentication solution, including user registration, login, password recovery, and integration with various identity providers.
+
+For information on configuring NextAuth, you can refer to the [Authentication](/zh/docs/self-hosting/advanced/authentication) documentation.
+
+
+ In the official Docker image `lobe-chat-database`, we recommend using NextAuth as the authentication service.
+
+
+## Configuring the S3 Storage Service
+
+LobeChat has supported multimodal AI conversations [for a long time](https://x.com/lobehub/status/1724289575672291782), including uploading images to large models. In the client-side database solution, image files are stored as binary data directly in the browser's IndexedDB database. However, this solution is not feasible with a server-side database: storing file-type binary data directly in Postgres greatly wastes valuable database storage space and slows down computational performance.
+
+The best practice here is to use a file storage service (S3) to store image files; S3 is also the large-capacity static file storage solution that the file upload / knowledge base features rely on.
+
+
+ In this documentation, S3 refers to any S3-compatible storage solution, i.e., an object storage system that supports the Amazon S3 API. Common examples include Cloudflare R2, Alibaba Cloud OSS, and the self-hostable MinIO, all of which support the S3-compatible API.
+
+
+For detailed configuration guidelines on S3, please refer to [S3 Object Storage](/zh/docs/self-hosting/advanced/s3) for more information.
+
+## Getting Started with Deployment
+
+The above is a detailed explanation of configuring the DB version of LobeChat. You can configure it according to your actual situation and then choose a deployment platform that suits you to start deployment:
+
+
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/docker-compose.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/docker-compose.mdx
new file mode 100644
index 0000000..edbd634
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/docker-compose.mdx
@@ -0,0 +1,675 @@
+---
+title: Deploying LobeChat Server Database with Docker Compose
+description: >-
+ Learn how to deploy LobeChat Server Database using Docker Compose, including
+ configuration tutorials for various services.
+tags:
+ - Docker Compose
+ - LobeChat
+ - Docker Container
+ - Deployment Guide
+---
+
+# Deploying LobeChat server database with Docker Compose
+
+
+
+
+ This article assumes that you are familiar with the basic principles and processes of deploying
+ the LobeChat server database version (hereinafter referred to as DB version), so it only includes
+ the core environment variable configuration. If you are not familiar with the deployment
+ principles of LobeChat DB version, please refer to [Deploying using a Server
+ Database](/docs/self-hosting/server-database).
+
+
+
+ Due to the inability to expose `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` using Docker environment variables, you cannot use Clerk as an authentication service when deploying LobeChat using Docker / Docker Compose.
+
+If you do need Clerk as an authentication service, you might consider deploying using Vercel or building your own image.
+
+
+
+Generally speaking, to fully run the LobeChat database version, you need at least the following four services:
+
+- LobeChat database version itself
+- PostgreSQL database with PGVector plugin
+- Object storage service supporting S3 protocol
+- SSO authentication service supported by LobeChat
+
+These services can be combined through self-hosting or online cloud services to meet your needs.
+
+We provide a fully self-built Docker Compose configuration, which you can use directly to start the LobeChat database version or modify to suit your needs.
+
+We default to using [MinIO](https://github.com/minio/minio) as the local S3 object storage service and [Logto](https://github.com/logto-io/logto) as the local authentication service.
+
+## Quick Start
+
+To facilitate a quick start, this chapter uses the docker-compose configuration files in the `docker-compose/local` directory. After startup, the LobeChat application runs locally at `http://localhost:3210`.
+
+
+ To facilitate quick start, this docker-compose.yml omits a large number of Secret/Password configurations and is only suitable for quick demonstration or personal local use. Do not use it directly in a production environment! Otherwise, you will be responsible for any security issues!
+
+
+
+ ### Create Configuration Files
+
+Create a new `lobe-chat-db` directory to store your configuration files and subsequent database files.
+
+```sh
+mkdir lobe-chat-db
+```
+
+Pull the configuration files into your directory:
+
+```sh
+curl -fsSL https://raw.githubusercontent.com/lobehub/lobe-chat/HEAD/docker-compose/local-logto/docker-compose.yml > docker-compose.yml
+curl -fsSL https://raw.githubusercontent.com/lobehub/lobe-chat/HEAD/docker-compose/local-logto/.env.example > .env
+```
+
+### Start Services
+
+```sh
+docker compose up -d
+```
+
+### Configure Logto
+
+1. Open `http://localhost:3002` to access the Logto WebUI and register an administrator account.
+
+2. Create a `Next.js (App Router)` application and add the following configurations:
+
+ - `Redirect URI` should be `http://localhost:3210/api/auth/callback/logto`
+ - `Post sign-out redirect URI` should be `http://localhost:3210/`
+
+3. Obtain the `App ID` and `App secrets`, and fill them into your `.env` file corresponding to `LOGTO_CLIENT_ID` and `LOGTO_CLIENT_SECRET`.
+
+### Configure MinIO S3
+
+1. Open `http://localhost:9001` to access the MinIO WebUI. The default admin account and password are configured in `.env`.
+
+2. Create a bucket that matches the `MINIO_LOBE_BUCKET` field in your `.env` file, which defaults to `lobe`.
+
+3. Choose a custom policy, copy the following content, and paste it in (if you modified the bucket name, please find and replace accordingly):
+
+ ```json
+ {
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": ["*"]
+ },
+ "Action": ["s3:GetBucketLocation"],
+ "Resource": ["arn:aws:s3:::lobe"]
+ },
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": ["*"]
+ },
+ "Action": ["s3:ListBucket"],
+ "Resource": ["arn:aws:s3:::lobe"],
+ "Condition": {
+ "StringEquals": {
+ "s3:prefix": ["files/*"]
+ }
+ }
+ },
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": ["*"]
+ },
+ "Action": ["s3:PutObject", "s3:DeleteObject", "s3:GetObject"],
+ "Resource": ["arn:aws:s3:::lobe/files/**"]
+ }
+ ],
+ "Version": "2012-10-17"
+ }
+ ```
+
+4. Create a new access key, and fill the generated `Access Key` and `Secret Key` into your `.env` file under `S3_ACCESS_KEY_ID` and `S3_SECRET_ACCESS_KEY`.
+
+### Restart LobeChat Service
+
+```sh
+docker compose up -d
+```
+
+
+ At this point, do not use `docker compose restart lobe` to restart, as this method will not reload the environment variables, and your S3 configuration will not take effect.
+
+
+
+If you see the following logs in the container, it indicates that it has started successfully:
+
+```log
+[Database] Start to migration...
+✅ database migration pass.
+-------------------------------------
+ ▲ Next.js 14.x.x
+ - Local: http://localhost:3210
+ - Network: http://0.0.0.0:3210
+
+ ✓ Starting...
+ ✓ Ready in 95ms
+```
+
+
+
+You have successfully deployed the LobeChat database version, and you can access your LobeChat service at `http://localhost:3210`.
+
+If you encounter issues, please check the Docker logs and console logs, and follow the detailed troubleshooting guide later in the document.
+
+## Deploying to Production
+
+The main difference between production and local operation is the need to use domain addresses instead of localhost. We assume that in addition to the above services, you are also running an Nginx layer for reverse proxy and SSL configuration.
+
+The domain names and corresponding service port descriptions are as follows:
+
+- `lobe.example.com`: your LobeChat service domain, needs to be reverse proxied to the LobeChat service port, default is `3210`
+- `lobe-auth-api.example.com`: your Logto service domain, needs to be reverse proxied to the Logto API service port, default is `3001`
+- `lobe-auth-ui.example.com`: your Logto UI domain, needs to be reverse proxied to the Logto WebUI service port, default is `3002`
+- `lobe-s3-api.example.com`: your MinIO API domain, needs to be reverse proxied to the MinIO API service port, default is `9000`
+- `lobe-s3-ui.example.com`: optional, your MinIO UI domain, needs to be reverse proxied to the MinIO WebUI service port, default is `9001`
+
+And the service port without reverse proxy:
+
+- `postgresql`: your PostgreSQL database service port, default is `5432`
+
+
+ Please note that CORS is configured internally in the MinIO / Logto services; do not configure CORS again in your reverse proxy, as this will cause errors.
+ For MinIO, when proxying on a port other than 443, the `Host` header must be `$http_host` (including the port number), otherwise a 403 error will occur: `proxy_set_header Host $http_host;`.
+
+If you need to configure SSL certificates, please configure them uniformly in the outer Nginx reverse proxy, rather than in MinIO.
+
+
+
+### Configuration Files
+
+```sh
+curl -fsSL https://raw.githubusercontent.com/lobehub/lobe-chat/HEAD/docker-compose/production/docker-compose.yml > docker-compose.yml
+curl -fsSL https://raw.githubusercontent.com/lobehub/lobe-chat/HEAD/docker-compose/production/.env.example > .env
+```
+
+The configuration files include `.env` and `docker-compose.yml`, where the `.env` file is used to configure LobeChat's environment variables, and the `docker-compose.yml` file is used to configure the Postgres, MinIO, and Logto services.
+
+In general, you should only modify sensitive information such as domain names and account passwords, leaving the other configuration items at their default values.
+
+Refer to the example configurations in the appendix of this article.
+
+### PostgreSQL Database Configuration
+
+You can check the logs using the following command:
+
+```sh
+docker logs -f lobe-database
+```
+
+
+ In our official Docker images, the database schema migration will be automatically executed before
+ starting the image. Our official image guarantees the stability of the "empty database -> complete
+ table" automatic table creation. Therefore, we recommend that your database instance use an empty
+ table instance, thereby avoiding the hassle of manually maintaining table structures or
+ migrations.
+
+
+If you encounter issues when creating tables, you can try using the following commands to forcibly remove the database container and restart:
+
+```sh
+docker compose down # Stop services
+sudo rm -rf ./data # Remove mounted database data
+docker compose up -d # Restart
+```
+
+### Authentication Service Configuration
+
+This article uses Logto as an example to explain the configuration process. If you are using other authentication service providers, please refer to their documentation for configuration.
+
+
+ Please remember to configure the corresponding CORS cross-origin settings for the authentication service provider to ensure that LobeChat can access the authentication service properly.
+
+In this article, you need to allow cross-origin requests from `https://lobe.example.com`.
+
+
+
+You need to first access the WebUI for configuration:
+
+- If you configured the reverse proxy as mentioned earlier, open `https://lobe-auth-ui.example.com`
+- Otherwise, after port mapping, open `http://localhost:3002`
+
+1. Register a new account; the first registered account will automatically become an administrator.
+
+2. In `Applications`, create a `Next.js (App Router)` application with any name.
+
+3. Set `Redirect URI` to `https://lobe.example.com/api/auth/callback/logto`, and `Post sign-out redirect URI` to `https://lobe.example.com/`.
+
+4. Set `CORS allowed origins` to `https://lobe.example.com`.
+
+
+
+5. Obtain `App ID` and `App secrets`, and fill them into your `.env` file under `LOGTO_CLIENT_ID` and `LOGTO_CLIENT_SECRET`.
+
+6. Set `LOGTO_ISSUER` in your `.env` file to `https://lobe-auth-api.example.com/oidc`.
+
+
+
+7. Optional: In the left panel under `Sign-in experience`, in `Sign-up and sign-in - Advanced Options`, disable `Enable user registration` to prohibit user self-registration. If you disable user self-registration, you can only manually add users in the left panel under `User Management`.
+
+
+
+8. Restart the LobeChat service:
+
+ ```sh
+ docker compose up -d
+ ```
+
+
+ Please note that the administrator account is not the same as a registered user; do not use your
+ administrator account to log into LobeChat, as this will only result in an error.
+
+
+### S3 Object Storage Service Configuration
+
+This article uses MinIO as an example to explain the configuration process. If you are using other S3 service providers, please refer to their documentation for configuration.
+
+
+ Please remember to configure the corresponding CORS cross-origin settings for the S3 service provider to ensure that LobeChat can access the S3 service properly.
+
+In this article, you need to allow cross-origin requests from `https://lobe.example.com`. This can be configured in the MinIO WebUI under `Configuration - API - Cors Allow Origin`, or in the Docker Compose under `minio - environment - MINIO_API_CORS_ALLOW_ORIGIN`.
+
+If you configure using the second method (which is also the default method), you will not be able to configure it in the MinIO WebUI anymore.
+
+
+
+You need to first access the WebUI for configuration:
+
+- If you configured the reverse proxy as mentioned earlier, open `https://lobe-s3-ui.example.com`
+- Otherwise, after port mapping, open `http://localhost:9001`
+
+1. Enter your `MINIO_ROOT_USER` and `MINIO_ROOT_PASSWORD` on the login screen, then click login.
+
+2. In the left panel under Administer / Buckets, click `Create Bucket`, enter `lobe` (corresponding to your `S3_BUCKET` environment variable), and then click `Create`.
+
+
+
+3. Select your bucket, click Summary - Access Policy, edit, choose `Custom`, and input the content from `minio-bucket-config.json` (see appendix) and save (again, assuming your bucket name is `lobe`):
+
+
+
+
+
+4. In the left panel under User / Access Keys, click `Create New Access Key`, make no additional modifications, and fill the generated `Access Key` and `Secret Key` into your `.env` file under `S3_ACCESS_KEY_ID` and `S3_SECRET_ACCESS_KEY`.
+
+
+
+5. Restart the LobeChat service:
+
+ ```sh
+ docker compose up -d
+ ```
+
+You have successfully deployed the LobeChat database version, and you can access your LobeChat service at `https://lobe.example.com`.
+
+## Appendix
+
+To facilitate one-click copying, here are the example configuration files needed to configure the server database:
+
+### Local Deployment
+
+#### `.env`
+
+```sh
+# Logto secret
+LOGTO_CLIENT_ID=
+LOGTO_CLIENT_SECRET=
+
+# MinIO S3 configuration
+MINIO_ROOT_USER=YOUR_MINIO_USER
+MINIO_ROOT_PASSWORD=YOUR_MINIO_PASSWORD
+
+# Configure the bucket information of MinIO
+MINIO_LOBE_BUCKET=lobe
+S3_ACCESS_KEY_ID=
+S3_SECRET_ACCESS_KEY=
+
+# Proxy, if you need it
+# HTTP_PROXY=http://localhost:7890
+# HTTPS_PROXY=http://localhost:7890
+
+# Other environment variables, as needed. You can refer to the environment variables configuration for the client version, making sure not to have ACCESS_CODE.
+# OPENAI_API_KEY=sk-xxxx
+# OPENAI_PROXY_URL=https://api.openai.com/v1
+# OPENAI_MODEL_LIST=...
+
+# ----- Other config -----
+# if no special requirements, no need to change
+LOBE_PORT=3210
+LOGTO_PORT=3001
+MINIO_PORT=9000
+
+# Postgres related, which are the necessary environment variables for DB
+LOBE_DB_NAME=lobechat
+POSTGRES_PASSWORD=uWNZugjBqixf8dxC
+
+```
+
+#### `docker-compose.yml`
+
+```yaml
+services:
+ network-service:
+ image: alpine
+ container_name: lobe-network
+ ports:
+ - '${MINIO_PORT}:${MINIO_PORT}' # MinIO API
+ - '9001:9001' # MinIO Console
+ - '${LOGTO_PORT}:${LOGTO_PORT}' # Logto
+ - '3002:3002' # Logto Admin
+ - '${LOBE_PORT}:3210' # LobeChat
+ command: tail -f /dev/null
+ networks:
+ - lobe-network
+
+ postgresql:
+ image: pgvector/pgvector:pg16
+ container_name: lobe-postgres
+ ports:
+ - "5432:5432"
+ volumes:
+ - './data:/var/lib/postgresql/data'
+ environment:
+ - 'POSTGRES_DB=${LOBE_DB_NAME}'
+ - 'POSTGRES_PASSWORD=${POSTGRES_PASSWORD}'
+ healthcheck:
+ test: ['CMD-SHELL', 'pg_isready -U postgres']
+ interval: 5s
+ timeout: 5s
+ retries: 5
+ restart: always
+ networks:
+ - lobe-network
+
+ minio:
+ image: minio/minio
+ container_name: lobe-minio
+ network_mode: 'service:network-service'
+ volumes:
+ - './s3_data:/etc/minio/data'
+ environment:
+ - 'MINIO_ROOT_USER=${MINIO_ROOT_USER}'
+ - 'MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}'
+ - 'MINIO_API_CORS_ALLOW_ORIGIN=http://localhost:${LOBE_PORT}'
+ restart: always
+ command: >
+ server /etc/minio/data --address ":${MINIO_PORT}" --console-address ":9001"
+
+ logto:
+ image: svhd/logto
+ container_name: lobe-logto
+ network_mode: 'service:network-service'
+ depends_on:
+ postgresql:
+ condition: service_healthy
+ environment:
+ - 'TRUST_PROXY_HEADER=1'
+ - 'PORT=${LOGTO_PORT}'
+ - 'DB_URL=postgresql://postgres:${POSTGRES_PASSWORD}@postgresql:5432/logto'
+ - 'ENDPOINT=http://localhost:${LOGTO_PORT}'
+ - 'ADMIN_ENDPOINT=http://localhost:3002'
+ entrypoint: ['sh', '-c', 'npm run cli db seed -- --swe && npm start']
+
+ lobe:
+ image: lobehub/lobe-chat-database
+ container_name: lobe-database
+ network_mode: 'service:network-service'
+ depends_on:
+ postgresql:
+ condition: service_healthy
+ network-service:
+ condition: service_started
+ minio:
+ condition: service_started
+ logto:
+ condition: service_started
+
+ environment:
+ - 'APP_URL=http://localhost:3210'
+ - 'NEXT_AUTH_SSO_PROVIDERS=logto'
+ - 'KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ='
+ - 'NEXT_AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg'
+ - 'NEXTAUTH_URL=http://localhost:${LOBE_PORT}/api/auth'
+ - 'LOGTO_ISSUER=http://localhost:${LOGTO_PORT}/oidc'
+ - 'DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@postgresql:5432/${LOBE_DB_NAME}'
+ - 'S3_ENDPOINT=http://localhost:${MINIO_PORT}'
+ - 'S3_BUCKET=${MINIO_LOBE_BUCKET}'
+ - 'S3_PUBLIC_DOMAIN=http://localhost:${MINIO_PORT}'
+ - 'S3_ENABLE_PATH_STYLE=1'
+ env_file:
+ - .env
+ restart: always
+
+volumes:
+ data:
+ driver: local
+ s3_data:
+ driver: local
+
+networks:
+ lobe-network:
+ driver: bridge
+
+```
+
+### Deploying to Production
+
+#### `.env`
+
+```sh
+# Required: LobeChat domain for tRPC calls
+# Ensure this domain is whitelisted in your NextAuth providers and S3 service CORS settings
+APP_URL=https://lobe.example.com/
+
+# Postgres related environment variables
+# Required: Secret key for encrypting sensitive information. Generate with: openssl rand -base64 32
+KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ=
+# Required: Postgres database connection string
+# Format: postgresql://username:password@host:port/dbname
+# If using Docker, you can use the container name as the host
+DATABASE_URL=postgresql://postgres:uWNZugjBqixf8dxC@postgresql:5432/lobe
+
+# NEXT_AUTH related environment variables
+# Supports auth0, Azure AD, GitHub, Authentik, Zitadel, Logto, etc.
+# For supported providers, see: https://lobehub.com/docs/self-hosting/advanced/auth#next-auth
+# If you have ACCESS_CODE, please remove it. We use NEXT_AUTH as the sole authentication source
+# Required: NextAuth secret key. Generate with: openssl rand -base64 32
+NEXT_AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg
+# Required: Specify the authentication provider (e.g., Logto)
+NEXT_AUTH_SSO_PROVIDERS=logto
+# Required: NextAuth URL for callbacks
+NEXTAUTH_URL=https://lobe.example.com/api/auth
+
+# NextAuth providers configuration (example using Logto)
+# For other providers, see: https://lobehub.com/docs/self-hosting/environment-variables/auth
+LOGTO_CLIENT_ID=YOUR_LOGTO_CLIENT_ID
+LOGTO_CLIENT_SECRET=YOUR_LOGTO_CLIENT_SECRET
+LOGTO_ISSUER=https://lobe-auth-api.example.com/oidc
+
+# Proxy settings (if needed, e.g., when using GitHub as an auth provider)
+# HTTP_PROXY=http://localhost:7890
+# HTTPS_PROXY=http://localhost:7890
+
+# S3 related environment variables (example using MinIO)
+# Required: S3 Access Key ID (for MinIO, this value is not valid until you create it manually in the MinIO UI)
+S3_ACCESS_KEY_ID=YOUR_S3_ACCESS_KEY_ID
+# Required: S3 Secret Access Key (for MinIO, this value is not valid until you create it manually in the MinIO UI)
+S3_SECRET_ACCESS_KEY=YOUR_S3_SECRET_ACCESS_KEY
+# Required: S3 endpoint for server/client connections to the S3 API
+S3_ENDPOINT=https://lobe-s3-api.example.com
+# Required: S3 bucket (not valid until you create it manually in the MinIO UI)
+S3_BUCKET=lobe
+# Required: S3 Public Domain for client access to unstructured data
+S3_PUBLIC_DOMAIN=https://lobe-s3-api.example.com
+# Optional: S3 Enable Path Style
+# Use 0 for mainstream S3 cloud providers; use 1 for self-hosted MinIO
+# See: https://lobehub.com/docs/self-hosting/advanced/s3#s-3-enable-path-style
+S3_ENABLE_PATH_STYLE=1
+
+# Other basic environment variables (as needed)
+# See: https://lobehub.com/docs/self-hosting/environment-variables/basic
+# Note: For server versions, the API must support embedding models (OpenAI text-embedding-3-small) for file processing
+# You don't need to specify this model in OPENAI_MODEL_LIST
+# OPENAI_API_KEY=sk-xxxx
+# OPENAI_PROXY_URL=https://api.openai.com/v1
+# OPENAI_MODEL_LIST=...
+
+```
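+
+Both `KEY_VAULTS_SECRET` and `NEXT_AUTH_SECRET` should be replaced with freshly generated values rather than the sample strings above. A minimal sketch of generating them with `openssl` (32 random bytes, base64-encoded into a 44-character string):
+
+```sh
+# Generate fresh values for the two required secrets; never reuse the sample values
+KEY_VAULTS_SECRET=$(openssl rand -base64 32)
+NEXT_AUTH_SECRET=$(openssl rand -base64 32)
+echo "KEY_VAULTS_SECRET=${KEY_VAULTS_SECRET}"
+echo "NEXT_AUTH_SECRET=${NEXT_AUTH_SECRET}"
+```
+
+Paste the printed lines into `.env` in place of the corresponding sample entries.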
+
+#### `docker-compose.yml`
+
+```yaml
+services:
+ postgresql:
+ image: pgvector/pgvector:pg16
+ container_name: lobe-postgres
+ ports:
+ - '5432:5432'
+ volumes:
+ - './data:/var/lib/postgresql/data'
+ environment:
+ - 'POSTGRES_DB=lobe'
+ - 'POSTGRES_PASSWORD=uWNZugjBqixf8dxC'
+ healthcheck:
+ test: ['CMD-SHELL', 'pg_isready -U postgres']
+ interval: 5s
+ timeout: 5s
+ retries: 5
+ restart: always
+
+ minio:
+ image: minio/minio
+ container_name: lobe-minio
+ ports:
+ - '9000:9000'
+ - '9001:9001'
+ volumes:
+ - './s3_data:/etc/minio/data'
+ environment:
+ - 'MINIO_ROOT_USER=YOUR_MINIO_USER'
+ - 'MINIO_ROOT_PASSWORD=YOUR_MINIO_PASSWORD'
+ - 'MINIO_DOMAIN=lobe-s3-api.example.com'
+ - 'MINIO_API_CORS_ALLOW_ORIGIN=https://lobe.example.com' # Your LobeChat's domain name.
+ restart: always
+ command: >
+ server /etc/minio/data --address ":9000" --console-address ":9001"
+
+ logto:
+ image: svhd/logto
+ container_name: lobe-logto
+ ports:
+ - '3001:3001'
+ - '3002:3002'
+ depends_on:
+ postgresql:
+ condition: service_healthy
+ environment:
+ - 'TRUST_PROXY_HEADER=1'
+ - 'DB_URL=postgresql://postgres:uWNZugjBqixf8dxC@postgresql:5432/logto'
+ - 'ENDPOINT=https://lobe-auth-api.example.com'
+ - 'ADMIN_ENDPOINT=https://lobe-auth-ui.example.com'
+ entrypoint: ['sh', '-c', 'npm run cli db seed -- --swe && npm start']
+
+ lobe:
+ image: lobehub/lobe-chat-database
+ container_name: lobe-database
+ ports:
+ - '3210:3210'
+ depends_on:
+ - postgresql
+ - minio
+ - logto
+ env_file:
+ - .env
+ restart: always
+
+volumes:
+ data:
+ driver: local
+ s3_data:
+ driver: local
+
+```
+
+#### `minio-bucket-config.json`
+
+```json
+{
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": ["*"]
+ },
+ "Action": ["s3:GetBucketLocation"],
+ "Resource": ["arn:aws:s3:::lobe"]
+ },
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": ["*"]
+ },
+ "Action": ["s3:ListBucket"],
+ "Resource": ["arn:aws:s3:::lobe"],
+ "Condition": {
+ "StringEquals": {
+ "s3:prefix": ["files/*"]
+ }
+ }
+ },
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": ["*"]
+ },
+ "Action": ["s3:PutObject", "s3:DeleteObject", "s3:GetObject"],
+ "Resource": ["arn:aws:s3:::lobe/files/**"]
+ }
+ ],
+ "Version": "2012-10-17"
+}
+```
+
+[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat-database
+[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat-database?color=45cc11&labelColor=black&style=flat-square
+[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat-database
+[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat-database?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
+[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat-database
+[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat-database?color=369eff&labelColor=black&style=flat-square
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/docker-compose.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/docker-compose.zh-CN.mdx
new file mode 100644
index 0000000..a518144
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/docker-compose.zh-CN.mdx
@@ -0,0 +1,671 @@
+---
+title: Deploy LobeChat with Docker Compose
+description: Learn how to deploy the LobeChat service with Docker Compose, including configuration tutorials for each of the supporting services.
+tags:
+  - Docker Compose
+  - LobeChat
+  - Docker Container
+  - Deployment Guide
+---
+
+# Deploying the LobeChat Server Database Version with Docker Compose
+
+
+
+
+This article assumes that you are familiar with the basic principles and processes of deploying the LobeChat server database version, so it only includes content related to core environment variable configuration. If you are not familiar with the deployment principles of the LobeChat server database version, please refer to [Deploying Server Database](/docs/self-hosting/server-database) first.
+
+
+
+ Due to the inability to expose `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` using Docker environment variables, you cannot use Clerk as an authentication service when deploying LobeChat using Docker / Docker Compose.
+
+If you do need Clerk as an authentication service, you might consider deploying using Vercel or building your own image.
+
+
+
+## Deploying on a Linux Server
+
+Here is the process for deploying the LobeChat server database version on a Linux server:
+
+
+
+### Create a Postgres Database Instance
+
+Please create a Postgres database instance with the PGVector plugin according to your needs, for example:
+
+```sh
+docker network create pg
+
+docker run --name my-postgres --network pg -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 -d pgvector/pgvector:pg16
+```
+
+The above command will create a PG instance named `my-postgres` on the network `pg`, where `pgvector/pgvector:pg16` is a Postgres 16 image with the pgvector plugin installed by default.
+
+
+The pgvector plugin provides vector search capabilities for Postgres, which is an important component for LobeChat to implement RAG.
+
+
+
+The above command does not specify a persistent storage location for the pg instance, so it is only for testing/demonstration purposes. Please configure persistent storage for production environments.
+
+
+### Create a file named `lobe-chat.env` to store environment variables:
+
+```shell
+# Website domain
+APP_URL=https://your-prod-domain.com
+
+# DB required environment variables
+KEY_VAULTS_SECRET=jgwsK28dspyVQoIf8/M3IIHl1h6LYYceSYNXeLpy6uk=
+# Postgres database connection string
+# Format: postgres://username:password@host:port/dbname; if your pg instance is a Docker container, use the container name
+DATABASE_URL=postgres://postgres:mysecretpassword@my-postgres:5432/postgres
+
+# NEXT_AUTH related, can use auth0, Azure AD, GitHub, Authentik, zitadel, etc. If you have other access requirements, feel free to submit a PR
+NEXT_AUTH_SECRET=3904039cd41ea1bdf6c93db0db96e250
+NEXT_AUTH_SSO_PROVIDERS=auth0
+NEXTAUTH_URL=https://your-prod-domain.com/api/auth
+AUTH0_CLIENT_ID=xxxxxx
+AUTH0_CLIENT_SECRET=cSX_xxxxx
+AUTH0_ISSUER=https://lobe-chat-demo.us.auth0.com
+
+# S3 related
+S3_ACCESS_KEY_ID=xxxxxxxxxx
+S3_SECRET_ACCESS_KEY=xxxxxxxxxx
+S3_ENDPOINT=https://xxxxxxxxxx.r2.cloudflarestorage.com
+S3_BUCKET=lobechat
+S3_PUBLIC_DOMAIN=https://s3-for-lobechat.your-domain.com
+
+# Other environment variables, as needed. You can refer to the environment variables configuration for the client version, making sure not to have ACCESS_CODE.
+# OPENAI_API_KEY=sk-xxxx
+# OPENAI_PROXY_URL=https://api.openai.com/v1
+# OPENAI_MODEL_LIST=...
+```
+
+### Start the lobe-chat-database Docker image
+
+```sh
+docker run -it -d -p 3210:3210 --network pg --env-file lobe-chat.env --name lobe-chat-database lobehub/lobe-chat-database
+```
+
+You can use the following command to check the logs:
+
+```sh
+docker logs -f lobe-chat-database
+```
+
+If you see the following logs in the container, it means it has started successfully:
+
+```log
+[Database] Start to migration...
+✅ database migration pass.
+-------------------------------------
+ ▲ Next.js 14.x.x
+ - Local: http://localhost:3210
+ - Network: http://0.0.0.0:3210
+
+ ✓ Starting...
+ ✓ Ready in 95ms
+```
+
+
+
+
+Our official Docker image runs the database schema migration automatically before the application starts, and we guarantee the stability of this automatic path from an empty database to a fully provisioned schema. We therefore recommend pointing LobeChat at an empty database instance, which saves you the cost of maintaining table structures and migrations by hand.
+
+
+## Using Locally (Mac / Windows)
+
+The database version of LobeChat can also be used directly on a local Mac/Windows machine.
+
+Here we assume you have a local pg instance on your Mac/Windows machine with the account `postgres` and password `mysecretpassword`, reachable at `localhost:5432`.
+
+The script command you need to execute is:
+
+```shell
+$ docker run -it -d --name lobe-chat-database -p 3210:3210 \
+ -e DATABASE_URL=postgres://postgres:mysecretpassword@host.docker.internal:5432/postgres \
+ -e KEY_VAULTS_SECRET=jgwsK28dspyVQoIf8/M3IIHl1h6LYYceSYNXeLpy6uk= \
+ -e NEXT_AUTH_SECRET=3904039cd41ea1bdf6c93db0db96e250 \
+ -e NEXT_AUTH_SSO_PROVIDERS=auth0 \
+ -e AUTH0_CLIENT_ID=xxxxxx \
+ -e AUTH0_CLIENT_SECRET=cSX_xxxxx \
+ -e AUTH0_ISSUER=https://lobe-chat-demo.us.auth0.com \
+ -e APP_URL=http://localhost:3210 \
+ -e NEXTAUTH_URL=http://localhost:3210/api/auth \
+ -e S3_ACCESS_KEY_ID=xxxxxxxxxx \
+ -e S3_SECRET_ACCESS_KEY=xxxxxxxxxx \
+ -e S3_ENDPOINT=https://xxxxxxxxxx.r2.cloudflarestorage.com \
+ -e S3_BUCKET=lobechat \
+ -e S3_PUBLIC_DOMAIN=https://s3-for-lobechat.your-domain.com \
+ lobehub/lobe-chat-database
+```
+
+
+`Docker` uses a virtual machine solution on `Windows` and `macOS`. If you use `localhost` / `127.0.0.1`, it will refer to the container's `localhost`. In this case, try using `host.docker.internal` instead of `localhost`.
+
+
+[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat-database
+[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat-database?color=45cc11&labelColor=black&style=flat-square
+[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat-database
+[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat-database?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
+[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat-database
+[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat-database?color=369eff&labelColor=black&style=flat-square
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/docker.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/docker.zh-CN.mdx
new file mode 100644
index 0000000..279437d
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/docker.zh-CN.mdx
@@ -0,0 +1,170 @@
+---
+title: Deploy the LobeChat Database with Docker
+description: Step-by-step instructions for deploying the LobeChat server-side database in Docker.
+tags:
+  - Docker
+  - LobeChat
+  - Database Deployment
+  - Postgres
+---
+
+# Deploying the Server Database Version with Docker
+
+
+
+
+ This article assumes that you are familiar with the basic principles and workflow of deploying the LobeChat server database version (hereafter "DB version"), so it only covers core environment variable configuration. If you are not yet familiar with how the LobeChat DB version is deployed, please first read [Deploying with a Server Database](/zh/docs/self-hosting/server-database).
+
+
+
+ Because `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` cannot be exposed through Docker environment variables, you cannot use Clerk as the authentication service when deploying LobeChat with Docker / Docker Compose.
+
+ If you do need Clerk as the authentication service, consider deploying with Vercel or building your own image.
+
+
+## Deploying on a Linux Server
+
+Here is the workflow for deploying the LobeChat DB version on a Linux server:
+
+
+
+### Create a Postgres Database Instance
+
+Please create a Postgres database instance with the PGVector plugin according to your needs, for example:
+
+```sh
+docker network create pg
+
+docker run --name my-postgres --network pg -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 -d pgvector/pgvector:pg16
+```
+
+The above command creates a PG instance named `my-postgres` on the `pg` network, where `pgvector/pgvector:pg16` is a Postgres 16 image with the pgvector plugin preinstalled.
+
+
+ The pgvector plugin provides vector search capabilities for Postgres and is one of the key building blocks of LobeChat's RAG implementation.
+
+
+
+ The pg instance created by the command above does not specify a persistent storage location, so it is for testing/demo purposes only; please configure persistent storage for production environments.
+
+
+### Create a file named `lobe-chat.env` to store environment variables:
+
+```shell
+# Website domain
+APP_URL=https://your-prod-domain.com
+
+# Required DB environment variables
+# Key used to encrypt sensitive information; generate with: openssl rand -base64 32
+KEY_VAULTS_SECRET='jgwsK28dspyVQoIf8/M3IIHl1h6LYYceSYNXeLpy6uk='
+# Postgres database connection string
+# Format: postgres://username:password@host:port/dbname; if your pg instance is a Docker container, use the container name as the host
+DATABASE_URL=postgres://postgres:mysecretpassword@my-postgres:5432/postgres
+
+# NEXT_AUTH related; supports auth0, Azure AD, GitHub, Authentik, zitadel, etc. PRs for other providers are welcome
+NEXT_AUTH_SECRET=3904039cd41ea1bdf6c93db0db96e250
+NEXT_AUTH_SSO_PROVIDERS=auth0
+NEXTAUTH_URL=https://your-prod-domain.com/api/auth
+AUTH0_CLIENT_ID=xxxxxx
+AUTH0_CLIENT_SECRET=cSX_xxxxx
+AUTH0_ISSUER=https://lobe-chat-demo.us.auth0.com
+
+# S3 related
+S3_ACCESS_KEY_ID=xxxxxxxxxx
+S3_SECRET_ACCESS_KEY=xxxxxxxxxx
+S3_ENDPOINT=https://xxxxxxxxxx.r2.cloudflarestorage.com # domain used for S3 API access
+S3_BUCKET=lobechat
+S3_PUBLIC_DOMAIN=https://s3-for-lobechat.your-domain.com # public domain for external access to S3; CORS must be configured
+# S3_REGION=ap-chengdu # if you need to specify a region
+
+# Other environment variables, as needed
+# OPENAI_API_KEY=sk-xxxx
+# OPENAI_PROXY_URL=https://api.openai.com/v1
+# OPENAI_MODEL_LIST=...
+# ...
+```
+
+### Start the lobe-chat-database Docker image
+
+```sh
+docker run -it -d -p 3210:3210 --network pg --env-file lobe-chat.env --name lobe-chat-database lobehub/lobe-chat-database
+```
+
+You can check the logs with the following command:
+
+```sh
+docker logs -f lobe-chat-database
+```
+
+If you see the following logs in the container, it has started successfully:
+
+```log
+[Database] Start to migration...
+✅ database migration pass.
+-------------------------------------
+ ▲ Next.js 14.x.x
+ - Local: http://localhost:3210
+ - Network: http://0.0.0.0:3210
+
+ ✓ Starting...
+ ✓ Ready in 95ms
+```
+
+
+
+
+ Our official Docker image runs the database schema migration automatically before the application starts, and we guarantee the stability of this automatic path from an empty database to a fully provisioned schema. We therefore recommend pointing your deployment at an empty database instance, which saves you the trouble of maintaining table structures and migrations by hand.
+
+
+## Using Locally (Mac / Windows)
+
+The LobeChat DB version can also be used directly on a local Mac/Windows machine.
+
+Here we assume you have a local pg instance with the account `postgres` and password `mysecretpassword`, reachable at `localhost:5432`.
+
+The command you need to run is:
+
+```shell
+$ docker run -it -d --name lobe-chat-database -p 3210:3210 \
+ -e DATABASE_URL=postgres://postgres:mysecretpassword@host.docker.internal:5432/postgres \
+ -e KEY_VAULTS_SECRET=jgwsK28dspyVQoIf8/M3IIHl1h6LYYceSYNXeLpy6uk= \
+ -e NEXT_AUTH_SECRET=3904039cd41ea1bdf6c93db0db96e250 \
+ -e NEXT_AUTH_SSO_PROVIDERS=auth0 \
+ -e AUTH0_CLIENT_ID=xxxxxx \
+ -e AUTH0_CLIENT_SECRET=cSX_xxxxx \
+ -e AUTH0_ISSUER=https://lobe-chat-demo.us.auth0.com \
+ -e APP_URL=http://localhost:3210 \
+ -e NEXTAUTH_URL=http://localhost:3210/api/auth \
+ -e S3_ACCESS_KEY_ID=xxxxxxxxxx \
+ -e S3_SECRET_ACCESS_KEY=xxxxxxxxxx \
+ -e S3_ENDPOINT=https://xxxxxxxxxx.r2.cloudflarestorage.com \
+ -e S3_BUCKET=lobechat \
+ -e S3_PUBLIC_DOMAIN=https://s3-for-lobechat.your-domain.com \
+ lobehub/lobe-chat-database
+```
+
+
+ `Docker` uses a virtual machine solution on `Windows` and `macOS`. If you use `localhost` / `127.0.0.1`, it refers to the container's own `localhost`; in that case, use `host.docker.internal` instead of `localhost`.
+
+
+[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat-database
+[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat-database?color=45cc11&labelColor=black&style=flat-square
+[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat-database
+[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat-database?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
+[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat-database
+[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat-database?color=369eff&labelColor=black&style=flat-square
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/netlify.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/netlify.mdx
new file mode 100644
index 0000000..b637e20
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/netlify.mdx
@@ -0,0 +1,14 @@
+---
+title: Deploy LobeChat with Database on Netlify
+description: >-
+ Learn how to deploy LobeChat on Netlify with ease, including: database,
+ authentication and S3 storage service.
+tags:
+ - Deploy LobeChat
+ - Netlify Deployment
+---
+
+# Deploy LobeChat with Database on Netlify
+
+TODO
+
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/netlify.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/netlify.zh-CN.mdx
new file mode 100644
index 0000000..c71f604
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/netlify.zh-CN.mdx
@@ -0,0 +1,16 @@
+---
+title: Deploy the LobeChat Server Database Version on Netlify
+description: >-
+  Learn how to deploy LobeChat on Netlify, including forking the repository, preparing an
+  OpenAI API Key, importing into the Netlify workspace, and configuring the site name and environment variables.
+tags:
+  - Netlify
+  - LobeChat
+  - Deployment Tutorial
+  - OpenAI API Key
+  - Environment Configuration
+---
+
+# Deploy the LobeChat Database Version with Netlify
+
+TODO
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/railway.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/railway.mdx
new file mode 100644
index 0000000..f213064
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/railway.mdx
@@ -0,0 +1,14 @@
+---
+title: Deploy LobeChat with Database on Railway
+description: >-
+ Learn how to deploy LobeChat on Railway with ease, including: database,
+ authentication and S3 storage service.
+tags:
+ - Deploy LobeChat
+ - Railway Deployment
+---
+
+# Deploy LobeChat with Database on Railway
+
+TODO
+
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/railway.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/railway.zh-CN.mdx
new file mode 100644
index 0000000..c51def9
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/railway.zh-CN.mdx
@@ -0,0 +1,15 @@
+---
+title: Deploy the LobeChat Database Version on Railway
+description: Learn how to deploy the LobeChat application on Railway, including preparing an OpenAI API Key, one-click deployment, and binding a custom domain.
+tags:
+  - Railway
+  - Deployment
+  - LobeChat
+  - OpenAI
+  - API Key
+  - Custom Domain
+---
+
+# Deploy the LobeChat Database Version with Railway
+
+TODO
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/repocloud.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/repocloud.mdx
new file mode 100644
index 0000000..d29b715
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/repocloud.mdx
@@ -0,0 +1,32 @@
+---
+title: Deploy LobeChat with Database on RepoCloud
+description: Learn how to deploy LobeChat on RepoCloud with ease, including: database, authentication and S3 storage service.
+tags:
+ - Deploy LobeChat
+ - RepoCloud Deployment
+ - OpenAI API Key
+ - Custom Domain Binding
+---
+
+# Deploying LobeChat Database Edition with RepoCloud
+
+If you want to deploy LobeChat Database Edition on RepoCloud, you can follow the steps below:
+
+## RepoCloud Deployment Process
+
+
+ ### Prepare your OpenAI API Key
+
+Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to get your OpenAI API Key.
+
+### One-click deployment
+
+[](https://repocloud.io/details/?app_id=248)
+
+### Once deployed, you can start using it
+
+### Bind a custom domain (optional)
+
+You can use the subdomain provided by RepoCloud, or bind a custom domain. Currently the domains provided by RepoCloud are not DNS-polluted, and most regions can connect to them directly.
+
+
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/repocloud.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/repocloud.zh-CN.mdx
new file mode 100644
index 0000000..00efa59
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/repocloud.zh-CN.mdx
@@ -0,0 +1,14 @@
+---
+title: Deploy the LobeChat Database Version on RepoCloud
+description: Learn how to deploy the LobeChat application on RepoCloud, including preparing an OpenAI API Key, clicking the deploy button, and binding a custom domain.
+tags:
+  - RepoCloud
+  - LobeChat
+  - Deployment Process
+  - OpenAI API Key
+  - Custom Domain
+---
+
+# Deploy the LobeChat Database Version with RepoCloud
+
+TODO
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/sealos.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/sealos.mdx
new file mode 100644
index 0000000..92a5135
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/sealos.mdx
@@ -0,0 +1,12 @@
+---
+title: Deploy LobeChat on SealOS
+description: >-
+ Learn how to deploy LobeChat on SealOS with ease. Follow the provided steps to
+ set up LobeChat and start using it efficiently.
+tags:
+ - Deploy LobeChat
+ - SealOS Deployment
+ - OpenAI API Key
+ - Custom Domain Binding
+---
+
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/sealos.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/sealos.zh-CN.mdx
new file mode 100644
index 0000000..f880f45
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/sealos.zh-CN.mdx
@@ -0,0 +1,14 @@
+---
+title: Deploy LobeChat on SealOS
+description: Learn how to deploy LobeChat on SealOS, including preparing an OpenAI API Key, clicking the deploy button, and binding a custom domain.
+tags:
+  - SealOS
+  - LobeChat
+  - OpenAI API Key
+  - Deployment Process
+  - Custom Domain
+---
+
+# Deploy the LobeChat Database Version with SealOS
+
+TODO
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/vercel.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/vercel.mdx
new file mode 100644
index 0000000..4861d97
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/vercel.mdx
@@ -0,0 +1,428 @@
+---
+title: Deploy LobeChat with database on Vercel
+description: >-
+ Learn how to deploy LobeChat with database on Vercel with ease, including:
+ database, authentication and S3 storage service.
+tags:
+ - Deploy LobeChat
+ - Vercel Deployment
+ - OpenAI API Key
+ - Custom Domain Binding
+---
+
+# Deploying Server Database Version on Vercel
+
+This article will detail how to deploy the server database version of LobeChat on Vercel, including: 1) database configuration; 2) identity authentication service configuration; 3) steps for setting up the S3 storage service.
+
+
+Before proceeding, please make sure of the following:
+
+- Export all of your data first: after switching to the server-side database, existing user data cannot be migrated automatically and can only be imported manually from a backup!
+- Make sure the `ACCESS_CODE` environment variable is unset or cleared!
+- When configuring the environment variables required for the server-side database, make sure to fill in all of them before deployment, otherwise you may encounter database migration issues!
+
+
+
+## 1. Configure the Database
+
+
+
+### Prepare the Server Database Instance and Obtain the Connection URL
+
+Before deployment, make sure you have prepared a Postgres database instance. You can choose one of the following methods:
+
+- `A.` Use Serverless Postgres instances like Vercel / Neon;
+- `B.` Use self-deployed Postgres instances like Docker.
+
+The configuration for both methods is slightly different, and will be distinguished in the next step.
+
+### Add Environment Variables in Vercel
+
+In Vercel's deployment environment variables, add `DATABASE_URL` and other environment variables, and fill in the Postgres database connection URL prepared in the previous step. The typical format for the database connection URL is `postgres://username:password@host:port/database`.
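+
+As a quick sanity check of that format, the URL can be assembled from its parts in the shell (the values below are placeholders, not real credentials):
+
+```shell
+# Assemble a Postgres connection URL from its parts (placeholder values)
+DB_USER=postgres
+DB_PASSWORD=mysecretpassword   # passwords containing special characters must be percent-encoded
+DB_HOST=my-postgres
+DB_PORT=5432
+DB_NAME=postgres
+echo "DATABASE_URL=postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
+```
+
+The printed value is what goes into the `DATABASE_URL` field in Vercel.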
+
+
+
+
+
+
+ Please confirm the `Postgres` type provided by your vendor. If it is `Node Postgres`, switch to
+ the `Node Postgres` Tab.
+
+
+Variables to be filled for Serverless Postgres are as follows:
+
+```shell
+# Serverless Postgres DB Url
+DATABASE_URL=
+
+# Set the service mode to server; otherwise the server-side database will not be used
+NEXT_PUBLIC_SERVICE_MODE=server
+```
+
+An example of filling in Vercel is as follows:
+
+
+
+
+
+
+ Variables to be filled for Node Postgres are as follows:
+
+```shell
+# Node Postgres DB Url
+DATABASE_URL=
+
+# Specify Postgres database driver as node
+DATABASE_DRIVER=node
+
+# Set the service mode to server; otherwise the server-side database will not be used
+NEXT_PUBLIC_SERVICE_MODE=server
+```
+
+An example of filling in Vercel is as follows:
+
+
+
+
+
+
+
+
+ If you wish to enable SSL when connecting to the database, please refer to the
+ [link](https://stackoverflow.com/questions/14021998/using-psql-to-connect-to-postgresql-in-ssl-mode)
+ for setup instructions.
+
+
+### Add the `KEY_VAULTS_SECRET` Environment Variable
+
+After adding the `DATABASE_URL` environment variable for the database, you need to add a `KEY_VAULTS_SECRET` environment variable. This variable is used to encrypt sensitive information such as API keys stored by users. You can generate the key from 32 random bytes with `openssl rand -base64 32`, which yields a 44-character base64 string.
+
+```shell
+KEY_VAULTS_SECRET=jgwsK28dspyVQoIf8/M3IIHl1h6LYYceSYNXeLpy6uk=
+```
+
+Make sure to add this to the Vercel environment variables as well.
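+
+If you want to confirm that a value really is a 32-byte key, you can decode it and count the bytes (shown here with the sample value above; this check assumes a GNU `base64`):
+
+```shell
+# A valid KEY_VAULTS_SECRET decodes to exactly 32 bytes
+SECRET="jgwsK28dspyVQoIf8/M3IIHl1h6LYYceSYNXeLpy6uk="
+printf '%s' "$SECRET" | base64 -d | wc -c   # → 32
+```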
+
+### Add the `APP_URL` Environment Variable
+
+Finally, you need to add the `APP_URL` environment variable, which specifies the URL address of the LobeChat application.
+
+
+
+## 2. Configure Authentication Service
+
+The server-side database needs to be paired with a user authentication service to function properly. Therefore, the corresponding authentication service needs to be configured.
+
+
+
+### Prepare Clerk Authentication Service
+
+Go to [Clerk](https://clerk.com?utm_source=lobehub&utm_medium=docs) to register and create an application to obtain the corresponding Public Key and Secret Key.
+
+
+ If you are not familiar with Clerk, you can refer to [Authentication
+ Service-Clerk](/en/docs/self-hosting/advanced/authentication#clerk) for details on using Clerk.
+
+
+### Add Public and Private Key Environment Variables in Vercel
+
+In Vercel's deployment environment variables, add the `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` and `CLERK_SECRET_KEY` environment variables. You can click on "API Keys" in the menu, then copy the corresponding values and paste them into Vercel's environment variables.
+
+
+
+The environment variables required for this step are as follows:
+
+```shell
+NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_live_xxxxxxxxxxx
+CLERK_SECRET_KEY=sk_live_xxxxxxxxxxxxxxxxxxxxxx
+```
+
+Add the above variables to Vercel:
+
+
+
+### Create and Configure Webhook in Clerk
+
+Since we let Clerk fully handle user authentication and management, we need Clerk to notify our application and store data in the database when there are changes in the user's lifecycle (create, update, delete). We achieve this requirement through the Webhook provided by Clerk.
+
+We need to add an endpoint in Clerk's Webhooks to inform Clerk to send notifications to this endpoint when a user's information changes.
+
+
+
+Fill in the endpoint with the URL of your Vercel project, such as `https://your-project.vercel.app/api/webhooks/clerk`. Then, subscribe to events by checking the three user events (`user.created`, `user.deleted`, `user.updated`), and click create.
+
+
+ The `https://` in the URL is essential to maintain the integrity of the URL.
+
+
+
+
+
+
+
+### Add Webhook Secret to Vercel Environment Variables
+
+After creation, you can find the secret of this Webhook in the bottom right corner:
+
+
+
+
+
+
+The environment variable corresponding to this secret is `CLERK_WEBHOOK_SECRET`:
+
+```shell
+CLERK_WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxxx
+```
+
+Add it to Vercel's environment variables:
+
+
+
+
+
+By completing these steps, you have successfully configured the Clerk authentication service. Next, we will configure the S3 storage service.
+
+## 3. Configure S3 Storage Service
+
+In the server-side database, we need to configure the S3 storage service to store files.
+
+
+ In this article, S3 refers to a compatible S3 storage solution, which supports object storage
+ systems that comply with the Amazon S3 API. Common examples include Cloudflare R2, Alibaba Cloud
+ OSS, etc., all of which support S3-compatible APIs.
+
+
+
+
+### Configure and Obtain S3 Bucket
+
+You need to go to your S3 service provider (such as AWS S3, Cloudflare R2, etc.) and create a new storage bucket. The following steps will use Cloudflare R2 as an example to explain the creation process.
+
+The interface of Cloudflare R2 is shown below:
+
+
+
+When creating a storage bucket, specify its name and then click create.
+
+
+
+### Obtain Environment Variables for the Bucket
+
+In the settings of the R2 storage bucket, you can view the bucket configuration information:
+
+
+
+The corresponding environment variables are:
+
+```shell
+# Storage bucket name
+S3_BUCKET=lobechat
+# Storage bucket request endpoint (note that the path in this link includes the bucket name, which must be removed, or use the link provided on the S3 API token application page)
+S3_ENDPOINT=https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com
+# Public access domain for the storage bucket
+S3_PUBLIC_DOMAIN=https://s3-for-lobechat.your-domain.com
+```
+
+
+ `S3_ENDPOINT` must have its path removed, otherwise uploaded files will not be accessible
+
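+One way to derive a path-free endpoint is to strip the final path segment in the shell (the account hash is the sample value from above; the parameter expansion assumes the URL actually ends in a bucket path):
+
+```shell
+# Strip a trailing bucket path such as "/lobechat" from the endpoint URL
+RAW="https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com/lobechat"
+S3_ENDPOINT="${RAW%/*}"   # removes the last "/..." segment
+echo "$S3_ENDPOINT"       # → https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com
+```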
+
+### Obtain S3 Key Environment Variables
+
+You need to obtain the access key for S3 so that the LobeChat server has permission to access the S3 storage service. In R2, you can configure the access key in the account details:
+
+
+
+Click the button in the upper right corner to create an API token and enter the create API Token page.
+
+
+
+Since our server-side database needs to read and write to the S3 storage service, the permission needs to be set to `Object Read and Write`, then click create.
+
+
+
+After creation, you can see the corresponding S3 API token.
+
+
+
+The corresponding environment variables are:
+
+```shell
+S3_ACCESS_KEY_ID=9998d6757e276cf9f1edbd325b7083a6
+S3_SECRET_ACCESS_KEY=55af75d8eb6b99f189f6a35f855336ea62cd9c4751a5cf4337c53c1d3f497ac2
+```
+
+### Adding Corresponding Environment Variables in Vercel
+
+The steps to obtain the required environment variables may vary between S3 service providers, but the resulting environment variables are the same:
+
+
+ The `https://` in the URL is essential and must be maintained for the completeness of the URL.
+
+
+```shell
+# S3 Keys
+S3_ACCESS_KEY_ID=9998d6757e276cf9f1edbd325b7083a6
+S3_SECRET_ACCESS_KEY=55af75d8eb6b99f189f6a35f855336ea62cd9c4751a5cf4337c53c1d3f497ac2
+
+# Bucket name
+S3_BUCKET=lobechat
+# Bucket request endpoint
+S3_ENDPOINT=https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com
+# Public domain for bucket access
+S3_PUBLIC_DOMAIN=https://s3-dev.your-domain.com
+
+# Bucket region, such as us-west-1, generally not required, but some providers may need to configure
+# S3_REGION=us-west-1
+```
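Before pasting the variables into Vercel, it can help to sanity-check them locally. Here is a hedged sketch (the variable names are taken from the list above; the checker itself is hypothetical) that flags missing keys and URLs lacking the required `https://` prefix:

```python
REQUIRED_S3_VARS = [
    "S3_ACCESS_KEY_ID", "S3_SECRET_ACCESS_KEY",
    "S3_BUCKET", "S3_ENDPOINT", "S3_PUBLIC_DOMAIN",
]

def check_s3_env(env: dict) -> list:
    """Return a list of human-readable problems; an empty list means the config looks sane."""
    problems = [f"{k} is missing" for k in REQUIRED_S3_VARS if not env.get(k)]
    for key in ("S3_ENDPOINT", "S3_PUBLIC_DOMAIN"):
        value = env.get(key, "")
        if value and not value.startswith("https://"):
            problems.append(f"{key} must keep the https:// prefix")
    return problems

env = {
    "S3_ACCESS_KEY_ID": "9998d6757e276cf9f1edbd325b7083a6",
    "S3_SECRET_ACCESS_KEY": "55af75d8eb6b99f189f6a35f855336ea62cd9c4751a5cf4337c53c1d3f497ac2",
    "S3_BUCKET": "lobechat",
    "S3_ENDPOINT": "https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com",
    "S3_PUBLIC_DOMAIN": "https://s3-dev.your-domain.com",
}
print(check_s3_env(env))  # → []
```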
+
+Then, insert the above environment variables into Vercel's environment variables:
+
+
+
+### Configuring Cross-Origin Resource Sharing (CORS)
+
+Since S3 storage services are often on a separate domain, cross-origin access needs to be configured.
+
+In R2, you can find the CORS configuration in the bucket settings:
+
+
+
+Add a CORS rule to allow requests from your domain (in this case, `https://your-project.vercel.app`):
+
+
+
+Example configuration:
+
+```json
+[
+ {
+ "AllowedOrigins": ["https://your-project.vercel.app"],
+ "AllowedMethods": ["GET", "PUT", "HEAD", "POST", "DELETE"],
+ "AllowedHeaders": ["*"]
+ }
+]
+```
+
+After configuring, click save.
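To reason about whether a rule list like the example above covers a given request, the browser's preflight decision can be approximated in a few lines. This is a simplified sketch (it ignores `AllowedHeaders` matching and wildcard origin patterns):

```python
def cors_allows(rules: list, origin: str, method: str) -> bool:
    """Rough approximation of an S3-style CORS check for one origin/method pair."""
    return any(
        (origin in rule["AllowedOrigins"] or "*" in rule["AllowedOrigins"])
        and method in rule["AllowedMethods"]
        for rule in rules
    )

rules = [
    {
        "AllowedOrigins": ["https://your-project.vercel.app"],
        "AllowedMethods": ["GET", "PUT", "HEAD", "POST", "DELETE"],
        "AllowedHeaders": ["*"],
    }
]
print(cors_allows(rules, "https://your-project.vercel.app", "PUT"))  # → True
print(cors_allows(rules, "https://evil.example.com", "PUT"))         # → False
```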
+
+
+
+## 4. Deployment and Verification
+
+After completing the steps above, the server-side database configuration should be complete. Next, deploy LobeChat to Vercel, then visit your Vercel link to verify that the server-side database works correctly.
+
+
+ ### Redeploy the latest commit
+
+After configuring the environment variables, you need to redeploy the latest commit and wait for the deployment to complete.
+
+
+
+### Check if the features are working properly
+
+If you click on the login button in the top left corner and the login popup appears normally, then you have successfully configured it. Enjoy using it\~
+
+
+
+
+
+
+
+## Appendix
+
+### Overview of Server-side Database Environment Variables
+
+For easy copying, here is a summary of the environment variables required to configure the server-side database:
+
+```shell
+APP_URL=https://your-project.com
+
+# Specify the service mode as server
+NEXT_PUBLIC_SERVICE_MODE=server
+
+# Postgres database URL
+DATABASE_URL=
+KEY_VAULTS_SECRET=jgwsK28dspyVQoIf8/M3IIHl1h6LYYceSYNXeLpy6uk=
+
+# Clerk related configurations
+NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_live_xxxxxxxxxxx
+CLERK_SECRET_KEY=sk_live_xxxxxxxxxxxxxxxxxxxxxx
+CLERK_WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxxx
+
+# S3 related configurations
+# S3 keys
+S3_ACCESS_KEY_ID=9998d6757e276cf9f1edbd325b7083a6
+S3_SECRET_ACCESS_KEY=55af75d8eb6b99f189f6a35f855336ea62cd9c4751a5cf4337c53c1d3f497ac2
+
+# Bucket name
+S3_BUCKET=lobechat
+# Bucket request endpoint
+S3_ENDPOINT=https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com
+# Public access domain for the bucket
+S3_PUBLIC_DOMAIN=https://s3-for-lobechat.your-domain.com
+# Bucket region, such as us-west-1, generally not needed to add, but some service providers may require configuration
+# S3_REGION=us-west-1
+```
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/vercel.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/vercel.zh-CN.mdx
new file mode 100644
index 0000000..1586530
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/vercel.zh-CN.mdx
@@ -0,0 +1,409 @@
+---
+title: 在 Vercel 上部署 LobeChat 的服务端数据库版本
+description: 本文详细介绍如何在 Vercel 中部署服务端数据库版 LobeChat,包括数据库配置、身份验证服务配置和 S3 存储服务的设置步骤。
+tags:
+ - 服务端数据库
+ - Postgres
+ - Clerk
+ - S3存储服务
+ - Vercel部署
+ - 数据库配置
+ - 身份验证服务
+ - 环境变量配置
+---
+
+# 在 Vercel 上部署服务端数据库版
+
+本文将详细介绍如何在 Vercel 中部署服务端数据库版 LobeChat,包括: 1)数据库配置;2)身份验证服务配置;3) S3 存储服务的设置步骤。
+
+
+进行后续操作前,请务必确认以下事项:
+ - 导出所有数据,部署服务端数据库后,原有用户数据无法自动迁移,只能提前备份后进行手动导入!
+ - 环境变量中的`ACCESS_CODE`未设置或已清除!
+ - 配置服务端数据库所需要的环境变量时,需全部填入后再进行部署,否则可能遭遇数据库迁移问题!
+
+
+## 一、 配置数据库
+
+
+
+### 准备服务端数据库实例,获取连接 URL
+
+在部署之前,请确保你已经准备好 Postgres 数据库实例,你可以选择以下任一方式:
+
+- `A.` 使用 Vercel / Neon 等 Serverless Postgres 实例;
+- `B.` 使用 Docker 等自部署 Postgres 实例。
+
+两者的配置方式略有不同,在下一步会有所区分。
+
+### 在 Vercel 中添加环境变量
+
+在 Vercel 的部署环境变量中,添加 `DATABASE_URL` 等环境变量,将上一步准备好的 Postgres 数据库连接 URL 填入其中。数据库连接 URL 的通常格式为 `postgres://username:password@host:port/database`。
+
+
+
+
+
+
+ 请确认您的供应商所提供的 `Postgres` 类型,若为 `Node Postgres`,请切换到 `Node Postgres` Tab 。
+
+
+
+ Serverless Postgres 需要填写的变量如下:
+
+ ```shell
+ # Serverless Postgres DB Url
+ DATABASE_URL=
+
+ # 指定 service mode 为 server,否则不会进入服务端数据库
+ NEXT_PUBLIC_SERVICE_MODE=server
+ ```
+
+ 在 Vercel 中填写的示例如下:
+
+
+
+
+
+
+ Node Postgres 需要填写的变量如下:
+
+ ```shell
+ # Node Postgres DB Url
+ DATABASE_URL=
+
+ # 指定 Postgres database driver 为 node
+ DATABASE_DRIVER=node
+
+ # 指定 service mode 为 server,否则不会进入服务端数据库
+ NEXT_PUBLIC_SERVICE_MODE=server
+ ```
+
+ 在 Vercel 中填写的示例如下:
+
+
+
+
+
+
+
+
+ 如果希望连接数据库时启用 SSL
+ ,请自行参考[链接](https://stackoverflow.com/questions/14021998/using-psql-to-connect-to-postgresql-in-ssl-mode)进行设置
+
+
+### 添加 `KEY_VAULTS_SECRET` 环境变量
+
+在完成数据库 DATABASE_URL 环境变量添加后,需要添加一个 `KEY_VAULTS_SECRET` 环境变量。该变量用于加密用户存储的 apikey 等敏感信息。你可以使用 `openssl rand -base64 32` 生成一个随机的 32 位字符串作为密钥。
+
+```shell
+KEY_VAULTS_SECRET=jgwsK28dspyVQoIf8/M3IIHl1h6LYYceSYNXeLpy6uk=
+```
+
+同样需要将其添加到 Vercel 环境变量中。
+
+### 添加 `APP_URL` 环境变量
+
+该部分最后需要添加 APP_URL 环境变量,用于指定 LobeChat 应用的 URL 地址。
+
+
+
+## 二、 配置身份验证服务
+
+服务端数据库需要搭配用户身份验证服务才可以正常使用。因此需要配置对应的身份验证服务。
+
+
+
+### 准备 Clerk 身份验证服务
+
+前往 [Clerk](https://clerk.com?utm_source=lobehub&utm_medium=docs) 注册并创建应用,获取相应的 Public Key 和 Secret Key。
+
+
+ 如果对 Clerk 不太了解,可以查阅
+ [身份验证服务-Clerk](/zh/docs/self-hosting/advanced/authentication#clerk) 了解 Clerk 的使用详情。
+
+
+### 在 Vercel 中添加公、私钥环境变量
+
+在 Vercel 的部署环境变量中,添加 `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` 和 `CLERK_SECRET_KEY` 环境变量。你可以在菜单中点击「API Keys」,然后复制对应的值填入 Vercel 的环境变量中。
+
+
+
+此步骤所需的环境变量如下:
+
+```shell
+NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_live_xxxxxxxxxxx
+CLERK_SECRET_KEY=sk_live_xxxxxxxxxxxxxxxxxxxxxx
+```
+
+添加上述变量到 Vercel 中:
+
+
+
+### 在 Clerk 中创建并配置 Webhook
+
+由于我们让 Clerk 完全接管用户鉴权与管理,因此我们需要在 Clerk 用户生命周期变更时(创建、更新、删除)中通知我们的应用并存储落库。我们通过 Clerk 提供的 Webhook 来实现这一诉求。
+
+我们需要在 Clerk 的 Webhooks 中添加一个端点(Endpoint),告诉 Clerk 当用户发生变更时,向这个端点发送通知。
+
+
+
+在 Endpoint 中填写你的 Vercel 项目的 URL,如 `https://your-project.vercel.app/api/webhooks/clerk`。然后在订阅事件(Subscribe to events)中,勾选 user 的三个事件(`user.created`、`user.deleted`、`user.updated`),然后点击创建。
+
+URL 的 `https://` 不可缺失,须保持 URL 的完整性。
+
+
+
+### 将 Webhook 秘钥添加到 Vercel 环境变量
+
+创建完毕后,可以在右下角找到该 Webhook 的秘钥:
+
+
+
+这个秘钥所对应的环境变量名为 `CLERK_WEBHOOK_SECRET`:
+
+```shell
+CLERK_WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxxx
+```
+
+将其添加到 Vercel 的环境变量中:
+
+
+
+
+
+这样,你已经成功配置了 Clerk 身份验证服务。接下来我们将配置 S3 存储服务。
+
+## 三、 配置 S3 存储服务
+
+在服务端数据库中我们需要配置 S3 存储服务来存储文件。
+
+
+ 在本文中,S3 指代的是兼容 S3 的存储方案,即支持 Amazon S3 API 的对象存储系统,常见的如 Cloudflare
+ R2、阿里云 OSS 等均支持 S3 兼容 API。
+
+
+
+
+ ### 配置并获取 S3 存储桶
+
+ 你需要前往你的 S3 服务提供商(如 AWS S3、Cloudflare R2 等)并创建一个新的存储桶(Bucket)。接下来以 Cloudflare R2 为例,介绍创建流程。
+
+ 下图是 Cloudflare R2 的界面:
+
+
+
+ 创建存储桶时将指定其名称,然后点击创建。
+
+
+ ### 获取存储桶相关环境变量
+
+ 在 R2 存储桶的设置中,可以看到桶配置的信息:
+
+
+
+其对应的环境变量为:
+
+```shell
+# 存储桶的名称
+S3_BUCKET=lobechat
+# 存储桶的请求端点(注意此处链接的路径带存储桶名称,必须删除该路径,或使用申请 S3 API token 页面所提供的链接)
+S3_ENDPOINT=https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com
+# 存储桶对外的访问域名
+S3_PUBLIC_DOMAIN=https://s3-for-lobechat.your-domain.com
+```
+
+`S3_ENDPOINT` 必须删除其路径,否则将无法访问所上传的文件。
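上面的提示可以用一小段脚本进行机械化检查。下面是一个最小示例(假设性的辅助函数,并非 LobeChat 的一部分),用于去除端点 URL 中的路径(例如末尾的存储桶名称):

```python
from urllib.parse import urlsplit, urlunsplit

def strip_bucket_path(endpoint: str) -> str:
    """去掉 S3 端点 URL 中的路径(例如末尾的存储桶名称)。"""
    parts = urlsplit(endpoint)
    # 仅保留 scheme + host,丢弃路径、查询串和片段
    return urlunsplit((parts.scheme, parts.netloc, "", "", ""))

print(strip_bucket_path(
    "https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com/lobechat"
))
# → https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com
```

若函数的返回值与你填入的 `S3_ENDPOINT` 不一致,说明端点中仍带有路径。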
+
+ ### 获取 S3 密钥环境变量
+
+ 你需要获取 S3 的访问密钥,以便 LobeChat 的服务端有权限访问 S3 存储服务。在 R2 中,你可以在账户详情中配置访问密钥:
+
+
+
+ 点击右上角按钮创建 API token,进入创建 API Token 页面
+
+
+
+ 鉴于我们的服务端数据库需要读写 S3 存储服务,因此权限需要选择`对象读与写`,然后点击创建。
+
+
+
+ 创建完成后,就可以看到对应的 S3 API token
+
+
+
+ 其对应的环境变量为:
+
+```shell
+S3_ACCESS_KEY_ID=9998d6757e276cf9f1edbd325b7083a6
+S3_SECRET_ACCESS_KEY=55af75d8eb6b99f189f6a35f855336ea62cd9c4751a5cf4337c53c1d3f497ac2
+```
+
+### 在 Vercel 中添加对应的环境变量
+
+ 不同 S3 服务商获取所需环境变量的步骤可能有所不同,但最终获得到的环境变量应该都是一致的:
+
+URL 的 `https://` 不可缺失,须保持 URL 的完整性。
+
+```shell
+# S3 秘钥
+S3_ACCESS_KEY_ID=9998d6757e276cf9f1edbd325b7083a6
+S3_SECRET_ACCESS_KEY=55af75d8eb6b99f189f6a35f855336ea62cd9c4751a5cf4337c53c1d3f497ac2
+
+# 存储桶的名称
+S3_BUCKET=lobechat
+# 存储桶的请求端点
+S3_ENDPOINT=https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com
+# 存储桶对外的访问域名
+S3_PUBLIC_DOMAIN=https://s3-dev.your-domain.com
+
+# 桶的区域,如 us-west-1,一般来说不需要添加,但某些服务商则需要配置
+# S3_REGION=us-west-1
+```
+
+然后将上述环境变量填入 Vercel 的环境变量中:
+
+
+
+ ### 配置跨域
+
+ 由于 S3 存储服务往往是一个独立的网址,因此需要配置跨域访问。
+
+ 在 R2 中,你可以在存储桶的设置中找到跨域配置:
+
+
+
+ 添加跨域规则,允许你的域名(在上文是 `https://your-project.vercel.app`)来源的请求:
+
+
+
+示例配置如下:
+
+```json
+[
+ {
+ "AllowedOrigins": ["https://your-project.vercel.app"],
+ "AllowedMethods": ["GET", "PUT", "HEAD", "POST", "DELETE"],
+ "AllowedHeaders": ["*"]
+ }
+]
+```
+
+配置后点击保存即可。
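若想在本地快速验证上述规则是否允许某个来源与方法的请求,可以用一小段脚本近似浏览器的预检逻辑(简化示例,忽略 `AllowedHeaders` 的匹配细节与通配符来源):

```python
def cors_allows(rules: list, origin: str, method: str) -> bool:
    """对单个 来源/方法 组合做 S3 风格 CORS 检查的粗略近似。"""
    return any(
        (origin in rule["AllowedOrigins"] or "*" in rule["AllowedOrigins"])
        and method in rule["AllowedMethods"]
        for rule in rules
    )

rules = [
    {
        "AllowedOrigins": ["https://your-project.vercel.app"],
        "AllowedMethods": ["GET", "PUT", "HEAD", "POST", "DELETE"],
        "AllowedHeaders": ["*"],
    }
]
print(cors_allows(rules, "https://your-project.vercel.app", "PUT"))  # → True
print(cors_allows(rules, "https://evil.example.com", "PUT"))         # → False
```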
+
+
+
+## 四、部署并验证
+
+通过上述步骤之后,我们应该就完成了服务端数据库的配置。接下来我们可以将 LobeChat 部署到 Vercel 上,然后访问你的 Vercel 链接,验证服务端数据库是否正常工作。
+
+
+ ### 重新部署最新的 commit
+
+配置好环境变量后,你需要重新部署最新的 commit,并等待部署完成。
+
+
+
+### 检查功能是否正常
+
+如果你点击左上角登录,可以正常显示登录弹窗,那么说明你已经配置成功了,尽情享用吧~
+
+
+
+
+
+
+
+## 附录
+
+### 服务端数据库环境变量一览
+
+为方便一键复制,在此汇总配置服务端数据库所需要的环境变量:
+
+```shell
+APP_URL=https://your-project.com
+
+# 指定服务模式为 server
+NEXT_PUBLIC_SERVICE_MODE=server
+
+# Postgres 数据库 URL
+DATABASE_URL=
+KEY_VAULTS_SECRET=jgwsK28dspyVQoIf8/M3IIHl1h6LYYceSYNXeLpy6uk=
+
+# Clerk 相关配置
+NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_live_xxxxxxxxxxx
+CLERK_SECRET_KEY=sk_live_xxxxxxxxxxxxxxxxxxxxxx
+CLERK_WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxxx
+
+# S3 相关配置
+# S3 秘钥
+S3_ACCESS_KEY_ID=9998d6757e276cf9f1edbd325b7083a6
+S3_SECRET_ACCESS_KEY=55af75d8eb6b99f189f6a35f855336ea62cd9c4751a5cf4337c53c1d3f497ac2
+
+# 存储桶的名称
+S3_BUCKET=lobechat
+# 存储桶的请求端点
+S3_ENDPOINT=https://0b33a03b5c993fd2f453379dc36558e5.r2.cloudflarestorage.com
+# 存储桶对外的访问域名
+S3_PUBLIC_DOMAIN=https://s3-for-lobechat.your-domain.com
+# 桶的区域,如 us-west-1,一般来说不需要添加,但某些服务商则需要配置
+# S3_REGION=us-west-1
+```
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/zeabur.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/zeabur.mdx
new file mode 100644
index 0000000..d7b99d3
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/zeabur.mdx
@@ -0,0 +1,82 @@
+---
+title: Deploying LobeChat Database on Zeabur
+description: >-
+ Learn how to deploy LobeChat on Zeabur with ease. Follow the provided steps to
+ set up your chat application seamlessly.
+tags:
+ - Deploy LobeChat
+ - Zeabur Deployment
+ - OpenAI API Key
+ - Custom Domain Binding
+---
+
+# Deploying LobeChat Database on Zeabur
+
+
+ This article assumes that you are familiar with the basic principles and processes of deploying
+ the LobeChat server database version, so it only includes content related to core environment
+ variable configuration. If you are not familiar with the deployment principles of the LobeChat
+ server database version, please refer to [Deploying Server
+ Database](/docs/self-hosting/server-database) first.
+
+
+The template on Zeabur includes 4 services:
+- Logto for authorization.
+- PostgreSQL with Vector plugin for data storage and indexing.
+- MinIO for image storage.
+- LobeChat database version.
+
+## Deploying on Zeabur
+
+Here is the process for deploying the LobeChat server database version on Zeabur:
+
+
+
+### Go to the template page on Zeabur
+
+Go to the [LobeChat Database template page](https://zeabur.com/templates/RRSPSD) on Zeabur and click on the "Deploy" button.
+
+### Fill in the required environment variables
+
+After you click on the "Deploy" button, you will see a modal pop-up where you can fill in the required environment variables.
+
+Here are the environment variables you need to fill in:
+
+- OpenAI API key: Your OpenAI API key to get responses from OpenAI.
+
+- LobeChat Domain: A free subdomain with `.zeabur.app` suffix.
+
+- MinIO Public Domain: A free subdomain with `.zeabur.app` suffix for your MinIO web port, enabling public access to the uploaded files.
+
+- Logto Console Domain: A free subdomain with `.zeabur.app` suffix for your Logto console.
+
+- Logto API Domain: A free subdomain with `.zeabur.app` suffix for your Logto API.
+
+
+### Select a region and deploy
+
+After you fill all the required environment variables, select a region where you want to deploy your LobeChat Database and click on the "Deploy" button.
+
+You will see another modal pop-up where you can see the deployment progress.
+
+### Configure Logto
+
+After the deployment is done, you need to configure your Logto service to enable authorization.
+
+Access your Logto console with the console domain you just bound, then create a `Next.js 14(App router)` application to get the client ID and client secret, and fill in the CORS and callback URLs.
+You can check [this document](../advanced/auth.mdx) for a more detailed guide.
+
+Fill those variables into your LobeChat service on Zeabur; here is a more detailed guide for [editing environment variables on Zeabur](https://zeabur.com/docs/deploy/variables).
+
+```
+LOGTO_CLIENT_ID=your_logto_client_id
+LOGTO_CLIENT_SECRET=your_logto_client_secret
+```
+
+### Access your LobeChat Instance
+
+Open the `LobeChat-Database` service to see the public domain you just created; click it to access your LobeChat Database.
+
+You can also bind a custom domain for your services if you want, here is a guide on how to [bind a custom domain on Zeabur](https://zeabur.com/docs/deploy/domain-binding).
+
+
diff --git a/DigitalHumanWeb/docs/self-hosting/server-database/zeabur.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/server-database/zeabur.zh-CN.mdx
new file mode 100644
index 0000000..9057e7a
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/server-database/zeabur.zh-CN.mdx
@@ -0,0 +1,70 @@
+---
+title: 在 Zeabur 上部署 LobeChat
+description: 按照指南准备 OpenAI API Key 并点击按钮进行部署。在部署完成后,即可开始使用 LobeChat 并选择是否绑定自定义域名。
+tags:
+ - Zeabur
+ - LobeChat
+ - OpenAI API Key
+ - 部署流程
+ - 自定义域名
+---
+
+# 使用 Zeabur 部署 LobeChat 数据库版
+
+
+ 本文假设你已经熟悉 LobeChat
+ 服务器数据库版的部署基本原理和流程,因此只包含与核心环境变量配置相关的内容。如果你对 LobeChat
+ 服务器数据库版的部署原理不熟悉,请先参考[部署服务器数据库](/zh/docs/self-hosting/server-database)。
+
+
+在 Zeabur 的模板中总共包含有以下四个服务:
+- Logto 提供身份校验
+- 带有 Vector 插件的 PostgreSQL 来做数据存储和向量化
+- MinIO 作为对象存储
+- LobeChat Database 的实例
+
+## 在 Zeabur 上部署
+
+这里是在 Zeabur 上部署 LobeChat 服务器数据库版的流程:
+
+
+
+### 前往 Zeabur 上的模板页面
+
+前往 [Zeabur 上的 LobeChat 数据库模板页面](https://zeabur.com/templates/RRSPSD) 并点击 "Deploy" 按钮。
+
+### 填写必要的环境变量
+
+在你点击“部署”按钮后,你会看到一个模态弹窗,你可以在这里填写必要的环境变量。
+
+以下是你需要填写的环境变量:
+
+- OpenAI API key: 你的 OpenAI API key 用于获取模型的访问权限。
+- LobeChat Domain: 一个免费的 `.zeabur.app` 后缀的域名。
+- MinIO Public Domain: 一个免费的 `.zeabur.app` 后缀的域名,用于暴露 MinIO 服务,使上传的文件可被公开访问。
+- Logto Console Domain: 一个免费的 `.zeabur.app` 后缀的域名来访问 Logto 的控制台。
+- Logto API Domain: 一个免费的 `.zeabur.app` 后缀的域名来访问 Logto 的 API。
+
+### 选择一个区域并部署
+
+在你填写完所有必要的环境变量后,选择一个你想要部署 LobeChat 数据库的区域并点击“部署”按钮。
+
+你会看到另一个模态弹窗,你可以在这里看到部署的进度。
+
+### 配置 Logto
+
+当部署完成后,你会被自动导航到你在 Zeabur 控制台上刚刚创建的项目。
+你需要再进一步配置你的 Logto 服务。
+
+使用你刚绑定的域名来访问你的 Logto 控制台,创建一个 `Next.js 14(App router)` 应用以获得对应的客户端 ID 与密钥,并填写 CORS 与回调 URL,然后将它们填入你的 LobeChat 服务的变量中。
+关于如何填入变量,可以参照 [Zeabur 的官方文档](https://zeabur.com/docs/deploy/variables)。
+
+Logto 的详细配置可以参考[这篇文档](../advanced/auth.zh-CN.mdx)。
+
+### 访问你的 LobeChat
+
+按下 `LobeChat-Database` 你会看到你刚刚创建的公共域名,点击它以访问你的 LobeChat 数据库。
+
+你可以选择绑定一个自定义域名,这里有一个关于如何在 Zeabur 上[绑定自定义域名](https://zeabur.com/docs/deploy/domain-binding)的指南。
+
+
diff --git a/DigitalHumanWeb/docs/self-hosting/start.mdx b/DigitalHumanWeb/docs/self-hosting/start.mdx
new file mode 100644
index 0000000..b56d9c1
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/start.mdx
@@ -0,0 +1,35 @@
+---
+title: Build Your Own LobeChat - Choose Your Deployment Platform
+description: >-
+ Explore multiple deployment platforms like Vercel, Docker, Docker Compose, and
+ more to deploy LobeChat. Choose the platform that best suits your needs.
+tags:
+ - Lobe Chat
+ - Deployment Platform
+ - Vercel
+ - Docker
+ - Docker Compose
+---
+# Build Your Own Lobe Chat
+
+LobeChat supports various deployment platforms, including Vercel, Docker, and Docker Compose. You can choose a deployment platform that suits you to build your own Lobe Chat.
+
+## Quick Deployment
+
+For users who are new to LobeChat, we recommend using the client-side database mode for quick deployment. The advantage of this mode is that deployment can be quickly completed with just one command/button, making it easy for you to quickly get started and experience LobeChat.
+
+You can follow the guide below for quick deployment of LobeChat:
+
+
+
+
+ In the client-side database mode, data is stored locally on the user's device, without cross-device synchronization, and does not support advanced features such as file uploads and knowledge base.
+
+
+## Advanced Mode: Server-Side Database
+
+For users who are already familiar with LobeChat or need cross-device synchronization, you can deploy a version with a server-side database to access a more complete and powerful LobeChat.
+
+
+
+
diff --git a/DigitalHumanWeb/docs/self-hosting/start.zh-CN.mdx b/DigitalHumanWeb/docs/self-hosting/start.zh-CN.mdx
new file mode 100644
index 0000000..26c31a9
--- /dev/null
+++ b/DigitalHumanWeb/docs/self-hosting/start.zh-CN.mdx
@@ -0,0 +1,37 @@
+---
+title: 构建属于自己的 LobeChat - 自选部署平台
+description: >-
+ 选择适合自己的部署平台,构建个性化的 Lobe Chat。支持 Docker、Docker
+ Compose、Netlify、Railway、Repocloud、SealOS、Vercel 和 Zeabur 部署。
+tags:
+ - Lobe Chat
+ - 部署平台
+ - Docker
+ - Netlify
+ - Vercel
+ - 个性化
+---
+
+# 构建属于自己的 Lobe Chat
+
+LobeChat 支持多种部署平台,包括 Vercel、Docker 和 Docker Compose 等,你可以选择适合自己的部署平台进行部署,构建属于自己的 Lobe Chat。
+
+## 快速部署
+
+对于第一次了解 LobeChat 的用户,我们推荐使用客户端数据库的模式快速部署,该模式的优势是一行指令/一个按钮即可快捷完成部署,便于你快速上手与体验 LobeChat。
+
+你可以通过以下指南快速部署 LobeChat:
+
+
+
+
+ 客户端数据库模式下数据均保留在用户本地,不会跨多端同步,也不支持文件上传、知识库等进阶功能。
+
+
+## 进阶模式:服务端数据库
+
+针对已经了解 LobeChat 的用户,或需要多端同步的用户,可以自行部署带有服务端数据库的版本,进而获得更完整、功能更强大的 LobeChat。
+
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/agents/agent-organization.mdx b/DigitalHumanWeb/docs/usage/agents/agent-organization.mdx
new file mode 100644
index 0000000..477dccc
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/agent-organization.mdx
@@ -0,0 +1,69 @@
+---
+title: Efficiently Organize Your AI Assistants with LobeChat
+description: >-
+ Learn how to use LobeChat's grouping, search, and pinning functions to
+ efficiently organize and locate your AI assistants.
+tags:
+ - LobeChat
+ - AI assistants
+ - assistant organization
+ - grouping
+ - search function
+ - pinning function
+---
+
+# Assistant Organization Guide
+
+
+
+LobeChat provides a rich variety of AI assistant resources. Users can easily add various assistants through the assistant market, offering a wide range of application scenarios for AI applications.
+
+When you have added a large number of assistants, finding a specific assistant in the list may become challenging. LobeChat provides `search`, `grouping`, and `pinning` functions to help you better organize assistants and improve efficiency in locating them.
+
+## Assistant Grouping
+
+Firstly, LobeChat's AI assistants support organization through grouping. You can categorize assistants of the same type together and easily search for the required assistants by collapsing and expanding groups.
+
+### Assistant Settings
+
+
+
+- In the menu of an individual assistant, selecting the `Move to Group` option can quickly categorize the assistant into the specified group.
+- If you don't find the group you want, you can choose `Add Group` to quickly create a new group.
+
+### Group Settings
+
+
+
+- In the group menu, you can quickly create a new assistant under that group.
+- Clicking the `Group Management` button allows you to `rename`, `delete`, `sort`, and perform other operations on all groups.
+
+## Assistant Search
+
+
+
+- At the top of the assistant list, you can use the assistant search function to easily locate the assistant you need using keywords.
+
+## Assistant Pinning
+
+
+
+- In the assistant menu, you can use the `Pin` function to pin the assistant to the top.
+- After pinning an assistant, a pinned area will appear at the top of the assistant list, displaying all pinned assistants.
+- For pinned assistants, you can choose `Unpin` to remove them from the pinned area.
diff --git a/DigitalHumanWeb/docs/usage/agents/agent-organization.zh-CN.mdx b/DigitalHumanWeb/docs/usage/agents/agent-organization.zh-CN.mdx
new file mode 100644
index 0000000..5ce6aa5
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/agent-organization.zh-CN.mdx
@@ -0,0 +1,67 @@
+---
+title: LobeChat 助手组织指南 - 提升助手管理效率
+description: 了解如何通过分组、搜索和固定功能更好地组织 LobeChat 的 AI 助手,提升助手管理效率和定位效率。
+tags:
+ - LobeChat
+ - AI 助手
+ - 助手组织
+ - 分组设置
+ - 助手搜索
+ - 助手固定
+---
+
+# 助手组织指南
+
+
+
+LobeChat 提供了丰富的 AI 助手资源,用户可以通过助手市场方便地添加各类助手,为 AI 应用提供了广泛的应用场景。
+
+当你添加了大量助手后,在列表中寻找特定助手可能会变得比较困难。LobeChat 提供了`搜索`、`分组`和`固定`功能,帮助您更好地组织助手,提升定位效率。
+
+## 助手分组
+
+首先 LobeChat 的 AI 助手支持以分组的方式进行组织。你可以将同类型的助手归类到一起,并通过折叠和展开分组的方式方便地查询所需助手。
+
+### 助手设置
+
+
+
+- 在单个助手的菜单中,选择`移动到分组`选项可以快速将该助手归类到指定分组。
+- 如果没有你想要的分组,可以选择`添加分组`,快速创建一个新的分组。
+
+### 分组设置
+
+
+
+- 在分组菜单中,可以快速在该分组下新建助手。
+- 点击`分组管理`按钮可以对所有分组进行`重命名`、`删除`、`排序`等操作。
+
+## 助手搜索
+
+
+
+- 在助手列表的顶部,您可以通过助手搜索功能,方便地使用关键词定位到您所需的助手。
+
+## 助手固定
+
+
+
+- 在助手菜单中,你可以使用`固定`功能将该助手固定在顶部。
+- 固定助手后,助手列表的上方将出现一个固定区域,显示所有已固定的助手列表。
+- 对于已固定的助手,你可以选择`解除固定`,将其移出固定区域。
diff --git a/DigitalHumanWeb/docs/usage/agents/concepts.mdx b/DigitalHumanWeb/docs/usage/agents/concepts.mdx
new file mode 100644
index 0000000..3589579
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/concepts.mdx
@@ -0,0 +1,41 @@
+---
+title: Improving User Interaction Efficiency with Agents in LobeChat
+description: >-
+ Discover how LobeChat's innovative approach with Agents enhances user
+ experience by providing dedicated functional modules for efficient task
+ handling and quick access to historical conversations.
+tags:
+ - LobeChat
+ - Agents
+ - User Interaction Efficiency
+ - Task Handling
+ - Historical Conversations
+---
+
+# Topics and Assistants
+
+## ChatGPT and "Topics"
+
+In the official ChatGPT application, there is only the concept of "topics." As shown in the image, the user's historical conversation topics are listed in the sidebar.
+
+
+
+However, in our usage, we have found that this model has many issues. For example, the information indexing of historical conversations is too scattered. Additionally, when dealing with repetitive tasks, it is difficult to have a stable entry point. For instance, if I want ChatGPT to help me translate a document, in this model, I would need to constantly create new topics and then set up the translation prompt I had previously created. When there are high-frequency tasks, this will result in a very inefficient interaction format.
+
+## Topics and "Agent"
+
+Therefore, in LobeChat, we have introduced the concept of **Agents**. An agent is a complete functional module, each with its own responsibilities and tasks. Assistants can help you handle various tasks and provide professional advice and guidance.
+
+
+
+At the same time, we have integrated topics into each agent. The benefit of this approach is that each agent has an independent topic list. You can choose the corresponding agent based on the current task and quickly switch between historical conversation records. This method is more in line with users' habits in common chat software, improving interaction efficiency.
diff --git a/DigitalHumanWeb/docs/usage/agents/concepts.zh-CN.mdx b/DigitalHumanWeb/docs/usage/agents/concepts.zh-CN.mdx
new file mode 100644
index 0000000..d76dcad
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/concepts.zh-CN.mdx
@@ -0,0 +1,38 @@
+---
+title: 在 LobeChat 中进行话题与助手的革新
+description: 了解 LobeChat 中的话题与助手概念,如何提高用户交互效率并解决历史对话信息索引分散的问题。
+tags:
+ - LobeChat
+ - 话题与助手
+ - 交互效率
+ - 历史对话记录
+ - 信息索引
+---
+
+# 话题与助手
+
+## ChatGPT 与「话题」
+
+在 ChatGPT 官方应用中,只存在话题的概念,如图所示,在侧边栏中是用户的历史对话话题列表。
+
+
+
+但在我们的使用过程中其实会发现这种模式存在很多问题:比如历史对话的信息索引过于分散;同时在处理一些重复任务时,很难有一个稳定的入口。比如我希望有一个稳定的入口,可以让 ChatGPT 帮助我翻译文档;在这种模式下,我需要不断新建话题,并重新设置我之前创建好的翻译 Prompt。当存在高频任务时,这将是一种效率很低的交互形式。
+
+## 「话题」与「助手」
+
+因此在 LobeChat 中,我们引入了 **助手** 的概念。助手是一个完整的功能模块,每个助手都有自己的职责和任务。助手可以帮助你处理各种任务,并提供专业的建议和指导。
+
+
+
+与此同时,我们将话题索引到每个助手内部。这样做的好处是,每个助手都有一个独立的话题列表,你可以根据当前任务选择对应的助手,并快速切换历史对话记录。这种方式更符合用户对常见聊天软件的使用习惯,提高了交互的效率。
diff --git a/DigitalHumanWeb/docs/usage/agents/custom-agent.mdx b/DigitalHumanWeb/docs/usage/agents/custom-agent.mdx
new file mode 100644
index 0000000..c34eac5
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/custom-agent.mdx
@@ -0,0 +1,58 @@
+---
+title: Custom LobeChat Assistant Guide - Adding and Iterating Assistants
+description: >-
+ Learn how to add assistants to your favorites list in LobeChat through the
+ role market or by creating custom assistants. Explore detailed steps for
+ creating custom assistants and quick setup tips.
+tags:
+ - LobeChat
+ - Adding Assistants
+ - Custom Assistant
+ - Role Market
+ - Creating Assistants
+ - Assistant Configuration
+---
+
+# Custom Assistant Guide
+
+As the basic functional unit of LobeChat, adding and iterating assistants is very important. Now you can add assistants to your favorites list in two ways.
+
+## `A` Add through the role market
+
+If you are a beginner in Prompt writing, you might want to browse the assistant market of LobeChat first. Here, you can find commonly used assistants submitted by others and easily add them to your list with just one click, which is very convenient.
+
+
+
+## `B` Create a custom assistant
+
+When you need to handle specific tasks, you need to consider creating a custom assistant to help you solve the problem. You can add and configure the assistant in detail in the following ways.
+
+
+
+
+
+
+
+
+ **Quick Setup Tip**: You can conveniently modify the Prompt through the quick edit button in the
+ sidebar.
+
+
+
+
+
+
+
+
+If you want to understand Prompt writing tips and common model parameter settings, you can continue to view:
+
+
+
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/agents/custom-agent.zh-CN.mdx b/DigitalHumanWeb/docs/usage/agents/custom-agent.zh-CN.mdx
new file mode 100644
index 0000000..2011e07
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/custom-agent.zh-CN.mdx
@@ -0,0 +1,53 @@
+---
+title: LobeChat 自定义助手指南 - 添加和配置助手的最佳方法
+description: 了解如何通过角色市场或新建自定义助手将助手添加到你的常用列表中。快捷设置技巧和常见的模型参数设置也包括在内。
+tags:
+ - 自定义助手
+ - LobeChat
+ - 添加助手
+ - 配置助手
+ - 角色市场
+ - 快捷设置
+ - 模型参数设置
+---
+
+# 自定义助手指南
+
+作为 LobeChat 的基础职能单位,助手的添加和迭代是非常重要的。现在你可以通过两种方式将助手添加到你的常用列表中。
+
+## `A` 通过角色市场添加
+
+如果你是一个 Prompt 编写的新手,不妨先浏览一下 LobeChat 的助手市场。在这里,你可以找到其他人提交的常用助手,并且只需一键添加到你的列表中,非常方便。
+
+
+
+## `B` 通过新建自定义助手
+
+当你需要处理一些特定的任务时,你就需要考虑创建一个自定义助手来帮助你解决问题。可以通过以下方式添加并进行助手的详细配置。
+
+
+
+
+
+
+
+**快捷设置技巧**: 可以通过侧边栏的快捷编辑按钮进行 Prompt 的便捷修改
+
+
+
+
+
+
+
+如果你希望理解 Prompt 编写技巧和常见的模型参数设置,可以继续查看:
+
+
+
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/agents/model.mdx b/DigitalHumanWeb/docs/usage/agents/model.mdx
new file mode 100644
index 0000000..57f3504
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/model.mdx
@@ -0,0 +1,79 @@
+---
+title: LobeChat Model Config Guide
+description: >-
+ Explore the capabilities of ChatGPT models from gpt-3.5-turbo to gpt-4-32k,
+ understanding their speed, context limits, and cost. Learn about model
+ parameters like temperature and top-p for better output.
+tags:
+ - ChatGPT Models
+ - Model Parameters
+ - Neural Networks
+ - Language Understanding
+ - Generation Capabilities
+---
+
+# Model Guide
+
+## ChatGPT
+
+- **gpt-3.5-turbo**: Currently the fastest ChatGPT model at generation; it responds more quickly but may sacrifice some text quality, with a context length of 4k.
+- **gpt-4**: ChatGPT 4.0 has improved language understanding and generation capabilities compared to 3.5. It can better understand context and nuance, and generate more accurate and natural responses. This is thanks to improvements in the GPT-4 model, including better language modeling and deeper semantic understanding, but it may be slower than other models, with a context length of 8k.
+- **gpt-4-32k**: Similar to gpt-4, the context limit is increased to 32k tokens, with a higher cost.
+
+## Concept of Model Parameters
+
+LLM seems magical, but it is essentially a probability problem. The neural network generates a bunch of candidate words from the pre-trained model based on the input text and selects the high-probability ones as output. Most of the related parameters are associated with sampling (i.e., how to select the output from the candidate words).
+
+### `temperature`
+
+This parameter controls the randomness of the model's output. The higher the value, the greater the randomness. Generally, when the same prompt is input multiple times, the model's output varies each time.
+
+- Set to 0: Generates a fixed output for each prompt
+- Lower values: More concentrated and deterministic output
+- Higher values: More random output (more creative)
+
+
+ Generally, the longer and clearer the prompt, the better the quality and confidence of the model's
+ output. In such cases, the temperature value can be adjusted appropriately. Conversely, if the
+ prompt is short and ambiguous, setting a relatively high temperature value will result in unstable
+ model output.
+
+
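The effect of the three bullet points above can be sketched with a toy temperature-scaled softmax. This is illustrative pseudocode for the concept, not LobeChat's or any vendor's actual implementation:

```python
import math
import random

def sample_with_temperature(logits: list, temperature: float, rng=random.random):
    """Pick a token index: temperature 0 is argmax; higher values flatten the distribution."""
    if temperature == 0:
        # fully deterministic: always pick the highest-scoring token
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # subtract max for numerical stability
    total = sum(weights)
    r = rng() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, 0))  # → 0 (always the highest-scoring token)
```

With a small temperature the scaled gap between logits widens and the top token dominates; with a large temperature the weights approach uniform, which is the "more creative" regime.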
+
+
+### `top_p`
+
+Top_p (nucleus sampling) is also a sampling parameter, but its sampling method differs from temperature. Before outputting, the model scores a set of candidate tokens and ranks them by probability. In top-p mode the candidate list is dynamic: tokens are drawn from the smallest set of top-ranked tokens whose cumulative probability reaches the threshold `p`. Top_p introduces randomness into token selection, giving other high-scoring tokens a chance of being chosen rather than always selecting the single highest-scoring one.
+
+
+ Top\_p plays a role similar to temperature's randomness; it is generally not recommended to
+ change both at the same time.
+
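Nucleus (top-p) truncation can be sketched as cutting the ranked distribution at cumulative probability `p` (an illustrative sketch of the selection step only; real samplers then renormalize and draw from this set):

```python
def top_p_candidates(probs: dict, p: float) -> list:
    """Return the smallest set of highest-probability tokens whose cumulative mass reaches p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    chosen, cumulative = [], 0.0
    for token, prob in ranked:
        chosen.append(token)
        cumulative += prob
        if cumulative >= p:
            break
    return chosen

probs = {"cat": 0.5, "dog": 0.3, "bird": 0.15, "fish": 0.05}
print(top_p_candidates(probs, 0.8))  # → ['cat', 'dog']
```

Lower `p` shrinks the candidate set toward the single best token; `p = 1.0` keeps every token, which is why the dynamic list still lets lower-ranked tokens through.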
+
+
+
+### `presence_penalty`
+
+The presence penalty parameter can be seen as a punishment for repetitive content in the generated text. When this parameter is set high, the generation model will try to avoid producing repeated words, phrases, or sentences. Conversely, if the presence penalty parameter is set low, the generated text may contain more repetitive content. By adjusting the value of the presence penalty parameter, control over the originality and diversity of the generated text can be achieved. The importance of this parameter is mainly reflected in the following aspects:
+
+- Enhancing the originality and diversity of the generated text: In certain applications, such as creative writing or generating news headlines, it is necessary for the generated text to have high originality and diversity. By increasing the value of the presence penalty parameter, the amount of repeated content in the generated text can be effectively reduced, thereby enhancing its originality and diversity.
+- Preventing the generation of loops and meaningless content: In some cases, the generation model may produce repetitive or meaningless text that usually fails to convey useful information. By appropriately increasing the value of the presence penalty parameter, the probability of generating such meaningless content can be reduced, thereby improving the readability and practicality of the generated text.
+
+
+ It is worth noting that the presence penalty parameter, in conjunction with other parameters such
+ as temperature and top-p, collectively influences the quality of the generated text. Compared to
+ other parameters, the presence penalty parameter primarily focuses on the originality and
+ repetitiveness of the text, while the temperature and top-p parameters more significantly affect
+ the randomness and determinism of the generated text. By adjusting these parameters reasonably,
+ comprehensive control over the quality of the generated text can be achieved.
+
+
+### `frequency_penalty`
+
+A mechanism that penalizes tokens according to how often they have already appeared in the generated text, reducing the likelihood of the model repeating the same word. The larger the value, the more strongly repeated words are suppressed.
+
+- `-2.0` When the morning news started broadcasting, I found that my TV now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now **(The highest frequency word is "now", accounting for 44.79%)**
+- `-1.0` He always watches the news in the early morning, in front of the TV watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch **(The highest frequency word is "watch", accounting for 57.69%)**
+- `0.0` When the morning sun poured into the small diner, a tired postman appeared at the door, carrying a bag of letters in his hands. The owner warmly prepared a breakfast for him, and he started sorting the mail while enjoying his breakfast. **(The highest frequency word is "of", accounting for 8.45%)**
+- `1.0` A girl in deep sleep was woken up by a warm ray of sunshine, she saw the first ray of morning light, surrounded by birdsong and flowers, everything was full of vitality. **(The highest frequency word is "of", accounting for 5.45%)**
+- `2.0` Every morning, he would sit on the balcony to have breakfast. Under the soft setting sun, everything looked very peaceful. However, one day, when he was about to pick up his breakfast, an optimistic little bird flew by, bringing him a good mood for the day. **(The highest frequency word is "of", accounting for 4.94%)**
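
The penalty mechanics above can be sketched in a few lines of Python. This is a minimal illustration of the commonly documented formula (score minus count × frequency penalty, minus a flat presence penalty once a token has appeared); the exact formula varies by implementation, and the function name here is our own:

```python
def apply_penalties(logits, counts, presence_penalty=0.0, frequency_penalty=0.0):
    """Adjust each candidate token's logit by how often it already appeared."""
    adjusted = []
    for token_id, logit in enumerate(logits):
        c = counts.get(token_id, 0)
        # frequency_penalty scales with the repeat count; presence_penalty
        # is a flat penalty applied once the token has appeared at all.
        adjusted.append(logit - c * frequency_penalty - (1.0 if c > 0 else 0.0) * presence_penalty)
    return adjusted

logits = [1.0, 1.0, 1.0]
counts = {0: 5, 1: 1}  # token 0 already appeared 5 times, token 1 once
print(apply_penalties(logits, counts, frequency_penalty=1.0))   # repeats suppressed
print(apply_penalties(logits, counts, frequency_penalty=-1.0))  # repeats boosted
```

A negative value raises the score of already-used tokens, which is exactly why the `-2.0` and `-1.0` samples above collapse into a single repeated word.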
diff --git a/DigitalHumanWeb/docs/usage/agents/model.zh-CN.mdx b/DigitalHumanWeb/docs/usage/agents/model.zh-CN.mdx
new file mode 100644
index 0000000..95264e4
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/model.zh-CN.mdx
@@ -0,0 +1,74 @@
+---
+title: ChatGPT 模型指南:参数概念与应用
+description: >-
+ 了解 ChatGPT 模型的不同版本及参数概念,包括 temperature、top_p、presence_penalty 和
+ frequency_penalty。
+tags:
+ - ChatGPT
+ - 模型指南
+ - 参数概念
+ - LLM
+ - 生成模型
+---
+
+# 模型指南
+
+## ChatGPT
+
+- **gpt-3.5-turbo**:目前生成速度最快的 ChatGPT 模型,但可能会牺牲一些生成文本的质量,上下文长度为 4k。
+- **gpt-4**:ChatGPT 4.0 在语言理解和生成能力方面相对于 3.5 有所提升。它可以更好地理解上下文和语境,并生成更准确、自然的回答。这得益于 GPT-4 模型的改进,包括更好的语言建模和更深入的语义理解,但它的速度可能比其他模型慢,上下文长度为 8k。
+- **gpt-4-32k**:同 gpt-4,上下文限制增加到 32k token,同时费率更高。
+
+## 模型参数概念
+
+LLM 看似很神奇,但本质还是一个概率问题,神经网络根据输入的文本,从预训练的模型里面生成一堆候选词,选择概率高的作为输出,相关的参数,大多都是跟采样有关(也就是要如何从候选词里选择输出)。
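
上面的采样过程可以用一段极简的 Python 草图来示意(仅为概念演示,logits 数值为假设):

```python
import math

def softmax_with_temperature(logits, temperature):
    """将候选词得分转为概率分布;temperature 越低,分布越向最高分集中。"""
    if temperature <= 0:
        # temperature 为 0 时退化为贪心:固定输出得分最高的候选词
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # 减去最大值,避免指数溢出
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # 假设的三个候选词得分
print(softmax_with_temperature(logits, 0.2))  # 低温:最高分几乎独占概率
print(softmax_with_temperature(logits, 2.0))  # 高温:分布更平均,输出更随机
```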
+
+### `temperature`
+
+用于控制模型输出结果的随机性,这个值越大随机性越大。通常多次输入相同的 prompt 之后,模型的每次输出都会有所不同。
+
+- 设置为 0,对每个 prompt 都生成固定的输出
+- 较低的值,输出更集中,更有确定性
+- 较高的值,输出更随机(更有创意)
+
+
+ 一般来说,prompt 越长,描述得越清楚,模型生成的输出质量就越好,置信度越高,这时可以适当调高
+ temperature 的值;反过来,如果 prompt 很短,很含糊,这时再设置一个比较高的 temperature
+ 值,模型的输出就很不稳定了。
+
+
+
+
+### `top_p`
+
+核采样(top_p)也是一个采样参数,但采样方式与 temperature 不同。模型在输出之前会生成一批候选 token,这些 token 按概率高低排序,核采样从中按累计概率(百分比)动态截取候选词列表。top_p 为选择 token 引入了随机性,让其他高分的 token 也有被选中的机会,而不是总选最高分的那一个。
+
+top_p 的作用与 temperature(随机性)类似,一般来说不建议两者一起更改。
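
核采样的候选词截取逻辑可以用如下草图示意(仅为概念演示):

```python
def top_p_filter(probs, top_p):
    """按概率从高到低累加,保留累计概率刚好达到 top_p 的最小候选词集合。"""
    ranked = sorted(enumerate(probs), key=lambda x: x[1], reverse=True)
    kept, cum = [], 0.0
    for idx, p in ranked:
        kept.append(idx)
        cum += p
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    # 重新归一化,之后再从这个动态候选集中随机采样
    return {i: probs[i] / total for i in kept}

print(top_p_filter([0.5, 0.3, 0.15, 0.05], top_p=0.9))
# 累计 0.5 + 0.3 = 0.8 < 0.9,再加 0.15 后达到 0.95,因此保留前三个候选词
```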
+
+
+
+### `presence_penalty`
+
+Presence Penalty 参数可以看作是对生成文本中重复内容的一种惩罚。当该参数设置较高时,生成模型会尽量避免产生重复的词语、短语或句子。相反,如果 Presence Penalty 参数较低,则生成的文本可能会包含更多重复的内容。通过调整 Presence Penalty 参数的值,可以实现对生成文本的原创性和多样性的控制。参数的重要性主要体现在以下几个方面:
+
+- 提高生成文本的独创性和多样性:在某些应用场景下,如创意写作、生成新闻标题等,需要生成的文本具有较高的独创性和多样性。通过增加 Presence Penalty 参数的值,可以有效减少生成文本中的重复内容,从而提高文本的独创性和多样性。
+- 防止生成循环和无意义的内容:在某些情况下,生成模型可能会产生循环、重复的文本,这些文本通常无法传达有效的信息。通过适当增加 Presence Penalty 参数的值,可以降低生成这类无意义内容的概率,提高生成文本的可读性和实用性。
+
+
+ 值得注意的是,Presence Penalty 参数与其他参数(如 Temperature 和
+ top-p)共同影响着生成文本的质量。对比其他参数,Presence Penalty
+ 参数主要关注文本的独创性和重复性,而 Temperature 和 top-p
+ 参数则更多地影响着生成文本的随机性和确定性。通过合理地调整这些参数,可以实现对生成文本质量的综合控制。
+
+
+
+
+### `frequency_penalty`
+
+是一种机制,通过按词汇在文本中已出现的频率施加惩罚,来降低模型重复同一词语的可能性。值越大,越能减少重复字词;负值反而会鼓励重复。
+
+- `-2.0` 当早间新闻开始播出,我发现我家电视现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在 _(频率最高的词是 “现在”,占比 44.79%)_
+- `-1.0` 他总是在清晨看新闻,在电视前看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看 _(频率最高的词是 “看”,占比 57.69%)_
+- `0.0` 当清晨的阳光洒进小餐馆时,一名疲倦的邮递员出现在门口,他的手中提着一袋信件。店主热情地为他准备了一份早餐,他在享用早餐的同时开始整理邮件。**(频率最高的词是 “的”,占比 8.45%)**
+- `1.0` 一个深度睡眠的女孩被一阵温暖的阳光唤醒,她看到了早晨的第一缕阳光,周围是鸟语花香,一切都充满了生机。_(频率最高的词是 “的”,占比 5.45%)_
+- `2.0` 每天早上,他都会在阳台上坐着吃早餐。在柔和的夕阳照耀下,一切看起来都非常宁静。然而有一天,当他准备端起早餐的时候,一只乐观的小鸟飞过,给他带来了一天的好心情。 _(频率最高的词是 “的”,占比 4.94%)_
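
上述惩罚机制通常作用在候选词的 logits 上,可用如下草图示意(具体公式因实现而异,函数名为本文虚构):

```python
def apply_penalties(logits, counts, presence_penalty=0.0, frequency_penalty=0.0):
    """按候选词已出现的次数调整其得分。"""
    adjusted = []
    for token_id, logit in enumerate(logits):
        c = counts.get(token_id, 0)
        # frequency_penalty 随重复次数线性累加;presence_penalty 只要出现过就扣一次
        adjusted.append(logit - c * frequency_penalty - (1.0 if c > 0 else 0.0) * presence_penalty)
    return adjusted

print(apply_penalties([1.0, 1.0, 1.0], {0: 5, 1: 1}, frequency_penalty=1.0))
# 负值会反过来提高重复词的得分,这正是上文 -2.0 示例不断重复同一个词的原因
```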
diff --git a/DigitalHumanWeb/docs/usage/agents/prompt.mdx b/DigitalHumanWeb/docs/usage/agents/prompt.mdx
new file mode 100644
index 0000000..6d38cf6
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/prompt.mdx
@@ -0,0 +1,110 @@
+---
+title: >-
+ Guide to Using Prompts in LobeChat - How to Write Effective Instructions for
+ Generative AI
+description: >-
+ Learn the basic concepts of prompts and how to write well-structured and
+ effective instructions for generative AI. Improve the quality and
+ effectiveness of prompts to guide AI models accurately.
+tags:
+ - Generative AI
+ - Prompts
+ - Writing Instructions
+ - Structured Prompts
+ - Improving AI Output
+---
+
+# Guide to Using Prompts
+
+## Basic Concepts of Prompts
+
+Generative AI is very useful, but it requires human guidance. In most cases, generative AI can be as capable as a new intern at a company, but it needs clear instructions to perform well. The ability to guide generative AI correctly is a very powerful skill. You can guide generative AI by sending a prompt, which is usually a text instruction. A prompt is the input provided to the assistant, and it will affect the output. A good prompt should be structured, clear, concise, and directive.
+
+## How to Write a Well-Structured Prompt
+
+
+ A structured prompt refers to the construction of the prompt having a clear logic and structure.
+ For example, if you want the model to generate an article, your prompt may need to include the
+ article's topic, outline, and style.
+
+
+Let's look at a basic discussion prompt example:
+
+> _"What are the most urgent environmental issues facing our planet, and what actions can individuals take to help address these issues?"_
+
+We can turn it into a simple assistant prompt by placing "Answer the following questions:" at the front.
+
+```prompt
+Answer the following questions:
+What are the most urgent environmental issues facing our planet, and what actions can individuals take to help address these issues?
+```
+
+The results generated by this prompt are not consistent: some are only one or two sentences. A typical discussion response should have multiple paragraphs, so these results are not ideal. A good prompt should provide **specific formatting and content instructions**. You need to eliminate ambiguity in the language to improve consistency and quality. Here is a better prompt.
+
+```prompt
+Write a highly detailed paper, including an introduction, body, and conclusion, to answer the following questions:
+What are the most urgent environmental issues facing our planet,
+and what actions can individuals take to help address these issues?
+```
+
+The second prompt generates longer output and better structure. The use of the term "paper" in the prompt is intentional, as the assistant can understand the definition of a paper, making it more likely to generate coherent, structured responses.
+
+## How to Improve Quality and Effectiveness
+
+
+ There are several ways to improve the quality and effectiveness of prompts:
+
+- **Be Clear About Your Needs:** The model's output will strive to meet your needs, so if your needs are not clear, the output may not meet expectations.
+- **Use Correct Grammar and Spelling:** The model will try to mimic your language style, so if your language style is problematic, the output may also be problematic.
+- **Provide Sufficient Contextual Information:** The model will generate output based on the contextual information you provide, so if the information is insufficient, it may not produce the desired results.
+
+
+
+After formulating effective prompts for discussing issues, you now need to refine the generated results. This may involve adjusting the output to fit constraints such as word count or combining concepts from different generated results.
+
+A simple method of iteration is to generate multiple outputs and review them to understand the concepts and structures being used. Once the outputs have been evaluated, you can select the most suitable ones and combine them into a coherent response. Another iterative method is to start small and **gradually expand**. This requires more than one prompt: an initial prompt to draft the first one or two paragraphs, followed by additional prompts to expand on the content already written. Here is a potential philosophical discussion prompt:
+
+> _"Is mathematics an invention or a discovery? Use careful reasoning to explain your answer."_
+
+Add it to a simple prompt as follows:
+
+```prompt
+Write an opening paragraph to answer the following question:
+Is mathematics an invention or a discovery? Use careful reasoning to explain your answer.
+```
+
+After generating some responses, we found one that we like:
+
+```prompt
+Philosophers and mathematicians have long debated whether mathematics is an invention or a discovery.
+Some argue that the concepts and principles of mathematics exist independently of human thought, and therefore can be considered discovered rather than invented.
+For example, the concepts of numbers, ratios, and geometric shapes are observed in the natural world and can be abstracted and systematically studied.
+```
+
+It's a good start. Then, use another prompt to expand:
+
+```prompt
+I am writing a detailed essay to answer the following prompt:
+Is mathematics an invention or a discovery? Use careful reasoning to explain your answer. Here is my current situation: Philosophers and mathematicians have long debated whether mathematics is an invention or a discovery.
+Some argue that the concepts and principles of mathematics exist independently of human thought, and therefore can be considered discovered rather than invented.
+For example, the concepts of numbers, ratios, and geometric shapes are observed in the natural world and can be abstracted and systematically studied. Write the next paragraph of my essay.
+```
+
+I used this prompt to generate several extensions and selected one that I liked, resulting in the following:
+
+```prompt
+Philosophers and mathematicians have long debated whether mathematics is invented or discovered.
+Some argue that the concepts and principles of mathematics exist independently of human thought, and therefore can be considered to be discovered rather than invented.
+For example, the concepts of numbers, ratios, and geometric shapes are observed in the natural world and can be abstracted and systematically studied.
+On the other hand, some believe that mathematics is largely an invention of human thought.
+Mathematical principles are based on logical reasoning, which is a characteristic of human thought.
+For instance, Euclidean geometry is based on abstract axioms and premises, accepted as true without the need for proof.
+Therefore, geometry can be considered an invention of human thought rather than a discovery.
+Similarly, mathematical formulas and equations are used to model and predict physical phenomena, which are the result of human reasoning.
+```
+
+Using the prompt extensions, we can iteratively write and iterate at each step. This is very useful for situations that require **generating higher quality output and incremental modifications**.
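
The expand step can be scripted so that each round rebuilds the prompt from the draft so far. A minimal sketch (the function name and wording are our own, not part of any LobeChat API):

```python
def expansion_prompt(question, draft):
    """Rebuild the start-small-then-expand prompt from the essay so far."""
    return (
        "I am writing a detailed essay to answer the following prompt:\n"
        f"{question}\n"
        f"Here is my current draft: {draft}\n"
        "Write the next paragraph of my essay."
    )

print(expansion_prompt(
    "Is mathematics an invention or a discovery?",
    "Philosophers and mathematicians have long debated...",
))
```

Feeding each generated paragraph back through this helper gives the iterative loop described above.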
+
+## Further Reading
+
+- **Learn Prompting**: [https://learnprompting.org/en-US/docs/intro](https://learnprompting.org/en-US/docs/intro)
diff --git a/DigitalHumanWeb/docs/usage/agents/prompt.zh-CN.mdx b/DigitalHumanWeb/docs/usage/agents/prompt.zh-CN.mdx
new file mode 100644
index 0000000..3157892
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/prompt.zh-CN.mdx
@@ -0,0 +1,106 @@
+---
+title: 如何通过 LobeChat 写好结构化 Prompt - 提高生成式 AI 输出质量的关键
+description: 学习如何撰写结构化 Prompt 可以提高生成式 AI 输出的质量和效果。本文介绍了撰写有效 Prompt 的方法和技巧,以及如何逐步扩展和优化生成的结果。
+tags:
+ - 结构化 Prompt
+ - 生成式AI
+ - 提高输出质量
+ - 撰写技巧
+ - 逐步扩展
+---
+
+# Prompt 使用指南
+
+## Prompt 基本概念
+
+生成式 AI 非常有用,但它需要人类指导。通常情况下,生成式 AI 就像公司新来的实习生一样,非常有能力,但需要清晰的指示才能做得好。能够正确地指导生成式 AI 是一项非常强大的技能。你可以通过发送一个 prompt 来指导生成式 AI,这通常是一个文本指令。Prompt 是向助手提供的输入,它会影响输出结果。一个好的 Prompt 应该是结构化的、清晰的、简洁的,并且具有指向性。
+
+## 如何写好一个结构化 prompt
+
+
+ 结构化 prompt 是指 prompt 的构造应该有明确的逻辑和结构。例如,如果你想让模型生成一篇文章,你的
+ prompt 可能需要包括文章的主题,文章的大纲,文章的风格等信息。
+
+
+让我们看一个基本的讨论问题的例子:
+
+> _"我们星球面临的最紧迫的环境问题是什么,个人可以采取哪些措施来帮助解决这些问题?"_
+
+我们可以将其转化为一个简单的助手提示,把“回答以下问题:”放在前面。
+
+```prompt
+回答以下问题:
+我们星球面临的最紧迫的环境问题是什么,个人可以采取哪些措施来帮助解决这些问题?
+```
+
+这个提示生成的结果并不一致,有些只有一两句话。一个典型的讨论回答应该有多个段落,因此这些结果并不理想。一个好的提示应该给出**具体的格式和内容指令**。您需要消除语言中的歧义以提高一致性和质量。这是一个更好的提示。
+
+```prompt
+写一篇高度详细的论文,包括引言、正文和结论段,回答以下问题:
+我们星球面临的最紧迫的环境问题是什么,
+个人可以采取哪些措施来帮助解决这些问题?
+```
+
+第二个提示生成了更长的输出和更好的结构。提示中使用 “论文” 一词是有意的,因为助手可以理解论文的定义,因此更有可能生成连贯的、结构化的回答。
+
+
+
+## 如何提升其质量和效果
+
+
+ 提升 prompt 质量和效果的方法主要有以下几点:
+
+- **尽量明确你的需求:** 模型的输出会尽可能满足你的需求,所以如果你的需求不明确,输出可能会不如预期。
+- **使用正确的语法和拼写:** 模型会尽可能模仿你的语言风格,所以如果你的语言风格有问题,输出可能也会有问题。
+- **提供足够的上下文信息:** 模型会根据你提供的上下文信息生成输出,所以如果你提供的上下文信息不足,可能无法生成你想要的结果。
+
+
+
+在为讨论问题制定有效的提示后,您现在需要细化生成的结果。这可能涉及到调整输出以符合诸如字数等限制,或将不同生成的结果的概念组合在一起。
+
+迭代的一个简单方法是生成多个输出并查看它们,以了解正在使用的概念和结构。一旦评估了输出,您就可以选择最合适的输出并将它们组合成一个连贯的回答。另一种迭代的方法是从小处着手,然后**逐步扩展**。这需要不止一个提示:一个起始提示,用于撰写最初的一两段,然后是其他提示,以扩展已经写过的内容。以下是一个潜在的哲学讨论问题:
+
+> _"数学是发明还是发现?用仔细的推理来解释你的答案。"_
+
+将其添加到一个简单的提示中,如下所示:
+
+```prompt
+写一个起始段回答以下问题:
+数学是发明还是发现?用仔细的推理来解释你的答案。
+```
+
+生成一些回答后,我们找到了一个喜欢的:
+
+```prompt
+哲学家和数学家之间争论不休,数学是发明还是发现。
+一些人认为,数学的概念和原则独立于人类思维存在,因此可以被认为是被发现而不是被发明的。
+例如,数字、比率和几何形状的概念在自然界中被观察到,可以被抽象化和系统化地研究。
+```
+
+这是一个不错的开端。然后使用另一个提示来扩展:
+
+```prompt
+我正在撰写一篇详细的短文,回答以下提示:
+数学是发明还是发现?用仔细的推理来解释你的答案。这是我目前的情况: 哲学家和数学家之间争论不休,数学是发明还是发现。
+一些人认为,数学的概念和原则独立于人类思维存在,因此可以被认为是被发现而不是被发明的。
+例如,数字、比率和几何形状的概念在自然界中被观察到,可以被抽象化和系统化地研究。写下我文章的下一段。
+```
+
+使用此提示生成了几个扩展,并选择了一个我喜欢的,最终得到了以下结果:
+
+```prompt
+哲学家和数学家之间争论不休,数学是发明还是发现。
+一些人认为,数学的概念和原则独立于人类思维存在,因此可以被认为是被发现而不是被发明的。
+例如,数字、比率和几何形状的概念在自然界中被观察到,可以被抽象化和系统化地研究。
+另一方面,有人认为数学在很大程度上是人类思维的发明。
+数学原理基于逻辑推理,这是人类思维的一个特征。
+例如,欧几里德几何基于抽象的公理和前提,被接受为真实而无需证明。
+因此,可以认为几何是人类思维的发明,而不是一种发现。
+同样,数学公式和方程用于模拟和预测物理现象,这又是人类推理的结果。
+```
+
+使用扩展提示,我们可以逐步地写作并在每个步骤上进行迭代。这对于需要**生成更高质量的输出并希望逐步修改**的情况非常有用。
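
上面的逐步扩展流程可以用一个小助手函数串起来,每一轮都根据当前草稿重新构造提示(仅为示意,函数名为本文虚构):

```python
def expansion_prompt(question, draft):
    """根据已有草稿,构造“逐步扩展”下一段的提示。"""
    return (
        "我正在撰写一篇详细的短文,回答以下提示:\n"
        f"{question}\n"
        f"这是我目前的情况:{draft}\n"
        "写下我文章的下一段。"
    )

print(expansion_prompt("数学是发明还是发现?", "哲学家和数学家之间争论不休……"))
```

把每轮生成的新段落拼回草稿、再次调用该函数,即可实现上文描述的迭代写作循环。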
+
+## 扩展阅读
+
+- **Learn Prompting**: [https://learnprompting.org/zh-Hans/docs/intro](https://learnprompting.org/zh-Hans/docs/intro)
diff --git a/DigitalHumanWeb/docs/usage/agents/topics.mdx b/DigitalHumanWeb/docs/usage/agents/topics.mdx
new file mode 100644
index 0000000..1c8f296
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/topics.mdx
@@ -0,0 +1,27 @@
+---
+title: LobeChat Topic Usage Guide
+description: >-
+ Learn how to save and manage topics during conversations in LobeChat,
+ including saving topics, accessing the topic list, and pinning favorite
+ topics.
+tags:
+ - Topic Usage
+ - Conversation Management
+ - Save Topic
+ - Topic List
+ - Favorite Topics
+---
+
+# Topic Usage Guide
+
+
+
+- **Save Topic:** During a conversation, if you want to save the current context and start a new topic, you can click the save button next to the send button.
+- **Topic List:** Clicking on a topic in the list allows for quick switching of historical conversation records and continuing the conversation. You can also use the star icon ⭐️ to pin favorite topics to the top, or use the more button on the right to rename or delete topics.
diff --git a/DigitalHumanWeb/docs/usage/agents/topics.zh-CN.mdx b/DigitalHumanWeb/docs/usage/agents/topics.zh-CN.mdx
new file mode 100644
index 0000000..2a90338
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/agents/topics.zh-CN.mdx
@@ -0,0 +1,25 @@
+---
+title: LobeChat 话题使用指南 - 保存话题、快速切换历史记录
+description: 学习如何在 LobeChat 中保存话题、快速切换历史记录,并对话题进行收藏、重命名和删除操作。
+tags:
+ - 话题使用指南
+ - 保存话题
+ - 快速切换历史记录
+ - 话题收藏
+ - 话题重命名
+ - 话题删除
+---
+
+# 话题使用指南
+
+
+
+- **保存话题:** 在聊天过程中,如果想要保存当前上下文并开启新的话题,可以点击发送按钮旁边的保存按钮。
+- **话题列表:** 点击列表中的话题可以快速切换历史对话记录,并继续对话。你还可以通过点击星标图标 ⭐️ 将话题收藏置顶,或者通过右侧更多按钮对话题进行重命名和删除操作。
diff --git a/DigitalHumanWeb/docs/usage/features/agent-market.mdx b/DigitalHumanWeb/docs/usage/features/agent-market.mdx
new file mode 100644
index 0000000..fca028f
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/agent-market.mdx
@@ -0,0 +1,40 @@
+---
+title: Find best Assistants in LobeChat Assistant Market
+description: >-
+ Explore a vibrant community of carefully designed assistants in LobeChat's
+ Assistant Market. Contribute your wisdom and share your personally developed
+ assistants in this collaborative space.
+tags:
+ - LobeChat
+ - Assistant Market
+ - Community
+ - Collaboration
+ - Assistants
+---
+
+# Assistant Market
+
+
+
+In LobeChat's Assistant Market, creators can discover a vibrant and innovative community that brings together numerous carefully designed assistants. These assistants not only play a crucial role in work scenarios but also provide great convenience in the learning process. Our market is not just a showcase platform, but also a collaborative space. Here, everyone can contribute their wisdom and share their personally developed assistants.
+
+
+ By [🤖/🏪 submitting agents](https://github.com/lobehub/lobe-chat-agents), you can easily submit
+ your assistant works to our platform. We particularly emphasize that LobeChat has established a
+ sophisticated automated internationalization (i18n) workflow, which excels in seamlessly
+ converting your assistants into multiple language versions. This means that regardless of the
+ language your users are using, they can seamlessly experience your assistant.
+
+
+
+ We welcome all users to join this ever-growing ecosystem and participate in the iteration and
+ optimization of assistants. Together, let's create more interesting, practical, and innovative
+ assistants, further enriching the diversity and practicality of assistants.
+
diff --git a/DigitalHumanWeb/docs/usage/features/agent-market.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/agent-market.zh-CN.mdx
new file mode 100644
index 0000000..91d6927
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/agent-market.zh-CN.mdx
@@ -0,0 +1,36 @@
+---
+title: 在 LobeChat 助手市场找到创新 AI 助手
+description: >-
+ LobeChat助手市场是一个充满活力和创新的社区,汇聚了众多精心设计的助手,为工作场景和学习提供便利。欢迎提交你的助手作品,共同创造更多有趣、实用且具有创新性的助手。
+tags:
+ - LobeChat
+ - 助手市场
+ - 创新社区
+ - 协作空间
+ - 助手作品
+ - 自动化国际化
+ - 多语言版本
+---
+
+# 助手市场
+
+
+
+在 LobeChat 的助手市场中,创作者们可以发现一个充满活力和创新的社区,它汇聚了众多精心设计的助手,这些助手不仅在工作场景中发挥着重要作用,也在学习过程中提供了极大的便利。我们的市场不仅是一个展示平台,更是一个协作的空间。在这里,每个人都可以贡献自己的智慧,分享个人开发的助手。
+
+
+ 通过 [🤖/🏪 提交助手](https://github.com/lobehub/lobe-chat-agents)
+ ,你可以轻松地将你的助手作品提交到我们的平台。我们特别强调的是,LobeChat
+ 建立了一套精密的自动化国际化(i18n)工作流程,
+ 它的强大之处在于能够无缝地将你的助手转化为多种语言版本。这意味着,不论你的用户使用何种语言,他们都能无障碍地体验到你的助手。
+
+
+
+ 我们欢迎所有用户加入这个不断成长的生态系统,共同参与到助手的迭代与优化中来。共同创造出更多有趣、实用且具有创新性的助手,进一步丰富助手的多样性和实用性。
+
diff --git a/DigitalHumanWeb/docs/usage/features/auth.mdx b/DigitalHumanWeb/docs/usage/features/auth.mdx
new file mode 100644
index 0000000..ba9afb9
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/auth.mdx
@@ -0,0 +1,65 @@
+---
+title: Support Multi-User Management - Identity Verification Solutions
+description: >-
+ Explore LobeChat's user authentication solutions with next-auth and Clerk for
+ flexible and secure user management. Learn about features like user
+ registration, session management, multi-factor authentication, and more.
+tags:
+ - Multi-User Management
+ - Identity Verification
+ - next-auth
+ - Clerk
+ - User Authentication
+ - Session Management
+ - Multi-Factor Authentication
+ - User Management
+---
+
+# Support Multi-User Management
+
+
+
+In modern applications, user management and identity verification are essential functions. To meet the diverse needs of different users, LobeChat provides two main user authentication and management solutions: `next-auth` and `Clerk`. Whether you are looking for simple user registration and login or need advanced multi-factor authentication and user management, LobeChat can flexibly accommodate your requirements.
+
+## next-auth: Flexible and Powerful Identity Verification Library
+
+LobeChat integrates `next-auth`, a flexible and powerful identity verification library that supports various authentication methods, including OAuth, email login, and credential login. With `next-auth`, you can easily achieve the following functions:
+
+- **User Registration and Login**: Support various authentication methods to meet different user needs.
+- **Session Management**: Efficiently manage user sessions to ensure security.
+- **Social Login**: Support quick login via various social platforms.
+- **Data Security**: Ensure the security and privacy of user data.
+
+
+ Due to workload constraints, integration of next-auth with a server-side database has not been
+ implemented yet. If you need to use a server-side database, please use Clerk.
+
+
+
+ For information on using Next-Auth, you can refer to [Authentication Services - Next
+ Auth](/docs/self-hosting/advanced/authentication#next-auth).
+
+
+## Clerk: Modern User Management Platform
+
+For users requiring advanced user management features, LobeChat also supports [Clerk](https://clerk.com), a modern user management platform. Clerk offers richer functionality to help you achieve higher security and flexibility:
+
+- **Multi-Factor Authentication (MFA)**: Provides higher security protection.
+- **User Profile Management**: Conveniently manage user information and configurations.
+- **Login Activity Monitoring**: Real-time monitoring of user login activities to ensure account security.
+- **Scalability**: Supports complex user management requirements.
+
+
+ For information on using Clerk, you can refer to [Authentication Services -
+ Clerk](/docs/self-hosting/advanced/authentication#clerk).
+
+
+
+ If you need to use Clerk in conjunction with a server-side database, you can refer to the
+ "Configuring Authentication Services" section in [Deploying with a Server-Side
+ Database](/docs/self-hosting/advanced/server-database).
+
diff --git a/DigitalHumanWeb/docs/usage/features/auth.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/auth.zh-CN.mdx
new file mode 100644
index 0000000..983c0c7
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/auth.zh-CN.mdx
@@ -0,0 +1,60 @@
+---
+title: 多用户管理支持
+description: LobeChat 提供了多种用户认证和管理方案,以满足不同用户的需求。
+tags:
+ - LobeChat
+ - 用户管理
+ - next-auth
+ - Clerk
+ - 身份验证
+ - 多因素认证
+---
+
+# 身份验证系统 / 多用户管理支持
+
+
+
+在现代应用中,用户管理和身份验证是至关重要的功能。为满足不同用户的多样化需求,LobeChat 提供了两种主要的用户认证和管理方案:`next-auth` 和 `Clerk`。无论您是追求简便的用户注册登录,还是需要更高级的多因素认证和用户管理,LobeChat 都可以灵活实现。
+
+## next-auth:灵活且强大的身份验证库
+
+LobeChat 集成了 `next-auth`,一个灵活且强大的身份验证库,支持多种身份验证方式,包括 OAuth、邮件登录、凭证登录等。通过 `next-auth`,您可以轻松实现以下功能:
+
+- **用户注册和登录**:支持多种认证方式,满足不同用户的需求。
+- **会话管理**:高效管理用户会话,确保安全性。
+- **社交登录**:支持多种社交平台的快捷登录。
+- **数据安全**:保障用户数据的安全性和隐私性。
+
+
+ 由于工作量原因,目前还没有实现 next-auth 与服务端数据库的集成,如果需要使用服务端数据库,请使用
+ Clerk 。
+
+
+
+ 关于 Next-Auth 的使用,可以查阅 [身份验证服务 - Next
+ Auth](/zh/docs/self-hosting/advanced/authentication#next-auth)。
+
+
+## Clerk:现代化用户管理平台
+
+对于需要更高级用户管理功能的用户,LobeChat 还支持 [Clerk](https://clerk.com) ,一个现代化的用户管理平台。Clerk 提供了更丰富的功能,帮助您实现更高的安全性和灵活性:
+
+- **多因素认证 (MFA)**:提供更高的安全保障。
+- **用户配置文件管理**:便捷管理用户信息和配置。
+- **登录活动监控**:实时监控用户登录活动,确保账户安全。
+- **扩展性**:支持复杂的用户管理需求。
+
+
+ 关于 Clerk 的使用,可以查阅 [身份验证服务 -
+ Clerk](/zh/docs/self-hosting/advanced/authentication#clerk)。
+
+
+
+ 如果需要在服务端数据库中搭配使用 Clerk 的使用,可以查阅
+ [使用服务端数据库部署](/zh/docs/self-hosting/advanced/server-database)
+ 中的「配置身份验证服务」部分。
+
diff --git a/DigitalHumanWeb/docs/usage/features/database.mdx b/DigitalHumanWeb/docs/usage/features/database.mdx
new file mode 100644
index 0000000..83ef17b
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/database.mdx
@@ -0,0 +1,60 @@
+---
+title: Local / Cloud Database Solutions for LobeChat
+description: >-
+ Explore the options of local and server-side databases for LobeChat, offering
+ data control, privacy protection, and convenient user experiences.
+tags:
+ - Local Database
+ - Server-Side Database
+ - Data Privacy
+ - Data Control
+ - CRDT Technology
+ - PostgreSQL
+ - Drizzle ORM
+ - Clerk Authentication
+---
+
+# Local / Cloud Database
+
+
+
+In modern application development, the choice of data storage solution is crucial. To meet the needs of different users, LobeChat offers flexible configurations that support both local and server-side databases. Whether you prioritize data privacy and control or seek a convenient user experience, LobeChat can provide excellent solutions for you.
+
+## Local Database: Data Control and Privacy Protection
+
+For users who prefer more control over their data and value privacy protection, LobeChat offers support for local databases. By using IndexedDB as the storage solution and combining it with dexie as an Object-Relational Mapping (ORM) tool, LobeChat achieves efficient data management.
+
+Additionally, we have introduced Conflict-Free Replicated Data Type (CRDT) technology to ensure a seamless multi-device synchronization experience. This experimental feature aims to provide users with greater autonomy and data security.
+
+
+ LobeChat defaults to the local database solution to reduce the onboarding cost for new users.
+
+
+## Server-Side Database: Convenient and Efficient User Experience
+
+For users who seek a convenient user experience, LobeChat supports PostgreSQL as the server-side database. By managing data with Drizzle ORM and combining it with Clerk for authentication, LobeChat can offer users an efficient and reliable server-side data management solution.
+
+### Server-Side Database Technology Stack
+
+- **DB**: PostgreSQL (Neon is the default)
+- **ORM**: Drizzle ORM
+- **Auth**: Clerk
+- **Server Router**: tRPC
+
+## Deployment Solution Selection Guide
+
+### 1. Local Database
+
+The local database solution is suitable for users who wish to have strict control over their data. With LobeChat's support for local databases, you can securely store and manage data without relying on external servers. This solution is particularly suitable for users with high requirements for data privacy.
+
+### 2. Server-Side Database
+
+The server-side database solution is ideal for users who want to simplify data management processes and enjoy a convenient user experience. Through server-side databases and user authentication, LobeChat can ensure the security and efficiency of data. If you want to learn how to configure a server-side database, please refer to our [detailed documentation](/docs/self-hosting/advanced/server-database).
+
+Whether you choose a local database or a server-side database, LobeChat can provide you with an excellent user experience.
diff --git a/DigitalHumanWeb/docs/usage/features/database.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/database.zh-CN.mdx
new file mode 100644
index 0000000..5716320
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/database.zh-CN.mdx
@@ -0,0 +1,54 @@
+---
+title: LobeChat 支持本地 / 云端数据库存储
+description: LobeChat 支持本地 / 云端数据存储,既能实现 Local First,同时支持数据云同步。
+tags:
+ - LobeChat
+ - IndexedDB
+ - Postgres
+ - Local First
+ - 数据云同步
+ - 数据库
+---
+
+# 本地 / 云端数据存储
+
+
+
+在现代应用开发中,数据存储方案的选择至关重要。为了满足不同用户的需求,LobeChat 提供了同时支持本地数据库和服务端数据库的灵活配置。无论您是注重数据隐私与掌控,还是追求便捷的使用体验,LobeChat 都能为您提供卓越的解决方案。
+
+## 本地数据库:数据掌控与隐私保护
+
+对于希望对数据有更多掌控感和隐私保护的用户,LobeChat 提供了本地数据库支持。采用 IndexedDB 作为存储解决方案,并结合 dexie 作为 ORM(对象关系映射),LobeChat 实现了高效的数据管理。
+
+同时,我们引入了 CRDT(Conflict-Free Replicated Data Type)技术,确保多端同步功能的无缝体验。这一实验性功能旨在为用户提供更高的自主性和数据安全性。
+
+LobeChat 默认采取本地数据库方案,以降低新用户的上手成本。
+
+## 服务端数据库:便捷与高效的使用体验
+
+对于追求便捷使用体验的用户,LobeChat 支持 PostgreSQL 作为服务端数据库。通过 Drizzle ORM 管理数据,结合 Clerk 进行身份验证,LobeChat 能够为用户提供高效、可靠的服务端数据管理方案。
+
+### 服务端数据库技术栈
+
+- **DB**: PostgreSQL(默认使用 Neon)
+- **ORM**: Drizzle ORM
+- **Auth**: Clerk
+- **Server Router**: tRPC
+
+## 部署方案选择指南
+
+### 1. 本地数据库
+
+本地数据库方案适用于那些希望对数据进行严格控制的用户。通过 LobeChat 的本地数据库支持,您可以在不依赖外部服务器的情况下,安全地存储和管理数据。这一方案特别适合对数据隐私有高要求的用户。
+
+### 2. 服务端数据库
+
+服务端数据库方案则适合那些希望简化数据管理流程,享受便捷使用体验的用户。通过服务端数据库与用户身份验证,LobeChat 能够确保数据的安全性与高效性。如果您希望了解如何配置服务端数据库,请参考我们的[详细文档](/zh/docs/self-hosting/advanced/server-database)。
+
+无论选择本地数据库还是服务端数据库,LobeChat 都能为你提供卓越的用户体验。
diff --git a/DigitalHumanWeb/docs/usage/features/local-llm.mdx b/DigitalHumanWeb/docs/usage/features/local-llm.mdx
new file mode 100644
index 0000000..a84df3c
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/local-llm.mdx
@@ -0,0 +1,56 @@
+---
+title: Using Local LLM in LobeChat
+description: >-
+ Experience groundbreaking AI support with a local LLM in LobeChat powered by
+ Ollama AI. Start conversations effortlessly and enjoy unprecedented
+ interaction speed!
+tags:
+ - Local Large Language Model
+ - Ollama AI
+ - LobeChat
+ - AI communication
+ - Natural Language Processing
+ - Docker deployment
+---
+
+# Local Large Language Model (LLM) Support
+
+
+
+Available in >=0.127.0, currently only supports Docker deployment
+
+With the release of LobeChat v0.127.0, we are excited to introduce a groundbreaking feature - Ollama AI support! 🤯 With the powerful infrastructure of [Ollama AI](https://ollama.ai/) and the [community's collaborative efforts](https://github.com/lobehub/lobe-chat/pull/1265), you can now engage in conversations with a local LLM (Large Language Model) in LobeChat! 🤩
+
+We are thrilled to introduce this revolutionary feature to all LobeChat users at this special moment. The integration of Ollama AI not only signifies a significant technological leap for us but also reaffirms our commitment to continuously pursue more efficient and intelligent communication.
+
+### How to Start a Conversation with Local LLM?
+
+The startup process is exceptionally simple! By running the following Docker command, you can experience conversations with a local LLM in LobeChat:
+
+```bash
+docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 lobehub/lobe-chat
+```
+
+Yes, it's that simple! 🤩 You don't need to go through complicated configurations or worry about a complex installation process. We have prepared everything for you. With just one command, you can engage in deep conversations with a local AI.
+
+### Experience Unprecedented Interaction Speed
+
+With the powerful capabilities of Ollama AI, LobeChat has greatly improved its efficiency in natural language processing. Both processing speed and response time have reached new heights. This means that your conversational experience will be smoother, without any waiting, and with instant responses.
+
+### Why Choose a Local LLM?
+
+Compared to cloud-based solutions, a local LLM provides higher privacy and security. All your conversations are processed locally, without passing through any external servers, ensuring the security of your data. Additionally, local processing can reduce network latency, providing you with a more immediate communication experience.
+
+### Embark on Your LobeChat & Ollama AI Journey
+
+Now, let's embark on this exciting journey together! Through the collaboration of LobeChat and Ollama AI, explore the endless possibilities brought by AI. Whether you are a tech enthusiast or simply curious about AI communication, LobeChat will offer you an unprecedented experience.
+
+
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/features/local-llm.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/local-llm.zh-CN.mdx
new file mode 100644
index 0000000..c48643a
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/local-llm.zh-CN.mdx
@@ -0,0 +1,48 @@
+---
+title: 在 LobeChat 中使用本地大语言模型(LLM)
+description: LobeChat 支持本地 LLM,通过 Ollama AI 集成带来高效智能的沟通体验。体验本地大语言模型的隐私性、安全性和即时交流。
+tags:
+ - 本地大语言模型
+ - LLM
+ - LobeChat v0.127.0
+ - Ollama AI
+ - Docker 部署
+---
+
+# 支持本地大语言模型(LLM)
+
+
+
+在 >=v0.127.0 版本中可用,目前仅支持 Docker 部署
+
+随着 LobeChat v0.127.0 的发布,我们迎来了一个激动人心的特性 —— Ollama AI 支持!🤯 在 [Ollama AI](https://ollama.ai/) 强大的基础设施和 [社区的共同努力](https://github.com/lobehub/lobe-chat/pull/1265) 下,现在您可以在 LobeChat 中与本地 LLM (Large Language Model) 进行交流了!🤩
+
+我们非常高兴能在这个特别的时刻,向所有 LobeChat 用户介绍这项革命性的特性。Ollama AI 的集成不仅标志着我们技术上的一个巨大飞跃,更是向用户承诺,我们将不断追求更高效、更智能的沟通方式。
+
+### 如何启动与本地 LLM 的对话?
+
+启动过程异常简单!您只需运行以下 Docker 命令行,就可以在 LobeChat 中体验与本地 LLM 的对话了:
+
+```bash
+docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 lobehub/lobe-chat
+```
+
+是的,就是这么简单!🤩 您不需要进行繁杂的配置,也不必担心复杂的安装过程。我们已经为您准备好了一切,只需一行命令,即可开启与本地 AI 的深度对话。
+
+### 体验前所未有的交互速度
+
+借助 Ollama AI 的强大能力,LobeChat 在进行自然语言处理方面的效率得到了极大的提升。无论是处理速度还是响应时间,都达到了新的高度。这意味着您的对话体验将更加流畅,无需等待,即时得到回应。
+
+### 为什么选择本地 LLM?
+
+与基于云的解决方案相比,本地 LLM 提供了更高的隐私性和安全性。您的所有对话都在本地处理,不经过任何外部服务器,确保了您的数据安全性。此外,本地处理还能减少网络延迟,为您带来更加即时的交流体验。
+
+### 开启您的 LobeChat & Ollama AI 之旅
+
+现在,就让我们一起开启这段激动人心的旅程吧!通过 LobeChat 与 Ollama AI 的协作,探索 AI 带来的无限可能。无论您是技术爱好者,还是对 AI 交流充满好奇,LobeChat 都将为您提供一场前所未有的体验。
+
+
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/features/mobile.mdx b/DigitalHumanWeb/docs/usage/features/mobile.mdx
new file mode 100644
index 0000000..d74d728
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/mobile.mdx
@@ -0,0 +1,24 @@
+---
+title: LobeChat with Mobile Device Adaptation
+description: >-
+ Explore the enhanced mobile user experience at LobeChat with optimized designs
+ for smoother interactions. Share your feedback on GitHub!
+tags:
+ - Mobile Device Adaptation
+ - User Experience
+ - Optimized Designs
+ - Feedback
+ - GitHub
+---
+
+# Mobile Device Adaptation
+
+
+
+LobeChat has undergone a series of optimized designs for mobile devices to enhance the user's mobile experience.
+
+Currently, we are iterating on the user experience for mobile devices to achieve a smoother and more intuitive interaction. If you have any suggestions or ideas, we warmly welcome your feedback through GitHub Issues or Pull Requests.
diff --git a/DigitalHumanWeb/docs/usage/features/mobile.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/mobile.zh-CN.mdx
new file mode 100644
index 0000000..5b881eb
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/mobile.zh-CN.mdx
@@ -0,0 +1,23 @@
+---
+title: LobeChat 移动设备适配 - 提升用户移动体验
+description: LobeChat针对移动设备进行优化设计,版本迭代以实现更流畅直观的交互。欢迎通过GitHub Issues或Pull Requests提供反馈。
+tags:
+ - LobeChat
+ - 移动设备适配
+ - 用户体验
+ - 版本迭代
+ - GitHub
+ - 反馈
+---
+
+# 移动设备适配
+
+
+
+LobeChat 针对移动设备进行了一系列的优化设计,以提升用户的移动体验。
+
+目前,我们正在对移动端的用户体验进行版本迭代,以实现更加流畅和直观的交互。如果您有任何建议或想法,我们非常欢迎您通过 GitHub Issues 或者 Pull Requests 提供反馈。
diff --git a/DigitalHumanWeb/docs/usage/features/more.mdx b/DigitalHumanWeb/docs/usage/features/more.mdx
new file mode 100644
index 0000000..54848e9
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/more.mdx
@@ -0,0 +1,33 @@
+---
+title: More Features in LobeChat - Enhancing Design and Technical Capabilities
+description: >-
+ Explore the additional features offered, including exquisite UI design, smooth
+ conversation experience, fast deployment options, privacy and security
+ measures, and custom domain support.
+tags:
+ - UI Design
+ - Conversation Experience
+ - Deployment
+ - Privacy
+ - Custom Domain
+---
+
+# More Features
+
+In addition to the features above, our design and engineering capabilities provide you with further assurance in everyday use:
+
+- [x] 💎 **Exquisite UI Design**: Carefully designed interface with elegant appearance and smooth interaction effects, supporting light and dark themes, and adaptable to mobile devices. Supports PWA, providing an experience closer to native applications.
+- [x] 🗣️ **Smooth Conversation Experience**: Streaming responses deliver a fluid conversation experience, with full Markdown rendering support, including code highlighting, LaTeX formulas, Mermaid flowcharts, and more.
+- [x] 💨 **Fast Deployment**: Deploy with the Vercel platform or our Docker image: one click on the deploy button completes deployment within a minute, with no complex configuration required.
+- [x] 🔒 **Privacy and Security**: All data is stored locally in the user's browser, ensuring user privacy and security.
+- [x] 🌐 **Custom Domain**: If users have their own domain, they can bind it to the platform for quick access to the chat assistant from anywhere.
+
+> ✨ As the product continues to iterate, we will bring more exciting features!
+
+---
+
+
+ You can find our upcoming [Roadmap][github-project-link] plans in the Projects section.
+
+
+[github-project-link]: https://github.com/lobehub/lobe-chat/projects
diff --git a/DigitalHumanWeb/docs/usage/features/more.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/more.zh-CN.mdx
new file mode 100644
index 0000000..735db76
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/more.zh-CN.mdx
@@ -0,0 +1,28 @@
+---
+title: 更多 LobeChat 特性 - 提供精致 UI 设计和流畅的对话体验
+description: 了解更多产品特性,包括精致 UI 设计、流畅的对话体验和快速部署功能,为用户带来更好的体验。
+tags:
+ - 精致 UI 设计
+ - 流畅对话体验
+ - 快速部署
+ - 隐私安全
+ - 自定义域名
+---
+
+# 更多特性
+
+除了上述功能特性以外,我们所具备的设计和技术能力还将为你带来更多使用保障:
+
+- [x] 💎 **精致 UI 设计**:经过精心设计的界面,具有优雅的外观和流畅的交互效果,支持亮暗色主题,适配移动端。支持 PWA,提供更加接近原生应用的体验。
+- [x] 🗣️ **流畅的对话体验**:流式响应带来流畅的对话体验,并且支持完整的 Markdown 渲染,包括代码高亮、LaTeX 公式、Mermaid 流程图等。
+- [x] 💨 **快速部署**:使用 Vercel 平台或者我们的 Docker 镜像,只需点击一键部署按钮,即可在 1 分钟内完成部署,无需复杂的配置过程。
+- [x] 🔒 **隐私安全**:所有数据保存在用户浏览器本地,保证用户的隐私安全。
+- [x] 🌐 **自定义域名**:如果用户拥有自己的域名,可以将其绑定到平台上,方便在任何地方快速访问对话助手。
+
+> ✨ 随着产品的持续迭代,我们将会带来更多令人激动的功能!
+
+---
+
+你可以在 Projects 中找到我们后续的 [Roadmap][github-project-link] 计划
+
+[github-project-link]: https://github.com/lobehub/lobe-chat/projects
diff --git a/DigitalHumanWeb/docs/usage/features/multi-ai-providers.mdx b/DigitalHumanWeb/docs/usage/features/multi-ai-providers.mdx
new file mode 100644
index 0000000..c212bad
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/multi-ai-providers.mdx
@@ -0,0 +1,66 @@
+---
+title: LobeChat with Multi AI Providers
+description: >-
+ Discover how LobeChat offers diverse model service provider support, including
+ AWS Bedrock, Google AI Gemini series, ChatGLM, and Moonshot AI, to cater to
+ various user needs. Explore local model support with Ollama integration.
+tags:
+ - LobeChat
+ - model service providers
+ - AWS Bedrock
+ - Google AI Gemini
+ - ChatGLM
+ - Moonshot AI
+ - Together AI
+ - local model support
+ - Ollama
+---
+
+# Multi-Model Service Provider Support
+
+
+
+Available in version 0.123.0 and later
+
+Throughout LobeChat's continuous development, we have come to deeply appreciate the importance of a diverse range of model service providers for meeting the community's needs in AI conversation services. Therefore, rather than being limited to a single provider, we have expanded our support to multiple model service providers, offering users a richer and more varied selection of conversations.
+
+In this way, LobeChat can more flexibly adapt to the needs of different users, while also providing developers with a wider range of choices.
+
+## Supported Model Service Providers
+
+We have implemented support for the following model service providers:
+
+- **AWS Bedrock**: Integrated with the AWS Bedrock service, supporting models such as **Claude / LLama2** and providing powerful natural language processing capabilities. [Learn more](https://aws.amazon.com/cn/bedrock)
+- **Anthropic (Claude)**: Access to Anthropic's **Claude** series models, including Claude 3 and Claude 2, with breakthroughs in multimodal capabilities and extended context that set a new industry benchmark. [Learn more](https://www.anthropic.com/claude)
+- **Google AI (Gemini Pro, Gemini Vision)**: Access to Google's **Gemini** series models, including Gemini and Gemini Pro, supporting advanced language understanding and generation. [Learn more](https://deepmind.google/technologies/gemini/)
+- **ChatGLM**: Added Zhipu AI's **ChatGLM** series models (GLM-4/GLM-4-vision/GLM-3-turbo), giving users another efficient conversation model to choose from. [Learn more](https://www.zhipuai.cn/)
+- **Moonshot AI (Dark Side of the Moon)**: Integrated the Moonshot series models from an innovative AI startup in China, aiming to provide deeper conversation understanding. [Learn more](https://www.moonshot.cn/)
+- **Groq**: Access to Groq's AI models, which process message sequences and generate responses efficiently, handling both multi-turn dialogues and single-interaction tasks. [Learn more](https://groq.com/)
+- **OpenRouter**: Supports routing across models including **Claude 3**, **Gemma**, **Mistral**, **Llama2**, and **Cohere**, with intelligent routing optimization that improves usage efficiency in an open and flexible way. [Learn more](https://openrouter.ai/)
+- **01.AI (Yi Model)**: Integrated the 01.AI models, whose APIs feature fast inference speed, shortening processing time while maintaining excellent model performance. [Learn more](https://01.ai/)
+- **Together.ai**: Over 100 leading open-source Chat, Language, Image, Code, and Embedding models are available through the Together Inference API, billed only for what you use. [Learn more](https://www.together.ai/)
+- **Minimax**: Integrated the Minimax models, including the MoE model **abab6**, offering a broader range of choices. [Learn more](https://www.minimaxi.com/)
+- **DeepSeek**: Integrated the DeepSeek series models from an innovative AI startup in China, designed to balance performance with price. [Learn more](https://www.deepseek.com/)
+- **Qwen**: Integrated the Qwen series models, including the latest **qwen-turbo**, **qwen-plus**, and **qwen-max**. [Learn more](https://help.aliyun.com/zh/dashscope/developer-reference/model-introduction)
+
+At the same time, we are also planning to support more model service providers, such as Replicate and Perplexity, to further enrich our service provider library. If you would like LobeChat to support your favorite service provider, feel free to join our [community discussion](https://github.com/lobehub/lobe-chat/discussions/1284).
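To give a concrete feel for what multi-provider support enables, here is a minimal, hypothetical sketch of a provider registry. The provider IDs, base URLs, and model names below are illustrative assumptions, not LobeChat's actual configuration schema:

```typescript
// Hypothetical sketch of a provider registry; not LobeChat's real schema.
interface ProviderConfig {
  id: string;          // provider identifier, e.g. "ollama"
  baseURL: string;     // API endpoint for this provider
  models: string[];    // model IDs this provider serves
}

const providers: ProviderConfig[] = [
  { id: 'openrouter', baseURL: 'https://openrouter.ai/api/v1', models: ['claude-3', 'mistral'] },
  { id: 'qwen', baseURL: 'https://dashscope.aliyuncs.com/api/v1', models: ['qwen-turbo', 'qwen-max'] },
  { id: 'ollama', baseURL: 'http://localhost:11434/v1', models: ['llama2'] },
];

// Pick the first provider that serves the requested model.
function resolveProvider(model: string): ProviderConfig | undefined {
  return providers.find((p) => p.models.includes(model));
}

console.log(resolveProvider('qwen-turbo')?.id); // "qwen"
```

With a registry like this, adding a new service provider is just a matter of appending one entry, which is the flexibility the paragraph above describes.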
+
+## Local Model Support
+
+
+
+To meet the specific needs of users, LobeChat also supports the use of local models based on [Ollama](https://ollama.ai), allowing users to flexibly use their own or third-party models. For more details, see [Local Model Support](/docs/usage/features/local-llm).
+
+
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/features/multi-ai-providers.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/multi-ai-providers.zh-CN.mdx
new file mode 100644
index 0000000..761f6e7
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/multi-ai-providers.zh-CN.mdx
@@ -0,0 +1,66 @@
+---
+title: LobeChat 支持多模型服务商
+description: 了解 LobeChat 在多模型服务商支持方面的最新进展,包括已支持的模型服务商和计划中的扩展,以及本地模型支持的使用方式。
+tags:
+ - LobeChat
+ - AI 会话服务
+ - 模型服务商
+ - 多模型支持
+ - 本地模型支持
+ - AWS Bedrock
+ - Google AI
+ - ChatGLM
+ - Moonshot AI
+ - 01 AI
+ - Together AI
+ - Ollama
+---
+
+# 多模型服务商支持
+
+
+
+在 0.123.0 及以后版本中可用
+
+在 LobeChat 的不断发展过程中,我们深刻理解到在提供 AI 会话服务时模型服务商的多样性对于满足社区需求的重要性。因此,我们不再局限于单一的模型服务商,而是拓展了对多种模型服务商的支持,以便为用户提供更为丰富和多样化的会话选择。
+
+通过这种方式,LobeChat 能够更灵活地适应不同用户的需求,同时也为开发者提供了更为广泛的选择空间。
+
+## 已支持的模型服务商
+
+我们已经实现了对以下模型服务商的支持:
+
+- **AWS Bedrock**:集成了 AWS Bedrock 服务,支持了 **Claude / LLama2** 等模型,提供了强大的自然语言处理能力。[了解更多](https://aws.amazon.com/cn/bedrock)
+- **Google AI (Gemini Pro、Gemini Vision)**:接入了 Google 的 **Gemini** 系列模型,包括 Gemini 和 Gemini Pro,以支持更高级的语言理解和生成。[了解更多](https://deepmind.google/technologies/gemini/)
+- **Anthropic (Claude)**:接入了 Anthropic 的 **Claude** 系列模型,包括 Claude 3 和 Claude 2,多模态突破,超长上下文,树立行业新基准。[了解更多](https://www.anthropic.com/claude)
+- **ChatGLM**:加入了智谱的 **ChatGLM** 系列模型(GLM-4/GLM-4-vision/GLM-3-turbo),为用户提供了另一种高效的会话模型选择。[了解更多](https://www.zhipuai.cn/)
+- **Moonshot AI (月之暗面)**:集成了 Moonshot 系列模型,这是一家来自中国的创新性 AI 创业公司,旨在提供更深层次的会话理解。[了解更多](https://www.moonshot.cn/)
+- **Together.ai**:集成部署了数百种开源模型和向量模型,无需本地部署即可随时访问这些模型。[了解更多](https://www.together.ai/)
+- **01.AI (零一万物)**:集成了零一万物模型,系列 API 具备较快的推理速度,这不仅缩短了处理时间,同时也保持了出色的模型效果。[了解更多](https://www.lingyiwanwu.com/)
+- **Groq**:接入了 Groq 的 AI 模型,高效处理消息序列,生成回应,胜任多轮对话及单次交互任务。[了解更多](https://groq.com/)
+- **OpenRouter**:其支持包括 **Claude 3**、**Gemma**、**Mistral**、**Llama2** 和 **Cohere** 等模型路由,支持智能路由优化,提升使用效率,开放且灵活。[了解更多](https://openrouter.ai/)
+- **Minimax**: 接入了 Minimax 的 AI 模型,包括 MoE 模型 **abab6**,提供了更多的选择空间。[了解更多](https://www.minimaxi.com/)
+- **DeepSeek**: 接入了 DeepSeek 的 AI 模型,包括最新的 **DeepSeek-V2**,提供兼顾性能与价格的模型。[了解更多](https://www.deepseek.com/)
+- **Qwen (通义千问)**: 接入了 Qwen 的 AI 模型,包括最新的 **qwen-turbo**,**qwen-plus** 和 **qwen-max** 等模型。[了解更多](https://help.aliyun.com/zh/dashscope/developer-reference/model-introduction)
+
+同时,我们也在计划支持更多的模型服务商,如 Replicate 和 Perplexity 等,以进一步丰富我们的服务商库。如果你希望让 LobeChat 支持你喜爱的服务商,欢迎加入我们的[社区讨论](https://github.com/lobehub/lobe-chat/discussions/1284)。
+
+## 本地模型支持
+
+
+
+为了满足特定用户的需求,LobeChat 还基于 [Ollama](https://ollama.ai) 支持了本地模型的使用,让用户能够更灵活地使用自己的或第三方的模型,详见 [本地模型支持](/zh/docs/usage/features/local-llm)。
+
+
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/features/plugin-system.mdx b/DigitalHumanWeb/docs/usage/features/plugin-system.mdx
new file mode 100644
index 0000000..80165cf
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/plugin-system.mdx
@@ -0,0 +1,89 @@
+---
+title: Plugin System in LobeChat
+description: >-
+ Explore the diverse plugin ecosystem of LobeChat, extending its capabilities
+ to provide real-time information, interact with various platforms, and
+ simplify user interactions.
+tags:
+ - LobeChat
+ - Plugin Ecosystem
+ - Assistant Functionality
+ - Plugin Development
+ - SDK
+ - Chat Application
+---
+
+# Plugin System
+
+
+
+The plugin ecosystem of LobeChat is an important extension of its core functionality, greatly enhancing the practicality and flexibility of the LobeChat assistant.
+
+
+
+By utilizing plugins, LobeChat assistants can obtain and process real-time information, such as searching for web information and providing users with instant and relevant news.
+
+In addition, these plugins are not limited to news aggregation; they extend to other practical functions, such as quickly searching documents, generating images, obtaining data from platforms like Bilibili and Steam, and interacting with a wide range of third-party services.
+
+Check out [plugin usage](/docs/usage/plugins/basic) to learn more.
+
+
+ To help developers better participate in this ecosystem, we provide comprehensive development
+ resources. This includes detailed component development documentation, a fully-featured software
+ development kit (SDK), and template examples, all aimed at simplifying the development process and
+ lowering the entry barrier for developers.
+
+
+
+ We welcome developers to utilize these resources, unleash their creativity, and write
+ feature-rich, user-friendly plugins. Through collective efforts, we can continuously expand the
+ functional boundaries of the chat application and explore a more intelligent and efficient
+ creativity platform.
+
+
+## Plugin Ecosystem
+
+
+ If you are interested in plugin development, please refer to our [📘 Plugin Development
+ Guide](/docs/usage/plugins/development) in the Wiki.
+
+
+- [lobe-chat-plugins][lobe-chat-plugins]: This is the plugin index for LobeChat. It retrieves the list of plugins from the index.json of this repository and displays them to the users.
+- [chat-plugin-template][chat-plugin-template]: Development template for Chat Plugins; you can quickly scaffold a new plugin project from this template.
+- [@lobehub/chat-plugin-sdk][chat-plugin-sdk]: The LobeChat plugin SDK can help you create excellent Lobe Chat plugins.
+- [@lobehub/chat-plugins-gateway][chat-plugins-gateway]: The LobeChat plugin gateway is a backend service that serves as the gateway for LobeChat plugins. We deploy this service using Vercel.
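To make the moving parts concrete, the sketch below shows roughly what a plugin manifest and a minimal gateway-side check might look like. The field names here are illustrative assumptions based on common manifest conventions; consult the chat-plugin-sdk documentation for the authoritative schema.

```typescript
// Illustrative only: an approximate manifest shape, not the exact
// @lobehub/chat-plugin-sdk schema.
interface PluginApi {
  name: string;        // function name exposed to the model
  description: string; // what the model reads to decide when to call it
  url: string;         // backend endpoint that implements the call
}

interface PluginManifest {
  identifier: string;
  version: string;
  meta: { title: string; description: string };
  api: PluginApi[];
}

const manifest: PluginManifest = {
  identifier: 'weather-helper',
  version: '1.0.0',
  meta: { title: 'Weather Helper', description: 'Real-time weather lookups' },
  api: [
    {
      name: 'getWeather',
      description: 'Fetch the current weather for a city',
      url: 'https://example.com/api/weather',
    },
  ],
};

// A gateway might reject manifests that expose no callable APIs.
const isUsable = (m: PluginManifest): boolean =>
  m.identifier.length > 0 && m.api.length > 0;
```

The index repository listed above plays the role of a directory of such manifests, which the client fetches and renders for users.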
+
+### Roadmap Progress
+
+The plugin system of LobeChat has now entered a stable stage, and we have basically completed most of the functionality required by the plugin system. However, we are still planning and considering the new possibilities that plugins can bring to us. You can learn more in the following Issues:
+
+
+ ### ✅ Phase One of Plugins
+
+Implementing the separation of plugins from the main body, splitting the plugins into independent repositories for maintenance, and implementing dynamic loading of plugins. [**#73**](https://github.com/lobehub/lobe-chat/issues/73)
+
+### ✅ Phase Two of Plugins
+
+The security and stability of plugin usage, more accurate presentation of abnormal states, maintainability and developer-friendliness of the plugin architecture. [**#97**](https://github.com/lobehub/lobe-chat/issues/97)
+
+### ✅ Phase Three of Plugins
+
+Higher-level and improved customization capabilities, support for OpenAPI schema invocation, compatibility with ChatGPT plugins, and the addition of Midjourney plugins. [**#411**](https://github.com/lobehub/lobe-chat/discussions/411)
+
+### 💭 Phase Four of Plugins
+
+Comprehensive authentication, visual configuration of plugin definitions, Plugin SDK CLI, Python language development template, any other ideas? Join the discussion: [**#1310**](https://github.com/lobehub/lobe-chat/discussions/1310)
+
+
+
+[chat-plugin-sdk]: https://github.com/lobehub/chat-plugin-sdk
+[chat-plugin-template]: https://github.com/lobehub/chat-plugin-template
+[chat-plugins-gateway]: https://github.com/lobehub/chat-plugins-gateway
+[lobe-chat-plugins]: https://github.com/lobehub/lobe-chat-plugins
diff --git a/DigitalHumanWeb/docs/usage/features/plugin-system.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/plugin-system.zh-CN.mdx
new file mode 100644
index 0000000..1313ab1
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/plugin-system.zh-CN.mdx
@@ -0,0 +1,78 @@
+---
+title: LobeChat 插件生态系统 - 功能扩展与开发资源
+description: 了解 LobeChat 插件生态系统如何增强 LobeChat 助手的实用性和灵活性,以及提供的开发资源和插件开发指南。
+tags:
+ - LobeChat
+ - 插件生态系统
+ - 开发资源
+ - 插件开发指南
+---
+
+# 插件系统
+
+
+
+LobeChat 的插件生态系统是其核心功能的重要扩展,它极大地增强了 LobeChat 助手的实用性和灵活性。
+
+
+
+通过利用插件,LobeChat 的助手们能够实现实时信息的获取和处理,例如搜索网络信息,为用户提供即时且相关的资讯。
+
+此外,这些插件不仅局限于新闻聚合,还可以扩展到其他实用的功能,如快速检索文档、生成图片、获取 Bilibili 、Steam 等各种平台数据,以及与其他各式各样的第三方服务交互。
+
+通过查看 [插件使用](/zh/docs/usage/plugins/basic) 了解更多。
+
+
+  为了帮助开发者更好地参与到这个生态中来,我们提供了全面的开发资源。这包括详尽的组件开发文档、功能齐全的软件开发工具包(SDK),以及样板示例,这些都是为了简化开发过程,降低开发者的入门门槛。
+
+
+
+ 我们欢迎开发者利用这些资源,发挥创造力,编写出功能丰富、用户友好的插件。通过共同的努力,我们可以不断扩展聊天应用的功能界限,探索一个更加智能、高效的创造力平台。
+
+
+## 插件生态体系
+
+
+ 如果你对插件开发感兴趣,请在 Wiki 中查阅我们的 [📘
+ 插件开发指南](/zh/docs/usage/plugins/development)。
+
+
+- [lobe-chat-plugins][lobe-chat-plugins]:这是 LobeChat 的插件索引。它从该仓库的 index.json 中获取插件列表并显示给用户。
+- [chat-plugin-template][chat-plugin-template]: Chat Plugin 插件开发模版,你可以通过项目模版快速新建插件项目。
+- [@lobehub/chat-plugin-sdk][chat-plugin-sdk]:LobeChat 插件 SDK 可帮助您创建出色的 Lobe Chat 插件。
+- [@lobehub/chat-plugins-gateway][chat-plugins-gateway]:LobeChat 插件网关是一个后端服务,作为 LobeChat 插件的网关。我们使用 Vercel 部署此服务。
+
+### 路线进展
+
+LobeChat 的插件系统目前已初步进入一个稳定阶段,我们已基本完成大部分插件系统所需的功能,但我们仍然在规划与思考插件能为我们带来的全新可能性。您可以在以下 Issues 中了解更多信息:
+
+
+ ### ✅ 插件一期
+
+实现插件与主体分离,将插件拆分为独立仓库维护,并实现插件的动态加载。 [**#73**](https://github.com/lobehub/lobe-chat/issues/73)
+
+### ✅ 插件二期
+
+插件的安全性与使用的稳定性,更加精准地呈现异常状态,插件架构的可维护性与开发者友好。[**#97**](https://github.com/lobehub/lobe-chat/issues/97)
+
+### ✅ 插件三期
+
+更高阶与完善的自定义能力,支持 OpenAPI schema 调用、兼容 ChatGPT 插件、新增 Midjourney 插件。 [**#411**](https://github.com/lobehub/lobe-chat/discussions/411)
+
+### 💭 插件四期
+
+完善的鉴权、可视化配置插件定义、Plugin SDK CLI、Python 语言研发模板、还有什么想法?欢迎参与讨论: [**#1310**](https://github.com/lobehub/lobe-chat/discussions/1310)
+
+
+
+[chat-plugin-sdk]: https://github.com/lobehub/chat-plugin-sdk
+[chat-plugin-template]: https://github.com/lobehub/chat-plugin-template
+[chat-plugins-gateway]: https://github.com/lobehub/chat-plugins-gateway
+[lobe-chat-plugins]: https://github.com/lobehub/lobe-chat-plugins
diff --git a/DigitalHumanWeb/docs/usage/features/pwa.mdx b/DigitalHumanWeb/docs/usage/features/pwa.mdx
new file mode 100644
index 0000000..606093d
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/pwa.mdx
@@ -0,0 +1,74 @@
+---
+title: LobeChat support Progressive Web Apps (PWA)
+description: >-
+ Discover how LobeChat utilizes Progressive Web App (PWA) technology to provide
+ a seamless and near-native app experience on both desktop and mobile devices.
+ Learn how to install LobeChat as a desktop app for enhanced convenience.
+tags:
+ - Progressive Web App
+ - PWA
+ - LobeChat
+ - Web Applications
+ - User Experience
+---
+
+# Progressive Web App (PWA)
+
+
+
+We understand the importance of providing a seamless experience for users in today's multi-device environment. To achieve this, we have adopted Progressive Web App [PWA](https://support.google.com/chrome/answer/9658361) technology, which is a modern web technology that elevates web applications to a near-native app experience. Through PWA, LobeChat is able to provide a highly optimized user experience on both desktop and mobile devices, while maintaining lightweight and high performance characteristics. Visually and perceptually, we have also carefully designed it to ensure that its interface is indistinguishable from a native app, providing smooth animations, responsive layouts, and adaptation to different screen resolutions of various devices.
+
+If you are unfamiliar with the installation process of PWA, you can follow the steps below to add LobeChat as a desktop app (also applicable to mobile devices):
+
+## Running on Chrome / Edge
+
+
+  On macOS, a PWA installed via Chrome requires Chrome to be running; if it is not, opening the PWA
+  will automatically launch Chrome first.
+
+
+
+
+### Run Chrome or Edge browser on your computer
+
+### Visit the LobeChat webpage
+
+### In the top right corner of the address bar, click the Install icon
+
+### Follow the on-screen instructions to complete the PWA installation
+
+
+
+## Running on Safari
+
+Safari PWA requires macOS Ventura or later. The PWA installed by Safari does not require Safari to be open; you can directly open the PWA app.
+
+
+
+### Run Safari browser on your computer
+
+### Visit the LobeChat webpage
+
+### In the top right corner of the address bar, click the Share icon
+
+### Click Add to Dock
+
+### Follow the on-screen instructions to complete the PWA installation
+
+
+
+
+  The default installed LobeChat PWA icon has a black background; you can select the app icon, press cmd + i, and paste the following image to replace it with a white-background version.
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/features/pwa.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/pwa.zh-CN.mdx
new file mode 100644
index 0000000..b7b5564
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/pwa.zh-CN.mdx
@@ -0,0 +1,74 @@
+---
+title: LobeChat 支持渐进式 Web 应用(PWA)- 提升用户体验
+description: 了解渐进式 Web 应用(PWA)技术如何提升网页应用至接近原生应用体验,以及如何在桌面和移动设备上提供优化的用户体验。
+tags:
+ - 渐进式 Web 应用
+ - PWA 技术
+ - 用户体验
+ - 桌面应用
+ - 移动设备
+ - 轻量级
+ - 高性能
+ - 响应式布局
+---
+
+# 渐进式 Web 应用(PWA)
+
+
+
+我们深知在当今多设备环境下为用户提供无缝体验的重要性。为此,我们采用了渐进式 Web 应用 [PWA](https://support.google.com/chrome/answer/9658361) 技术,这是一种能够将网页应用提升至接近原生应用体验的现代 Web 技术。通过 PWA,LobeChat 能够在桌面和移动设备上提供高度优化的用户体验,同时保持轻量级和高性能的特点。在视觉和感觉上,我们也经过精心设计,以确保它的界面与原生应用无差别,提供流畅的动画、响应式布局和适配不同设备的屏幕分辨率。
+
+若您未熟悉 PWA 的安装过程,您可以按照以下步骤将 LobeChat 添加为您的桌面应用(也适用于移动设备):
+
+## Chrome / Edge 浏览器上运行
+
+
+ macOS 下,使用 Chrome 安装的 PWA 时,必须要求 Chrome 是打开状态,否则会自动打开 Chrome 再打开 PWA
+ 应用。
+
+
+
+
+### 在电脑上运行 Chrome 或 Edge 浏览器
+
+### 访问 LobeChat 网页
+
+### 在地址栏的右上角,单击 安装 图标
+
+### 根据屏幕上的指示完成 PWA 的安装
+
+
+
+## Safari 浏览器上运行
+
+Safari PWA 需要 macOS Ventura 或更高版本。Safari 安装的 PWA 并不要求 Safari 是打开状态,可以直接打开 PWA 应用。
+
+
+
+### 在电脑上运行 Safari 浏览器
+
+### 访问 LobeChat 网页
+
+### 在地址栏的右上角,单击 分享 图标
+
+### 点选 添加到程序坞
+
+### 根据屏幕上的指示完成 PWA 的安装
+
+
+
+
+  默认安装的 LobeChat PWA 图标是黑色背景的,您可以自行使用 cmd + i 粘贴如下图片,将其替换为白色背景的图标。
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/features/text-to-image.mdx b/DigitalHumanWeb/docs/usage/features/text-to-image.mdx
new file mode 100644
index 0000000..01a7a26
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/text-to-image.mdx
@@ -0,0 +1,26 @@
+---
+title: Text to Image in LobeChat
+description: >-
+ Transform your ideas into images with the latest text-to-image generation
+ technology integrated into LobeChat AI Assistant. Experience a private and
+ immersive creative process.
+tags:
+ - Text to Image Generation
+ - LobeChat AI Assistant
+ - DALL-E 3
+ - MidJourney
+ - Pollinations
+---
+
+# Text to Image Generation
+
+
+
+Supporting the latest text-to-image generation technology, LobeChat now enables users to directly utilize the Text to Image tool during conversations with the assistant. By harnessing the capabilities of AI tools such as [DALL-E 3](https://openai.com/dall-e-3), [MidJourney](https://www.midjourney.com/), and [Pollinations](https://pollinations.ai/), assistants can now transform your ideas into images. This allows for a more private and immersive creative process.
diff --git a/DigitalHumanWeb/docs/usage/features/text-to-image.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/text-to-image.zh-CN.mdx
new file mode 100644
index 0000000..c512847
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/text-to-image.zh-CN.mdx
@@ -0,0 +1,26 @@
+---
+title: LobeChat 文生图:文本转图片生成技术
+description: >-
+ LobeChat 现在支持最新的文本到图片生成技术,让用户可以在与助手对话中直接调用文生图工具进行创作。利用 DALL-E 3、MidJourney 和
+ Pollinations 等 AI 工具,助手们可以将你的想法转化为图像,让创作过程更私密和沉浸式。
+tags:
+ - LobeChat
+ - 文生图
+ - DALL-E 3
+ - MidJourney
+ - Pollinations
+ - AI工具
+---
+
+# Text to Image 文生图
+
+
+
+支持最新的文本到图片生成技术,LobeChat 现在能够让用户在与助手对话中直接调用文生图工具进行创作。通过利用 [`DALL-E 3`](https://openai.com/dall-e-3)、[`MidJourney`](https://www.midjourney.com/) 和 [`Pollinations`](https://pollinations.ai/) 等 AI 工具的能力,助手们现在可以将你的想法转化为图像,让你以更私密、更沉浸的方式完成创作过程。
diff --git a/DigitalHumanWeb/docs/usage/features/theme.mdx b/DigitalHumanWeb/docs/usage/features/theme.mdx
new file mode 100644
index 0000000..6172ff5
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/theme.mdx
@@ -0,0 +1,34 @@
+---
+title: LobeChat support Custom Themes
+description: >-
+ Explore LobeChat's flexible theme modes and color customization options for a
+ personalized interface design. Switch between light and dark modes, customize
+ theme colors, and choose between conversation bubble and document modes.
+tags:
+ - Custom Themes
+ - Personalized User Experiences
+ - Theme Modes
+ - Color Customization
+ - Interface Design
+ - LobeChat
+---
+
+# Custom Themes
+
+
+
+LobeChat places a strong emphasis on personalized user experiences in its interface design, and thus introduces flexible and diverse theme modes, including a light mode for daytime and a dark mode for nighttime.
+
+In addition to theme mode switching, we also provide a series of color customization options, allowing users to adjust the application's theme colors according to their preferences. Whether it's a stable deep blue, a lively peach pink, or a professional gray and white, users can find color choices in LobeChat that match their own style.
+
+
+ The default configuration can intelligently identify the user's system color mode and
+ automatically switch themes to ensure a consistent visual experience with the operating system.
+
+
+For users who prefer to manually adjust details, LobeChat also provides intuitive setting options and offers a choice between conversation bubble mode and document mode for chat scenes.
diff --git a/DigitalHumanWeb/docs/usage/features/theme.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/theme.zh-CN.mdx
new file mode 100644
index 0000000..778cf2c
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/theme.zh-CN.mdx
@@ -0,0 +1,30 @@
+---
+title: LobeChat 自定义主题 - 个性化体验
+description: 了解 LobeChat 的灵活多变主题模式,包括日间亮色模式和夜间深色模式,以及颜色定制选项,让用户根据喜好调整应用主题色彩。
+tags:
+ - LobeChat
+ - 自定义主题
+ - 主题模式
+ - 颜色定制
+ - 界面设计
+ - 个性化体验
+---
+
+# 自定义主题
+
+
+
+LobeChat 在界面设计上十分考虑用户的个性化体验,因此引入了灵活多变的主题模式,其中包括日间的亮色模式和夜间的深色模式。
+
+除了主题模式的切换,我们还提供了一系列的颜色定制选项,允许用户根据自己的喜好来调整应用的主题色彩。无论是想要沉稳的深蓝,还是希望活泼的桃粉,或者是专业的灰白,用户都能够在 LobeChat 中找到匹配自己风格的颜色选择。
+
+
+ 默认配置能够智能地识别用户系统的颜色模式,自动进行主题切换,以确保应用界面与操作系统保持一致的视觉体验。
+
+
+对于喜欢手动调控细节的用户,LobeChat 同样提供了直观的设置选项,针对聊天场景也提供了对话气泡模式和文档模式的选择。
diff --git a/DigitalHumanWeb/docs/usage/features/tts.mdx b/DigitalHumanWeb/docs/usage/features/tts.mdx
new file mode 100644
index 0000000..755c784
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/tts.mdx
@@ -0,0 +1,52 @@
+---
+title: LobeChat support Speech Synthesis and Recognition (TTS & STT)
+description: >-
+ Experience seamless Text-to-Speech (TTS) and Speech-to-Text (STT) technologies
+ in LobeChat. Choose from a variety of high-quality voices for personalized
+ communication. Learn more about Lobe TTS toolkit @lobehub/tts.
+tags:
+ - LobeChat
+ - TTS
+ - STT
+ - Voice Conversation
+ - Lobe TTS
+ - Text-to-Speech
+ - Speech-to-Text
+ - Voice Options
+---
+
+# TTS & STT Voice Conversation
+
+
+
+LobeChat supports Text-to-Speech (TTS) and Speech-to-Text (STT) technologies. Our application can convert text into clear voice output, allowing users to interact with our conversational agents as if talking to a real person. Users can choose from a variety of voices and pair the assistant with a suitable voice. For users who prefer auditory learning or need to absorb information while busy, TTS offers an excellent solution.
+
+In LobeChat, we have carefully selected a series of high-quality voice options (OpenAI Audio, Microsoft Edge Speech) to meet the needs of users from different regions and cultural backgrounds. Users can choose suitable voices based on personal preferences or specific scenarios, thereby obtaining a personalized communication experience.
+
+## Lobe TTS
+
+
+
+[`@lobehub/tts`](https://tts.lobehub.com) is a high-quality TTS toolkit developed using the TS language, supporting usage in both server and browser environments.
+
+- **Server**: With just 15 lines of code, it can achieve high-quality speech generation capabilities comparable to OpenAI TTS services. It currently supports EdgeSpeechTTS, MicrosoftTTS, OpenAITTS, and OpenAISTT.
+- **Browser**: It provides high-quality React Hooks and visual audio components, supporting common functions such as loading, playing, pausing, and dragging the timeline, and offering extensive audio track style adjustment capabilities.
+
+
+ During the implementation of the TTS feature in LobeChat, we found that there was no good frontend
+ TTS library on the market, which resulted in a lot of effort being spent on implementation,
+ including data conversion, audio progress management, and speech visualization. Adhering to the
+ "Community First" concept, we have polished and open-sourced this implementation, hoping to help
+ community developers who want to implement TTS.
+
diff --git a/DigitalHumanWeb/docs/usage/features/tts.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/tts.zh-CN.mdx
new file mode 100644
index 0000000..6b5a72d
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/tts.zh-CN.mdx
@@ -0,0 +1,46 @@
+---
+title: LobeChat 支持 TTS & STT 语音会话
+description: LobeChat 支持文字转语音(TTS)和语音转文字(STT)技术,提供高品质声音选项,个性化交流体验。了解更多关于 Lobe TTS 工具包。
+tags:
+ - TTS
+ - STT
+ - 语音会话
+ - LobeChat
+ - Lobe TTS
+ - 文字转语音
+ - 语音转文字
+---
+
+# TTS & STT 语音会话
+
+
+
+LobeChat 支持文字转语音(Text-to-Speech,TTS)和语音转文字(Speech-to-Text,STT)技术,我们的应用能够将文本信息转化为清晰的语音输出,用户可以像与真人交谈一样与我们的对话代理进行交流。用户可以从多种声音中选择,给助手搭配合适的音源。 同时,对于那些倾向于听觉学习或者想要在忙碌中获取信息的用户来说,TTS 提供了一个极佳的解决方案。
+
+在 LobeChat 中,我们精心挑选了一系列高品质的声音选项 (OpenAI Audio, Microsoft Edge Speech),以满足不同地域和文化背景用户的需求。用户可以根据个人喜好或者特定场景来选择合适的语音,从而获得个性化的交流体验。
+
+## Lobe TTS
+
+
+
+[`@lobehub/tts`](https://tts.lobehub.com) 是一个使用 TS 语言开发的,高质量 TTS 工具包,支持在服务端和浏览器中使用。
+
+- **服务端**:只需 15 行代码,即可实现对标 OpenAI TTS 服务的高质量语音生成能力。目前支持 EdgeSpeechTTS、MicrosoftTTS、OpenAITTS 与 OpenAISTT。
+- **浏览器**:提供了高质量的 React Hooks 与可视化音频组件,支持加载、播放、暂停、拖动时间轴等常用功能,且提供了非常丰富的音轨样式调整能力。
+
+
+ 我们在实现 LobeChat 的 TTS 功能过程中,发现市面上并没有一款很好的 TTS
+  前端库,导致在实现上耗费了很多精力,包括数据转换、音频进度管理、语音可视化等。秉承「Community
+  First」的理念,我们把这套实现打磨并开源了出来,希望能帮助到想要实现 TTS 的社区开发者们。
+
diff --git a/DigitalHumanWeb/docs/usage/features/vision.mdx b/DigitalHumanWeb/docs/usage/features/vision.mdx
new file mode 100644
index 0000000..bf14216
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/vision.mdx
@@ -0,0 +1,28 @@
+---
+title: LobeChat support Vision Recognition
+description: >-
+ Discover how LobeChat integrates visual recognition capabilities like OpenAI's
+ gpt-4-vision and Google Gemini Pro vision for intelligent conversations based
+ on uploaded images.
+tags:
+ - LobeChat
+ - Model Vision Recognition
+ - Multimodal Interaction
+ - Visual Elements
+ - Intelligent Conversations
+---
+
+# Model Vision Recognition
+
+
+
+LobeChat now supports large language models with visual recognition capabilities such as OpenAI's [`gpt-4-vision`](https://platform.openai.com/docs/guides/vision), Google Gemini Pro vision, and Zhipu GLM-4 Vision, enabling LobeChat to have multimodal interaction capabilities. Users can easily upload or drag and drop images into the chat box, and the assistant will be able to recognize the content of the images and engage in intelligent conversations based on them, creating more intelligent and diverse chat scenarios.
+
+This feature opens up new ways of interaction, allowing communication to extend beyond text and encompass rich visual elements. Whether it's sharing images in daily use or interpreting images in specific industries, the assistant can provide an excellent conversational experience.
diff --git a/DigitalHumanWeb/docs/usage/features/vision.zh-CN.mdx b/DigitalHumanWeb/docs/usage/features/vision.zh-CN.mdx
new file mode 100644
index 0000000..1e2bf3b
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/features/vision.zh-CN.mdx
@@ -0,0 +1,25 @@
+---
+title: LobeChat 支持多模态交互:视觉识别助力智能对话
+description: LobeChat 支持多种具有视觉识别能力的大语言模型,用户可上传或拖拽图片,助手将识别内容并展开智能对话,打造更智能、多元化的聊天场景。
+tags:
+ - LobeChat
+ - 多模态交互
+ - 视觉识别
+ - 智能对话
+ - 大语言模型
+---
+
+# 模型视觉识别
+
+
+
+LobeChat 已经支持 OpenAI 的 [`gpt-4-vision`](https://platform.openai.com/docs/guides/vision) 、Google Gemini Pro vision、智谱 GLM-4 Vision 等具有视觉识别能力的大语言模型,这使得 LobeChat 具备了多模态交互的能力。用户可以轻松上传图片或者拖拽图片到对话框中,助手将能够识别图片内容,并在此基础上进行智能对话,构建更智能、更多元化的聊天场景。
+
+这一特性打开了新的互动方式,使得交流不再局限于文字,而是可以涵盖丰富的视觉元素。无论是日常使用中的图片分享,还是在特定行业内的图像解读,助手都能提供出色的对话体验。
diff --git a/DigitalHumanWeb/docs/usage/foundation/basic.mdx b/DigitalHumanWeb/docs/usage/foundation/basic.mdx
new file mode 100644
index 0000000..015032b
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/basic.mdx
@@ -0,0 +1,55 @@
+---
+title: Basic Usage Guide for Conversations - Large Language Models (LLMs)
+description: >-
+ Learn about the fundamental functions for interacting with Large Language
+ Models (LLMs) and how to utilize features like model selection, file/image
+ upload, temperature setting, and more.
+tags:
+ - Large Language Models
+ - Model Selection
+ - File Upload
+ - Temperature Setting
+ - Voice Input
+ - Plugin Setting
+---
+
+# Basic Usage Guide for Conversations
+
+
+
+In general, the basic interaction with Large Language Models (LLMs) can be done through the fundamental functions provided in this area (as shown above).
+
+## Basic Function Description
+
+
+
+1. **Model Selection**: Choose the Large Language Model (LLM) to be used in the current conversation. For model settings, refer to [Model Providers](/docs/usage/providers).
+2. **File/Image Upload**: When the selected model supports file or image recognition, users can upload files or images during the conversation with the model.
+3. **Temperature Setting**: Adjust the randomness level of the model's output. The higher the value, the more random the output results. For detailed information, refer to the [Large Language Model Guide](/docs/usage/agents/model).
+4. **History Record Setting**: Set the number of chat records the model needs to remember in this conversation. The longer the history, the more conversation content the model can remember, but it will also consume more context tokens.
+5. **Voice Input**: Click this button to convert speech to text input. For more information, refer to [Speech-to-Text Conversion](/docs/usage/foundation/tts-stt).
+6. **Plugin Setting**: Choose the plugins to enable in this conversation. For more information, refer to [Plugin Usage](/docs/usage/plugins/basic-usage).
+7. **Token Usage**: Display the context length and token consumption of this conversation.
+8. **Start New Topic**: End the current conversation and start a new topic. For more information, refer to [Topic Usage](/docs/usage/agents/topics).
+9. **Send Button**: Send the current input content to the model. The dropdown menu provides additional send operation options.
+
+
+
+
+ - **Send Shortcut**: Set whether messages are sent with the Enter key or ⌘ + Enter; the other combination inserts a line break.
+ - **Add an AI Message**: Manually add and edit a message from the AI role in the conversation context; this will not trigger a model response.
+ - **Add a User Message**: Add the current input content to the conversation context as a message from the user role; this will not trigger a model response.
+
diff --git a/DigitalHumanWeb/docs/usage/foundation/basic.zh-CN.mdx b/DigitalHumanWeb/docs/usage/foundation/basic.zh-CN.mdx
new file mode 100644
index 0000000..1d6ecf3
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/basic.zh-CN.mdx
@@ -0,0 +1,56 @@
+---
+title: 会话基本使用指南 - 大型语言模型交互指南
+description: 了解如何使用大型语言模型进行基本交互,包括模型选择、文件/图片上传、温度设置、历史记录设置等。
+tags:
+ - 大型语言模型
+ - LLM
+ - 模型选择
+ - 文件上传
+ - 温度设置
+ - 历史记录设置
+ - 语音输入
+ - 插件设置
+ - Token 用量
+ - 新建话题
+ - 发送按钮
+---
+
+# 会话基本使用指南
+
+
+
+通常情况下,与大型语言模型 (LLMs) 的基本交互可以通过此区域(如上图)提供的基础功能进行。
+
+## 基本功能说明
+
+
+
+1. **模型选择**:选择当前对话所使用的大型语言模型 (LLM)。模型的设置详见[模型服务商](/zh/docs/usage/providers)。
+2. **文件/图片上传**:当所选模型支持文件或图片识别功能时,用户可以在与模型的对话中上传文件或图片。
+3. **温度设置**:调节模型输出的随机性程度。数值越高,输出结果越随机。详细说明请参考[大语言模型指南](/zh/docs/usage/agents/model)。
+4. **历史记录设置**:设定本次对话中模型需要记忆的聊天记录数量。历史记录越长,模型能够记忆的对话内容越多,但同时也会消耗更多的上下文 token。
+5. **语音输入**:点击该按钮后,可以将语音转换为文字输入。有关详细信息,请参考[语音文字转换](/zh/docs/usage/foundation/tts-stt)。
+6. **插件设置**:选择本次对话中需要启用的插件。有关详细信息,请参考[插件使用](/zh/docs/usage/plugins/basic-usage)。
+7. **Token 用量**:显示本次对话的上下文长度以及 Token 消耗情况。
+8. **新建话题**:结束当前对话并开启一个新的对话主题。有关详细信息,请参考[话题使用](/zh/docs/usage/agents/topics)。
+9. **发送按钮**:将当前输入内容发送至模型。下拉菜单提供额外的发送操作选项。
+
+
+
+
+ - **发送快捷键**:设置使用 Enter 键或 ⌘ + Enter 键发送消息与换行的快捷方式。
+ - **添加一条 AI 消息**:在对话上下文中手动添加并编辑一条由 AI 角色输入的消息,该操作不会触发模型响应。
+ - **添加一条用户消息**:将当前输入内容作为用户角色输入的消息添加到对话上下文中,该操作不会触发模型响应。
+
diff --git a/DigitalHumanWeb/docs/usage/foundation/share.mdx b/DigitalHumanWeb/docs/usage/foundation/share.mdx
new file mode 100644
index 0000000..25b438f
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/share.mdx
@@ -0,0 +1,46 @@
+---
+title: Share Conversation Records - Screenshot Sharing & ShareGPT
+description: >-
+ Learn how to share conversation records in LobeChat using Screenshot Sharing
+ and ShareGPT methods. Capture conversation details in images or generate
+ permanent links effortlessly.
+tags:
+ - Share Conversation Records
+ - Screenshot Sharing
+ - ShareGPT
+ - Conversation Sharing
+ - AI Conversation Sharing
+---
+
+# Share Conversation Records
+
+
+
+By clicking the `Share` button in the top right corner of the chat window, you can share the current conversation records with others. LobeChat supports two sharing methods: `Screenshot Sharing` and `ShareGPT Sharing`.
+
+## Screenshot Sharing
+
+
+
+The screenshot sharing feature will generate and save an image of the current conversation records, with the following options:
+
+- Include Assistant Role Settings: Display the assistant's Prompt information in the screenshot.
+- Include Background Image: Add a gradient background to the generated image.
+- Include Footer: Add LobeChat footer information to the generated image.
+- Image Format: Choose the format for saving the image.
+
+## ShareGPT
+
+
+
+[ShareGPT](https://sharegpt.com/) is an AI conversation sharing platform that allows users to easily share their conversations with Large Language Models (LLMs). Users can generate a permanent link with just one click, making it convenient to share these conversations with friends or others. By integrating ShareGPT functionality, LobeChat can generate links for conversation records with just one click, making sharing easy.
diff --git a/DigitalHumanWeb/docs/usage/foundation/share.zh-CN.mdx b/DigitalHumanWeb/docs/usage/foundation/share.zh-CN.mdx
new file mode 100644
index 0000000..c8be6fa
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/share.zh-CN.mdx
@@ -0,0 +1,43 @@
+---
+title: 分享会话记录 - LobeChat 分享功能介绍
+description: 了解如何通过 LobeChat 的分享功能分享会话记录,包括截图分享和 ShareGPT 分享方式。通过分享功能,轻松与他人分享您的对话。
+tags:
+ - LobeChat
+ - 分享会话记录
+ - 截图分享
+ - ShareGPT
+ - 对话分享
+---
+
+# 分享会话记录
+
+
+
+通过会话窗口右上角的`分享`按钮,您可以将当前会话记录分享给其他人。LobeChat 支持两种分享方式:`截图分享`和 `ShareGPT 分享`。
+
+## 截图分享
+
+
+
+截图分享功能将生成当前会话记录的图片并保存,其选项说明如下:
+
+- 包含助手角色设置:在截图中显示助手的 Prompt 信息。
+- 包含背景图:在生成的图片中添加渐变背景。
+- 包含页脚:在生成的图片中添加 LobeChat 页脚信息。
+- 图片格式:选择保存图片的格式。
+
+## ShareGPT
+
+
+
+[ShareGPT](https://sharegpt.com/) 是一个 AI 对话分享平台,允许用户便捷地分享他们与大型语言模型 (LLM) 的对话。用户只需点击即可生成永久链接,方便与朋友或其他人分享这些对话。LobeChat 通过集成 ShareGPT 功能,可以一键将对话记录生成链接,方便分享。
diff --git a/DigitalHumanWeb/docs/usage/foundation/text2image.mdx b/DigitalHumanWeb/docs/usage/foundation/text2image.mdx
new file mode 100644
index 0000000..8cb18cd
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/text2image.mdx
@@ -0,0 +1,49 @@
+---
+title: Guide to Using Text-to-Image Models in LobeChat
+description: >-
+ Learn how to utilize text-to-image generation in LobeChat using DALL-E and
+ Midjourney plugins. Generate images seamlessly with AI assistance.
+tags:
+ - Text-to-Image Models
+ - LobeChat
+ - DALL-E
+ - Midjourney
+ - Plugin Installation
+ - AI Assistance
+---
+
+# Guide to Using Text-to-Image Models in LobeChat
+
+LobeChat supports text-to-image generation through a plugin mechanism. Currently, LobeChat comes with the built-in DALL-E plugin, which allows users to generate images using OpenAI's DALL-E model. Additionally, users can also install the official Midjourney plugin to utilize the Midjourney text-to-image feature.
+
+## DALL-E Model
+
+If you have configured the OpenAI API, you can enable the DALL-E plugin directly in the assistant interface and input prompts in the conversation for AI to generate images for you.
+
+
+
+If the DALL-E plugin is not available, please check if the OpenAI API key has been correctly configured.
+
+## Midjourney Model
+
+LobeChat also offers the Midjourney plugin, which generates images by calling the Midjourney API. Please install the Midjourney plugin in the plugin store beforehand.
+
+
+
+
+ For plugin installation, please refer to [Plugin Usage](/docs/usage/plugins/basic-usage).
+
+
+When using the Midjourney plugin for the first time, you will need to fill in your Midjourney API key in the plugin settings.
+
+
diff --git a/DigitalHumanWeb/docs/usage/foundation/text2image.zh-CN.mdx b/DigitalHumanWeb/docs/usage/foundation/text2image.zh-CN.mdx
new file mode 100644
index 0000000..3d6b189
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/text2image.zh-CN.mdx
@@ -0,0 +1,46 @@
+---
+title: LobeChat 文生图模型使用指南
+description: 了解如何在 LobeChat 中使用 DALL-E 和 Midjourney 模型生成图片,配置插件并调用 API。
+tags:
+ - LobeChat
+ - 文生图模型
+ - DALL-E
+ - Midjourney
+ - 插件
+ - API
+---
+
+# 文生图模型使用指南
+
+LobeChat 通过插件机制支持文本生成图片功能。目前,LobeChat 内置了 DALL-E 插件,支持调用 OpenAI 的 DALL-E 模型进行图片生成。此外,用户还可以安装官方提供的 Midjourney 插件,使用 Midjourney 文生图功能。
+
+## DALL-E 模型
+
+如果您已配置 OpenAI API,可以直接在助手界面启用 DALL-E 插件,并在对话中输入提示词,让 AI 为您生成图片。
+
+
+
+如果 DALL-E 插件不可用,请检查 OpenAI API 密钥是否已正确配置。
+
+## Midjourney 模型
+
+LobeChat 还提供 Midjourney 插件,通过 API 调用 Midjourney 生成图片。请提前在插件商店中安装 Midjourney 插件。
+
+
+
+
+ 插件安装请参考[插件使用](/zh/docs/usage/plugins/basic-usage)
+
+
+首次使用 Midjourney 插件时,您需要在插件设置中填写您的 Midjourney API 密钥。
+
+
diff --git a/DigitalHumanWeb/docs/usage/foundation/translate.mdx b/DigitalHumanWeb/docs/usage/foundation/translate.mdx
new file mode 100644
index 0000000..43f8b76
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/translate.mdx
@@ -0,0 +1,40 @@
+---
+title: Translation of Conversation Records - LobeChat
+description: >-
+ Learn how to translate conversation content in LobeChat with just one click.
+ Customize translation models for accurate results.
+tags:
+ - Translation
+ - Conversation Translation
+ - AI Translation Model
+---
+
+# Translation of Conversation Records
+
+
+
+## Translating Conversation Content
+
+LobeChat lets users translate conversation content into a specified language with a single click. After selecting the target language, LobeChat will use the preset AI model for translation and display the translated result in real time in the chat window.
+
+
+
+## Translation Model Settings
+
+You can specify the model you wish to use as a translation assistant in the settings.
+
+
+
+- Open the `Settings` panel
+- Find the `Translation Settings` option under `System Assistants`
+- Specify a model for your `Translation Assistant`
diff --git a/DigitalHumanWeb/docs/usage/foundation/translate.zh-CN.mdx b/DigitalHumanWeb/docs/usage/foundation/translate.zh-CN.mdx
new file mode 100644
index 0000000..5ba9784
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/translate.zh-CN.mdx
@@ -0,0 +1,39 @@
+---
+title: LobeChat 会话翻译功能 - 一键实时翻译对话内容
+description: LobeChat 支持用户一键将对话内容翻译成指定语言,实时显示翻译结果。了解如何设置翻译模型以优化翻译体验。
+tags:
+ - LobeChat
+ - 会话翻译
+ - 实时翻译
+ - 翻译模型设置
+---
+
+# 翻译会话记录
+
+
+
+## 翻译对话中的内容
+
+LobeChat 支持用户一键将对话内容翻译成指定语言。选择目标语言后,LobeChat 将调用预先设置的 AI 模型进行翻译,并将翻译结果实时显示在聊天窗口中。
+
+
+
+## 翻译模型设置
+
+你可以在设置中指定你希望用作翻译助手的模型。
+
+
+
+- 打开`设置`面板
+- 在`系统助手`中找到`翻译设置`选项
+- 为你的`翻译助手`指定一个模型
diff --git a/DigitalHumanWeb/docs/usage/foundation/tts-stt.mdx b/DigitalHumanWeb/docs/usage/foundation/tts-stt.mdx
new file mode 100644
index 0000000..a99925d
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/tts-stt.mdx
@@ -0,0 +1,47 @@
+---
+title: Guide to Text-to-Speech Conversion - LobeChat TTS Feature
+description: >-
+ Learn how to use LobeChat's text-to-speech (TTS) feature for voice input and
+ output. Explore speech-to-text (STT) functionality and customize TTS settings.
+tags:
+ - Text-to-Speech
+ - TTS feature
+ - Speech-to-Text
+ - STT feature
+ - TTS settings
+---
+
+# Guide to Text-to-Speech Conversion
+
+LobeChat supports text-to-speech conversion, allowing users to input content through voice and have the AI output read aloud through speech.
+
+## Text-to-Speech (TTS)
+
+Select any content in the chat window, choose `Text-to-Speech`, and the AI will use the TTS model to read the text content aloud.
+
+
+
+## Speech-to-Text (STT)
+
+Select the voice input feature in the input window, and LobeChat will convert your speech to text and input it into the text box. After completing the input, you can send it directly to the AI.
+
+
+
+## Text-to-Speech Conversion Settings
+
+You can specify the model you want to use for text-to-speech conversion in the settings.
+
+
+
+- Open the `Settings` panel
+- Find the `Text-to-Speech` settings
+- Select the speech service and AI model you prefer
diff --git a/DigitalHumanWeb/docs/usage/foundation/tts-stt.zh-CN.mdx b/DigitalHumanWeb/docs/usage/foundation/tts-stt.zh-CN.mdx
new file mode 100644
index 0000000..b0f885d
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/tts-stt.zh-CN.mdx
@@ -0,0 +1,45 @@
+---
+title: LobeChat 文字语音转换功能指南
+description: 了解如何在 LobeChat 中使用文字语音转换功能,包括文字转语音(TTS)和语音转文字(STT),以及设置您喜欢的语音模型。
+tags:
+ - LobeChat
+ - 文字语音转换
+ - TTS
+ - STT
+ - 语音模型
+---
+
+# 文字语音转换使用指南
+
+LobeChat 支持文字语音转换功能,允许用户通过语音输入内容,以及将 AI 输出的内容通过语音播报。
+
+## 文字转语音(TTS)
+
+在对话窗口中选中任意内容,选择`文字转语音`,AI 将通过 TTS 模型对文本内容进行语音播报。
+
+
+
+## 语音转文字(STT)
+
+在输入窗口中选择语音输入功能,LobeChat 将您的语音转换为文字并输入到文本框中,完成输入后可以直接发送给 AI。
+
+
+
+## 文字语音转换设置
+
+你可以在设置中为文字语音转换功能指定你希望使用的模型。
+
+
+
+- 打开`设置`面板
+- 找到`文字转语音`设置
+- 选择您所需的语音服务和 AI 模型
diff --git a/DigitalHumanWeb/docs/usage/foundation/vision.mdx b/DigitalHumanWeb/docs/usage/foundation/vision.mdx
new file mode 100644
index 0000000..319bf99
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/vision.mdx
@@ -0,0 +1,49 @@
+---
+title: Enhancing Multimodal Interaction with Visual Recognition Models
+description: >-
+ Explore how LobeChat integrates visual recognition capabilities into large
+ language models, enabling multimodal interactions for enhanced user
+ experiences.
+tags:
+ - Visual Recognition
+ - Multimodal Interaction
+ - Large Language Models
+ - LobeChat
+ - Custom Model Configuration
+---
+
+# Visual Model User Guide
+
+The ecosystem of large language models with visual recognition support is becoming increasingly rich. Starting from `gpt-4-vision`, LobeChat supports a variety of large language models with visual recognition capabilities, giving LobeChat multimodal interaction abilities.
+
+
+
+## Image Input
+
+If the model you are currently using supports visual recognition, you can input image content by uploading a file or dragging the image directly into the input box. The model will automatically recognize the image content and provide feedback based on your prompts.
+
+
+
+## Visual Models
+
+In the model list, models with a `👁️` icon next to their names indicate that the model supports visual recognition. Selecting such a model allows you to send image content.
+
+
+
+## Custom Model Configuration
+
+If you need to add a custom model that is not currently in the list and explicitly supports visual recognition, you can enable the `Visual Recognition` feature in the `Custom Model Configuration` to allow the model to interact with images.
+
+
diff --git a/DigitalHumanWeb/docs/usage/foundation/vision.zh-CN.mdx b/DigitalHumanWeb/docs/usage/foundation/vision.zh-CN.mdx
new file mode 100644
index 0000000..07d1dc7
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/foundation/vision.zh-CN.mdx
@@ -0,0 +1,45 @@
+---
+title: 视觉模型使用指南 - 支持多模态交互的大语言模型
+description: "了解如何在LobeChat中使用支持视觉识别功能的大语言模型,通过上传图片或拖拽图片到输入框进行交互,并选择带有\U0001F441️图标的模型进行图片内容交互。"
+tags:
+ - 视觉模型
+ - 多模态交互
+ - 大语言模型
+ - 自定义模型配置
+---
+
+# 视觉模型使用指南
+
+当前支持视觉识别的大语言模型生态日益丰富。从 `gpt-4-vision` 开始,LobeChat 开始支持各类具有视觉识别能力的大语言模型,这使得 LobeChat 具备了多模态交互的能力。
+
+
+
+## 图片输入
+
+如果你当前使用的模型支持视觉识别功能,你可以通过上传文件或直接将图片拖入输入框的方式输入图片内容。模型会自动识别图片内容,并根据你的提示词给出反馈。
+
+
+
+## 视觉模型
+
+在模型列表中,模型名称后面带有`👁️`图标表示该模型支持视觉识别功能。选择该模型后即可发送图片内容。
+
+
+
+## 自定义模型配置
+
+如果您需要添加当前列表中没有的自定义模型,并且该模型明确支持视觉识别功能,您可以在`自定义模型配置`中开启`视觉识别`功能,使该模型能够与图片进行交互。
+
+
diff --git a/DigitalHumanWeb/docs/usage/plugins/basic-usage.mdx b/DigitalHumanWeb/docs/usage/plugins/basic-usage.mdx
new file mode 100644
index 0000000..51374bc
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/plugins/basic-usage.mdx
@@ -0,0 +1,69 @@
+---
+title: Enhance Your LobeChat Assistant with Plugins
+description: >-
+ Learn how to expand your LobeChat assistant's capabilities by enabling and
+ using various plugins. Access the Plugin Store, install plugins, and configure
+ them to enhance your assistant's functionality.
+tags:
+ - LobeChat plugins
+ - Plugin Store
+ - Using Plugins
+ - Plugin Configuration
+---
+
+# Plugin Usage
+
+The plugin system is a key element in expanding the capabilities of assistants in LobeChat. You can enhance the assistant's abilities by enabling a variety of plugins.
+
+Watch the following video to quickly get started with using LobeChat plugins:
+
+
+
+## Plugin Store
+
+You can access the Plugin Store by navigating to "Extension Tools" -> "Plugin Store" in the session toolbar.
+
+
+
+The Plugin Store allows you to directly install and use plugins within LobeChat.
+
+
+
+## Using Plugins
+
+After installing a plugin, simply enable it under the current assistant to use it.
+
+
+
+## Plugin Configuration
+
+Some plugins may require specific configurations, such as API keys.
+
+After installing a plugin, you can click on "Settings" to enter the plugin's settings and fill in the required configurations:
+
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/plugins/basic-usage.zh-CN.mdx b/DigitalHumanWeb/docs/usage/plugins/basic-usage.zh-CN.mdx
new file mode 100644
index 0000000..a6d5335
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/plugins/basic-usage.zh-CN.mdx
@@ -0,0 +1,67 @@
+---
+title: LobeChat 插件使用指南
+description: 了解如何在 LobeChat 中使用插件来增强助手功能,包括插件商店浏览、安装、配置等操作。
+tags:
+ - LobeChat
+ - 插件
+ - 助手功能
+ - 插件商店
+ - 插件配置
+---
+
+# 插件使用
+
+插件体系是 LobeChat 中扩展助手能力的关键要素,你可以通过为助手启用各式各样的插件来增强助手的各项能力。
+
+查看以下视频,快速上手使用 LobeChat 插件:
+
+
+
+## 插件商店
+
+你可以在会话工具条中的 「扩展工具」 -> 「插件商店」,进入插件商店。
+
+
+
+插件商店中的插件均可在 LobeChat 中直接安装并使用。
+
+
+
+## 使用插件
+
+安装完毕插件后,只需在当前助手下开启插件即可使用。
+
+
+
+## 插件配置
+
+部分插件可能需要你进行相应的配置,例如 API Key 等。
+
+你可以在安装插件后,点击「设置」进入插件设置页填写相应配置:
+
+
+
+
diff --git a/DigitalHumanWeb/docs/usage/plugins/custom-plugin.mdx b/DigitalHumanWeb/docs/usage/plugins/custom-plugin.mdx
new file mode 100644
index 0000000..ffaba5d
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/plugins/custom-plugin.mdx
@@ -0,0 +1,37 @@
+---
+title: Custom LobeChat Plugins Installation and Development Guide
+description: >-
+ Learn how to install custom plugins in LobeChat and develop your own plugins
+ to enhance your AI assistant's capabilities.
+tags:
+ - Custom Plugins
+ - LobeChat
+ - Plugin Installation
+ - Plugin Development
+ - ChatGPT Plugins
+---
+
+# Custom Plugins
+
+## Installing Custom Plugins
+
+If you wish to install a plugin that is not available in the LobeChat plugin store, such as a custom-developed LobeChat plugin, you can click on "Custom Plugins" to install it:
+
+
+
+In addition, LobeChat's plugin mechanism is compatible with ChatGPT plugins, so you can easily install corresponding ChatGPT plugins.
+
+If you want to try installing custom plugins on your own, you can use the following links to try:
+
+- `Custom Lobe Plugin` Mock Credit Card: [https://lobe-plugin-mock-credit-card.vercel.app/manifest.json](https://lobe-plugin-mock-credit-card.vercel.app/manifest.json)
+- `ChatGPT Plugin` Access Links: [https://www.accesslinks.ai/.well-known/ai-plugin.json](https://www.accesslinks.ai/.well-known/ai-plugin.json)
+
+
+
+
+
+
+
+## Developing Custom Plugins
+
+If you wish to develop a LobeChat plugin on your own, feel free to refer to the [Plugin Development Guide](/docs/usage/plugins/development) to expand the possibilities of your AI assistant!
diff --git a/DigitalHumanWeb/docs/usage/plugins/custom-plugin.zh-CN.mdx b/DigitalHumanWeb/docs/usage/plugins/custom-plugin.zh-CN.mdx
new file mode 100644
index 0000000..8693557
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/plugins/custom-plugin.zh-CN.mdx
@@ -0,0 +1,35 @@
+---
+title: 自定义插件 - LobeChat 插件安装与开发指南
+description: 学习如何安装自定义插件和开发 LobeChat 插件,扩展你的 AI 智能助手的功能。
+tags:
+ - 自定义插件
+ - LobeChat
+ - 插件安装
+ - 插件开发
+ - AI 智能助手
+---
+
+# 自定义插件
+
+## 安装自定义插件
+
+如果你希望安装一个不在 LobeChat 插件商店中的插件,例如自己开发的 LobeChat 插件,你可以点击「自定义插件」进行安装:
+
+
+
+此外,LobeChat 的插件机制兼容了 ChatGPT 的插件,因此你可以一键安装相应的 ChatGPT 插件。
+
+如果你希望尝试自行安装自定义插件,你可以使用以下链接来尝试:
+
+- `自定义 Lobe 插件` Mock Credit Card:[https://lobe-plugin-mock-credit-card.vercel.app/manifest.json](https://lobe-plugin-mock-credit-card.vercel.app/manifest.json)
+- `ChatGPT 插件` Access Links:[https://www.accesslinks.ai/.well-known/ai-plugin.json](https://www.accesslinks.ai/.well-known/ai-plugin.json)
+
+
+
+
+
+
+
+## 开发自定义插件
+
+如果你希望自行开发一个 LobeChat 的插件,欢迎查阅 [插件开发指南](/zh/docs/usage/plugins/development) 以扩展你的 AI 智能助手的可能性边界!
diff --git a/DigitalHumanWeb/docs/usage/plugins/development.mdx b/DigitalHumanWeb/docs/usage/plugins/development.mdx
new file mode 100644
index 0000000..286bd8b
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/plugins/development.mdx
@@ -0,0 +1,329 @@
+---
+title: LobeChat Plugin Development Guide
+description: >-
+ Learn how to create and integrate custom plugins in LobeChat, including plugin
+ composition, custom plugin workflow, local plugin development, manifest
+ structure, project structure, server-side implementation, plugin UI interface,
+ deployment, and release.
+tags:
+ - Plugin Development
+ - LobeChat
+ - Custom Plugins
+ - Plugin Workflow
+ - Manifest Structure
+ - Server-side Implementation
+ - Plugin UI Interface
+ - Deployment
+ - Release
+---
+
+# Plugin Development Guide
+
+## Plugin Composition
+
+A LobeChat plugin consists of the following components:
+
+1. **Plugin Index**: Used to display basic information about the plugin, including the plugin name, description, author, version, and a link to the plugin manifest. The official plugin index can be found at [lobe-chat-plugins](https://github.com/lobehub/lobe-chat-plugins). If you want to publish a plugin to the official plugin marketplace, you need to [submit a PR](https://github.com/lobehub/lobe-chat-plugins/pulls) to this repository.
+2. **Plugin Manifest**: Used to describe the functionality of the plugin, including the server-side description, frontend display information, and version number. For a detailed introduction to the manifest, see [manifest][manifest-docs-url].
+3. **Plugin Services**: Used to implement the server-side and frontend modules described in the plugin manifest, as follows:
+ - **Server-side**: Needs to implement the interface capabilities described in the `api` section of the manifest.
+ - **Frontend UI** (optional): Needs to implement the interface described in the `ui` section of the manifest. This interface will be displayed in plugin messages, allowing for a richer display of information than plain text.
+
+## Custom Plugin Workflow
+
+This section will introduce how to add and use a custom plugin in LobeChat.
+
+
+ ### Create and Launch Plugin Project
+
+First, create a plugin project locally. You can use the template we have prepared: [lobe-chat-plugin-template][lobe-chat-plugin-template-url].
+
+```bash
+$ git clone https://github.com/lobehub/chat-plugin-template.git
+$ cd chat-plugin-template
+$ npm i
+$ npm run dev
+```
+
+When you see `ready started server on 0.0.0.0:3400, url: http://localhost:3400`, it means the plugin service has been successfully launched locally.
+
+
+
+### Add Local Plugin in LobeChat Role Settings
+
+Next, go to LobeChat, create a new assistant, and go to its session settings page:
+
+
+
+Click the Add button on the right of the plugin list to open the custom plugin adding popup:
+
+
+
+Fill in the **Plugin Description File Url** with `http://localhost:3400/manifest-dev.json`, which is the manifest address of the plugin we started locally.
+
+At this point, you should see that the identifier of the plugin has been automatically recognized as `chat-plugin-template`. Next, you need to fill in the remaining form fields (only the title is required), and then click the Save button to complete the custom plugin addition.
+
+
+
+After adding, you can see the newly added plugin in the plugin list. If you need to modify the plugin configuration, you can click the Settings button on the far right to make changes.
+
+
+
+### Test Plugin Function in Session
+
+Next, we need to test whether the plugin's function is working properly.
+
+Click the Back button to return to the session area, and then send a message to the assistant: "What should I wear?" At this point, the assistant will try to ask you about your gender and current mood.
+
+
+
+After answering, the assistant will initiate the plugin call, retrieve recommended clothing data from the server based on your gender and mood, and push it to you. Finally, it will provide a text summary based on this information.
+
+
+
+After completing these operations, you have understood the basic process of adding custom plugins and using them in LobeChat.
+
+
+
+## Local Plugin Development
+
+In the above process, we have learned how to add and use plugins. Next, we will focus on the process of developing custom plugins.
+
+### Manifest
+
+The `manifest` aggregates information on how the plugin's functionality is implemented. The core fields are `api` and `ui`, which respectively describe the server-side interface capabilities and the front-end rendering interface address of the plugin.
+
+Taking the `manifest` in the template we provided as an example:
+
+```json
+{
+ "api": [
+ {
+ "url": "http://localhost:3400/api/clothes",
+ "name": "recommendClothes",
+ "description": "Recommend clothes to the user based on their mood",
+ "parameters": {
+ "properties": {
+ "mood": {
+ "description": "The user's current mood, with optional values: happy, sad, anger, fear, surprise, disgust",
+          "enum": ["happy", "sad", "anger", "fear", "surprise", "disgust"],
+ "type": "string"
+ },
+ "gender": {
+ "type": "string",
+ "enum": ["man", "woman"],
+ "description": "The user's gender, which needs to be asked for from the user to obtain this information"
+ }
+ },
+ "required": ["mood", "gender"],
+ "type": "object"
+ }
+ }
+ ],
+ "gateway": "http://localhost:3400/api/gateway",
+ "identifier": "chat-plugin-template",
+ "ui": {
+ "url": "http://localhost:3400",
+ "height": 200
+ },
+ "version": "1"
+}
+```
+
+In this manifest, it mainly includes the following parts:
+
+1. `identifier`: This is the unique identifier of the plugin, used to distinguish different plugins. This field needs to be globally unique.
+2. `api`: This is an array containing all the API interface information of the plugin. Each interface includes the url, name, description, and parameters fields, all of which are required. The `description` and `parameters` fields will be sent to GPT as the `functions` parameter of the [Function Call](https://sspai.com/post/81986), and the parameters need to comply with the [JSON Schema](https://json-schema.org/) specification. In this example, the API interface is named `recommendClothes`, and its function is to recommend clothes based on the user's mood and gender. The interface parameters include the user's mood and gender, both of which are required.
+3. `ui`: This field contains information about the plugin's user interface, indicating from which address LobeChat loads the plugin's front-end interface. Since LobeChat plugin interface loading is implemented based on iframes, the height and width of the plugin interface can be specified as needed.
+4. `gateway`: This field specifies the gateway for LobeChat to query the plugin's API interface. LobeChat's default plugin gateway is a cloud-based service, and requests for custom plugins need to be sent to a locally launched service. Remote calls to a local address are generally not feasible. The `gateway` field solves this problem. By specifying the gateway in the manifest, LobeChat will send plugin requests to this address, and the local gateway address will dispatch requests to the local plugin service. Published online plugins do not need to specify this field.
+5. `version`: This is the version number of the plugin, which currently has no effect.
+
+In actual development, you can modify the plugin's description list according to your own needs to declare the functionality you want to implement. For a complete introduction to each field in the manifest, see: [manifest][manifest-docs-url].
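+
+To make the relationship concrete, the sketch below (a hypothetical helper, not part of `@lobehub/chat-plugin-sdk`) shows how the manifest's `api` entries reduce to the `name`/`description`/`parameters` triple that is sent to the model as function-call declarations, while transport details such as `url` stay behind:
+
+```ts
+// Hypothetical helper: project manifest `api` entries onto the
+// `functions` payload of a function-calling request.
+interface ManifestApi {
+  url: string;
+  name: string;
+  description: string;
+  parameters: Record<string, unknown>; // a JSON Schema object
+}
+
+export const toFunctions = (apis: ManifestApi[]) =>
+  apis.map(({ name, description, parameters }) => ({ name, description, parameters }));
+```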
+
+### Project Structure
+
+The [lobe-chat-plugin-template][lobe-chat-plugin-template-url] template project uses Next.js as the development framework, and its core directory structure is as follows:
+
+```
+➜ chat-plugin-template
+├── public
+│ └── manifest-dev.json # Manifest file
+├── src
+│ └── pages
+│ │ ├── api # Next.js server-side folder
+│ │ │ ├── clothes.ts # Implementation of the recommendClothes interface
+│ │ │ └── gateway.ts # Local plugin proxy gateway
+│ │ └── index.tsx # Front-end display interface
+```
+
+This template uses Next.js as the development framework. You can use any development framework and language you are familiar with, as long as it can implement the functionality described in the manifest.
+
+Contributions of more plugin templates using different frameworks and languages are also welcome.
+
+### Server-Side
+
+The server side needs to implement the API interfaces described in the manifest. In the template, we use Vercel's [Edge Runtime](https://nextjs.org/docs/pages/api-reference/edge), so there is no server infrastructure to maintain.
+
+#### API Implementation
+
+For the Edge Runtime, we provide the `createErrorResponse` method in `@lobehub/chat-plugin-sdk` to quickly return error responses. Currently, the provided error types are detailed in: [PluginErrorType][plugin-error-type-url].
+
+The implementation of the clothes interface in the template is as follows:
+
+```ts
+// Error helpers come from the plugin SDK; the mock data and request/response
+// types follow the template's layout (`src/data.ts`, `src/type.ts`).
+import { PluginErrorType, createErrorResponse } from '@lobehub/chat-plugin-sdk';
+
+import { manClothes, womanClothes } from '@/data';
+import { RequestData, ResponseData } from '@/type';
+
+// Run this route on the Edge Runtime, as described above.
+export const config = {
+  runtime: 'edge',
+};
+
+export default async (req: Request) => {
+ if (req.method !== 'POST') return createErrorResponse(PluginErrorType.MethodNotAllowed);
+
+ const { gender, mood } = (await req.json()) as RequestData;
+
+ const clothes = gender === 'man' ? manClothes : womanClothes;
+
+ const result: ResponseData = {
+ clothes: clothes[mood] || [],
+ mood,
+ today: Date.now(),
+ };
+
+ return new Response(JSON.stringify(result));
+};
+```
+
+Where `manClothes` and `womanClothes` are mock data and can be replaced with database queries in actual scenarios.
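+
+For reference, a minimal sketch of what that mock data might look like (the values here are hypothetical; the shape is what matters — a mood-keyed map with an empty-list fallback, mirroring `clothes[mood] || []` in the handler):
+
+```ts
+// Hypothetical mock data; the mood keys mirror the enum declared in the manifest.
+type Mood = 'happy' | 'sad' | 'anger' | 'fear' | 'surprise' | 'disgust';
+
+const manClothes: Partial<Record<Mood, string[]>> = {
+  happy: ['bright T-shirt', 'light jeans'],
+  sad: ['cozy hoodie', 'sweatpants'],
+};
+
+// Same fallback as the handler: moods without data yield an empty list.
+export const clothesFor = (mood: Mood): string[] => manClothes[mood] ?? [];
+```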
+
+#### Plugin Gateway
+
+Since the default plugin gateway for LobeChat is a cloud service `/api/plugins`, the cloud service sends requests to the address specified in the manifest's `api.url` to solve cross-origin issues.
+
+For custom plugins, plugin requests need to be sent to the local service. Therefore, by specifying the gateway in the manifest ([http://localhost:3400/api/gateway](http://localhost:3400/api/gateway)), LobeChat will request this address directly, and you then only need to create the corresponding gateway at that address:
+
+```ts
+import { createLobeChatPluginGateway } from '@lobehub/chat-plugins-gateway';
+
+export const config = {
+ runtime: 'edge',
+};
+
+export default createLobeChatPluginGateway();
+```
+
+[`@lobehub/chat-plugins-gateway`](https://github.com/lobehub/chat-plugins-gateway) contains the plugin gateway [implementation](https://github.com/lobehub/lobe-chat/blob/main/src/pages/api/plugins.api.ts) used in LobeChat. You can use this package directly to create a gateway, allowing LobeChat to access the local plugin service.
+
+### Plugin UI Interface
+
+The custom UI interface for plugins is optional. For example, the official plugin [Web Content Extraction](https://github.com/lobehub/chat-plugin-web-crawler) does not have a corresponding user interface.
+
+
+
+If you want to display richer information in plugin messages or include some interactive operations, you can customize a user interface for the plugin. For example, the following image shows the user interface for the [Search Engine](https://github.com/lobehub/chat-plugin-search-engine) plugin.
+
+
+
+#### Implementation of Plugin UI Interface
+
+LobeChat implements the loading of plugin UI through `iframe` and uses `postMessage` to communicate with the plugin. Therefore, the implementation of the plugin UI is consistent with regular web development. You can use any frontend framework and development language you are familiar with.
+
+
+
+In the template we provide, we use React + Next.js + [antd](https://ant.design/) as the frontend interface framework. You can find the implementation of the user interface in [`src/pages/index.tsx`](https://github.com/lobehub/chat-plugin-template/blob/main/src/pages/index.tsx).
+
+As for plugin communication, we provide relevant methods in [`@lobehub/chat-plugin-sdk`](https://github.com/lobehub/chat-plugin-sdk) to simplify communication between the plugin and LobeChat. You can actively retrieve the current message data from LobeChat using the `fetchPluginMessage` method. For detailed information about this method, see: [fetchPluginMessage][fetch-plugin-message-url].
+
+```tsx
+import { fetchPluginMessage } from '@lobehub/chat-plugin-sdk';
+import { memo, useEffect, useState } from 'react';
+
+import { ResponseData } from '@/type';
+
+const Render = memo(() => {
+ const [data, setData] = useState<ResponseData>();
+
+ useEffect(() => {
+ // Retrieve the current plugin message from LobeChat
+ fetchPluginMessage().then((e: ResponseData) => {
+ setData(e);
+ });
+ }, []);
+
+ return <>...</>;
+});
+
+export default Render;
+```
+
+## Plugin Deployment and Release
+
+Once you have finished developing the plugin, you can deploy it using your preferred method, such as using Vercel or packaging it as a Docker container for release, and so on.
+
+If you want more people to use your plugin, feel free to [submit it for listing](https://github.com/lobehub/lobe-chat-plugins) on the plugin marketplace.
+
+[![][submit-plugin-shield]][submit-plugin-url]
+
+### Plugin Shield
+
+[](https://github.com/lobehub/lobe-chat-plugins)
+
+```md
+[](https://github.com/lobehub/lobe-chat-plugins)
+```
+
+## Links
+
+- **📘 Plugin SDK Documentation**: [https://chat-plugin-sdk.lobehub.com](https://chat-plugin-sdk.lobehub.com)
+- **🚀 chat-plugin-template**: [https://github.com/lobehub/chat-plugin-template](https://github.com/lobehub/chat-plugin-template)
+- **🧩 chat-plugin-sdk**: [https://github.com/lobehub/chat-plugin-sdk](https://github.com/lobehub/chat-plugin-sdk)
+- **🚪 chat-plugin-gateway**: [https://github.com/lobehub/chat-plugins-gateway](https://github.com/lobehub/chat-plugins-gateway)
+- **🏪 lobe-chat-plugins**: [https://github.com/lobehub/lobe-chat-plugins](https://github.com/lobehub/lobe-chat-plugins)
+
+[fetch-plugin-message-url]: https://github.com/lobehub/chat-plugin-template
+[lobe-chat-plugin-template-url]: https://github.com/lobehub/chat-plugin-template
+[manifest-docs-url]: https://chat-plugin-sdk.lobehub.com/guides/plugin-manifest
+[plugin-error-type-url]: https://github.com/lobehub/chat-plugin-template
+[submit-plugin-shield]: https://img.shields.io/badge/🧩/🏪_submit_plugin-%E2%86%92-95f3d9?labelColor=black&style=for-the-badge
+[submit-plugin-url]: https://github.com/lobehub/lobe-chat-plugins
diff --git a/DigitalHumanWeb/docs/usage/plugins/development.zh-CN.mdx b/DigitalHumanWeb/docs/usage/plugins/development.zh-CN.mdx
new file mode 100644
index 0000000..052f516
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/plugins/development.zh-CN.mdx
@@ -0,0 +1,323 @@
+---
+title: LobeChat 插件开发指南
+description: 学习如何在 LobeChat 中添加和使用自定义插件,包括创建插件项目、在角色设置中添加本地插件、测试插件功能以及插件开发流程和部署。
+tags:
+ - LobeChat
+ - 插件开发
+ - 自定义插件
+ - 插件部署
+ - 插件发布
+ - 插件UI
+ - 插件SDK
+---
+
+# 插件开发指南
+
+## 插件构成
+
+一个 LobeChat 的插件由以下几个部分组成:
+
+1. **插件索引**:用于展示插件的基本信息,包括插件名称、描述、作者、版本、插件描述清单的链接,官方的插件索引地址:[lobe-chat-plugins](https://github.com/lobehub/lobe-chat-plugins)。若想上架插件到官方插件市场,需要 [提交 PR](https://github.com/lobehub/lobe-chat-plugins/pulls) 到该仓库;
+2. **插件描述清单 (manifest)**:用于描述插件的功能实现,包含了插件的服务端描述、前端展示信息、版本号等。关于 manifest 的详细介绍,详见 [manifest][manifest-docs-url];
+3. **插件服务**:用于实现插件描述清单中所描述的服务端和前端模块,分别如下:
+ - **服务端**:需要实现 manifest 中描述的 `api` 部分的接口能力;
+ - **前端 UI**(可选):需要实现 manifest 中描述的 `ui` 部分的界面,该界面将会在插件消息中透出,进而实现比文本更加丰富的信息展示方式。
+
+## 自定义插件流程
+
+本节将会介绍如何在 LobeChat 中添加和使用一个自定义插件。
+
+
+ ### 创建并启动插件项目
+
+你需要先在本地创建一个插件项目,可以使用我们准备好的模板 [lobe-chat-plugin-template][lobe-chat-plugin-template-url]
+
+```bash
+$ git clone https://github.com/lobehub/chat-plugin-template.git
+$ cd chat-plugin-template
+$ npm i
+$ npm run dev
+```
+
+当出现 `ready started server on 0.0.0.0:3400, url: http://localhost:3400` 时,说明插件服务已经在本地启动成功。
+
+
+
+### 在 LobeChat 角色设置中添加本地插件
+
+接下来进入到 LobeChat 中,创建一个新的助手,并进入它的会话设置页:
+
+
+
+点击插件列表右侧的 添加 按钮,打开自定义插件添加弹窗:
+
+
+
+在 **插件描述文件 Url** 地址 中填入 `http://localhost:3400/manifest-dev.json` ,这是我们本地启动的插件描述清单地址。
+
+此时,你应该可以看到插件的标识符一栏已经被自动识别为 `chat-plugin-template`。接下来你需要填写剩下的表单字段(只有标题必填),然后点击 保存 按钮,即可完成自定义插件添加。
+
+
+
+完成添加后,在插件列表中就能看到刚刚添加的插件,如果需要修改插件的配置,可以点击最右侧的 设置 按钮进行修改。
+
+
+
+### 会话测试插件功能
+
+接下来我们需要测试这个插件的功能是否正常。
+
+点击 返回 按钮回到会话区,然后向助手发送消息:「我应该穿什么?」此时助手将会尝试向你询问,了解你的性别与当前的心情。
+
+
+
+当回答完毕后,助手将会发起插件的调用,根据你的性别、心情,从服务端获取推荐的衣服数据,并推送给你。最后基于这些信息做一轮文本总结。
+
+
+
+当完成这些操作后,你已经了解了添加自定义插件,并在 LobeChat 中使用的基础流程。
+
+
+
+## 本地插件开发
+
+在上述流程中,我们已经了解插件的添加和使用的方式,接下来重点介绍自定义插件开发的过程。
+
+### manifest
+
+`manifest` 聚合了插件功能如何实现的信息。核心字段为 `api` 与 `ui`,分别描述了插件的服务端接口能力与前端渲染的界面地址。
+
+以我们提供的模板中的 `manifest` 为例:
+
+```json
+{
+ "api": [
+ {
+ "url": "http://localhost:3400/api/clothes",
+ "name": "recommendClothes",
+ "description": "根据用户的心情,给用户推荐他有的衣服",
+ "parameters": {
+ "properties": {
+ "mood": {
+ "description": "用户当前的心情,可选值有:开心(happy), 难过(sad),生气 (anger),害怕(fear),惊喜( surprise),厌恶 (disgust)",
+ "enums": ["happy", "sad", "anger", "fear", "surprise", "disgust"],
+ "type": "string"
+ },
+ "gender": {
+ "type": "string",
+ "enum": ["man", "woman"],
+ "description": "对话用户的性别,需要询问用户后才知道这个信息"
+ }
+ },
+ "required": ["mood", "gender"],
+ "type": "object"
+ }
+ }
+ ],
+ "gateway": "http://localhost:3400/api/gateway",
+ "identifier": "chat-plugin-template",
+ "ui": {
+ "url": "http://localhost:3400",
+ "height": 200
+ },
+ "version": "1"
+}
+```
+
+在这份 manifest 中,主要包含了以下几个部分:
+
+1. `identifier`:这是插件的唯一标识符,用来区分不同的插件,这个字段需要全局唯一。
+2. `api`:这是一个数组,包含了插件的所有 API 接口信息。每个接口都包含了 `url`、`name`、`description` 和 `parameters` 字段,均为必填项。其中 `description` 和 `parameters` 两个字段,将会作为 [Function Call](https://sspai.com/post/81986) 的 `functions` 参数发送给 GPT,`parameters` 需要符合 [JSON Schema](https://json-schema.org/) 规范。在这个例子中,API 接口名为 `recommendClothes`,这个接口的功能是根据用户的心情和性别来推荐衣服。接口的参数包括用户的心情和性别,这两个参数都是必填项。
+3. `ui`:这个字段包含了插件的用户界面信息,指明了 LobeChat 从哪个地址加载插件的前端界面。由于 LobeChat 插件界面加载是基于 iframe 实现的,因此可以按需指定插件界面的高度、宽度。
+4. `gateway`:这个字段指定了 LobeChat 查询 api 接口的网关。LobeChat 默认的插件网关是云端服务,而自定义插件的请求需要发给本地启动的服务,远端调用本地地址,一般调用不通。gateway 字段解决了该问题。通过在 manifest 中指定 gateway,LobeChat 将会向该地址发送插件请求,本地的网关地址将会调度请求到本地的插件服务。发布到线上的插件可以不用指定该字段。
+5. `version`:这是插件的版本号,现阶段暂时没有作用。
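+作为参考,上文 manifest 中的 `api` 条目在发送给模型时,大致会被转换成如下的 `functions` 入参(仅为示意,转换方式为假设,并非 LobeChat 的实际内部实现):
+
+```ts
+// 示意:manifest 中 api 条目到 Function Call functions 参数的映射
+// (字段取自上文 manifest;转换逻辑为假设,并非 LobeChat 内部实现)
+const functions = [
+  {
+    name: 'recommendClothes',
+    description: '根据用户的心情,给用户推荐他有的衣服',
+    parameters: {
+      type: 'object',
+      properties: {
+        mood: {
+          type: 'string',
+          enum: ['happy', 'sad', 'anger', 'fear', 'surprise', 'disgust'],
+        },
+        gender: { type: 'string', enum: ['man', 'woman'] },
+      },
+      required: ['mood', 'gender'],
+    },
+  },
+];
+```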
+
+在实际开发中,你可以根据自己的需求,修改插件的描述清单,声明想要实现的功能。 关于 manifest 各个字段的完整介绍,参见:[manifest][manifest-docs-url]。
+
+### 项目结构
+
+[lobe-chat-plugin-template][lobe-chat-plugin-template-url] 这个模板项目使用了 Next.js 作为开发框架,它的核心目录结构如下:
+
+```
+➜ chat-plugin-template
+├── public
+│ └── manifest-dev.json # 描述清单文件
+├── src
+│ └── pages
+│ │ ├── api # nextjs 服务端文件夹
+│ │ │ ├── clothes.ts # recommendClothes 接口实现
+│ │ │ └── gateway.ts # 本地插件代理网关
+│ │ └── index.tsx # 前端展示界面
+```
+
+本模板使用 Next.js 作为开发框架。你可以使用任何你熟悉的开发框架与开发语言,只要能够实现 manifest 中描述的功能即可。
+
+同时也欢迎大家贡献更多框架与语言的插件模板。
+
+### 服务端
+
+服务端需要实现 manifest 中描述的 API 接口。在模板中,我们使用了 Vercel 的 [Edge Runtime](https://nextjs.org/docs/pages/api-reference/edge),免去运维。
+
+#### API 实现
+
+针对 Edge Runtime ,我们在 `@lobehub/chat-plugin-sdk` 提供了 `createErrorResponse` 方法,用于快速返回错误响应。目前提供的错误类型详见:[PluginErrorType][plugin-error-type-url]。
+
+模板中的 clothes 接口实现如下:
+
+```ts
+import { PluginErrorType, createErrorResponse } from '@lobehub/chat-plugin-sdk';
+
+// `manClothes`、`womanClothes` 以及 `RequestData`、`ResponseData` 类型
+// 在模板的其他文件中定义
+export default async (req: Request) => {
+ if (req.method !== 'POST') return createErrorResponse(PluginErrorType.MethodNotAllowed);
+
+ const { gender, mood } = (await req.json()) as RequestData;
+
+ const clothes = gender === 'man' ? manClothes : womanClothes;
+
+ const result: ResponseData = {
+ clothes: clothes[mood] || [],
+ mood,
+ today: Date.now(),
+ };
+
+ return new Response(JSON.stringify(result));
+};
+```
+
+其中 `manClothes` 和 `womanClothes` 是 mock 数据,在实际场景中,可以替换为数据库查询等。
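+作为参考,mock 数据与请求 / 响应类型的形状大致如下(根据上文接口推断,字段取值仅为占位示例,并非模板中的真实定义):
+
+```ts
+// 示意:clothes 接口用到的类型与 mock 数据,取值仅为占位示例
+export interface RequestData {
+  gender: 'man' | 'woman';
+  mood: 'happy' | 'sad' | 'anger' | 'fear' | 'surprise' | 'disgust';
+}
+
+export interface ResponseData {
+  clothes: string[];
+  mood: string;
+  today: number;
+}
+
+export const manClothes: Record<string, string[]> = {
+  happy: ['T 恤', '牛仔裤'],
+};
+
+export const womanClothes: Record<string, string[]> = {
+  happy: ['连衣裙', '凉鞋'],
+};
+```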
+
+#### Plugin Gateway
+
+由于 LobeChat 默认的插件网关是云端服务 `/api/plugins`,云端服务通过 manifest 上的 `api.url` 地址发送请求,以解决跨域问题。
+
+针对自定义插件,插件请求需要发送给本地服务,因此通过在 manifest 中指定网关(`http://localhost:3400/api/gateway`),LobeChat 将会直接请求该地址,然后只需要在该地址下创建对应的网关即可。
+
+```ts
+import { createLobeChatPluginGateway } from '@lobehub/chat-plugins-gateway';
+
+export const config = {
+ runtime: 'edge',
+};
+
+export default createLobeChatPluginGateway();
+```
+
+[`@lobehub/chat-plugins-gateway`](https://github.com/lobehub/chat-plugins-gateway) 包含了 LobeChat 中插件网关的[实现](https://github.com/lobehub/lobe-chat/blob/main/src/pages/api/plugins.api.ts),你可以直接使用该包创建网关,进而让 LobeChat 访问到本地的插件服务。
+
+### 插件 UI 界面
+
+自定义插件的 UI 界面是一个可选项。例如 官方插件 [「🧩 / 🕸 网页内容提取」](https://github.com/lobehub/chat-plugin-web-crawler),没有实现相应的用户界面。
+
+
+
+如果你希望在插件消息中展示更加丰富的信息,或者包含一些富交互操作,你可以为插件定制一个用户界面。例如下图则为[「搜索引擎」](https://github.com/lobehub/chat-plugin-search-engine)插件的用户界面。
+
+
+
+#### 插件 UI 界面实现
+
+LobeChat 通过 `iframe` 实现插件 UI 的加载,使用 `postMessage` 实现主体与插件的通信。因此,插件 UI 的实现方式与普通的网页开发一致,你可以使用任何你熟悉的前端框架与开发语言。
+
+
+
+在我们提供的模板中使用了 React + Next.js + [antd](https://ant.design/) 作为前端界面框架,你可以在 [`src/pages/index.tsx`](https://github.com/lobehub/chat-plugin-template/blob/main/src/pages/index.tsx) 中找到用户界面的实现。
+
+其中关于插件通信,我们在 [`@lobehub/chat-plugin-sdk`](https://github.com/lobehub/chat-plugin-sdk) 提供了相关方法,用于简化插件与 LobeChat 的通信。你可以通过 `fetchPluginMessage` 方法主动向 LobeChat 获取当前消息的数据。关于该方法的详细介绍,参见:[fetchPluginMessage][fetch-plugin-message-url]。
+
+```tsx
+import { fetchPluginMessage } from '@lobehub/chat-plugin-sdk';
+import { memo, useEffect, useState } from 'react';
+
+import { ResponseData } from '@/type';
+
+const Render = memo(() => {
+ const [data, setData] = useState<ResponseData>();
+
+ useEffect(() => {
+ // 从 LobeChat 获取当前插件的消息
+ fetchPluginMessage().then((e: ResponseData) => {
+ setData(e);
+ });
+ }, []);
+
+ return <>...</>;
+});
+
+export default Render;
+```
+
+## 插件部署与发布
+
+当你完成插件的开发后,你可以使用你习惯的方式进行插件的部署,例如使用 Vercel,或者打包成 Docker 镜像发布等等。
+
+如果你希望插件被更多人使用,欢迎将你的插件 [提交上架](https://github.com/lobehub/lobe-chat-plugins) 到插件市场。
+
+[![][submit-plugin-shield]][submit-plugin-url]
+
+### 插件 Shield
+
+[](https://github.com/lobehub/lobe-chat-plugins)
+
+```markdown
+[](https://github.com/lobehub/lobe-chat-plugins)
+```
+
+## 链接
+
+- **📘 Plugin SDK 文档**: [https://chat-plugin-sdk.lobehub.com](https://chat-plugin-sdk.lobehub.com)
+- **🚀 chat-plugin-template**: [https://github.com/lobehub/chat-plugin-template](https://github.com/lobehub/chat-plugin-template)
+- **🧩 chat-plugin-sdk**: [https://github.com/lobehub/chat-plugin-sdk](https://github.com/lobehub/chat-plugin-sdk)
+- **🚪 chat-plugin-gateway**: [https://github.com/lobehub/chat-plugins-gateway](https://github.com/lobehub/chat-plugins-gateway)
+- **🏪 lobe-chat-plugins**: [https://github.com/lobehub/lobe-chat-plugins](https://github.com/lobehub/lobe-chat-plugins)
+
+[fetch-plugin-message-url]: https://github.com/lobehub/chat-plugin-template
+[lobe-chat-plugin-template-url]: https://github.com/lobehub/chat-plugin-template
+[manifest-docs-url]: https://chat-plugin-sdk.lobehub.com/guides/plugin-manifest
+[plugin-error-type-url]: https://github.com/lobehub/chat-plugin-template
+[submit-plugin-shield]: https://img.shields.io/badge/🧩/🏪_submit_plugin-%E2%86%92-95f3d9?labelColor=black&style=for-the-badge
+[submit-plugin-url]: https://github.com/lobehub/lobe-chat-plugins
diff --git a/DigitalHumanWeb/docs/usage/plugins/store.mdx b/DigitalHumanWeb/docs/usage/plugins/store.mdx
new file mode 100644
index 0000000..049688c
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/plugins/store.mdx
@@ -0,0 +1,30 @@
+---
+title: LobeChat Plugin Store
+description: >-
+ Learn how to access the Plugin Store in LobeChat to easily install and use
+ various plugins for enhanced functionality.
+tags:
+ - Plugin Store
+ - LobeChat
+ - Install Plugins
+ - Extension Tools
+ - Enhanced Functionality
+---
+
+# Plugin Store
+
+You can access the plugin store by going to `Extension Tools` -> `Plugin Store` in the session toolbar.
+
+
+
+In the plugin store, you can directly install and use plugins in LobeChat.
+
+
diff --git a/DigitalHumanWeb/docs/usage/plugins/store.zh-CN.mdx b/DigitalHumanWeb/docs/usage/plugins/store.zh-CN.mdx
new file mode 100644
index 0000000..75fa365
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/plugins/store.zh-CN.mdx
@@ -0,0 +1,27 @@
+---
+title: LobeChat 插件商店
+description: 在 LobeChat 中浏览和安装各种实用插件,提升会话工具条的功能和体验。
+tags:
+ - LobeChat
+ - 插件商店
+ - 扩展工具
+ - 会话工具条
+---
+
+# 插件商店
+
+你可以在会话工具条中的 `扩展工具` -> `插件商店`,进入插件商店。
+
+
+
+在插件商店中,你可以直接安装并在 LobeChat 中使用插件。
+
+
diff --git a/DigitalHumanWeb/docs/usage/providers.mdx b/DigitalHumanWeb/docs/usage/providers.mdx
new file mode 100644
index 0000000..75be2d0
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers.mdx
@@ -0,0 +1,36 @@
+---
+title: Enhancing LobeChat with Multiple Model Providers for AI Conversations
+description: >-
+ Discover how LobeChat offers diverse AI conversation options by supporting
+ multiple model providers, providing flexibility and a wide range of choices
+ for users and developers.
+tags:
+ - LobeChat
+ - AI Conversations
+ - Model Providers
+ - Diversity
+ - Flexibility
+ - Google AI
+ - ChatGLM
+ - Moonshot AI
+ - 01 AI
+ - Together AI
+ - Ollama
+---
+
+# Using Multiple Model Providers in LobeChat
+
+
+
+In the continuous development of LobeChat, we deeply understand the importance of diversity in model providers for providing AI conversation services to meet the needs of the community. Therefore, we have expanded our support to multiple model providers instead of being limited to a single one, in order to offer users a more diverse and rich selection of conversation options.
+
+This approach allows LobeChat to adapt more flexibly to different user needs and provides developers with a wider range of choices.
+
+## Tutorial on Using Model Providers
+
+
diff --git a/DigitalHumanWeb/docs/usage/providers.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers.zh-CN.mdx
new file mode 100644
index 0000000..83abd4c
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers.zh-CN.mdx
@@ -0,0 +1,34 @@
+---
+title: 在 LobeChat 中使用多模型服务商
+description: 了解 LobeChat 在多模型服务商支持方面的最新进展,包括已支持的模型服务商和计划中的扩展,以及本地模型支持的使用方式。
+tags:
+ - LobeChat
+ - AI 会话服务
+ - 模型服务商
+ - 多模型支持
+ - 本地模型支持
+ - AWS Bedrock
+ - Google AI
+ - ChatGLM
+ - Moonshot AI
+ - 01 AI
+ - Together AI
+ - Ollama
+---
+
+# 在 LobeChat 中使用多模型服务商
+
+
+
+在 LobeChat 的不断发展过程中,我们深刻理解到在提供 AI 会话服务时模型服务商的多样性对于满足社区需求的重要性。因此,我们不再局限于单一的模型服务商,而是拓展了对多种模型服务商的支持,以便为用户提供更为丰富和多样化的会话选择。
+
+通过这种方式,LobeChat 能够更灵活地适应不同用户的需求,同时也为开发者提供了更为广泛的选择空间。
+
+## 模型服务商使用教程
+
+
diff --git a/DigitalHumanWeb/docs/usage/providers/01ai.mdx b/DigitalHumanWeb/docs/usage/providers/01ai.mdx
new file mode 100644
index 0000000..6904dbc
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/01ai.mdx
@@ -0,0 +1,85 @@
+---
+title: Using Zero One AI API Key in LobeChat
+description: >-
+ Learn how to integrate and use Zero One AI in LobeChat with step-by-step
+ instructions. Obtain an API key, configure Zero One AI, and start
+ conversations with AI models.
+tags:
+ - 01.AI
+ - Zero One AI
+ - Web UI
+ - API key
+ - AI models
+---
+
+# Using Zero One AI in LobeChat
+
+
+
+[Zero One AI](https://www.01.ai/) is a global company dedicated to AI 2.0 large-model technology and applications. Its hundred-billion-parameter closed-source model, Yi-Large, ranks on par with GPT-4 on Stanford University's AlpacaEval 2.0 English leaderboard.
+
+This document will guide you on how to use Zero One AI in LobeChat:
+
+
+
+### Step 1: Obtain Zero One AI API Key
+
+- Register and log in to the [Zero One AI Large Model Open Platform](https://platform.lingyiwanwu.com/)
+- Go to the `Dashboard` and access the `API Key Management` menu
+- A system-generated API key has been created for you automatically, or you can create a new one on this interface
+
+
+
+- Account verification is required for first-time use
+
+
+
+- Click on the created API key
+- Copy and save the API key in the pop-up dialog box
+
+
+
+### Step 2: Configure Zero One AI in LobeChat
+
+- Access the `Settings` interface in LobeChat
+- Find the setting for `Zero One AI` under `Language Model`
+
+
+
+- Open Zero One AI and enter the obtained API key
+- Choose a 01.AI model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Zero One AI's relevant
+ fee policies.
+
+
+
+
+You can now use the models provided by Zero One AI for conversations in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/01ai.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/01ai.zh-CN.mdx
new file mode 100644
index 0000000..0a31699
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/01ai.zh-CN.mdx
@@ -0,0 +1,85 @@
+---
+title: 在 LobeChat 中使用 01.AI 零一万物 API Key
+description: >-
+ 学习如何在 LobeChat 中配置并使用 01.AI 零一万物提供的 AI 模型进行对话。获取 API 密钥、填入设置项、选择模型,开始与 AI
+ 助手交流。
+tags:
+ - LobeChat
+ - 01.AI
+ - Zero One AI
+ - 零一万物
+ - Web UI
+ - API密钥
+ - 配置指南
+---
+
+# 在 LobeChat 中使用零一万物
+
+
+
+[零一万物](https://www.01.ai/) 是一家致力于 AI 2.0 大模型技术和应用的全球公司,其发布的千亿参数的 Yi-Large 闭源模型,在斯坦福大学的英语排行 AlpacaEval 2.0 上,与 GPT-4 互有第一。
+
+本文档将指导你如何在 LobeChat 中使用零一万物:
+
+
+
+### 步骤一:获取零一万物 API 密钥
+
+- 注册并登录 [零一万物大模型开放平台](https://platform.lingyiwanwu.com/)
+- 进入`工作台`并访问`API Key管理`菜单
+- 系统已为你自动创建了一个 API 密钥,你也可以在此界面创建新的 API 密钥
+
+
+
+- 初次使用时需要完成账号认证
+
+
+
+- 点击创建好的 API 密钥
+- 在弹出的对话框中复制并保存 API 密钥
+
+
+
+### 步骤二:在 LobeChat 中配置零一万物
+
+- 访问 LobeChat 的`设置`界面
+- 在`语言模型`下找到`零一万物`的设置项
+
+
+
+- 打开零一万物并填入获得的 API 密钥
+- 为你的 AI 助手选择一个 01.AI 的模型即可开始对话
+
+
+
+
+ 在使用过程中你可能需要向 API 服务提供商付费,请参考零一万物的相关费用政策。
+
+
+
+
+至此你已经可以在 LobeChat 中使用零一万物提供的模型进行对话了。
diff --git a/DigitalHumanWeb/docs/usage/providers/anthropic.mdx b/DigitalHumanWeb/docs/usage/providers/anthropic.mdx
new file mode 100644
index 0000000..c652159
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/anthropic.mdx
@@ -0,0 +1,78 @@
+---
+title: Using Anthropic Claude API Key in LobeChat
+description: >-
+ Learn how to integrate Anthropic Claude API in LobeChat to enhance your AI
+  assistant capabilities. Supports Claude 3.5 Sonnet / Claude 3 Opus / Claude 3
+  Haiku.
+tags:
+ - Anthropic Claude
+ - API Key
+ - AI assistant
+ - Web UI
+---
+
+# Using Anthropic Claude in LobeChat
+
+
+
+The Anthropic Claude API is now available for everyone to use. This document will guide you on how to use [Anthropic Claude](https://www.anthropic.com/api) in LobeChat:
+
+
+
+### Step 1: Obtain Anthropic Claude API Key
+
+- Create an [Anthropic Claude API](https://www.anthropic.com/api) account.
+- Get your [API key](https://console.anthropic.com/settings/keys).
+
+
+
+
+ The Claude API currently offers $5 of free credits, but it is only available in certain specific
+ countries/regions. You can go to Dashboard > Claim to see if it is applicable to your
+ country/region.
+
+
+- Set up your billing for the API key to work on [https://console.anthropic.com/settings/plans](https://console.anthropic.com/settings/plans) (choose the "Build" plan so you can add credits and only pay for usage).
+
+
+
+### Step 2: Configure Anthropic Claude in LobeChat
+
+- Access the `Settings` interface in LobeChat.
+- Find the setting for `Anthropic Claude` under `Language Models`.
+
+
+
+- Enter the obtained API key.
+- Choose an Anthropic Claude model for your AI assistant to start the conversation.
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Anthropic Claude's
+ relevant pricing policies.
+
+
+
+
+You can now engage in conversations using the models provided by Anthropic Claude in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/anthropic.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/anthropic.zh-CN.mdx
new file mode 100644
index 0000000..866d9e8
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/anthropic.zh-CN.mdx
@@ -0,0 +1,75 @@
+---
+title: 在 LobeChat 中使用 Anthropic Claude API Key
+description: >-
+  学习如何在 LobeChat 中配置和使用 Anthropic Claude API,支持 Claude 3.5 Sonnet / Claude 3 Opus
+  / Claude 3 Haiku
+tags:
+ - Anthropic Claude
+ - API
+ - WebUI
+ - AI助手
+---
+
+# 在 LobeChat 中使用 Anthropic Claude
+
+
+
+Anthropic Claude API 现在可供所有人使用, 本文档将指导你如何在 LobeChat 中使用 [Anthropic Claude](https://www.anthropic.com/api):
+
+
+
+### 步骤一:获取 Anthropic Claude API 密钥
+
+- 创建一个 [Anthropic Claude API](https://www.anthropic.com/api) 帐户
+- 获取您的 [API 密钥](https://console.anthropic.com/settings/keys)
+
+
+
+
+ Claude API 现在提供 5 美元的免费积分,但是,它仅适用于某些特定国家/地区,您可以转到 Dashboard >
+ Claim 查看它是否适用于您所在的国家/地区。
+
+
+- 设置您的账单,让 API 密钥在 [https://console.anthropic.com/settings/plans](https://console.anthropic.com/settings/plans) 上工作(选择 “Build” 计划,以便您可以添加积分并仅为使用量付费)
+
+
+
+### 步骤二:在 LobeChat 中配置 Anthropic Claude
+
+- 访问 LobeChat 的`设置`界面
+- 在`语言模型`下找到`Anthropic Claude`的设置项
+
+
+
+- 填入获得的 API 密钥
+- 为你的 AI 助手选择一个 Anthropic Claude 的模型即可开始对话
+
+
+
+
+ 在使用过程中你可能需要向 API 服务提供商付费,请参考 Anthropic Claude 的相关费用政策。
+
+
+
+
+至此你已经可以在 LobeChat 中使用 Anthropic Claude 提供的模型进行对话了。
diff --git a/DigitalHumanWeb/docs/usage/providers/azure.mdx b/DigitalHumanWeb/docs/usage/providers/azure.mdx
new file mode 100644
index 0000000..102039b
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/azure.mdx
@@ -0,0 +1,88 @@
+---
+title: Using Azure OpenAI API Key in LobeChat
+description: >-
+ Learn how to integrate and configure Azure OpenAI in LobeChat to enhance your
+ AI assistant capabilities. Follow these steps to obtain the API key, configure
+ the settings, and start engaging in conversations.
+tags:
+ - Azure OpenAI
+ - AI assistant
+ - API key
+ - Configuration
+ - Conversation models
+---
+
+# Using Azure OpenAI in LobeChat
+
+
+
+This document will guide you on how to use [Azure OpenAI](https://oai.azure.com/) in LobeChat:
+
+
+
+### Step 1: Obtain Azure OpenAI API Key
+
+- If you haven't registered yet, you need to create an [Azure OpenAI account](https://oai.azure.com/).
+
+
+
+- After registration, go to the `Deployments` page and create a new deployment with your selected model.
+
+
+
+
+
+- Navigate to the `Chat` page and click on `View Code` to obtain your endpoint and key.
+
+
+
+
+
+### Step 2: Configure Azure OpenAI in LobeChat
+
+- Access the `Settings` interface in LobeChat.
+- Find the setting for `Azure OpenAI` under `Language Model`.
+
+
+
+- Enter the API key you obtained.
+- Choose an Azure OpenAI model for your AI assistant to start the conversation.
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Azure OpenAI's
+ relevant pricing policies.
+
+
+
+
+Now you can engage in conversations using the models provided by Azure OpenAI in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/azure.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/azure.zh-CN.mdx
new file mode 100644
index 0000000..9346a27
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/azure.zh-CN.mdx
@@ -0,0 +1,79 @@
+---
+title: 在 LobeChat 中使用 Azure OpenAI API Key
+description: 学习如何在 LobeChat 中配置和使用 Azure OpenAI 模型进行对话,包括获取 API 密钥和选择模型。
+tags:
+ - Azure OpenAI
+ - API Key
+ - Web UI
+---
+
+# 在 LobeChat 中使用 Azure OpenAI
+
+
+
+本文档将指导你如何在 LobeChat 中使用 [Azure OpenAI](https://oai.azure.com/):
+
+
+
+### 步骤一:获取 Azure OpenAI API 密钥
+
+- 如果尚未注册,则必须注册 [Azure OpenAI 帐户](https://oai.azure.com/)。
+
+
+
+- 注册完毕后,转到 `Deployments` 页面,然后使用您选择的模型创建新部署。
+
+
+
+- 转到 `Chat` 页面,然后单击 `View Code` 以获取您的终结点和密钥。
+
+
+
+
+### 步骤二:在 LobeChat 中配置 Azure OpenAI
+
+- 访问 LobeChat 的`设置`界面
+- 在`语言模型`下找到`Azure OpenAI`的设置项
+
+
+
+- 填入获得的 API 密钥
+- 为你的 AI 助手选择一个 Azure OpenAI 的模型即可开始对话
+
+
+
+
+ 在使用过程中你可能需要向 API 服务提供商付费,请参考 Azure OpenAI 的相关费用政策。
+
+
+
+
+至此你已经可以在 LobeChat 中使用 Azure OpenAI 提供的模型进行对话了。
diff --git a/DigitalHumanWeb/docs/usage/providers/baichuan.mdx b/DigitalHumanWeb/docs/usage/providers/baichuan.mdx
new file mode 100644
index 0000000..f1ee996
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/baichuan.mdx
@@ -0,0 +1,64 @@
+---
+title: Using Baichuan API Key in LobeChat
+description: >-
+ Learn how to integrate Baichuan AI into LobeChat for enhanced conversational
+ experiences. Follow the steps to configure Baichuan AI and start using its
+ models.
+tags:
+ - LobeChat
+ - Baichuan
+ - API Key
+ - Web UI
+---
+
+# Using Baichuan in LobeChat
+
+
+
+This article will guide you on how to use Baichuan in LobeChat:
+
+
+
+### Step 1: Obtain Baichuan Intelligent API Key
+
+- Create a [Baichuan Intelligent](https://platform.baichuan-ai.com/homePage) account
+- Create and obtain an [API key](https://platform.baichuan-ai.com/console/apikey)
+
+
+
+### Step 2: Configure Baichuan in LobeChat
+
+- Visit the `Settings` interface in LobeChat
+- Find the setting for `Baichuan` under `Language Model`
+
+
+
+- Enter the obtained API key
+- Choose a Baichuan model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider, please refer to Baichuan's relevant
+ pricing policies.
+
+
+
+
+You can now use the models provided by Baichuan for conversation in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/baichuan.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/baichuan.zh-CN.mdx
new file mode 100644
index 0000000..f4101be
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/baichuan.zh-CN.mdx
@@ -0,0 +1,61 @@
+---
+title: 在 LobeChat 中使用百川 API Key
+description: 学习如何在 LobeChat 中配置和使用百川的API Key,以便开始对话和交互。
+tags:
+ - LobeChat
+ - 百川
+ - 百川智能
+ - API密钥
+ - Web UI
+---
+
+# 在 LobeChat 中使用百川
+
+
+
+本文将指导你如何在 LobeChat 中使用百川:
+
+
+
+### 步骤一:获取百川智能 API 密钥
+
+- 创建一个[百川智能](https://platform.baichuan-ai.com/homePage)账户
+- 创建并获取 [API 密钥](https://platform.baichuan-ai.com/console/apikey)
+
+
+
+### 步骤二:在 LobeChat 中配置百川
+
+- 访问 LobeChat 的`设置`界面
+- 在`语言模型`下找到`百川`的设置项
+
+
+
+- 填入获得的 API 密钥
+- 为你的 AI 助手选择一个百川的模型即可开始对话
+
+
+
+
+ 在使用过程中你可能需要向 API 服务提供商付费,请参考百川的相关费用政策。
+
+
+
+
+至此你已经可以在 LobeChat 中使用百川提供的模型进行对话了。
diff --git a/DigitalHumanWeb/docs/usage/providers/bedrock.mdx b/DigitalHumanWeb/docs/usage/providers/bedrock.mdx
new file mode 100644
index 0000000..4770bd2
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/bedrock.mdx
@@ -0,0 +1,139 @@
+---
+title: Using Amazon Bedrock API Key in LobeChat
+description: >-
+ Learn how to integrate Amazon Bedrock models into LobeChat for AI-powered
+ conversations. Follow these steps to grant access, obtain API keys, and
+ configure Amazon Bedrock.
+tags:
+ - Amazon Bedrock
+  - Claude 3.5 Sonnet
+ - API keys
+ - Claude 3 Opus
+ - Web UI
+---
+
+# Using Amazon Bedrock in LobeChat
+
+
+
+Amazon Bedrock is a fully managed foundational model API service that allows users to access models from leading AI companies (such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI) and Amazon's own foundational models.
+
+This document will guide you on how to use Amazon Bedrock in LobeChat:
+
+
+### Step 1: Grant Access to Amazon Bedrock Models in AWS
+
+- Access and log in to the [AWS Console](https://console.aws.amazon.com/)
+- Search for `bedrock` and enter the `Amazon Bedrock` service
+
+
+
+- Select `Model access` from the left menu
+
+
+
+- Open model access permissions based on your needs
+
+
+
+Some models may require additional information from you
+
+### Step 2: Obtain API Access Keys
+
+- Continue searching for IAM in the AWS console and enter the IAM service
+
+
+
+- In the `Users` menu, create a new IAM user
+
+
+
+- Enter the user name in the pop-up dialog box
+
+
+
+- Add permissions for this user or join an existing user group to ensure access to Amazon Bedrock
+
+
+
+- Create an access key for the added user
+
+
+
+- Copy and securely store the access key and secret access key, as they will be needed later
+
+
+
+
+ Please securely store the keys as they will only be shown once. If you lose them accidentally, you
+ will need to create a new access key.
+
+
+### Step 3: Configure Amazon Bedrock in LobeChat
+
+- Access the `Settings` interface in LobeChat
+- Find the setting for `Amazon Bedrock` under `Language Models` and open it
+
+
+
+- Open Amazon Bedrock and enter the obtained access key and secret access key
+- Choose an Amazon Bedrock model for your assistant to start the conversation
+
+
+
+
+ You may incur charges while using the API service, please refer to Amazon Bedrock's pricing
+ policy.
+
+
+
+
+You can now engage in conversations using the models provided by Amazon Bedrock in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/bedrock.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/bedrock.zh-CN.mdx
new file mode 100644
index 0000000..ef70dc3
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/bedrock.zh-CN.mdx
@@ -0,0 +1,134 @@
+---
+title: 在 LobeChat 中使用 Amazon Bedrock API Key
+description: 学习如何在 LobeChat 中配置和使用 Amazon Bedrock,一个完全托管的基础模型API服务,以便开始对话。
+tags:
+ - Amazon Bedrock
+  - Claude 3.5 Sonnet
+ - API keys
+ - Claude 3 Opus
+ - Web UI
+---
+
+# 在 LobeChat 中使用 Amazon Bedrock
+
+
+
+Amazon Bedrock 是一个完全托管的基础模型 API 服务,允许用户通过 API 访问来自领先 AI 公司(如 AI21 Labs、Anthropic、Cohere、Meta、Stability AI)和 Amazon 自家的基础模型。
+
+本文档将指导你如何在 LobeChat 中使用 Amazon Bedrock:
+
+
+### 步骤一:在 AWS 中打开 Amazon Bedrock 模型的访问权限
+
+- 访问并登录 [AWS Console](https://console.aws.amazon.com/)
+- 搜索 bedrock 并进入 `Amazon Bedrock` 服务
+
+
+
+- 在左侧菜单中选择 `Model access`
+
+
+
+- 根据你所需要的模型,打开模型访问权限
+
+
+
+某些模型可能需要你提供额外的信息
+
+### 步骤二:获取 API 访问密钥
+
+- 继续在 AWS console 中搜索 IAM,进入 IAM 服务
+
+
+
+- 在 `用户` 菜单中,创建一个新的 IAM 用户
+
+
+
+- 在弹出的对话框中,输入用户名称
+
+
+
+- 为这个用户添加权限,或者加入一个已有的用户组,确保用户拥有 Amazon Bedrock 的访问权限
+
+
+
+- 为已添加的用户创建访问密钥
+
+
+
+- 复制并妥善保存访问密钥以及秘密访问密钥,后续将会用到
+
+
+
+
+ 请安全地存储密钥,因为它只会出现一次。如果您意外丢失它,您将需要创建一个新访问密钥。
+
+
+### 步骤三:在 LobeChat 中配置 Amazon Bedrock
+
+- 访问 LobeChat 的`设置`界面
+- 在`语言模型`下找到`Amazon Bedrock`的设置项并打开
+
+
+
+- 打开 Amazon Bedrock 并填入获得的访问密钥与秘密访问密钥
+- 为你的助手选择一个 Amazon Bedrock 的模型即可开始对话
+
+
+
+
+ 在使用过程中你可能需要向 API 服务提供商付费,请参考 Amazon Bedrock 的费用政策。
+
+
+
+
+至此你已经可以在 LobeChat 中使用 Amazon Bedrock 提供的模型进行对话了。
diff --git a/DigitalHumanWeb/docs/usage/providers/deepseek.mdx b/DigitalHumanWeb/docs/usage/providers/deepseek.mdx
new file mode 100644
index 0000000..fb5c77c
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/deepseek.mdx
@@ -0,0 +1,90 @@
+---
+title: Using DeepSeek API Key in LobeChat
+description: >-
+ Learn how to use DeepSeek-V2 in LobeChat, obtain API keys. Get started with
+ DeepSeek integration now!
+tags:
+ - DeepSeek
+ - LobeChat
+ - DeepSeek-V2
+ - API Key
+ - Web UI
+---
+
+# Using DeepSeek in LobeChat
+
+
+
+[DeepSeek](https://www.deepseek.com/) is an advanced open-source Large Language Model (LLM). The latest version, DeepSeek-V2, has made significant optimizations in architecture and performance, reducing training costs by 42.5% and inference costs by 93.3%.
+
+This document will guide you on how to use DeepSeek in LobeChat:
+
+
+
+### Step 1: Obtain DeepSeek API Key
+
+- First, you need to register and log in to the [DeepSeek](https://platform.deepseek.com/) open platform.
+
+New users will receive a free quota of 500M Tokens
+
+- Go to the `API keys` menu and click on `Create API Key`.
+
+
+
+- Enter the API key name in the pop-up dialog box.
+
+
+
+- Copy the generated API key and save it securely.
+
+
+
+
+ Please store the key securely as it will only appear once. If you accidentally lose it, you will
+ need to create a new key.
+
+
+### Step 2: Configure DeepSeek in LobeChat
+
+- Access the `App Settings` interface in LobeChat.
+- Find the setting for `DeepSeek` under `Language Models`.
+
+
+
+- Open DeepSeek and enter the obtained API key.
+- Choose a DeepSeek model for your assistant to start the conversation.
+
+
+
+
+ You may need to pay the API service provider during usage, please refer to DeepSeek's relevant
+ pricing policies.
+
+
+
+
+You can now engage in conversations using the models provided by DeepSeek in LobeChat.
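If you want to sanity-check a key outside LobeChat, you can call DeepSeek's OpenAI-compatible chat completions endpoint directly. The sketch below only builds the HTTP request; the endpoint path and the `deepseek-chat` model name are assumptions based on DeepSeek's public API docs. Fill in a real key and send the request with `urllib.request.urlopen(req)`:

```python
# Minimal sketch: build (but do not send) a request to DeepSeek's
# OpenAI-compatible chat completions endpoint. "YOUR_API_KEY" and the
# model name "deepseek-chat" are placeholders/assumptions.
import json
import urllib.request

def build_chat_request(api_key: str, model: str, content: str) -> urllib.request.Request:
    """Return a ready-to-send POST request for a single-turn chat."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.deepseek.com/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_chat_request("YOUR_API_KEY", "deepseek-chat", "Hello")
print(req.full_url)  # → https://api.deepseek.com/chat/completions
```

A `401` response from the real endpoint usually means the key was copied incorrectly or has been revoked.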
diff --git a/DigitalHumanWeb/docs/usage/providers/deepseek.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/deepseek.zh-CN.mdx
new file mode 100644
index 0000000..7cbb4a0
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/deepseek.zh-CN.mdx
@@ -0,0 +1,85 @@
+---
+title: Using DeepSeek API Key in LobeChat
+description: Learn how to configure and use the DeepSeek language model in LobeChat, obtain an API key, and start a conversation.
+tags:
+  - LobeChat
+  - DeepSeek
+  - API Key
+  - Web UI
+---
+
+# Using DeepSeek in LobeChat
+
+
+
+[DeepSeek](https://www.deepseek.com/) is an advanced open-source Large Language Model (LLM). The latest version, DeepSeek-V2, has made significant optimizations in architecture and performance, reducing training costs by 42.5% and inference costs by 93.3%.
+
+This document will guide you on how to use DeepSeek in LobeChat:
+
+
+
+### Step 1: Obtain a DeepSeek API Key
+
+- First, register and log in to the [DeepSeek](https://platform.deepseek.com/) open platform
+
+New users currently receive a free quota of 500M tokens
+
+- Go to the `API keys` menu and click `Create API Key`
+
+
+
+- Enter a name for the API key in the pop-up dialog
+
+
+
+- Copy the generated API key and save it securely
+
+
+
+
+  Please store the key securely as it will only appear once. If you accidentally lose it, you will need to create a new key.
+
+
+### Step 2: Configure DeepSeek in LobeChat
+
+- Access the `App Settings` interface in LobeChat
+- Find the setting for `DeepSeek` under `Language Models`
+
+
+
+- Enable DeepSeek and enter the obtained API key
+- Choose a DeepSeek model for your assistant to start the conversation
+
+
+
+
+  During usage, you may need to pay the API service provider; please refer to DeepSeek's pricing policies.
+
+
+
+
+You can now use the models provided by DeepSeek to have conversations in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/gemini.mdx b/DigitalHumanWeb/docs/usage/providers/gemini.mdx
new file mode 100644
index 0000000..7492937
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/gemini.mdx
@@ -0,0 +1,82 @@
+---
+title: Using Google Gemini API Key in LobeChat
+description: >-
+ Learn how to integrate and utilize Google Gemini AI in LobeChat to enhance
+ your conversational experience. Follow these steps to configure Google Gemini
+ and start leveraging its powerful capabilities.
+tags:
+ - Google Gemini
+ - AI integration
+ - Google AI Studio
+ - Web UI
+---
+
+# Using Google Gemini in LobeChat
+
+
+
+Gemini AI is a set of large language models (LLMs) created by Google AI, known for its cutting-edge advancements in multimodal understanding and processing. It is essentially a powerful artificial intelligence tool capable of handling various tasks involving different types of data, not just text.
+
+This document will guide you on how to use Google Gemini in LobeChat:
+
+
+
+### Step 1: Obtain Google API Key
+
+- Visit and log in to [Google AI Studio](https://aistudio.google.com/)
+- Navigate to `Get API Key` in the menu and click on `Create API Key`
+
+
+
+- Select a project and create an API key, or create one in a new project
+
+
+
+- Copy the API key from the pop-up dialog
+
+
+
+### Step 2: Configure Google Gemini in LobeChat
+
+- Go to the `Settings` interface in LobeChat
+- Find the setting for `Google Gemini` under `Language Models`
+
+
+
+- Enable Google Gemini and enter the obtained API key
+- Choose a Gemini model for your assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider, please refer to Google Gemini's
+ pricing policy.
+
+
+
+
+Congratulations! You can now use Google Gemini in LobeChat.
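If you want to verify the key outside LobeChat, you can build a raw request to the Generative Language API. The endpoint, the `gemini-pro` model name, and the payload shape below are assumptions based on Google's public REST docs; note that the key travels as a `key` query parameter rather than in a header:

```python
# Sketch: build (but do not send) a generateContent request for the
# Generative Language API. "YOUR_API_KEY" and "gemini-pro" are
# placeholders/assumptions; send with urllib.request.urlopen(req).
import json
import urllib.parse
import urllib.request

def build_gemini_request(api_key: str, model: str, text: str) -> urllib.request.Request:
    url = (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        f"{model}:generateContent?" + urllib.parse.urlencode({"key": api_key})
    )
    body = json.dumps({"contents": [{"parts": [{"text": text}]}]}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

req = build_gemini_request("YOUR_API_KEY", "gemini-pro", "Hello")
print(req.full_url)
```

Because the key is part of the URL, avoid pasting full request URLs into logs or bug reports.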
diff --git a/DigitalHumanWeb/docs/usage/providers/gemini.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/gemini.zh-CN.mdx
new file mode 100644
index 0000000..de796fc
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/gemini.zh-CN.mdx
@@ -0,0 +1,78 @@
+---
+title: Using Google Gemini API Key in LobeChat
+description: This article will guide you on how to configure and use Google Gemini, a powerful language model created by Google AI, in LobeChat.
+tags:
+  - Google Gemini
+  - Google AI
+  - API Key
+  - Web UI
+---
+
+# Using Google Gemini in LobeChat
+
+
+
+Gemini AI is a set of large language models (LLMs) created by Google AI, known for its cutting-edge advances in multimodal understanding and processing. It is essentially a powerful artificial intelligence tool capable of handling various tasks involving different types of data, not just text.
+
+This document will guide you on how to use Google Gemini in LobeChat:
+
+
+
+### Step 1: Obtain a Google API Key
+
+- Visit and log in to [Google AI Studio](https://aistudio.google.com/)
+- Navigate to the `Get API key` menu and click `Create API key`
+
+
+
+- Select a project and create an API key, or create the key in a new project
+
+
+
+- Copy the API key from the pop-up dialog
+
+
+
+### Step 2: Configure Google Gemini in LobeChat
+
+- Access the `Settings` interface in LobeChat
+- Find the setting for `Google Gemini` under `Language Models`
+
+
+
+- Enable Google Gemini and enter the obtained API key
+- Choose a Gemini model for your assistant to start the conversation
+
+
+
+
+  During usage, you may need to pay the API service provider; please refer to Google Gemini's pricing policy.
+
+
+
+
+Congratulations! You can now use Google Gemini in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/groq.mdx b/DigitalHumanWeb/docs/usage/providers/groq.mdx
new file mode 100644
index 0000000..7bf7e75
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/groq.mdx
@@ -0,0 +1,72 @@
+---
+title: Using Groq API Key in LobeChat
+description: >-
+ Learn how to obtain GroqCloud API keys and configure Groq in LobeChat for
+ optimal performance.
+tags:
+ - LPU Inference Engine
+ - GroqCloud
+ - LLAMA3
+ - Qwen2
+ - API keys
+ - Web UI
+---
+
+# Using Groq in LobeChat
+
+
+
+Groq's [LPU Inference Engine](https://wow.groq.com/news_press/groq-lpu-inference-engine-leads-in-first-independent-llm-benchmark/) has excelled in the latest independent Large Language Model (LLM) benchmark, redefining the standard for AI solutions with its remarkable speed and efficiency. By integrating LobeChat with Groq Cloud, you can now easily leverage Groq's technology to accelerate the operation of large language models in LobeChat.
+
+
+ Groq's LPU Inference Engine achieved a sustained speed of 300 tokens per second in internal
+ benchmark tests, and according to benchmark tests by ArtificialAnalysis.ai, Groq outperformed
+ other providers in terms of throughput (241 tokens per second) and total time to receive 100
+ output tokens (0.8 seconds).
+
+
+This document will guide you on how to use Groq in LobeChat:
+
+
+ ### Obtaining GroqCloud API Keys
+
+First, you need to obtain an API Key from the [GroqCloud Console](https://console.groq.com/).
+
+
+
+Create an API Key in the `API Keys` menu of the console.
+
+
+
+
+ Safely store the key from the pop-up as it will only appear once. If you accidentally lose it, you
+ will need to create a new key.
+
+
+### Configure Groq in LobeChat
+
+You can find the Groq configuration option in `Settings` -> `Language Model`, where you can input the API Key you just obtained.
+
+
+
+
+Next, select a Groq-supported model in the assistant's model options, and you can experience the powerful performance of Groq in LobeChat.
+
+
diff --git a/DigitalHumanWeb/docs/usage/providers/groq.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/groq.zh-CN.mdx
new file mode 100644
index 0000000..20cf478
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/groq.zh-CN.mdx
@@ -0,0 +1,67 @@
+---
+title: Using Groq API Key in LobeChat
+description: Learn how to obtain a GroqCloud API Key and configure Groq in LobeChat to experience Groq's powerful performance.
+tags:
+  - LLAMA3
+  - Qwen2
+  - API keys
+  - Web UI
+  - API Key
+---
+
+# Using Groq in LobeChat
+
+
+
+Groq's [LPU Inference Engine](https://wow.groq.com/news_press/groq-lpu-inference-engine-leads-in-first-independent-llm-benchmark/) has excelled in the latest independent Large Language Model (LLM) benchmark, redefining the standard for AI solutions with its remarkable speed and efficiency. By integrating LobeChat with Groq Cloud, you can now easily leverage Groq's technology to accelerate the operation of large language models in LobeChat.
+
+
+  Groq's LPU Inference Engine achieved a sustained speed of 300 tokens per second in internal
+  benchmark tests, and benchmark tests by ArtificialAnalysis.ai confirm that Groq outperforms other
+  providers in throughput (241 tokens per second) and total time to receive 100 output tokens (0.8 seconds).
+
+
+This document will guide you on how to use Groq in LobeChat:
+
+
+ ### Obtaining a GroqCloud API Key
+
+First, you need to obtain an API Key from the [GroqCloud Console](https://console.groq.com/).
+
+
+
+Create an API Key in the `API Keys` menu of the console.
+
+
+
+
+  Store the key from the pop-up safely; it will only appear once. If you accidentally lose it, you will need to create a new one.
+
+
+### Configure Groq in LobeChat
+
+You can find the Groq configuration option in `Settings` -> `Language Model`, and enter the API Key you just obtained.
+
+
+
+
+Next, select a Groq-supported model in the assistant's model options, and you can experience Groq's powerful performance in LobeChat.
+
+
diff --git a/DigitalHumanWeb/docs/usage/providers/minimax.mdx b/DigitalHumanWeb/docs/usage/providers/minimax.mdx
new file mode 100644
index 0000000..5841e5d
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/minimax.mdx
@@ -0,0 +1,87 @@
+---
+title: Using Minimax API Key in LobeChat
+description: >-
+ Learn how to use MiniMax in LobeChat to enhance AI conversations. Obtain
+ MiniMax API key, configure MiniMax in LobeChat settings, and select a model
+ for your AI assistant.
+tags:
+ - MiniMax
+ - Web UI
+ - API Key
+ - MiniMax Models
+---
+
+# Using Minimax in LobeChat
+
+
+
+[MiniMax](https://www.minimaxi.com/) is a general artificial intelligence technology company founded in 2021, dedicated to co-creating intelligence with users. MiniMax has independently developed universal large models of different modalities, including trillion-parameter MoE text large models, speech large models, and image large models. They have also launched applications like Hai Luo AI.
+
+This document will guide you on how to use Minimax in LobeChat:
+
+
+
+### Step 1: Obtain MiniMax API Key
+
+- Register and log in to the [MiniMax Open Platform](https://www.minimaxi.com/platform)
+- In `Account Management`, locate the `API Key` menu and create a new key
+
+
+
+- Enter a name for the API key and create it
+
+
+
+- Copy the API key from the pop-up dialog box and save it securely
+
+
+
+
+ Please store the key securely as it will only appear once. If you accidentally lose it, you will
+ need to create a new key.
+
+
+### Step 2: Configure MiniMax in LobeChat
+
+- Go to the `Settings` interface of LobeChat
+- Find the setting for `MiniMax` under `Language Model`
+
+
+
+- Open Minimax and enter the obtained API key
+- Choose a MiniMax model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider, please refer to MiniMax's relevant
+ pricing policies.
+
+
+
+
+You can now use the models provided by MiniMax to have conversations in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/minimax.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/minimax.zh-CN.mdx
new file mode 100644
index 0000000..c612b6e
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/minimax.zh-CN.mdx
@@ -0,0 +1,84 @@
+---
+title: Using Minimax API Key in LobeChat
+description: >-
+  Learn how to configure and use the MiniMax model in LobeChat for conversations. Obtain a
+  MiniMax API key, follow the configuration steps, and start interacting with MiniMax models.
+tags:
+  - LobeChat
+  - MiniMax
+  - API Key
+  - Web UI
+---
+
+# Using Minimax in LobeChat
+
+
+
+[MiniMax](https://www.minimaxi.com/) is a general artificial intelligence technology company founded in 2021, dedicated to co-creating intelligence with users. MiniMax has independently developed general-purpose large models of different modalities, including a trillion-parameter MoE text model, a speech model, and an image model, and has launched applications such as Hailuo AI.
+
+This document will guide you on how to use Minimax in LobeChat:
+
+
+
+### Step 1: Obtain a MiniMax API Key
+
+- Register and log in to the [MiniMax Open Platform](https://www.minimaxi.com/platform)
+- In `Account Management`, find the `API Key` menu and create a new key
+
+
+
+- Enter a name for the API key and create it
+
+
+
+- Copy the API key from the pop-up dialog and save it securely
+
+
+
+
+  Please store the key securely as it will only appear once. If you accidentally lose it, you will need to create a new key.
+
+
+### Step 2: Configure MiniMax in LobeChat
+
+- Access the `Settings` interface in LobeChat
+- Find the setting for `MiniMax` under `Language Models`
+
+
+
+- Enable Minimax and enter the obtained API key
+- Choose a MiniMax model for your AI assistant to start the conversation
+
+
+
+
+  During usage, you may need to pay the API service provider; please refer to MiniMax's pricing policies.
+
+
+
+
+You can now use the models provided by MiniMax to have conversations in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/mistral.mdx b/DigitalHumanWeb/docs/usage/providers/mistral.mdx
new file mode 100644
index 0000000..11dc8d5
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/mistral.mdx
@@ -0,0 +1,69 @@
+---
+title: Using Mistral AI API Key in LobeChat
+description: >-
+ Learn how to integrate Mistral AI into LobeChat for enhanced conversational
+ experiences. Follow the steps to configure Mistral AI and start using its
+ models.
+tags:
+ - Mistral AI
+ - Web UI
+ - API key
+---
+
+# Using Mistral AI in LobeChat
+
+
+
+The Mistral AI API is now available for everyone to use. This document will guide you on how to use [Mistral AI](https://mistral.ai/) in LobeChat:
+
+
+
+### Step 1: Obtain Mistral AI API Key
+
+- Create a [Mistral AI](https://mistral.ai/) account
+- Obtain your [API key](https://console.mistral.ai/user/api-keys/)
+
+
+
+### Step 2: Configure Mistral AI in LobeChat
+
+- Go to the `Settings` interface in LobeChat
+- Find the setting for `Mistral AI` under `Language Model`
+
+
+
+
+ If you are using mistral.ai, your account must have a valid subscription for the API key to work
+ properly. Newly created API keys may take 2-3 minutes to become active. If the "Test" button
+ fails, please retry after 2-3 minutes.
+
+
+- Enter the obtained API key
+- Choose a Mistral AI model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider, please refer to Mistral AI's relevant
+ pricing policies.
+
+
+
+
+You can now engage in conversations using the models provided by Mistral AI in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/mistral.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/mistral.zh-CN.mdx
new file mode 100644
index 0000000..7b69167
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/mistral.zh-CN.mdx
@@ -0,0 +1,64 @@
+---
+title: Using Mistral AI API Key in LobeChat
+description: Learn how to configure and use Mistral AI in LobeChat, including obtaining an API key and choosing a suitable AI model for conversation.
+tags:
+  - Web UI
+  - Mistral AI
+  - API Key
+---
+
+# Using Mistral AI in LobeChat
+
+
+
+The Mistral AI API is now available for everyone to use. This document will guide you on how to use [Mistral AI](https://mistral.ai/) in LobeChat:
+
+
+
+### Step 1: Obtain a Mistral AI API Key
+
+- Create a [Mistral AI](https://mistral.ai/) account
+- Obtain your [API key](https://console.mistral.ai/user/api-keys/)
+
+
+
+### Step 2: Configure Mistral AI in LobeChat
+
+- Access the `Settings` interface in LobeChat
+- Find the setting for `Mistral AI` under `Language Models`
+
+
+
+
+  If you are using mistral.ai, your account must have a valid subscription for the API key to work
+  properly. Newly created API keys may take 2-3 minutes to become active. If the "Test" button fails, please retry after 2-3 minutes.
+
+
+- Enter the obtained API key
+- Choose a Mistral AI model for your AI assistant to start the conversation
+
+
+
+
+  During usage, you may need to pay the API service provider; please refer to Mistral AI's pricing policies.
+
+
+
+
+You can now use the models provided by Mistral AI to have conversations in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/moonshot.mdx b/DigitalHumanWeb/docs/usage/providers/moonshot.mdx
new file mode 100644
index 0000000..1e57d7a
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/moonshot.mdx
@@ -0,0 +1,68 @@
+---
+title: Using Moonshot AI API Key in LobeChat
+description: >-
+ Learn how to integrate Moonshot AI into LobeChat for AI-powered conversations.
+ Follow the steps to get the API key, configure Moonshot AI, and start engaging
+ with AI models.
+tags:
+ - Moonshot AI
+ - Web UI
+ - API Key
+---
+
+# Using Moonshot AI in LobeChat
+
+
+
+The Moonshot AI API is now available for everyone to use. This document will guide you on how to use [Moonshot AI](https://www.moonshot.cn/) in LobeChat:
+
+
+
+### Step 1: Get Moonshot AI API Key
+
+- Apply for your [API key](https://platform.moonshot.cn/console/api-keys)
+
+
+
+### Step 2: Configure Moonshot AI in LobeChat
+
+- Visit the `Settings` interface in LobeChat
+- Find the setting for `Moonshot AI` under `Language Models`
+
+
+
+
+  If you are using the Moonshot AI platform, your account must have a valid subscription for the
+  API key to work properly. Newly created API keys may take 2-3 minutes to become active. If the
+  "Test" button fails, please retry after 2-3 minutes.
+
+
+- Enter the API key you obtained
+- Choose a Moonshot AI model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider according to Moonshot AI's related
+ pricing policies.
+
+
+
+
+You can now engage in conversations using the models provided by Moonshot AI in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/moonshot.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/moonshot.zh-CN.mdx
new file mode 100644
index 0000000..c2a416e
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/moonshot.zh-CN.mdx
@@ -0,0 +1,63 @@
+---
+title: Using Moonshot AI API Key in LobeChat
+description: Learn how to configure and use Moonshot AI in LobeChat, including obtaining an API key and choosing a suitable AI model for conversation.
+tags:
+  - Moonshot AI
+  - Web UI
+  - API Key
+---
+
+# Using Moonshot AI in LobeChat
+
+
+
+The Moonshot AI API is now available for everyone to use. This document will guide you on how to use [Moonshot AI](https://www.moonshot.cn/) in LobeChat:
+
+
+
+### Step 1: Obtain a Moonshot AI API Key
+
+- Apply for your [API key](https://platform.moonshot.cn/console/api-keys)
+
+
+
+### Step 2: Configure Moonshot AI in LobeChat
+
+- Access the `Settings` interface in LobeChat
+- Find the setting for `Moonshot AI` under `Language Models`
+
+
+
+
+  If you are using the Moonshot AI platform, your account must have a valid subscription for the API key
+  to work properly. Newly created API keys may take 2-3 minutes to become active. If the "Test" button fails, please retry after 2-3 minutes.
+
+
+- Enter the obtained API key
+- Choose a Moonshot AI model for your AI assistant to start the conversation
+
+
+
+
+  During usage, you may need to pay the API service provider; please refer to Moonshot AI's pricing policies.
+
+
+
+
+You can now use the models provided by Moonshot AI to have conversations in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/novita.mdx b/DigitalHumanWeb/docs/usage/providers/novita.mdx
new file mode 100644
index 0000000..4776b4b
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/novita.mdx
@@ -0,0 +1,80 @@
+---
+title: Using Novita AI API Key in LobeChat
+description: >-
+  Learn how to integrate Novita AI's language model APIs into LobeChat. Follow
+  the steps to register, create a Novita AI API key, configure settings, and
+  chat with a variety of AI models.
+tags:
+ - Novita AI
+ - Llama3
+ - Mistral
+ - uncensored
+ - API key
+ - Web UI
+---
+
+# Using Novita AI in LobeChat
+
+
+
+[Novita AI](https://novita.ai/) is an AI API platform that provides a variety of LLM and image generation APIs, supporting Llama3 (8B, 70B), Mistral, and many other cutting-edge models. It offers both censored and uncensored models to meet your different needs.
+
+This document will guide you on how to integrate Novita AI in LobeChat:
+
+
+
+### Step 1: Register and Log in to Novita AI
+
+- Visit [Novita.ai](https://novita.ai/) and create an account
+- You can log in with your Google or Github account
+- Upon registration, Novita AI will provide a $0.5 credit.
+
+
+
+### Step 2: Obtain the API Key
+
+- Visit Novita AI's [key management page](https://novita.ai/dashboard/key), create and copy an API Key.
+
+
+
+### Step 3: Configure Novita AI in LobeChat
+
+- Visit the `Settings` interface in LobeChat
+- Find the setting for `novita.ai` under `Language Model`
+
+
+
+- Open novita.ai and enter the obtained API key
+- Choose a Novita AI model for your assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider, please refer to Novita AI's pricing
+ policy.
+
+
+
+
+You can now engage in conversations using the models provided by Novita AI in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/novita.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/novita.zh-CN.mdx
new file mode 100644
index 0000000..501496b
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/novita.zh-CN.mdx
@@ -0,0 +1,78 @@
+---
+title: Using Novita AI API Key in LobeChat
+description: >-
+  Learn how to integrate Novita AI's large language model APIs into LobeChat. Follow these steps to
+  register a Novita AI account, create an API Key, configure it in LobeChat, and chat with various AI models.
+tags:
+  - Novita AI
+  - Llama3
+  - Mistral
+  - uncensored
+  - API key
+  - Web UI
+---
+
+# Using Novita AI in LobeChat
+
+
+
+[Novita AI](https://novita.ai/) is an AI API platform that provides a variety of large language model and AI image generation APIs, supporting Llama3 (8B, 70B), Mistral, and other cutting-edge models.
+
+This document will guide you on how to use Novita AI in LobeChat:
+
+
+
+### Step 1: Register and Log in to Novita AI
+
+- Visit [Novita.ai](https://novita.ai/) and create an account
+- You can log in with your Google or GitHub account
+- Upon registration, Novita AI will provide a $0.5 credit
+
+
+
+### Step 2: Create an API Key
+
+- Visit Novita AI's [key management page](https://novita.ai/dashboard/key), then create and copy an API key.
+
+
+
+### Step 3: Configure Novita AI in LobeChat
+
+- Access the `Settings` interface in LobeChat
+- Find the setting for `novita.ai` under `Language Models`
+- Enable novita.ai and enter the obtained API key
+
+
+
+- Choose a Novita AI model for your assistant to start the conversation
+
+
+
+
+  During usage, you may need to pay the API service provider; please refer to Novita AI's pricing policies.
+
+
+
+
+You can now use the models provided by Novita AI to have conversations in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/ollama.mdx b/DigitalHumanWeb/docs/usage/providers/ollama.mdx
new file mode 100644
index 0000000..2d7f03d
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/ollama.mdx
@@ -0,0 +1,205 @@
+---
+title: Using Ollama in LobeChat
+description: >-
+ Learn how to use Ollama in LobeChat, run LLM locally, and experience
+ cutting-edge AI usage.
+tags:
+ - Ollama
+ - Local LLM
+ - Ollama WebUI
+ - Web UI
+ - API Key
+---
+
+# Using Ollama in LobeChat
+
+
+
+Ollama is a powerful framework for running large language models (LLMs) locally, supporting various language models including Llama 2, Mistral, and more. Now, LobeChat supports integration with Ollama, meaning you can easily enhance your application by using the language models provided by Ollama in LobeChat.
+
+This document will guide you on how to use Ollama in LobeChat:
+
+
+
+## Using Ollama on macOS
+
+
+
+### Local Installation of Ollama
+
+[Download Ollama for macOS](https://ollama.com/download?utm_source=lobehub&utm_medium=docs&utm_campaign=download-macos) and unzip/install it.
+
+### Configure Ollama for Cross-Origin Access
+
+Due to Ollama's default configuration, which restricts access to local only, additional environment variable setting `OLLAMA_ORIGINS` is required for cross-origin access and port listening. Use `launchctl` to set the environment variable:
+
+```bash
+launchctl setenv OLLAMA_ORIGINS "*"
+```
+
+After setting up, restart the Ollama application.
+
+### Conversing with Local Large Models in LobeChat
+
+Now, you can start conversing with the local LLM in LobeChat.
+
+
+
+
+
+## Using Ollama on Windows
+
+
+
+### Local Installation of Ollama
+
+[Download Ollama for Windows](https://ollama.com/download?utm_source=lobehub&utm_medium=docs&utm_campaign=download-windows) and install it.
+
+### Configure Ollama for Cross-Origin Access
+
+Since Ollama's default configuration allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is needed for cross-origin access and port listening.
+
+On Windows, Ollama inherits your user and system environment variables.
+
+1. First, exit the Ollama program by clicking on it in the Windows taskbar.
+2. Edit system environment variables from the Control Panel.
+3. Edit or create the Ollama environment variable `OLLAMA_ORIGINS` for your user account, setting the value to `*`.
+4. Click `OK/Apply` to save and restart the system.
+5. Run `Ollama` again.
+
+### Conversing with Local Large Models in LobeChat
+
+Now, you can start conversing with the local LLM in LobeChat.
+
+
+
+## Using Ollama on Linux
+
+
+
+### Local Installation of Ollama
+
+Install using the following command:
+
+```bash
+curl -fsSL https://ollama.com/install.sh | sh
+```
+
+Alternatively, you can refer to the [Linux manual installation guide](https://github.com/ollama/ollama/blob/main/docs/linux.md).
+
+### Configure Ollama for Cross-Origin Access
+
+Due to Ollama's default configuration, which allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is required for cross-origin access and port listening. If Ollama runs as a systemd service, use `systemctl` to set the environment variable:
+
+1. Edit the systemd service by calling `sudo systemctl edit ollama.service`:
+
+```bash
+sudo systemctl edit ollama.service
+```
+
+2. Add `Environment` under `[Service]` for each environment variable:
+
+```bash
+[Service]
+Environment="OLLAMA_HOST=0.0.0.0"
+Environment="OLLAMA_ORIGINS=*"
+```
+
+3. Save and exit.
+4. Reload `systemd` and restart Ollama:
+
+```bash
+sudo systemctl daemon-reload
+sudo systemctl restart ollama
+```
+
+### Conversing with Local Large Models in LobeChat
+
+Now, you can start conversing with the local LLM in LobeChat.
+
+
+
+## Deploying Ollama using Docker
+
+
+
+### Pulling Ollama Image
+
+If you prefer using Docker, Ollama provides an official Docker image that you can pull using the following command:
+
+```bash
+docker pull ollama/ollama
+```
+
+### Configure Ollama for Cross-Origin Access
+
+Since Ollama's default configuration allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is needed for cross-origin access and port listening.
+
+If Ollama runs as a Docker container, you can add the environment variable to the `docker run` command.
+
+```bash
+docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
+```
+
+### Conversing with Local Large Models in LobeChat
+
+Now, you can start conversing with the local LLM in LobeChat.
+
+
+
+## Installing Ollama Models
+
+Ollama supports various models, which you can view in the [Ollama Library](https://ollama.com/library) and choose the appropriate model based on your needs.
+
+### Installation in LobeChat
+
+In LobeChat, we have enabled some common large language models by default, such as llama3, Gemma, Mistral, etc. When you select a model for conversation, we will prompt you to download that model.
+
+
+
+Once downloaded, you can start conversing.
+
+### Pulling Models to Local with Ollama
+
+Alternatively, you can install models by executing the following command in the terminal, using llama3 as an example:
+
+```bash
+ollama pull llama3
+```
+
+
+
+## Custom Configuration
+
+You can find Ollama's configuration options in `Settings` -> `Language Models`, where you can configure Ollama's proxy, model names, etc.
+
+
+
+
+ Visit [Integrating with Ollama](/docs/self-hosting/examples/ollama) to learn how to deploy
+ LobeChat to meet integration needs with Ollama.
+
diff --git a/DigitalHumanWeb/docs/usage/providers/ollama.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/ollama.zh-CN.mdx
new file mode 100644
index 0000000..2b899b4
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/ollama.zh-CN.mdx
@@ -0,0 +1,203 @@
+---
+title: Using Ollama in LobeChat
+description: Learn how to use Ollama in LobeChat, run large language models locally, and experience cutting-edge AI usage.
+tags:
+  - Ollama
+  - Web UI
+  - API Key
+  - Local LLM
+  - Ollama WebUI
+---
+
+# Using Ollama in LobeChat
+
+
+
+Ollama is a powerful framework for running large language models (LLMs) locally, supporting various models including Llama 2, Mistral, and more. Now, LobeChat supports integration with Ollama, meaning you can easily use the language models provided by Ollama to enhance your application within LobeChat.
+
+This document will guide you on how to use Ollama in LobeChat:
+
+
+
+## Using Ollama on macOS
+
+
+
+### Local Installation of Ollama
+
+[Download Ollama for macOS](https://ollama.com/download?utm_source=lobehub&utm_medium=docs&utm_campaign=download-macos), then unzip and install it.
+
+### Configure Ollama for Cross-Origin Access
+
+Because Ollama's default configuration restricts access to local requests only, cross-origin access and port listening require setting the additional environment variable `OLLAMA_ORIGINS`. Use `launchctl` to set the environment variable:
+
+```bash
+launchctl setenv OLLAMA_ORIGINS "*"
+```
+
+After completing the setup, restart the Ollama application.
+
+### Conversing with Local Large Models in LobeChat
+
+Now, you can start conversing with the local LLM in LobeChat.
+
+
+
+
+
+## Using Ollama on Windows
+
+
+
+### Local Installation of Ollama
+
+[Download Ollama for Windows](https://ollama.com/download?utm_source=lobehub&utm_medium=docs&utm_campaign=download-windows) and install it.
+
+### Configure Ollama for Cross-Origin Access
+
+Because Ollama's default configuration restricts access to local requests only, cross-origin access and port listening require setting the additional environment variable `OLLAMA_ORIGINS`.
+
+On Windows, Ollama inherits your user and system environment variables.
+
+1. First, exit the Ollama program by clicking on it in the Windows taskbar.
+2. Edit system environment variables from the Control Panel.
+3. Edit or create the Ollama environment variable `OLLAMA_ORIGINS` for your user account, setting the value to `*`.
+4. Click `OK/Apply` to save, then restart the system.
+5. Run `Ollama` again.
+
+### Conversing with Local Large Models in LobeChat
+
+Now, you can start conversing with the local LLM in LobeChat.
+
+
+
+## Using Ollama on Linux
+
+
+
+### Local Installation of Ollama
+
+Install using the following command:
+
+```bash
+curl -fsSL https://ollama.com/install.sh | sh
+```
+
+Alternatively, you can refer to the [Linux manual installation guide](https://github.com/ollama/ollama/blob/main/docs/linux.md).
+
+### Configure Ollama for Cross-Origin Access
+
+Because Ollama's default configuration restricts access to local requests only, cross-origin access and port listening require setting the additional environment variable `OLLAMA_ORIGINS`. If Ollama runs as a systemd service, use `systemctl` to set the environment variable:
+
+1. Edit the systemd service by calling `sudo systemctl edit ollama.service`:
+
+```bash
+sudo systemctl edit ollama.service
+```
+
+2. For each environment variable, add `Environment` under the `[Service]` section:
+
+```bash
+[Service]
+Environment="OLLAMA_HOST=0.0.0.0"
+Environment="OLLAMA_ORIGINS=*"
+```
+
+3. Save and exit.
+4. Reload `systemd` and restart Ollama:
+
+```bash
+sudo systemctl daemon-reload
+sudo systemctl restart ollama
+```
+
+### Conversing with Local Large Models in LobeChat
+
+Now, you can start conversing with the local LLM in LobeChat.
+
+
+
+## Deploying Ollama Using Docker
+
+
+
+### Pull the Ollama Image
+
+If you prefer using Docker, Ollama provides an official Docker image that you can pull with the following command:
+
+```bash
+docker pull ollama/ollama
+```
+
+### Configure Ollama for Cross-Origin Access
+
+Because Ollama's default configuration restricts access to local requests only, cross-origin access and port listening require setting the additional environment variable `OLLAMA_ORIGINS`.
+
+If Ollama runs as a Docker container, you can add the environment variable to the `docker run` command.
+
+```bash
+docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
+```
+
+### Conversing with Local Large Models in LobeChat
+
+Now, you can start conversing with the local LLM in LobeChat.
+
+
+
+## Installing Ollama Models
+
+Ollama supports various models; you can view the list of available models in the [Ollama Library](https://ollama.com/library) and choose the appropriate model based on your needs.
+
+### Installation in LobeChat
+
+In LobeChat, we have enabled some common large language models by default, such as llama3, Gemma, and Mistral. When you select a model for conversation, we will prompt you to download that model.
+
+
+
+Once the download is complete, you can start conversing.
+
+### Pulling Models to Local with Ollama
+
+Of course, you can also install a model by executing the following command in the terminal, using llama3 as an example:
+
+```bash
+ollama pull llama3
+```
+
+
+
+## Custom Configuration
+
+You can find Ollama's configuration options in `Settings` -> `Language Models`, where you can configure Ollama's proxy, model names, and more.
+
+
+
+
+  Visit [Integrating with Ollama](/zh/docs/self-hosting/examples/ollama) to learn how to deploy
+  LobeChat to meet integration needs with Ollama.
+
diff --git a/DigitalHumanWeb/docs/usage/providers/ollama/gemma.mdx b/DigitalHumanWeb/docs/usage/providers/ollama/gemma.mdx
new file mode 100644
index 0000000..80997f1
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/ollama/gemma.mdx
@@ -0,0 +1,68 @@
+---
+title: Using Google Gemma Model in LobeChat
+description: >-
+  Learn how to integrate and utilize Google Gemma, an open-source large
+  language model, in LobeChat with the help of Ollama. Follow these steps to
+  pull and select the Gemma model for natural language processing tasks.
+tags:
+ - Google Gemma
+ - LobeChat
+ - Ollama
+ - Natural Language Processing
+ - Language Model
+---
+
+# Using Google Gemma Model
+
+
+
+[Gemma](https://blog.google/technology/developers/gemma-open-models/) is an open-source large language model (LLM) from Google, designed to provide a more general and flexible model for various natural language processing tasks. Now, with the integration of LobeChat and [Ollama](https://ollama.com/), you can easily use Google Gemma in LobeChat.
+
+This document will guide you on how to use Google Gemma in LobeChat:
+
+
+ ### Install Ollama locally
+
+First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/docs/usage/providers/ollama).
+
+### Pull Google Gemma model to local using Ollama
+
+After installing Ollama, you can install the Google Gemma model using the following command, using the 7b model as an example:
+
+```bash
+ollama pull gemma
+```
+
+
+
+### Select Gemma model
+
+In the session page, open the model panel and then select the Gemma model.
+
+
+
+
+ If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
+ with Ollama](/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in
+ LobeChat.
+
+
+
+
+Now, you can start conversing with the local Gemma model using LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/ollama/gemma.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/ollama/gemma.zh-CN.mdx
new file mode 100644
index 0000000..9ecc4ea
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/ollama/gemma.zh-CN.mdx
@@ -0,0 +1,67 @@
+---
+title: Using the Google Gemma Model in LobeChat
+description: >-
+  Through the integration of LobeChat and Ollama, easily use the Google Gemma model for natural
+  language processing tasks. Install Ollama, pull the Gemma model, select it in the model panel, and start conversing.
+tags:
+  - Google Gemma
+  - LobeChat
+  - Ollama
+  - Natural Language Processing
+  - Model Selection
+---
+
+# Using the Google Gemma Model
+
+
+
+[Gemma](https://blog.google/technology/developers/gemma-open-models/) is an open-source large language model (LLM) from Google, designed to provide a more general and flexible model for various natural language processing tasks. Now, with the integration of LobeChat and [Ollama](https://ollama.com/), you can easily use Google Gemma in LobeChat.
+
+This document will guide you on how to use Google Gemma in LobeChat:
+
+
+ ### Install Ollama locally
+
+First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/zh/docs/usage/providers/ollama).
+
+### Pull the Google Gemma model to local using Ollama
+
+After installing Ollama, you can install the Google Gemma model with the following command, using the 7b model as an example:
+
+```bash
+ollama pull gemma
+```
+
+
+
+### Select the Gemma model
+
+In the session page, open the model panel and then select the Gemma model.
+
+
+
+
+  If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
+  with Ollama](/zh/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in LobeChat.
+
+
+
+
+Now, you can start conversing with the local Gemma model using LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/ollama/qwen.mdx b/DigitalHumanWeb/docs/usage/providers/ollama/qwen.mdx
new file mode 100644
index 0000000..243c02f
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/ollama/qwen.mdx
@@ -0,0 +1,69 @@
+---
+title: Using the Local Qwen Model in LobeChat
+description: Through the integration of LobeChat and Ollama, you can easily use Qwen in LobeChat. This article will guide you on how to use the locally deployed version of Qwen.
+tags:
+ - Qwen
+ - LobeChat
+ - Ollama
+  - Local Deployment
+  - Large Language Model
+---
+
+# Using the Local Qwen Model
+
+
+
+[Qwen](https://github.com/QwenLM/Qwen1.5) is a large language model (LLM) open-sourced by Alibaba Cloud. It is officially described as a constantly evolving AI model that achieves more accurate Chinese understanding through additional training data.
+
+
+
+Now, through the integration of LobeChat and [Ollama](https://ollama.com/), you can easily use Qwen in LobeChat. This document will guide you on how to use the local deployment version of Qwen in LobeChat:
+
+
+ ### Local Installation of Ollama
+
+First, you need to install Ollama. For the installation process, please refer to the [Ollama Usage Document](/docs/usage/providers/ollama).
+
+### Pull the Qwen Model to Local with Ollama
+
+After installing Ollama, you can install the Qwen model with the following command, taking the 14b model as an example:
+
+```bash
+ollama pull qwen:14b
+```
+
+
+ The local version of Qwen provides different model sizes to choose from. Please refer to the
+ [Qwen's Ollama integration page](https://ollama.com/library/qwen) to understand how to choose the
+ model size.
+
+
+
+
+### Select the Qwen Model
+
+In the LobeChat conversation page, open the model selection panel, and then select the Qwen model.
+
+
+
+
+ If you do not see the Ollama provider in the model selection panel, please refer to [Integration with Ollama](/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in LobeChat.
+
+
+
+
+Next, you can have a conversation with the local Qwen model in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/ollama/qwen.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/ollama/qwen.zh-CN.mdx
new file mode 100644
index 0000000..db797cc
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/ollama/qwen.zh-CN.mdx
@@ -0,0 +1,66 @@
+---
+title: Using the Local Qwen Model in LobeChat
+description: Easily chat with the locally deployed Qwen model through the integration of LobeChat and Ollama. Learn how to install and select the Qwen model.
+tags:
+  - Tongyi Qianwen
+  - Qwen Model
+  - LobeChat Integration
+  - Ollama
+  - Local Deployment
+---
+
+# Using the Local Qwen Model
+
+
+
+[Qwen](https://github.com/QwenLM/Qwen1.5) is a large language model (LLM) open-sourced by Alibaba Cloud. It is officially described as a constantly evolving AI model that achieves more accurate Chinese understanding through additional training data.
+
+
+
+Now, through the integration of LobeChat and [Ollama](https://ollama.com/), you can easily use Qwen in LobeChat.
+
+This document will guide you on how to use the locally deployed version of Qwen in LobeChat:
+
+
+ ### Install Ollama locally
+
+First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/zh/docs/usage/providers/ollama).
+
+### Pull the Qwen model to local using Ollama
+
+After installing Ollama, you can install the Qwen model with the following command, taking the 14b model as an example:
+
+```bash
+ollama pull qwen:14b
+```
+
+
+
+### Select the Qwen model
+
+In the session page, open the model panel and select the Qwen model.
+
+
+
+
+ If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
+ with Ollama](/zh/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in
+ LobeChat.
+
+
+
+
+Next, you can chat with the local Qwen model in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/openai.mdx b/DigitalHumanWeb/docs/usage/providers/openai.mdx
new file mode 100644
index 0000000..ae624bf
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/openai.mdx
@@ -0,0 +1,94 @@
+---
+title: Using OpenAI API Key in LobeChat
+description: >-
+ Learn how to integrate OpenAI API Key in LobeChat. Support GPT-4o /
+ GPT-4-turbo / GPT-4-vision
+tags:
+ - OpenAI
+ - ChatGPT
+ - GPT-4
+ - GPT-4o
+ - API Key
+ - Web UI
+---
+
+# Using OpenAI in LobeChat
+
+
+
+This document will guide you on how to use [OpenAI](https://openai.com/) in LobeChat:
+
+
+
+### Step 1: Obtain OpenAI API Key
+
+- Register for an [OpenAI account](https://platform.openai.com/signup). You will need to register using an international phone number and a non-mainland email address.
+
+- After registration, go to the [API Keys](https://platform.openai.com/api-keys) page and click on `Create new secret key` to generate a new API Key.
+
+- Open the creation window
+
+
+
+- Create API Key
+
+
+
+- Retrieve API Key
+
+
+
+
+ After registering, you generally have a free credit of $5, but it is only valid for three months.
+
+
+### Step 2: Configure OpenAI in LobeChat
+
+- Visit the `Settings` page in LobeChat
+- Find the setting for `OpenAI` under `Language Model`
+
+
+
+- Enter the obtained API Key
+- Choose an OpenAI model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to OpenAI's relevant
+ pricing policies.
+
+
+
+
+You can now engage in conversations using the models provided by OpenAI in LobeChat.
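
For reference, the request LobeChat sends on your behalf follows OpenAI's standard chat completions API. The sketch below assembles such a request; the key shown is a hypothetical placeholder, and the model name is just an example from the list above.

```python
import json

# Hypothetical placeholder -- use the key created in Step 1.
OPENAI_API_KEY = "sk-your-key-here"

# A chat completions request against the standard OpenAI endpoint.
request = {
    "url": "https://api.openai.com/v1/chat/completions",
    "headers": {
        "Authorization": f"Bearer {OPENAI_API_KEY}",
        "Content-Type": "application/json",
    },
    "body": {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
}

print(json.dumps(request["body"]))
```

If a direct call like this succeeds with your key, the same key will work when entered in LobeChat's settings.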
diff --git a/DigitalHumanWeb/docs/usage/providers/openai.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/openai.zh-CN.mdx
new file mode 100644
index 0000000..390ec9e
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/openai.zh-CN.mdx
@@ -0,0 +1,87 @@
+---
+title: Using OpenAI API Key in LobeChat
+description: Learn how to configure and use the OpenAI API Key in LobeChat, with support for GPT-4o / GPT-4-turbo / GPT-4-vision
+tags:
+  - ChatGPT
+  - GPT-4
+  - GPT-4o
+  - API Key
+  - Web UI
+---
+
+# Using OpenAI in LobeChat
+
+
+
+This document will guide you on how to use [OpenAI](https://openai.com/) in LobeChat:
+
+
+
+### Step 1: Obtain an OpenAI API Key
+
+- Register for an [OpenAI account](https://platform.openai.com/signup). You will need to register with an international phone number and a non-mainland email address;
+- After registration, go to the [API Keys](https://platform.openai.com/api-keys) page and click `Create new secret key` to create a new API Key:
+
+- Open the creation window
+
+
+
+- Create API Key
+
+
+
+- Retrieve API Key
+
+
+
+After registration, you generally have a free credit of $5, but it is only valid for three months.
+
+### Step 2: Configure OpenAI in LobeChat
+
+- Visit the `Settings` page in LobeChat
+- Find the `OpenAI` setting under `Language Model`
+
+
+
+- Enter the obtained API key
+- Choose an OpenAI model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to OpenAI's relevant
+ pricing policies.
+
+
+
+
+You can now chat in LobeChat using the models provided by OpenAI.
diff --git a/DigitalHumanWeb/docs/usage/providers/openrouter.mdx b/DigitalHumanWeb/docs/usage/providers/openrouter.mdx
new file mode 100644
index 0000000..3fc4c68
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/openrouter.mdx
@@ -0,0 +1,110 @@
+---
+title: Using OpenRouter API Key in LobeChat
+description: >-
+ Learn how to integrate and utilize OpenRouter's language model APIs in
+ LobeChat. Follow these steps to register, create an API key, recharge credit,
+ and configure OpenRouter for seamless conversations.
+tags:
+ - OpenRouter
+ - LobeChat
+ - API Key
+ - Web UI
+---
+
+# Using OpenRouter in LobeChat
+
+
+
+[OpenRouter](https://openrouter.ai/) is a service that provides a variety of excellent large language model APIs, supporting models such as OpenAI (including GPT-3.5/4), Anthropic (Claude2, Instant), LLaMA 2, and PaLM Bison.
+
+This document will guide you on how to use OpenRouter in LobeChat:
+
+
+
+### Step 1: Register and Log in to OpenRouter
+
+- Visit [OpenRouter.ai](https://openrouter.ai/) and create an account
+- You can log in using your Google account or MetaMask wallet
+
+
+
+### Step 2: Create an API Key
+
+- Go to the `Keys` menu or visit [OpenRouter Keys](https://openrouter.ai/keys) directly
+- Click on `Create Key` to start the creation process
+- Name your API key in the pop-up dialog, for example, "LobeChat Key"
+- Leave the `Credit limit` blank to indicate no amount limit
+
+
+
+- Copy the API key from the pop-up dialog and save it securely
+
+
+
+
+ Please store the key securely as it will only appear once. If you lose it accidentally, you will
+ need to create a new key.
+
+
+### Step 3: Recharge Credit
+
+- Go to the `Credit` menu or visit [OpenRouter Credit](https://openrouter.ai/credits) directly
+- Click on `Manage Credits` to recharge your credit; you can check model prices at [https://openrouter.ai/models](https://openrouter.ai/models)
+- OpenRouter provides some free models that can be used without recharging
+
+
+
+### Step 4: Configure OpenRouter in LobeChat
+
+- Visit the `Settings` interface in LobeChat
+- Find the setting for `OpenRouter` under `Language Models`
+- Enable OpenRouter and enter the API key you obtained
+
+
+
+- Choose an OpenRouter model for your assistant to start the conversation
+
+
+
+
+ You may need to pay the API service provider during usage, please refer to OpenRouter's relevant
+ fee policies.
+
+
+
+
+You can now engage in conversations using the models provided by OpenRouter in LobeChat.
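
Under the hood, OpenRouter exposes an OpenAI-compatible chat completions endpoint, and models are addressed by vendor-prefixed ids as listed on its models page. The sketch below assembles such a request; the key is a hypothetical placeholder and the model id is only an example.

```python
import json

# Hypothetical placeholder -- use the key created in Step 2.
OPENROUTER_API_KEY = "sk-or-your-key-here"

# OpenRouter's OpenAI-compatible chat completions request.
request = {
    "url": "https://openrouter.ai/api/v1/chat/completions",
    "headers": {
        "Authorization": f"Bearer {OPENROUTER_API_KEY}",
        "Content-Type": "application/json",
    },
    "body": {
        # Model ids are vendor-prefixed, e.g. "anthropic/...", "openai/...".
        "model": "anthropic/claude-2",
        "messages": [{"role": "user", "content": "Hi!"}],
    },
}

print(json.dumps(request["body"]))
```

This compatibility is why a single OpenRouter key in LobeChat unlocks models from multiple vendors at once.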
diff --git a/DigitalHumanWeb/docs/usage/providers/openrouter.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/openrouter.zh-CN.mdx
new file mode 100644
index 0000000..7db17f3
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/openrouter.zh-CN.mdx
@@ -0,0 +1,104 @@
+---
+title: Using OpenRouter API Key in LobeChat
+description: >-
+  Learn how to register, create an API key, top up credits, and configure
+  OpenRouter in LobeChat to start using a variety of excellent large language
+  model APIs.
+tags:
+  - OpenRouter
+  - API Key
+  - Web UI
+---
+
+# Using OpenRouter in LobeChat
+
+
+
+[OpenRouter](https://openrouter.ai/) is a service that provides APIs for a variety of excellent large language models, supporting models such as OpenAI (including GPT-3.5/4), Anthropic (Claude2, Instant), LLaMA 2, and PaLM Bison.
+
+This document will guide you on how to use OpenRouter in LobeChat:
+
+
+
+### Step 1: Register and Log in to OpenRouter
+
+- Visit [OpenRouter.ai](https://openrouter.ai/) and create an account
+- You can log in with your Google account or MetaMask wallet
+
+
+
+### Step 2: Create an API Key
+
+- Go to the `Keys` menu or visit [OpenRouter Keys](https://openrouter.ai/keys) directly
+- Click `Create Key` to start the creation process
+- Name your API key in the pop-up dialog, for example, "LobeChat Key"
+- Leave `Credit limit` blank to indicate no amount limit
+
+
+
+- Copy the API key from the pop-up dialog and save it securely
+
+
+
+
+ Please store the key securely, as it will only be shown once. If you lose it accidentally, you
+ will need to create a new key.
+
+
+### Step 3: Top Up Credits
+
+- Go to the `Credit` menu, or visit [OpenRouter Credit](https://openrouter.ai/credits) directly
+- Click `Manage Credits` to top up your credits; you can check model prices at [https://openrouter.ai/models](https://openrouter.ai/models)
+- OpenRouter provides some free models that can be used without topping up
+
+
+
+### Step 4: Configure OpenRouter in LobeChat
+
+- Visit the `Settings` page in LobeChat
+- Find the `OpenRouter` setting under `Language Model`
+- Enable OpenRouter and enter the obtained API key
+
+
+
+- Choose an OpenRouter model for your assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to OpenRouter's relevant
+ pricing policies.
+
+
+
+
+You can now chat in LobeChat using the models provided by OpenRouter.
diff --git a/DigitalHumanWeb/docs/usage/providers/perplexity.mdx b/DigitalHumanWeb/docs/usage/providers/perplexity.mdx
new file mode 100644
index 0000000..d35b9e0
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/perplexity.mdx
@@ -0,0 +1,62 @@
+---
+title: Using Perplexity AI API Key in LobeChat
+description: >-
+ Learn how to integrate and use Perplexity AI in LobeChat to enhance your AI
+ assistant's capabilities.
+tags:
+ - Perplexity AI
+ - API key
+ - Web UI
+---
+
+# Using Perplexity AI in LobeChat
+
+
+
+The Perplexity AI API is now available for everyone to use. This document will guide you on how to use [Perplexity AI](https://www.perplexity.ai/) in LobeChat:
+
+
+
+### Step 1: Obtain Perplexity AI API Key
+
+- Create a [Perplexity AI](https://www.perplexity.ai/) account
+- Obtain your [API key](https://www.perplexity.ai/settings/api)
+
+
+
+### Step 2: Configure Perplexity AI in LobeChat
+
+- Go to the `Settings` interface in LobeChat
+- Find the setting for `Perplexity AI` under `Language Model`
+
+
+
+- Enter the API key you obtained
+- Choose a Perplexity AI model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Perplexity AI's
+ relevant pricing policies.
+
+
+
+
+You can now engage in conversations using the models provided by Perplexity AI in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/perplexity.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/perplexity.zh-CN.mdx
new file mode 100644
index 0000000..7b1b7c3
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/perplexity.zh-CN.mdx
@@ -0,0 +1,59 @@
+---
+title: Using Perplexity AI API Key in LobeChat
+description: Learn how to configure and use Perplexity AI in LobeChat, obtain an API key, and choose a suitable language model to start chatting.
+tags:
+  - Perplexity AI
+  - API key
+  - Web UI
+---
+
+# Using Perplexity AI in LobeChat
+
+
+
+The Perplexity AI API is now available for everyone to use. This document will guide you on how to use [Perplexity AI](https://www.perplexity.ai/) in LobeChat:
+
+
+
+### Step 1: Obtain a Perplexity AI API Key
+
+- Create a [Perplexity AI](https://www.perplexity.ai/) account
+- Obtain your [API key](https://www.perplexity.ai/settings/api)
+
+
+
+### Step 2: Configure Perplexity AI in LobeChat
+
+- Visit the `Settings` page in LobeChat
+- Find the `Perplexity AI` setting under `Language Model`
+
+
+
+- Enter the obtained API key
+- Choose a Perplexity AI model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Perplexity AI's
+ relevant pricing policies.
+
+
+
+
+You can now chat in LobeChat using the models provided by Perplexity AI.
diff --git a/DigitalHumanWeb/docs/usage/providers/qwen.mdx b/DigitalHumanWeb/docs/usage/providers/qwen.mdx
new file mode 100644
index 0000000..6291a6f
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/qwen.mdx
@@ -0,0 +1,92 @@
+---
+title: Using Qwen2 API Key in LobeChat
+description: >-
+ Learn how to integrate and utilize Tongyi Qianwen, a powerful language model
+ by Alibaba Cloud, in LobeChat for various tasks. Follow the steps to activate
+ the service, obtain the API key, and configure Tongyi Qianwen for seamless
+ interaction.
+tags:
+ - Tongyi Qianwen
+ - Alibaba Cloud
+ - DashScope
+ - API key
+ - Web UI
+---
+
+# Using Tongyi Qianwen in LobeChat
+
+
+
+[Tongyi Qianwen](https://tongyi.aliyun.com/) is a large-scale language model independently developed by Alibaba Cloud, with powerful natural language understanding and generation capabilities. It can answer various questions, create text content, express opinions, write code, and play a role in multiple fields.
+
+This document will guide you on how to use Tongyi Qianwen in LobeChat:
+
+
+
+### Step 1: Activate DashScope Model Service
+
+- Visit and log in to Alibaba Cloud's [DashScope](https://dashscope.console.aliyun.com/) platform.
+- If it is your first time, you need to activate the DashScope service.
+- If you have already activated it, you can skip this step.
+
+
+
+### Step 2: Obtain DashScope API Key
+
+- Go to the `API-KEY` interface and create an API key.
+
+
+
+- Copy the API key from the pop-up dialog box and save it securely.
+
+
+
+
+ Please store the key securely as it will only appear once. If you accidentally lose it, you will
+ need to create a new key.
+
+
+### Step 3: Configure Tongyi Qianwen in LobeChat
+
+- Visit the `Settings` interface in LobeChat.
+- Find the setting for `Tongyi Qianwen` under `Language Model`.
+
+
+
+- Open Tongyi Qianwen and enter the obtained API key.
+- Choose a Qwen model for your AI assistant to start the conversation.
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Tongyi Qianwen's
+ relevant pricing policies.
+
+
+
+
+You can now engage in conversations using the models provided by Tongyi Qianwen in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/qwen.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/qwen.zh-CN.mdx
new file mode 100644
index 0000000..09bbb44
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/qwen.zh-CN.mdx
@@ -0,0 +1,87 @@
+---
+title: Using the Tongyi Qianwen (Qwen) API Key in LobeChat
+description: Learn how to configure and use Alibaba Cloud's Tongyi Qianwen model in LobeChat, which offers powerful natural language understanding and generation capabilities.
+tags:
+  - LobeChat
+  - Tongyi Qianwen
+  - DashScope
+  - API key
+  - Web UI
+---
+
+# Using Tongyi Qianwen in LobeChat
+
+
+
+[Tongyi Qianwen](https://tongyi.aliyun.com/) is a large-scale language model independently developed by Alibaba Cloud, with powerful natural language understanding and generation capabilities. It can answer various questions, create written content, express opinions, and write code, playing a role in many fields.
+
+This document will guide you on how to use Tongyi Qianwen in LobeChat:
+
+
+
+### Step 1: Activate the DashScope Model Service
+
+- Visit and log in to Alibaba Cloud's [DashScope](https://dashscope.console.aliyun.com/) platform
+- If this is your first visit, you need to activate the DashScope service
+- If you have already activated it, you can skip this step
+
+
+
+### Step 2: Obtain a DashScope API Key
+
+- Go to the `API-KEY` page and create an API key
+
+
+
+- Copy the API key from the pop-up dialog and save it securely
+
+
+
+
+ Please store the key securely, as it will only be shown once. If you lose it accidentally, you
+ will need to create a new key.
+
+
+### Step 3: Configure Tongyi Qianwen in LobeChat
+
+- Visit the `Settings` page in LobeChat
+- Find the `Tongyi Qianwen` setting under `Language Model`
+
+
+
+- Enable Tongyi Qianwen and enter the obtained API key
+- Choose a Qwen model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Tongyi Qianwen's
+ relevant pricing policies.
+
+
+
+
+You can now chat in LobeChat using the models provided by Tongyi Qianwen.
diff --git a/DigitalHumanWeb/docs/usage/providers/siliconcloud.mdx b/DigitalHumanWeb/docs/usage/providers/siliconcloud.mdx
new file mode 100644
index 0000000..4d5ab5b
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/siliconcloud.mdx
@@ -0,0 +1,47 @@
+---
+title: Using SiliconCloud API Key in LobeChat
+description: >-
+ Learn how to configure and use SiliconCloud's large language models in
+ LobeChat, get your API key, and start chatting.
+tags:
+ - LobeChat
+ - SiliconCloud
+ - API Key
+ - Web UI
+---
+
+# Using SiliconCloud in LobeChat
+
+[SiliconCloud](https://siliconflow.cn/zh-cn/siliconcloud) is a cost-effective large model service provider, offering various services such as text generation and image generation.
+
+This document will guide you on how to use SiliconCloud in LobeChat:
+
+
+
+### Step 1: Get your SiliconCloud API Key
+
+- First, you need to register and log in to [SiliconCloud](https://cloud.siliconflow.cn/auth/login)
+
+Currently, new users can get 14 yuan free credit upon registration
+
+- Go to the `API Key` menu and click `Create New API Key`
+
+- Click copy API key and keep it safe
+
+### Step 2: Configure SiliconCloud in LobeChat
+
+- Visit the `App Settings` interface of LobeChat
+
+- Under `Language Model`, find the `SiliconCloud` settings
+
+- Enable SiliconCloud and enter the obtained API key
+
+- Choose a SiliconCloud model for your assistant and start chatting
+
+
+ You may need to pay the API service provider during use. Please refer to SiliconCloud's relevant fee policy.
+
+
+
+
+Now you can use the models provided by SiliconCloud for conversation in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/siliconcloud.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/siliconcloud.zh-CN.mdx
new file mode 100644
index 0000000..0d4b13c
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/siliconcloud.zh-CN.mdx
@@ -0,0 +1,45 @@
+---
+title: Using SiliconCloud API Key in LobeChat
+description: Learn how to configure and use the large language models provided by SiliconCloud in LobeChat, obtain an API key, and start chatting.
+tags:
+  - LobeChat
+  - SiliconCloud
+  - API Key
+  - Web UI
+---
+
+# Using SiliconCloud in LobeChat
+
+[SiliconCloud](https://siliconflow.cn/zh-cn/siliconcloud) is a cost-effective large model service provider, offering services such as text generation and image generation.
+
+This document will guide you on how to use SiliconCloud in LobeChat:
+
+
+
+### Step 1: Obtain a SiliconCloud API Key
+
+- First, register and log in to [SiliconCloud](https://cloud.siliconflow.cn/auth/login)
+
+Currently, new users receive a free credit of 14 CNY upon registration
+
+- Go to the `API Key` menu and click `Create New API Key`
+
+- Click to copy the API key and save it securely
+
+### Step 2: Configure SiliconCloud in LobeChat
+
+- Visit the `App Settings` page in LobeChat
+
+- Find the `SiliconCloud` setting under `Language Model`
+
+- Enable SiliconCloud and enter the obtained API key
+
+- Choose a SiliconCloud model for your assistant to start the conversation
+
+
+ During usage, you may need to pay the API service provider. Please refer to SiliconCloud's relevant pricing policies.
+
+
+
+
+You can now chat in LobeChat using the models provided by SiliconCloud.
diff --git a/DigitalHumanWeb/docs/usage/providers/stepfun.mdx b/DigitalHumanWeb/docs/usage/providers/stepfun.mdx
new file mode 100644
index 0000000..430daa6
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/stepfun.mdx
@@ -0,0 +1,66 @@
+---
+title: Using Stepfun API Key in LobeChat
+description: >-
+ Learn how to integrate Stepfun AI models into LobeChat for engaging
+ conversations. Obtain Stepfun API key, configure Stepfun in LobeChat settings,
+ and select a model to start chatting.
+tags:
+ - Stepfun
+ - API key
+ - Web UI
+---
+
+# Using Stepfun in LobeChat
+
+
+
+[Stepfun](https://www.stepfun.com/) is a startup focused on the research and development of Artificial General Intelligence (AGI). It has released the Step-1 100-billion-parameter language model, the Step-1V 100-billion-parameter multimodal model, and a preview of the Step-2 trillion-parameter MoE language model.
+
+This document will guide you on how to use Stepfun in LobeChat:
+
+
+
+### Step 1: Obtain Stepfun API Key
+
+- Visit and log in to the [Stepfun Open Platform](https://platform.stepfun.com/)
+- Go to the `API Key` menu, where the system has already created an API key for you
+- Copy the created API key
+
+
+
+### Step 2: Configure Stepfun in LobeChat
+
+- Visit the `Settings` interface in LobeChat
+- Find the setting for Stepfun under `Language Models`
+
+
+
+- Open Stepfun and enter the obtained API key
+- Choose a Stepfun model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider, please refer to Stepfun's relevant
+ pricing policies.
+
+
+
+
+You can now use the models provided by Stepfun to have conversations in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/stepfun.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/stepfun.zh-CN.mdx
new file mode 100644
index 0000000..df6a855
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/stepfun.zh-CN.mdx
@@ -0,0 +1,62 @@
+---
+title: Using the Stepfun API Key in LobeChat
+description: Learn how to configure and use Stepfun's AI models in LobeChat, including obtaining an API key and choosing a model to start chatting.
+tags:
+  - Stepfun
+  - API key
+  - Web UI
+---
+
+# Using Stepfun in LobeChat
+
+
+
+[Stepfun](https://www.stepfun.com/) is a startup focused on the research and development of Artificial General Intelligence (AGI). It has released the Step-1 100-billion-parameter language model, the Step-1V 100-billion-parameter multimodal model, and a preview of the Step-2 trillion-parameter MoE language model.
+
+This document will guide you on how to use Stepfun in LobeChat:
+
+
+
+### Step 1: Obtain a Stepfun API Key
+
+- Visit and log in to the [Stepfun Open Platform](https://platform.stepfun.com/)
+- Go to the `API Key` menu, where the system has already created an API key for you
+- Copy the created API key
+
+
+
+### Step 2: Configure Stepfun in LobeChat
+
+- Visit the `Settings` page in LobeChat
+- Find the `Stepfun` setting under `Language Model`
+
+
+
+- Enable Stepfun and enter the obtained API key
+- Choose a Stepfun model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Stepfun's relevant
+ pricing policies.
+
+
+
+
+You can now chat in LobeChat using the models provided by Stepfun.
diff --git a/DigitalHumanWeb/docs/usage/providers/taichu.mdx b/DigitalHumanWeb/docs/usage/providers/taichu.mdx
new file mode 100644
index 0000000..5ccee72
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/taichu.mdx
@@ -0,0 +1,64 @@
+---
+title: Using Taichu API Key in LobeChat
+description: >-
+ Learn how to integrate Taichu AI into LobeChat for enhanced conversational
+ experiences. Follow the steps to configure Taichu AI and start using its
+ models.
+tags:
+ - LobeChat
+ - Taichu
+ - API Key
+ - Web UI
+---
+
+# Using Taichu in LobeChat
+
+
+
+This article will guide you on how to use Taichu in LobeChat:
+
+
+
+### Step 1: Obtain Taichu API Key
+
+- Create an account on [Taichu](https://ai-maas.wair.ac.cn/)
+- Create and obtain an [API key](https://ai-maas.wair.ac.cn/#/settlement/api/key)
+
+
+
+### Step 2: Configure Taichu in LobeChat
+
+- Go to the `Settings` interface in LobeChat
+- Find the setting for `Taichu` under `Language Model`
+
+
+
+- Enter the obtained API key
+- Choose a Taichu model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider, please refer to Taichu's relevant
+ pricing policies.
+
+
+
+
+Now you can start conversing with the models provided by Taichu in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/taichu.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/taichu.zh-CN.mdx
new file mode 100644
index 0000000..6e87ce0
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/taichu.zh-CN.mdx
@@ -0,0 +1,61 @@
+---
+title: Using the Zidong Taichu API Key in LobeChat
+description: Learn how to configure and use the Zidong Taichu API Key in LobeChat to start chatting.
+tags:
+  - LobeChat
+  - Taichu
+  - Zidong Taichu
+  - API Key
+  - Web UI
+---
+
+# Using Zidong Taichu in LobeChat
+
+
+
+This article will guide you on how to use Zidong Taichu in LobeChat:
+
+
+
+### Step 1: Obtain a Zidong Taichu API Key
+
+- Create a [Zidong Taichu](https://ai-maas.wair.ac.cn/) account
+- Create and obtain an [API key](https://ai-maas.wair.ac.cn/#/settlement/api/key)
+
+
+
+### Step 2: Configure Zidong Taichu in LobeChat
+
+- Visit the `Settings` page in LobeChat
+- Find the `Zidong Taichu` setting under `Language Model`
+
+
+
+- Enter the obtained API key
+- Choose a Zidong Taichu model for your AI assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Zidong Taichu's
+ relevant pricing policies.
+
+
+
+
+You can now chat in LobeChat using the models provided by Zidong Taichu.
diff --git a/DigitalHumanWeb/docs/usage/providers/togetherai.mdx b/DigitalHumanWeb/docs/usage/providers/togetherai.mdx
new file mode 100644
index 0000000..fd158c1
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/togetherai.mdx
@@ -0,0 +1,72 @@
+---
+title: Using Together AI API Key in LobeChat
+description: >-
+ Learn how to integrate Together AI into LobeChat, obtain the API key,
+ configure settings, and start conversations with AI models.
+tags:
+ - Together AI
+ - API key
+ - Web UI
+---
+
+# Using Together AI in LobeChat
+
+
+
+[together.ai](https://www.together.ai/) is a platform focused on the field of Artificial Intelligence Generated Content (AIGC), founded in June 2022. It is dedicated to building a cloud platform for running, training, and fine-tuning open-source models, providing scalable computing power at prices lower than mainstream vendors.
+
+This document will guide you on how to use Together AI in LobeChat:
+
+
+
+### Step 1: Obtain the API Key for Together AI
+
+- Visit and log in to [Together AI API](https://api.together.ai/)
+- Upon initial login, the system will automatically create an API key for you and provide a $5.0 credit
+
+
+
+- If you haven't saved it, you can also view the API key at any time in the `API Key` interface under `Settings`
+
+
+
+### Step 2: Configure Together AI in LobeChat
+
+- Visit the `Settings` interface in LobeChat
+- Find the setting for `together.ai` under `Language Model`
+
+
+
+- Open together.ai and enter the obtained API key
+- Choose a Together AI model for your assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider, please refer to Together AI's pricing
+ policy.
+
+
+
+
+You can now engage in conversations using the models provided by Together AI in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/togetherai.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/togetherai.zh-CN.mdx
new file mode 100644
index 0000000..7e16495
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/togetherai.zh-CN.mdx
@@ -0,0 +1,70 @@
+---
+title: Using the Together AI API Key in LobeChat
+description: Learn how to configure and use the Together AI API Key in LobeChat to start chatting.
+tags:
+  - LobeChat
+  - Together AI
+  - API Key
+  - Web UI
+---
+
+# Using Together AI in LobeChat
+
+
+
+[together.ai](https://www.together.ai/) is a platform focused on the field of AI-generated content (AIGC), founded in June 2022. It is dedicated to building a cloud platform for running, training, and fine-tuning open-source models, providing scalable computing power at prices lower than mainstream vendors.
+
+This document will guide you on how to use Together AI in LobeChat:
+
+
+
+### Step 1: Obtain a Together AI API Key
+
+- Visit and log in to [Together AI API](https://api.together.ai/)
+- On first login, the system automatically creates an API key for you and grants a $5.0 credit
+
+
+
+- If you did not save it, you can also view the API key at any time later on the `API Key` page under `Settings`
+
+
+
+### Step 2: Configure Together AI in LobeChat
+
+- Visit the `Settings` page in LobeChat
+- Find the `together.ai` setting under `Language Model`
+
+
+
+- Enable together.ai and enter the obtained API key
+- Choose a Together AI model for your assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Together AI's pricing
+ policy.
+
+
+
+
+You can now chat in LobeChat using the models provided by Together AI.
diff --git a/DigitalHumanWeb/docs/usage/providers/zhipu.mdx b/DigitalHumanWeb/docs/usage/providers/zhipu.mdx
new file mode 100644
index 0000000..048cc6c
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/zhipu.mdx
@@ -0,0 +1,67 @@
+---
+title: Using Zhipu ChatGLM API Key in LobeChat
+description: >-
+ Learn how to integrate and utilize Zhipu AI models in LobeChat for enhanced
+ conversational experiences. Obtain the API key, configure settings, and start
+ engaging with cognitive intelligence.
+tags:
+ - Zhipu AI
+ - ChatGLM
+ - API Key
+ - Web UI
+---
+
+# Using Zhipu ChatGLM in LobeChat
+
+
+
+[Zhipu AI](https://www.zhipuai.cn/) is a high-tech company originating from the Department of Computer Science at Tsinghua University. Established in 2019, the company focuses on natural language processing, machine learning, and big data analysis, dedicated to expanding the boundaries of artificial intelligence technology in the field of cognitive intelligence.
+
+This document will guide you on how to use Zhipu AI in LobeChat:
+
+
+
+### Step 1: Obtain the API Key for Zhipu AI
+
+- Visit and log in to the [Zhipu AI Open Platform](https://open.bigmodel.cn/)
+- Upon initial login, the system will automatically create an API key for you and gift you a resource package of 25M Tokens
+- Navigate to the `API Key` section at the top to view your API key
+
+
+
+### Step 2: Configure Zhipu AI in LobeChat
+
+- Visit the `Settings` interface in LobeChat
+- Under `Language Model`, locate the settings for Zhipu AI
+
+
+
+- Open Zhipu AI and enter the obtained API key
+- Choose a Zhipu AI model for your assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider, please refer to Zhipu AI's pricing
+ policy.
+
+
+
+
+You can now engage in conversations using the models provided by Zhipu AI in LobeChat.
diff --git a/DigitalHumanWeb/docs/usage/providers/zhipu.zh-CN.mdx b/DigitalHumanWeb/docs/usage/providers/zhipu.zh-CN.mdx
new file mode 100644
index 0000000..811b426
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/providers/zhipu.zh-CN.mdx
@@ -0,0 +1,61 @@
+---
+title: Using the Zhipu ChatGLM API Key in LobeChat
+description: Learn how to configure and use the Zhipu AI API Key in LobeChat and start chatting with the models provided by Zhipu AI.
+tags:
+  - Zhipu AI
+  - ChatGLM
+  - API Key
+  - Web UI
+---
+
+# Using Zhipu ChatGLM in LobeChat
+
+
+
+[Zhipu AI](https://www.zhipuai.cn/) is a high-tech company built on technology from the Department of Computer Science at Tsinghua University. Founded in 2019, it focuses on natural language processing, machine learning, and big data analysis, dedicated to expanding the boundaries of artificial intelligence in the field of cognitive intelligence.
+
+This document will guide you on how to use Zhipu AI in LobeChat:
+
+
+
+### Step 1: Obtain a Zhipu AI API Key
+
+- Visit and log in to the [Zhipu AI Open Platform](https://open.bigmodel.cn/)
+- On first login, the system automatically creates an API key for you and grants a resource package of 25M tokens
+- Go to `API Key` at the top to view your API key
+
+
+
+### Step 2: Configure Zhipu AI in LobeChat
+
+- Visit the `Settings` page in LobeChat
+- Find the `Zhipu AI` setting under `Language Model`
+
+
+
+- Enable Zhipu AI and enter the obtained API key
+- Choose a Zhipu AI model for your assistant to start the conversation
+
+
+
+
+ During usage, you may need to pay the API service provider. Please refer to Zhipu AI's pricing
+ policy.
+
+
+
+
+You can now chat in LobeChat using the models provided by Zhipu AI.
diff --git a/DigitalHumanWeb/docs/usage/start.mdx b/DigitalHumanWeb/docs/usage/start.mdx
new file mode 100644
index 0000000..856a6a3
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/start.mdx
@@ -0,0 +1,48 @@
+---
+title: Get started with LobeChat
+description: >-
+ Explore the exciting features in LobeChat, including Vision Model, TTS & STT,
+ Local LLMs, and Multi AI Providers. Discover more about Agent Market, Plugin
+ System, and Personalization.
+tags:
+ - Feature Overview
+ - Vision Model
+ - TTS & STT
+ - Local LLMs
+ - Multi AI Providers
+ - Agent Market
+ - Plugin System
+---
+
+# ✨ Feature Overview
+
+
+
+
+
+## Experience Features
+
+
diff --git a/DigitalHumanWeb/docs/usage/start.zh-CN.mdx b/DigitalHumanWeb/docs/usage/start.zh-CN.mdx
new file mode 100644
index 0000000..421df1a
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/start.zh-CN.mdx
@@ -0,0 +1,40 @@
+---
+title: Get Started with LobeChat
+description: Learn about LobeChat's features, including visual recognition, voice conversation, and multiple AI providers, and explore the assistant market, local large language models, and the plugin system.
+tags:
+  - LobeChat
+  - Feature Overview
+  - Visual Recognition
+  - Voice Conversation
+  - AI Providers
+  - Assistant Market
+  - Local LLMs
+  - Plugin System
+---
+
+# ✨ LobeChat Feature Overview
+
+
+
+
+
+## Experience Features
+
+
diff --git a/DigitalHumanWeb/docs/usage/tools-calling.mdx b/DigitalHumanWeb/docs/usage/tools-calling.mdx
new file mode 100644
index 0000000..490bb8b
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling.mdx
@@ -0,0 +1,3 @@
+# Tools Calling
+
+TODO
diff --git a/DigitalHumanWeb/docs/usage/tools-calling.zh-CN.mdx b/DigitalHumanWeb/docs/usage/tools-calling.zh-CN.mdx
new file mode 100644
index 0000000..8b37f6c
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling.zh-CN.mdx
@@ -0,0 +1,256 @@
+---
+title: LLM Tools Calling Benchmark
+description: >-
+  Benchmarking mainstream LLMs that support Tools Calling, based on LobeChat,
+  with the results presented objectively
+tags:
+  - Tools Calling
+  - Benchmark
+  - Function Calling Benchmark
+  - Tool Invocation
+  - LobeChat Plugins
+---
+
+# LLM Tools Calling Benchmark
+
+Tools Calling is an advanced capability of large language models. By passing a list of tools in an API request, you can let the model intelligently choose which tool to use and output the JSON arguments of the tool call in its response.
+
+
+ If you are new to Tools Calling, see the article [Function Call: The Plugin Cornerstone of Chat
+ Applications and the Dawn of a Shift in Interaction Technology](https://lobehub.com/zh/blog/openai-function-call).
+
+
+As more and more large language models in the community support Tools Calling, and thanks to LobeChat's Agent Runtime architecture, we have implemented Tools Calling for almost all mainstream large language models (OpenAI, Claude, Gemini, and so on).
+
+LobeChat's plugins are built on the models' Tools Calling capability, so a model's own Tools Calling ability determines whether plugin calls work correctly. As an application layer, we have run fairly thorough tests of Tools Calling across models to help our users understand current model capabilities and make better choices.
+
+## Benchmark Tasks
+
+Starting from real user scenarios, we built two groups of test tasks: the first is a simple call instruction (weather query), and the second is a complex call instruction (text-to-image). The system descriptions of the two groups are as follows:
+
+
+
+
+```md
+## Tools
+
+You can use these tools below:
+
+### Realtime Weather
+
+Get realtime weather information
+
+The APIs you can use:
+
+#### `realtime-weather____fetchCurrentWeather`
+
+获取当前天气情况
+```
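
At the API level, the weather tool described in the prompt above corresponds to an entry in the request's tools list. The sketch below shows such an entry in OpenAI's function-calling format; the `city` parameter is an illustrative assumption, as the actual plugin schema is not shown here.

```python
import json

# Sketch of how the weather plugin above could map to an OpenAI-style
# `tools` array. The function name matches the prompt; the parameter
# schema is a hypothetical example.
tools = [
    {
        "type": "function",
        "function": {
            "name": "realtime-weather____fetchCurrentWeather",
            "description": "Get current weather conditions",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

print(json.dumps(tools))
```

A model that supports tools calling would, given this list, respond with the function name plus JSON arguments instead of plain text, which is exactly what these benchmark tasks measure.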
+
+
+
+
+
+```md
+## Tools
+
+You can use these tools below:
+
+### DALL·E 3
+
+Whenever a description of an image is given, use lobe-image-designer to create the images and then summarize the prompts used to generate the images in plain text. If the user does not ask for a specific number of images, default to creating four captions to send to lobe-image-designer that are written to be as diverse as possible.
+
+All captions sent to lobe-image-designer must abide by the following policies:
+
+1. If the description is not in English, then translate it.
+2. Do not create more than 4 images, even if the user requests more.
+3. Don't create images of politicians or other public figures. Recommend other ideas instead.
+4. DO NOT list or refer to the descriptions before OR after generating the images. They should ONLY ever be written out ONCE, in the `prompts` field of the request. You do not need to ask for permission to generate, just do it!
+5. Always mention the image type (photo, oil painting, watercolor painting, illustration, cartoon, drawing, vector, render, etc.) at the beginning of the caption. Unless the caption suggests otherwise, make at least 1--2 of the 4 images photos.
+6. Diversify depictions of ALL images with people to include DESCENT and GENDER for EACH person using direct terms. Adjust only human descriptions.
+
+- EXPLICITLY specify these attributes, not abstractly reference them. The attributes should be specified in a minimal way and should directly describe their physical form.
+- Your choices should be grounded in reality. For example, all of a given OCCUPATION should not be the same gender or race. Additionally, focus on creating diverse, inclusive, and exploratory scenes via the properties you choose during rewrites. Make choices that may be insightful or unique sometimes.
+- Use "various" or "diverse" ONLY IF the description refers to groups of more than 3 people. Do not change the number of people requested in the original description.
+- Don't alter memes, fictional character origins, or unseen people. Maintain the original prompt's intent and prioritize quality.
+- Do not create any imagery that would be offensive.
+
+8. Silently modify descriptions that include names or hints or references of specific people or celebrities by carefully selecting a few minimal modifications to substitute references to the people with generic descriptions that don't divulge any information about their identities, except for their genders and physiques. Do this EVEN WHEN the instructions ask for the prompt to not be changed. Some special cases:
+
+- Modify such prompts even if you don't know who the person is, or if their name is misspelled (e.g. "Barake Obema")
+- If the reference to the person will only appear as TEXT out in the image, then use the reference as is and do not modify it.
+- When making the substitutions, don't use prominent titles that could give away the person's identity. E.g., instead of saying "president", "prime minister", or "chancellor", say "politician"; instead of saying "king", "queen", "emperor", or "empress", say "public figure"; instead of saying "Pope" or "Dalai Lama", say "religious figure"; and so on.
+- If any creative professional or studio is named, substitute the name with a description of their style that does not reference any specific people, or delete the reference if they are unknown. DO NOT refer to the artist or studio's style.
+
+The prompt must intricately describe every part of the image in concrete, objective detail. THINK about what the end goal of the description is, and extrapolate that to what would make satisfying images. All descriptions sent to lobe-image-designer should be a paragraph of text that is extremely descriptive and detailed. Each should be more than 3 sentences long.
+
+The APIs you can use:
+
+#### `lobe-image-designer____text2image____builtin`
+
+Create images from a text-only prompt.
+```
+
+
+
+
+
+As shown above, the system role for the simple call instruction is relatively short, while the system role for the complex call instruction is much longer. These two instruction groups of different complexity distinguish well between models' abilities to follow system instructions:
+
+- **The weather query tests a model's basic Tools Calling capability and checks for "false advertising".** In our actual tests, some models do claim Tools Calling support but are completely unusable;
+- **Text-to-image tests the upper bound of a model's instruction-following capability.** For example, a basic model (such as GPT-3.5) may only generate a prompt for 1 image, while an advanced model (such as GPT-4o) can generate prompts for 1 to 4 images.
+
+### Simple Call Instruction: Weather Query
+
+The weather query is a classic Tools Calling example.
+
+The weather query plugin is a simple plugin we built ourselves. Its tool definition is as follows:
+
+```json
+{
+ "function": {
+ "description": "获取当前天气情况",
+ "name": "realtime-weather____fetchCurrentWeather",
+ "parameters": {
+ "properties": {
+ "city": {
+ "description": "城市名称",
+ "type": "string"
+ }
+ },
+ "required": ["city"],
+ "type": "object"
+ }
+ },
+ "type": "function"
+}
+```
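For illustration, a definition like this travels in the `tools` array of an OpenAI-compatible chat completion request body. A minimal TypeScript sketch (the model name is a placeholder; this is not LobeChat's actual runtime code):

```typescript
// Sketch: embedding the weather tool definition in an OpenAI-compatible
// chat completion request body.
const weatherTool = {
  type: "function",
  function: {
    name: "realtime-weather____fetchCurrentWeather",
    description: "获取当前天气情况",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "城市名称" },
      },
      required: ["city"],
    },
  },
};

const requestBody = {
  model: "some-model", // placeholder model name
  messages: [{ role: "user", content: "告诉我杭州的天气" }],
  tools: [weatherTool],
};

console.log(requestBody.tools[0].function.name);
```

The runtime then inspects the response for tool calls instead of (or alongside) plain text.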
+
+For this tool, our test group contains three instructions:
+
+| Instruction | Content | Basic Tools Calling | Parallel Calling | Compound Instruction Following |
+| --- | --- | --- | --- | --- |
+| Instruction ① | Tell me the weather in Hangzhou and Beijing, but first answer "OK" | 🟢 | 🟢 | 🟢 |
+| Instruction ② | Tell me the weather in Hangzhou and Beijing | 🟢 | 🟢 | - |
+| Instruction ③ | Tell me the weather in Hangzhou | 🟢 | - | - |
+
+The three instructions above decrease in complexity, and we use them to test a model's handling of simple instructions.
+
+- Instruction ① covers three capability items: basic Tools Calling, parallel calling, and compound instruction following.
+- Instruction ② covers two capability items: basic Tools Calling and parallel calling.
+- Instruction ③ covers only basic Tools Calling.
+
+
+ Ordering instructions ①, ②, and ③ by decreasing difficulty lowers testing costs: once a model
+ passes instruction ①, it will necessarily pass instructions ② and ③, so they need not be tested.
+
+
+Detailed descriptions of the tested capability items:
+
+
+
+
+ In everyday use, tool calls are usually mixed with ordinary text generation in one answer. A classic example is the Code Interpreter plugin: ChatGPT typically first replies with its approach, then calls the Code Interpreter plugin to generate the code.
+
+ In this situation, the model must correctly recognize the user's intent and then call the corresponding tool.
+
+ Instruction ①, "Tell me the weather in Hangzhou and Beijing, but first answer OK", is therefore a compound instruction-following example: the first half expects the model to call the weather query tool, and the second half expects it to answer "OK". Ideally the model answers "OK" first and then calls the weather tool.
+
+
+
+
+ Parallel function calling means a model can call several tools, or the same tool several times, in one response. In conversation this greatly reduces user waiting time and improves the experience.
+
+ Parallel function calling was first introduced by OpenAI in November 2023. Not many models support it yet, so it counts as an advanced Tools Calling capability.
+
+ Instruction ②, "Tell me the weather in Hangzhou and Beijing", expects a parallel call: ideally, a single model response contains two tool calls.
+
+
+
+
+ Basic tool calling needs no further elaboration; it is the foundational Tools Calling capability.
+
+ Instruction ③, "Tell me the weather in Hangzhou", is the most basic tool-call example.
+
+
+
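The parallel-call expectation for Instruction ② can be made concrete: in the OpenAI-style response format, a single assistant message carries a `tool_calls` array, and a model with parallel support returns both weather queries in one message. A sketch with hypothetical data (illustrative shapes, not a captured response):

```typescript
// Sketch: extracting parallel tool calls from an OpenAI-style assistant
// message. Instruction ② should ideally yield two calls in one message.
interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

interface AssistantMessage {
  role: "assistant";
  content: string | null;
  tool_calls?: ToolCall[];
}

// A hypothetical parallel-call message for Instruction ②.
const message: AssistantMessage = {
  role: "assistant",
  content: null,
  tool_calls: [
    { id: "call_1", type: "function", function: { name: "realtime-weather____fetchCurrentWeather", arguments: '{"city":"杭州"}' } },
    { id: "call_2", type: "function", function: { name: "realtime-weather____fetchCurrentWeather", arguments: '{"city":"北京"}' } },
  ],
};

// Each entry can be executed independently; a model without parallel
// support instead needs one round trip per city.
const cities = (message.tool_calls ?? []).map(
  (call) => JSON.parse(call.function.arguments).city,
);
console.log(cities); // → [ '杭州', '北京' ]
```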
+
+### Complex Call Instruction: Text-to-Image
+
+The text-to-image Tools Calling setup essentially reuses the ChatGPT Plus instruction. Its relatively high complexity tests a model's ability to follow complex instructions. The tool definition is as follows:
+
+```json
+{
+ "function": {
+ "description": "Create images from a text-only prompt.",
+ "name": "lobe-image-designer____text2image____builtin",
+ "parameters": {
+ "properties": {
+ "prompts": {
+ "description": "The user's original image description, potentially modified to abide by the lobe-image-designer policies. If the user does not suggest a number of captions to create, create four of them. If creating multiple captions, make them as diverse as possible. If the user requested modifications to previous images, the captions should not simply be longer, but rather it should be refactored to integrate the suggestions into each of the captions. Generate no more than 4 images, even if the user requests more.",
+ "items": {
+ "type": "string"
+ },
+ "maxItems": 4,
+ "minItems": 1,
+ "type": "array"
+ },
+ "quality": {
+ "default": "standard",
+ "description": "The quality of the image that will be generated. hd creates images with finer details and greater consistency across the image.",
+ "enum": ["standard", "hd"],
+ "type": "string"
+ },
+ "seeds": {
+ "description": "A list of seeds to use for each prompt. If the user asks to modify a previous image, populate this field with the seed used to generate that image from the image lobe-image-designer metadata.",
+ "items": {
+ "type": "integer"
+ },
+ "type": "array"
+ },
+ "size": {
+ "default": "1024x1024",
+ "description": "The resolution of the requested image, which can be wide, square, or tall. Use 1024x1024 (square) as the default unless the prompt suggests a wide image, 1792x1024, or a full-body portrait, in which case 1024x1792 (tall) should be used instead. Always include this parameter in the request.",
+ "enum": ["1792x1024", "1024x1024", "1024x1792"],
+ "type": "string"
+ },
+ "style": {
+ "default": "vivid",
+ "description": "The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images.",
+ "enum": ["vivid", "natural"],
+ "type": "string"
+ }
+ },
+ "required": ["prompts"],
+ "type": "object"
+ }
+ },
+ "type": "function"
+}
+```
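A recurring failure mode in the per-model results below is a `prompts` value emitted as a string where this schema requires an array. A small illustrative check (not LobeChat's actual validation code) makes that failure concrete:

```typescript
// Sketch: checking the `prompts` argument against the schema above.
// Models that emit a string here (instead of an array) break the plugin call.
function validatePrompts(args: unknown): string[] {
  const { prompts } = args as { prompts?: unknown };
  if (!Array.isArray(prompts)) {
    throw new TypeError("`prompts` must be an array of 1-4 strings");
  }
  if (prompts.length < 1 || prompts.length > 4) {
    throw new TypeError("`prompts` must contain between 1 and 4 items");
  }
  return prompts as string[];
}

// A compliant call, and the non-compliant shape seen with Claude 3:
const ok = validatePrompts({ prompts: ["A photo of a small dog on grass."] });

let failed = false;
try {
  validatePrompts({ prompts: "A photo of a small dog on grass." });
} catch {
  failed = true;
}
console.log(ok.length, failed); // → 1 true
```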
+
+For this tool, our test group contains two instructions:
+
+| Instruction | Content | Streaming | Complex Tools Calling | Parallel Calling | Compound Instruction Following |
+| --- | --- | --- | --- | --- | --- |
+| Instruction ① | I want 3 paintings: the first of a puppy in the style of Da Vinci, the second of wild geese in the style of Picasso, and the last of a lion in the style of Monet. Produce 2 prompts for each painting. Explain your concept first, then generate the images. | 🟢 | 🟢 | 🟢 | 🟢 |
+| Instruction ② | Draw a puppy | 🟢 | 🟢 | - | - |
+
+In addition, because generating the text-to-image prompts takes a relatively long time, this instruction group also clearly shows whether a model's API supports streaming Tools Calling.
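Streaming Tools Calling means the call's arguments arrive as incremental fragments that the client stitches back together. A minimal sketch of that accumulation using OpenAI-style `tool_calls` deltas (the chunk data is illustrative, not a captured trace):

```typescript
// Sketch: reassembling a streamed tool call from argument deltas.
interface ToolCallDelta {
  index: number;
  function?: { name?: string; arguments?: string };
}

function accumulate(deltas: ToolCallDelta[]): { name: string; arguments: string }[] {
  const calls: { name: string; arguments: string }[] = [];
  for (const delta of deltas) {
    // Create the slot on first sight of this call index, then append fragments.
    const call = (calls[delta.index] ??= { name: "", arguments: "" });
    call.name += delta.function?.name ?? "";
    call.arguments += delta.function?.arguments ?? "";
  }
  return calls;
}

// Illustrative chunks for a streamed weather query.
const chunks: ToolCallDelta[] = [
  { index: 0, function: { name: "realtime-weather____fetchCurrentWeather" } },
  { index: 0, function: { arguments: '{"city":' } },
  { index: 0, function: { arguments: '"杭州"}' } },
];

const [call] = accumulate(chunks);
console.log(call.arguments); // → {"city":"杭州"}
```

The arguments string only becomes parseable JSON once the stream finishes, which is why long prompts stall the UI until the fragment containing them completes.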
+
+## Benchmark Results
+
+Click through for each model's detailed benchmark:
+
+
+
+
+
+
+
+
+
+### Summary
+
+TODO
diff --git a/DigitalHumanWeb/docs/usage/tools-calling/anthropic.mdx b/DigitalHumanWeb/docs/usage/tools-calling/anthropic.mdx
new file mode 100644
index 0000000..a6ed471
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling/anthropic.mdx
@@ -0,0 +1,185 @@
+---
+title: Anthropic Claude Series Tools Calling Benchmark
+description: >-
+  Testing the Tools Calling (Function Calling) capability of the Anthropic
+  Claude series models (Claude 3.5 Sonnet / Claude 3 Opus / Claude 3 Haiku)
+  with LobeChat, and presenting the results
+tags:
+ - Tools Calling
+ - Benchmark
+ - Function Calling Benchmark
+ - Tool Use
+ - Plugins
+---
+
+# Anthropic Claude Series Tools Calling
+
+Overview of Anthropic Claude Series model Tools Calling capabilities:
+
+| Model | Support Tools Calling | Stream | Parallel | Simple Instruction Score | Complex Instruction |
+| --- | --- | --- | --- | --- | --- |
+| Claude 3.5 Sonnet | ✅ | ✅ | ✅ | 🌟🌟🌟 | 🌟🌟 |
+| Claude 3 Opus | ✅ | ✅ | ❌ | 🌟 | ⛔️ |
+| Claude 3 Sonnet | ✅ | ✅ | ❌ | 🌟🌟 | ⛔️ |
+| Claude 3 Haiku | ✅ | ✅ | ❌ | 🌟🌟 | ⛔️ |
+
+## Claude 3.5 Sonnet
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+
+
+
+ Tools Calling Raw Output:
+
+```yml
+
+```
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+From the above video:
+
+1. Sonnet 3.5 supports Stream Tools Calling and Parallel Tools Calling;
+2. During streaming Tools Calling, generating long sentences causes a stall (see the roughly 6 s gap between `[chunk 40]` and `[chunk 41]` in the raw Tools Calling output), so there is a relatively long wait at the start of a Tools Calling run.
+
+
+
+
+ Tools Calling Raw Output:
+
+```yml
+
+```
+
+
+
+## Claude 3 Opus
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+From the above video:
+
+1. At the start of Tools Calling, Claude 3 Opus outputs a `<thinking>` tag whose content is of little help to users and consumes extra tokens;
+2. Opus triggers Tools Calling twice, indicating that it does not support Parallel Tools Calling;
+3. The raw output of Tools Calling shows that Opus also supports Stream Tools Calling.
+
+
+
+
+ Tools Calling Raw Output:
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+From the above video:
+
+1. As with the simple task, Opus always outputs a `<thinking>` tag in tool calls, which significantly hurts the user experience;
+2. Opus outputs the prompts field as a string instead of an array, causing an error and preventing the plugin from being called correctly.
+
+
+
+
+ Tools Calling Raw Output:
+
+
+
+## Claude 3 Sonnet
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+From the above video, it can be seen that Claude 3 Sonnet triggers Tools Calling twice, indicating that it does not support Parallel Tools Calling.
+
+
+
+
+ Tools Calling Raw Output:
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+From the above video, it can be seen that Sonnet 3 fails on the complex instruction call. The error is that `prompts` was expected to be an array but was generated as a string.
+
+
+
+
+ Tools Calling Raw Output:
+
+
+
+## Claude 3 Haiku
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+From the above video:
+
+1. Claude 3 Haiku triggers Tools Calling twice, indicating that it also does not support Parallel Tools Calling;
+2. Haiku did not reply with "OK" either; it called the tool directly.
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+From the above video, it can be seen that Haiku 3 also fails on the complex instruction call. The error is the same: `prompts` was generated as a string instead of an array.
+
+
+
+
+ Tools Calling Raw Output:
+
+
diff --git a/DigitalHumanWeb/docs/usage/tools-calling/anthropic.zh-CN.mdx b/DigitalHumanWeb/docs/usage/tools-calling/anthropic.zh-CN.mdx
new file mode 100644
index 0000000..95db4ff
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling/anthropic.zh-CN.mdx
@@ -0,0 +1,185 @@
+---
+title: Anthropic Claude Series Tools Calling Benchmark
+description: >-
+  Testing the Tools Calling (Function Calling) capability of the Anthropic
+  Claude series models (Claude 3.5 Sonnet / Claude 3 Opus / Claude 3 Haiku)
+  with LobeChat, and presenting the results
+tags:
+ - Tools Calling
+ - Benchmark
+ - Function Calling Benchmark
+ - Tool Use
+ - Plugins
+---
+
+# Anthropic Claude Series Tools Calling
+
+Overview of the Anthropic Claude series models' Tools Calling capabilities:
+
+| Model | Support Tools Calling | Stream | Parallel | Simple Instruction Score | Complex Instruction |
+| --- | --- | --- | --- | --- | --- |
+| Claude 3.5 Sonnet | ✅ | ✅ | ✅ | 🌟🌟🌟 | 🌟🌟 |
+| Claude 3 Opus | ✅ | ✅ | ❌ | 🌟 | ⛔️ |
+| Claude 3 Sonnet | ✅ | ✅ | ❌ | 🌟🌟 | ⛔️ |
+| Claude 3 Haiku | ✅ | ✅ | ❌ | 🌟🌟 | ⛔️ |
+
+## Claude 3.5 Sonnet
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+
+
+
+ Tools Calling Raw Output:
+
+```yml
+
+```
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+From the video above:
+
+1. Sonnet 3.5 supports streaming Tools Calling and Parallel Tools Calling;
+2. During streaming Tools Calling, generating long sentences causes a stall (see the roughly 6 s gap between `[chunk 40]` and `[chunk 41]` in the raw Tools Calling output), so there is a relatively long wait at the start of a Tools Calling run.
+
+
+
+
+ Tools Calling Raw Output:
+
+```yml
+
+```
+
+
+
+## Claude 3 Opus
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+From the video above:
+
+1. At the start of Tools Calling, Claude 3 Opus outputs a `<thinking>` tag whose content is of little help to users and consumes extra tokens;
+2. Opus triggers Tools Calling twice, indicating that it does not support Parallel Tools Calling;
+3. The raw Tools Calling output shows that Opus also supports streaming Tools Calling.
+
+
+
+
+ Tools Calling Raw Output:
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+From the video above:
+
+1. As with the simple task, Opus always outputs a `<thinking>` tag in tool calls, which significantly hurts the user experience;
+2. Opus outputs the `prompts` field as a string instead of an array, causing an error that prevents the plugin from being called correctly.
+
+
+
+
+ Tools Calling Raw Output:
+
+
+
+## Claude 3 Sonnet
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+From the video above, Claude 3 Sonnet triggers Tools Calling twice, indicating that it does not support Parallel Tools Calling.
+
+
+
+
+ Tools Calling Raw Output:
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+From the video above, Sonnet 3 fails on the complex instruction call. The error is that `prompts` was expected to be an array but was generated as a string.
+
+
+
+
+ Tools Calling Raw Output:
+
+
+
+## Claude 3 Haiku
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+From the video above:
+
+1. Claude 3 Haiku triggers Tools Calling twice, indicating that it also does not support Parallel Tools Calling;
+2. Haiku did not reply with "OK" either; it called the tool directly.
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+From the video above, Haiku 3 also fails on the complex instruction call. The error is the same: `prompts` was generated as a string instead of an array.
+
+
+
+
+ Tools Calling Raw Output:
+
+
diff --git a/DigitalHumanWeb/docs/usage/tools-calling/google.mdx b/DigitalHumanWeb/docs/usage/tools-calling/google.mdx
new file mode 100644
index 0000000..2df7326
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling/google.mdx
@@ -0,0 +1,116 @@
+---
+title: Google Gemini Series Tools Calling Benchmark
+description: >-
+  Testing the Tools Calling (Function Calling) capability of the Google Gemini
+  series models (Gemini 1.5 Pro / Gemini 1.5 Flash) with LobeChat, and
+  presenting the results
+tags:
+ - Tools Calling
+ - Benchmark
+ - Function Calling Benchmark
+ - Tool Use
+ - Plugins
+---
+
+# Google Gemini Series Tool Calling
+
+Overview of Google Gemini series model Tools Calling capabilities:
+
+| Model | Tools Calling Support | Streaming | Parallel | Simple Instruction Score | Complex Instruction |
+| --- | --- | --- | --- | --- | --- |
+| Gemini 1.5 Pro | ✅ | ❌ | ✅ | ⛔ | ⛔ |
+| Gemini 1.5 Flash | ❌ | ❌ | ❌ | ⛔ | ⛔ |
+
+
+ Based on our actual tests, we strongly recommend not enabling plugins for Gemini because as of
+ July 7, 2024, its Tools Calling capability is extremely poor.
+
+
+## Gemini 1.5 Pro
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+In Gemini's JSON output, the `name` is incorrect, so LobeChat cannot tell which plugin was called. (In the input, the weather plugin's name is `realtime-weather____fetchCurrentWeather`, while Gemini returns `weather____fetchCurrentWeather`.)
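The consequence of the truncated name can be shown with a plain lookup: the client registers handlers under the full tool names, so the name Gemini returns resolves to nothing (a sketch for illustration, not LobeChat's actual routing code):

```typescript
// Sketch: why Gemini's truncated function name breaks plugin resolution.
const registeredTools = new Map<string, (args: { city: string }) => string>([
  ["realtime-weather____fetchCurrentWeather", ({ city }) => `weather for ${city}`],
]);

// Name registered by the client vs. name returned by Gemini in this test:
const returnedName = "weather____fetchCurrentWeather";

const handler = registeredTools.get(returnedName);
console.log(handler === undefined); // → true: the call cannot be routed
```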
+
+
+
+
+ Original Tools Calling Output:
+
+```yml
+[stream start] 2024-7-7 17:53:25.647
+[chunk 0] 2024-7-7 17:53:25.654
+{"candidates":[{"content":{"parts":[{"text":"好的"}],"role":"model"},"finishReason":"STOP","index":0}],"usageMetadata":{"promptTokenCount":95,"candidatesTokenCount":1,"totalTokenCount":96}}
+
+[chunk 1] 2024-7-7 17:53:26.288
+{"candidates":[{"content":{"parts":[{"text":"\n\n"}],"role":"model"},"finishReason":"STOP","index":0,"safetyRatings":[{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE"}]}],"usageMetadata":{"promptTokenCount":95,"candidatesTokenCount":1,"totalTokenCount":96}}
+
+[chunk 2] 2024-7-7 17:53:26.336
+{"candidates":[{"content":{"parts":[{"functionCall":{"name":"weather____fetchCurrentWeather","args":{"city":"Hangzhou"}}},{"functionCall":{"name":"weather____fetchCurrentWeather","args":{"city":"Beijing"}}}],"role":"model"},"finishReason":"STOP","index":0,"safetyRatings":[{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE"}]}],"usageMetadata":{"promptTokenCount":95,"candidatesTokenCount":79,"totalTokenCount":174}}
+
+[stream finished] total chunks: 3
+```
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+When testing the complex instruction group, Google throws an error directly:
+
+```json
+{
+  "message": "[400 Bad Request] Invalid JSON payload received. Unknown name \"maxItems\" at 'tools[0].function_declarations[0].parameters.properties[0].value': Cannot find field.\nInvalid JSON payload received. Unknown name \"minItems\" at 'tools[0].function_declarations[0].parameters.properties[0].value': Cannot find field.\nInvalid JSON payload received. Unknown name \"default\" at 'tools[0].function_declarations[0].parameters.properties[1].value': Cannot find field.\nInvalid JSON payload received. Unknown name \"default\" at 'tools[0].function_declarations[0].parameters.properties[3].value': Cannot find field.\nInvalid JSON payload received. Unknown name \"default\" at 'tools[0].function_declarations[0].parameters.properties[4].value': Cannot find field. [{\"@type\":\"type.googleapis.com/google.rpc.BadRequest\",\"fieldViolations\":[{\"field\":\"tools[0].function_declarations[0].parameters.properties[0].value\",\"description\":\"Invalid JSON payload received. Unknown name \\\"maxItems\\\" at 'tools[0].function_declarations[0].parameters.properties[0].value': Cannot find field.\"},{\"field\":\"tools[0].function_declarations[0].parameters.properties[0].value\",\"description\":\"Invalid JSON payload received. Unknown name \\\"minItems\\\" at 'tools[0].function_declarations[0].parameters.properties[0].value': Cannot find field.\"},{\"field\":\"tools[0].function_declarations[0].parameters.properties[1].value\",\"description\":\"Invalid JSON payload received. Unknown name \\\"default\\\" at 'tools[0].function_declarations[0].parameters.properties[1].value': Cannot find field.\"},{\"field\":\"tools[0].function_declarations[0].parameters.properties[3].value\",\"description\":\"Invalid JSON payload received. Unknown name \\\"default\\\" at 'tools[0].function_declarations[0].parameters.properties[3].value': Cannot find field.\"},{\"field\":\"tools[0].function_declarations[0].parameters.properties[4].value\",\"description\":\"Invalid JSON payload received. Unknown name \\\"default\\\" at 'tools[0].function_declarations[0].parameters.properties[4].value': Cannot find field.\"}]}]"
+}
+```
+
+The error above mentions that it does not support a schema containing `maxItems`, so Gemini 1.5 Pro is essentially unable to use the DallE plugin.
+
+Related issues:
+
+- [Support for minItems and maxItems for FunctionDeclarationSchemaType.ARRAY?](https://github.com/google-gemini/generative-ai-js/issues/200)
+- [Gemini Models unusable when dalle plugin is enabled](https://github.com/lobehub/lobe-chat/issues/2537)
+
+Based on the above two tests, Google's Tool Calling capability seems to be supported, but it is almost unusable in daily use. I personally think it is equivalent to false advertising.
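One conceivable workaround (a sketch only, not something LobeChat ships) is to strip the JSON Schema keywords Gemini rejects before sending the tool definition:

```typescript
// Sketch: removing JSON Schema keywords Gemini rejects (`maxItems`,
// `minItems`, `default`) from a tool's parameter schema, recursively.
const UNSUPPORTED = new Set(["maxItems", "minItems", "default"]);

function sanitizeForGemini(schema: unknown): unknown {
  if (Array.isArray(schema)) return schema.map(sanitizeForGemini);
  if (schema !== null && typeof schema === "object") {
    return Object.fromEntries(
      Object.entries(schema as Record<string, unknown>)
        .filter(([key]) => !UNSUPPORTED.has(key))
        .map(([key, value]) => [key, sanitizeForGemini(value)]),
    );
  }
  return schema;
}

// The `prompts` property from the DallE tool definition:
const prompts = { type: "array", items: { type: "string" }, minItems: 1, maxItems: 4 };
console.log(sanitizeForGemini(prompts)); // → { type: 'array', items: { type: 'string' } }
```

Note that stripping the keywords silently loosens the constraints, so limits such as "no more than 4 prompts" would then have to be restated in the tool's `description`.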
+
+## Gemini 1.5 Flash
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+Gemini 1.5 Flash behaves even more oddly: it announces the query and then simply stops. The raw output below shows that Gemini 1.5 Flash emits no Tool Calling data at all, so it can be considered completely unusable.
+
+```yml
+[stream start] 2024-7-7 19:4:50.936
+[chunk 0] 2024-7-7 19:4:50.943
+{"candidates":[{"content":{"parts":[{"text":"Okay"}],"role":"model"},"finishReason":"STOP","index":0}],"usageMetadata":{"promptTokenCount":96,"candidatesTokenCount":1,"totalTokenCount":97}}
+
+[chunk 1] 2024-7-7 19:4:52.209
+{"candidates":[{"content":{"parts":[{"text":", please wait, I am checking the weather information for Hangzhou and Beijing."}],"role":"model"},"finishReason":"STOP","index":0,"safetyRatings":[{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE"}]}],"usageMetadata":{"promptTokenCount":96,"candidatesTokenCount":16,"totalTokenCount":112}}
+
+[chunk 2] 2024-7-7 19:4:53.288
+{"candidates":[{"content":{"parts":[{"text":"\n"}],"role":"model"},"finishReason":"STOP","index":0,"safetyRatings":[{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE"}]}],"usageMetadata":{"promptTokenCount":96,"candidatesTokenCount":16,"totalTokenCount":112}}
+
+[stream finished] total chunks: 3
+```
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+This instruction, like the complex instruction for Gemini 1.5 Pro, throws an error directly, so we will not elaborate further.
diff --git a/DigitalHumanWeb/docs/usage/tools-calling/google.zh-CN.mdx b/DigitalHumanWeb/docs/usage/tools-calling/google.zh-CN.mdx
new file mode 100644
index 0000000..fc3c78d
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling/google.zh-CN.mdx
@@ -0,0 +1,116 @@
+---
+title: Google Gemini Series Tools Calling Benchmark
+description: >-
+  Testing the Tools Calling (Function Calling) capability of the Google Gemini
+  series models (Gemini 1.5 Pro / Gemini 1.5 Flash) with LobeChat, and
+  presenting the results
+tags:
+ - Tools Calling
+ - Benchmark
+ - Function Calling Benchmark
+ - Tool Use
+ - Plugins
+---
+
+# Google Gemini Series Tools Calling
+
+Overview of the Google Gemini series models' Tools Calling capabilities:
+
+| Model | Tools Calling Support | Streaming | Parallel | Simple Instruction Score | Complex Instruction |
+| --- | --- | --- | --- | --- | --- |
+| Gemini 1.5 Pro | ✅ | ❌ | ✅ | ⛔ | ⛔ |
+| Gemini 1.5 Flash | ❌ | ❌ | ❌ | ⛔ | ⛔ |
+
+
+ Based on our actual tests, we strongly recommend not enabling plugins for Gemini, because as of
+ July 7, 2024 its Tools Calling capability is extremely poor.
+
+
+## Gemini 1.5 Pro
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+In Gemini's JSON output, the `name` is incorrect, so LobeChat cannot tell which plugin was called. (In the input, the weather plugin's name is `realtime-weather____fetchCurrentWeather`, while Gemini returns `weather____fetchCurrentWeather`.)
+
+
+
+
+ Tools Calling Raw Output:
+
+```yml
+[stream start] 2024-7-7 17:53:25.647
+[chunk 0] 2024-7-7 17:53:25.654
+{"candidates":[{"content":{"parts":[{"text":"好的"}],"role":"model"},"finishReason":"STOP","index":0}],"usageMetadata":{"promptTokenCount":95,"candidatesTokenCount":1,"totalTokenCount":96}}
+
+[chunk 1] 2024-7-7 17:53:26.288
+{"candidates":[{"content":{"parts":[{"text":"\n\n"}],"role":"model"},"finishReason":"STOP","index":0,"safetyRatings":[{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE"}]}],"usageMetadata":{"promptTokenCount":95,"candidatesTokenCount":1,"totalTokenCount":96}}
+
+[chunk 2] 2024-7-7 17:53:26.336
+{"candidates":[{"content":{"parts":[{"functionCall":{"name":"weather____fetchCurrentWeather","args":{"city":"杭州"}}},{"functionCall":{"name":"weather____fetchCurrentWeather","args":{"city":"北京"}}}],"role":"model"},"finishReason":"STOP","index":0,"safetyRatings":[{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE"}]}],"usageMetadata":{"promptTokenCount":95,"candidatesTokenCount":79,"totalTokenCount":174}}
+
+[stream finished] total chunks: 3
+```
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+When testing the complex instruction group, Google throws an error directly:
+
+```json
+{
+  "message": "[400 Bad Request] Invalid JSON payload received. Unknown name \"maxItems\" at 'tools[0].function_declarations[0].parameters.properties[0].value': Cannot find field.\nInvalid JSON payload received. Unknown name \"minItems\" at 'tools[0].function_declarations[0].parameters.properties[0].value': Cannot find field.\nInvalid JSON payload received. Unknown name \"default\" at 'tools[0].function_declarations[0].parameters.properties[1].value': Cannot find field.\nInvalid JSON payload received. Unknown name \"default\" at 'tools[0].function_declarations[0].parameters.properties[3].value': Cannot find field.\nInvalid JSON payload received. Unknown name \"default\" at 'tools[0].function_declarations[0].parameters.properties[4].value': Cannot find field. [{\"@type\":\"type.googleapis.com/google.rpc.BadRequest\",\"fieldViolations\":[{\"field\":\"tools[0].function_declarations[0].parameters.properties[0].value\",\"description\":\"Invalid JSON payload received. Unknown name \\\"maxItems\\\" at 'tools[0].function_declarations[0].parameters.properties[0].value': Cannot find field.\"},{\"field\":\"tools[0].function_declarations[0].parameters.properties[0].value\",\"description\":\"Invalid JSON payload received. Unknown name \\\"minItems\\\" at 'tools[0].function_declarations[0].parameters.properties[0].value': Cannot find field.\"},{\"field\":\"tools[0].function_declarations[0].parameters.properties[1].value\",\"description\":\"Invalid JSON payload received. Unknown name \\\"default\\\" at 'tools[0].function_declarations[0].parameters.properties[1].value': Cannot find field.\"},{\"field\":\"tools[0].function_declarations[0].parameters.properties[3].value\",\"description\":\"Invalid JSON payload received. Unknown name \\\"default\\\" at 'tools[0].function_declarations[0].parameters.properties[3].value': Cannot find field.\"},{\"field\":\"tools[0].function_declarations[0].parameters.properties[4].value\",\"description\":\"Invalid JSON payload received. Unknown name \\\"default\\\" at 'tools[0].function_declarations[0].parameters.properties[4].value': Cannot find field.\"}]}]"
+}
+```
+
+The error says a schema containing `maxItems` is not supported, so Gemini 1.5 Pro is effectively unable to use the DallE plugin.
+
+Related issues:
+
+- [Support for minItems and maxItems for FunctionDeclarationSchemaType.ARRAY?](https://github.com/google-gemini/generative-ai-js/issues/200)
+- [Gemini Models unusable when dalle plugin is enabled](https://github.com/lobehub/lobe-chat/issues/2537)
+
+Taken together, Google's Tool Calling capability is nominally supported but almost unusable in daily use; personally, I would call it false advertising.
+
+## Gemini 1.5 Flash
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+Gemini 1.5 Flash behaves even more oddly: it announces the query and then simply stops. The raw output below shows that Gemini 1.5 Flash emits no Tool Calling data at all, so it can be considered completely unusable.
+
+```yml
+[stream start] 2024-7-7 19:4:50.936
+[chunk 0] 2024-7-7 19:4:50.943
+{"candidates":[{"content":{"parts":[{"text":"好的"}],"role":"model"},"finishReason":"STOP","index":0}],"usageMetadata":{"promptTokenCount":96,"candidatesTokenCount":1,"totalTokenCount":97}}
+
+[chunk 1] 2024-7-7 19:4:52.209
+{"candidates":[{"content":{"parts":[{"text":",请稍等,我正在查询杭州和北京的天气信息。 "}],"role":"model"},"finishReason":"STOP","index":0,"safetyRatings":[{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE"}]}],"usageMetadata":{"promptTokenCount":96,"candidatesTokenCount":16,"totalTokenCount":112}}
+
+[chunk 2] 2024-7-7 19:4:53.288
+{"candidates":[{"content":{"parts":[{"text":"\n"}],"role":"model"},"finishReason":"STOP","index":0,"safetyRatings":[{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE"},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE"}]}],"usageMetadata":{"promptTokenCount":96,"candidatesTokenCount":16,"totalTokenCount":112}}
+
+[stream finished] total chunks: 3
+```
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+This instruction, like the complex instruction for Gemini 1.5 Pro, throws an error directly, so we will not elaborate further.
diff --git a/DigitalHumanWeb/docs/usage/tools-calling/groq.mdx b/DigitalHumanWeb/docs/usage/tools-calling/groq.mdx
new file mode 100644
index 0000000..1333ed7
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling/groq.mdx
@@ -0,0 +1 @@
+TODO
diff --git a/DigitalHumanWeb/docs/usage/tools-calling/groq.zh-CN.mdx b/DigitalHumanWeb/docs/usage/tools-calling/groq.zh-CN.mdx
new file mode 100644
index 0000000..baabe3b
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling/groq.zh-CN.mdx
@@ -0,0 +1,189 @@
+---
+title: Groq Tools Calling
+description: >-
+  An overview of the Tools Calling capabilities of models on the Groq platform,
+  covering simple and complex instruction calls for LLAMA3 70B, LLAMA3 8B, and
+  Mixtral-8x7B.
+tags:
+ - Groq Platform Models
+ - Tools Calling
+ - LLAMA3 70B
+ - LLAMA3 8B
+ - Mixtral-8x7B
+---
+
+# Groq Platform Models Tools Calling Benchmark (Llama 3 / Mistral)
+
+Because Groq itself does not support streaming, Tools Calling is performed as a plain (non-streaming) request.
+
+Overview of the Tools Calling capabilities of models on the Groq platform:
+
+| Model | Tools Calling Support | Stream | Parallel | Simple Instruction Score | Complex Instruction |
+| ------------ | ------------------ | --------------- | ---------------- | ------------ | -------- |
+| LLAMA3 70B | ✅ | ❌ | ✅ | 🌟🌟 | 🌟🌟 |
+| LLAMA3 8B | ✅ | ❌ | ✅ | 🌟🌟 | 🌟 |
+| Mixtral-8x7B | ✅ | ❌ | ✅ | ⛔ | 🌟🌟 |
+
+## LLAMA3 70B
+
+### Simple Instruction Call: Weather Query
+
+Test Instruction: Instruction ①
+
+
+
+From the video above, LLAMA3 70B supports parallel Tools Calling and can issue multiple weather queries at once.
+
+
+
+
+ Tools Calling Raw Output:
+
+```yml
+[no stream response] 2024-7-8 15:50:40.166
+
+{"id":"chatcmpl-ec4b6c0b-1078-4f50-a39c-e58b3b1f9c31","object":"chat.completion","created":1720425030,"model":"llama3-70b-8192","choices":[{"index":0,"message":{"role":"assistant","tool_calls":[{"id":"call_v89g","type":"function","function":{"name":"realtime-weather____fetchCurrentWeather","arguments":"{\"city\":\"杭州\"}"}},{"id":"call_jxwk","type":"function","function":{"name":"realtime-weather____fetchCurrentWeather","arguments":"{\"city\":\"北京\"}"}}]},"logprobs":null,"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":969,"prompt_time":0.224209489,"completion_tokens":68,"completion_time":0.194285714,"total_tokens":1037,"total_time":0.418495203},"system_fingerprint":"fp_87cbfbbc4d","x_groq":{"id":"req_01j28n57x9e78a6bfbn9sdn139"}}
+
+```
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+
+
+
+Tools Calling Raw Output:
+
+```yml
+[no stream response] 2024-7-8 18:0:34.811
+
+{"id":"chatcmpl-e3b59ca9-1172-4ae2-96c7-3d6997a1f8a8","object":"chat.completion","created":1720432834,"model":"llama3-70b-8192","choices":[{"index":0,"message":{"role":"assistant","tool_calls":[{"id":"call_azm9","type":"function","function":{"name":"lobe-image-designer____text2image____builtin","arguments":"{\"prompts\":[\"A small, fluffy, and playful golden retriever puppy with a white patch on its forehead, sitting on a green grass field with a bright blue sky in the background, photo.\",\"A cute, little, brown and white Dalmatian puppy with a red collar, running around in a park with a sunny day, illustration.\",\"A tiny, grey and white Poodle puppy with a pink ribbon, sitting on a white couch with a few toys surrounding it, watercolor painting.\",\"A sweet, small, black and white Chihuahua puppy with a pink bow, lying on a soft, white blanket with a few stuffed animals nearby, oil painting.\"],\"quality\":\"standard\",\"seeds\":[],\"size\":\"1024x1024\",\"style\":\"vivid\"}"}}]},"logprobs":null,"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":2305,"prompt_time":3.027052298,"completion_tokens":246,"completion_time":0.702857143,"total_tokens":2551,"total_time":3.729909441},"system_fingerprint":"fp_7ab5f7e105","x_groq":{"id":"req_01j28wk2q0efvs22qatw7rd0ds"}}
+
+POST /api/chat/groq 200 in 17462ms
+```
+
+
+
+## LLAMA3-8B
+
+### 简单调用指令:天气查询
+
+测试指令:指令 ①
+
+
+
+从上述视频中可以看到 LLAMA3-8B 对于天气插件可以正常调用,并获得正确的总结结果。但是它并没有完全 follow 我们的描述指令,没有回答「好的」。
+
+
+
+
+Tools Calling 原始输出:
+
+```yml
+[no stream response] 2024-7-9 11:33:16.920
+
+{"id":"chatcmpl-f3672d59-e91d-4253-af1b-bfc4e0912085","object":"chat.completion","created":1720495996,"model":"llama3-8b-8192","choices":[{"index":0,"message":{"role":"assistant","tool_calls":[{"id":"call_rjtk","type":"function","function":{"name":"realtime-weather____fetchCurrentWeather","arguments":"{\"city\":\"杭州市\"}"}},{"id":"call_7pqh","type":"functi,"function":{"name":"realtime-weather____fetchCurrentWeather","arguments":"{\"city\":\"北京市\"}"}}]},"logprobs":null,"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":969,"ppt_time":0.145428625,"completion_tokens":128,"completion_time":0.101364747,"total_tokens":1097,"total_time":0.246793372},"system_fingerprint":"fp_33d61fdfc3","x_groq":{"id":"req_01j2artze1exz82nettf2h9066"}}
+
+POST /api/chat/groq 200 in 1649ms
+```
+
+
+
+### 复杂调用指令:文生图
+
+测试指令:指令 ②
+
+
+
+LLAMA3 8B 在 DallE 的输出场景下,只会输出 1 张图片,而不是像 LLAMA3 70B 一样输出 4 张,意味着在复杂 Tools 指令层面,能力和 GPT 3.5 Turbo 接近,不如 GPT 4。
+
+
+
+
+Tools Calling 原始输出:
+
+```yml
+[no stream response] 2024-7-9 11:58:27.40
+
+{"id":"chatcmpl-3c38f4d2-3424-416c-9fb0-0969d2683959","object":"chat.completion","created":1720497506,"model":"llama3-8b-8192","choices":[{"index":0,"message":{"role":"assistant","tool_calls":[{"id":"call_k6xj","type":"function","function":{"name":"lobe-image-designer____text2image____builtin","arguments":"{\"prompts\":[\"Create a watercolor painting of a small white dog with a pink nose, wearing a red collar and sitting on a green grass. The dog's ears should be floppy and its fur should be curly.\"],\"quality\":\"standard\",\"seeds\":[],\"size\":\"1024x1024\",\"style\":\"natural\"}"}}]},"logprobs":null,"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":2282,"prompt_time":0.342335558,"completion_tokens":148,"completion_time":0.118023813,"total_tokens":2430,"total_time":0.460359371},"system_fingerprint":"fp_179b0f92c9","x_groq":{"id":"req_01j2at921tec8aymdq48czcw1y"}}
+
+POST /api/chat/groq 200 in 2517ms
+```
+
+
+
+## Mixtral-8x7B
+
+### 简单调用指令:天气查询
+
+测试指令:指令 ①
+
+
+
+从上述视频中可以看到 Mixtral-8x7B 对于天气插件的查询输出的参数有问题,导致无法正常调用插件。
+
+
+
+
+Tools Calling 原始输出:
+
+```yml
+
+[no stream response] 2024-7-8 22:18:19.682
+
+{"id":"chatcmpl-9f89d669-5642-48be-b5cd-7a29756800c0","object":"chat.completion","created":1720448299,"model":"mixtral-8x7b-32768","choices":[{"index":0,"message":{"role":"assistant","tool_calls":[{"id":"call_719t","type":"function","function":{"name":"realtime-weather____fetchCurrentWeather","arguments":"{\"city\":\"Hangzhou,Beijing\"}"}}]},"logprobs":null,"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":1262,"prompt_time":0.116684046,"completion_tokens":102,"completion_time":0.163113006,"total_tokens":1364,"total_time":0.279797052},"system_fingerprint":"fp_c5f20b5bb1","x_groq":{"id":"req_01j29bbc8xen2s3thp9qen5bys"}}
+
+POST /api/chat/groq 200 in 4860ms
+```
+
+
+
+### 复杂调用指令:文生图
+
+测试指令:指令 ②
+
+
+
+非常意外地,Mixtral 8x7B 在文生图的 Tools Calling 的场景下,居然可以正常出图,而且出图的能力基本和 LLAMA3 70B 持平。
+
+
+
+
+
+Tools Calling 原始输出:
+
+```yml
+[no stream response] 2024-7-9 12:6:38.11
+
+{"id":"chatcmpl-a90069f3-b119-41b9-b8d7-c7f97373529e","object":"chat.completion","created":1720497998,"model":"mixtral-8x7b-32768","choices":[{"index":0,"message":{"role":"assistant","tool_calls":[{"id":"call_hw8t","type":"function","function":{"name":"lobe-image-designer____text2image____builtin","arguments":"{\"prompts\":[\"A colorful and playful illustration of a friendly small dog, looking directly at the viewer with a wagging tail and bright, expressive eyes. The dog's fur is glossy and well-groomed, with a mix of black, white, and brown colors. The background is a lush, green park with a clear blue sky and a few fluffy clouds.\",\"An oil painting of a small, energetic dog in a sunlit park, chasing a frisbee. The dog is a mix of black and white fur, with a distinct brown spot above its left eye. The park features tall trees, providing dappled shade across the grassy landscape. The frisbee is bright orange and stands out against the natural surroundings.\",\"A realistic watercolor painting of a small, fluffy white dog curled up next to a warm fireplace during a cozy winter evening. The dog's eyes are closed in contentment, and a single red bow is tied around its neck. The background includes a plush armchair, a stack of books, and a softly lit room.\",\"A fun and engaging cartoon of a small dog sitting at a café table, enjoying a cup of coffee and a croissant. The dog has a expressive face and a blue scarf around its neck. The café has a vintage, 1920's style and a red awning, with a bustling city background.\"],\"quality\":\"standard\",\"size\":\"1024x1024\",\"style\":\"vivid\"}"}}]},"logprobs":null,"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":2920,"prompt_time":0.228639219,"completion_tokens":465,"completion_time":0.755757988,"total_tokens":3385,"total_time":0.984397207},"system_fingerprint":"fp_c5f20b5bb1","x_groq":{"id":"req_01j2atr155f0nv8rmfk448e2at"}}
+
+POST /api/chat/groq 200 in 6216ms
+
+```
+
+
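上文各模型的原始输出都遵循 OpenAI 兼容的 `tool_calls` 结构:`function.arguments` 字段是一段 JSON 字符串,需要二次解析后才能交给插件执行。下面是一个最小示意(假设性代码,并非 LobeChat 的实际实现),展示如何提取并解析这些调用参数:

```ts
// 假设性示意:解析 OpenAI 兼容响应中的 tool_calls(非 LobeChat 实际实现)
interface ToolCall {
  id: string;
  type: 'function';
  function: { name: string; arguments: string };
}

interface ParsedToolCall {
  id: string;
  name: string;
  args: Record<string, unknown>;
}

function parseToolCalls(toolCalls: ToolCall[]): ParsedToolCall[] {
  return toolCalls.map((call) => {
    // arguments 是 JSON 字符串;模型偶尔会输出截断或不合法的 JSON,解析时需要容错
    let args: Record<string, unknown> = {};
    try {
      args = JSON.parse(call.function.arguments) as Record<string, unknown>;
    } catch {
      // 解析失败时保留空参数,由上层决定如何处理该调用
    }
    return { id: call.id, name: call.function.name, args };
  });
}
```

以 Mixtral-8x7B 的天气查询输出为例,解析后只有一个调用,参数为 `{"city":"Hangzhou,Beijing"}`——两个城市被合并进了同一个参数,这正是它无法正常调用天气插件的原因;而 LLAMA3-70B 则会产生两个并发调用,分别对应杭州与北京。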
diff --git a/DigitalHumanWeb/docs/usage/tools-calling/moonshot.mdx b/DigitalHumanWeb/docs/usage/tools-calling/moonshot.mdx
new file mode 100644
index 0000000..1333ed7
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling/moonshot.mdx
@@ -0,0 +1 @@
+TODO
diff --git a/DigitalHumanWeb/docs/usage/tools-calling/moonshot.zh-CN.mdx b/DigitalHumanWeb/docs/usage/tools-calling/moonshot.zh-CN.mdx
new file mode 100644
index 0000000..7b532b7
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling/moonshot.zh-CN.mdx
@@ -0,0 +1,24 @@
+---
+title: Moonshot 系列 Tools Calling 评测
+description: 使用 LobeChat 测试 Moonshot 系列模型(Moonshot-1) 的工具调用(Function Calling)能力,并展现评测结果
+tags:
+ - Tools Calling
+ - Benchmark
+ - Function Calling
+ - 工具调用
+ - 插件
+---
+
+# Moonshot 系列工具调用(Tools Calling)
+
+### 简单调用指令:天气查询
+
+测试指令:指令 ①
+
+TODO
+
+### 复杂调用指令:文生图
+
+测试指令:指令 ②
+
+TODO
diff --git a/DigitalHumanWeb/docs/usage/tools-calling/openai.mdx b/DigitalHumanWeb/docs/usage/tools-calling/openai.mdx
new file mode 100644
index 0000000..98fde36
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling/openai.mdx
@@ -0,0 +1,139 @@
+---
+title: OpenAI GPT 系列 Tools Calling 评测
+description: >-
+ 使用 LobeChat 测试 OpenAI GPT 系列模型(GPT 3.5-turbo / GPT-4 /GPT-4o) 的工具调用(Function
+ Calling)能力,并展现评测结果
+tags:
+ - Tools Calling
+ - Benchmark
+ - Function Calling
+ - 工具调用
+ - 插件
+---
+
+# OpenAI GPT Series Tool Calling
+
+Overview of the Tool Calling capabilities of OpenAI GPT series models:
+
+| Model | Tool Calling Support | Streaming | Parallel | Simple Instruction Score | Complex Instruction Score |
+| --- | --- | --- | --- | --- | --- |
+| GPT-3.5-turbo | ✅ | ✅ | ✅ | 🌟🌟🌟 | 🌟 |
+| GPT-4-turbo | ✅ | ✅ | ✅ | 🌟🌟 | 🌟🌟 |
+| GPT-4o | ✅ | ✅ | ✅ | 🌟🌟🌟 | 🌟🌟 |
+
+
+For testing instructions, see [Tools Calling - Evaluation Task Introduction](/docs/usage/tools-calling#evaluation-task-introduction)
+
+
+## GPT 3.5-turbo
+
+### Simple Instruction Call: Weather Inquiry
+
+Test Instruction: Instruction ①
+
+
+
+
+
+
+Streaming Tool Calling Raw Output:
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+
+
+
+Streaming Tool Calling Raw Output:
+
+
+
+## GPT-4 Turbo
+
+### Simple Instruction Call: Weather Inquiry
+
+Test Instruction: Instruction ①
+
+Unlike GPT-3.5 Turbo, GPT-4 Turbo did not reply "okay" before invoking Tool Calling, and this behavior was consistent across multiple tests. On this aspect of compound-instruction following it is therefore weaker than GPT-3.5 Turbo, though its other two capabilities remain good.
+
+Of course, it is also possible that GPT-4 Turbo simply has more "autonomy" and decides that outputting this "okay" is unnecessary.
+
+
+
+
+
+
+Streaming Tool Calling Raw Output:
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+
+
+
+Streaming Tool Calling Raw Output:
+
+
+
+## GPT-4o
+
+### Simple Instruction Call: Weather Inquiry
+
+Test Instruction: Instruction ①
+
+Like GPT-3.5, GPT-4o follows compound instructions very well in simple instruction calls.
+
+
+
+
+
+
+Streaming Tool Calling Raw Output:
+
+
+
+### Complex Instruction Call: Text-to-Image
+
+Test Instruction: Instruction ②
+
+
+
+
+
+
+Streaming Tool Calling Raw Output:
+
+```yml
+
+```
+
+
diff --git a/DigitalHumanWeb/docs/usage/tools-calling/openai.zh-CN.mdx b/DigitalHumanWeb/docs/usage/tools-calling/openai.zh-CN.mdx
new file mode 100644
index 0000000..8d401ab
--- /dev/null
+++ b/DigitalHumanWeb/docs/usage/tools-calling/openai.zh-CN.mdx
@@ -0,0 +1,139 @@
+---
+title: OpenAI GPT 系列 Tools Calling 评测
+description: >-
+ 使用 LobeChat 测试 OpenAI GPT 系列模型(GPT 3.5-turbo / GPT-4 /GPT-4o) 的工具调用(Function
+ Calling)能力,并展现评测结果
+tags:
+ - Tools Calling
+ - Benchmark
+ - Function Calling
+ - 工具调用
+ - 插件
+---
+
+# OpenAI GPT 系列工具调用(Tools Calling)
+
+OpenAI GPT 系列模型 Tool Calling 能力一览:
+
+| 模型 | 支持 Tool Calling | 流式 (Stream) | 并发(Parallel) | 简单指令得分 | 复杂指令 |
+| ------------- | ----------------- | --------------- | ---------------- | ------------ | -------- |
+| GPT-3.5-turbo | ✅ | ✅ | ✅ | 🌟🌟🌟 | 🌟 |
+| GPT-4-turbo | ✅ | ✅ | ✅ | 🌟🌟 | 🌟🌟 |
+| GPT-4o | ✅ | ✅ | ✅ | 🌟🌟🌟 | 🌟🌟 |
+
+
+关于测试指令,详见 [工具调用 Tools Calling - 评测任务介绍](/zh/docs/usage/tools-calling#评测任务介绍)
+
+
+## GPT 3.5-turbo
+
+### 简单调用指令:天气查询
+
+测试指令:指令 ①
+
+
+
+
+
+
+流式 Tool Calling 原始输出:
+
+
+
+### 复杂调用指令:文生图
+
+测试指令:指令 ②
+
+
+
+
+
+
+流式 Tool Calling 原始输出:
+
+
+
+## GPT-4 Turbo
+
+### 简单调用指令:天气查询
+
+测试指令:指令 ①
+
+GPT-4 Turbo 在调用 Tool Calling 时并没有像 GPT-3.5 Turbo 一样回复「好的」,且经过多次测试始终一样,因此在这一条复合指令的跟随中反而不如 GPT-3.5 Turbo,但剩余两项能力均不错。
+
+当然,也有可能是因为 GPT-4 Turbo 的模型更加有“自主意识”,认为不需要输出这一句“好的”。
+
+
+
+
+
+
+流式 Tool Calling 原始输出:
+
+
+
+### 复杂调用指令:文生图
+
+测试指令:指令 ②
+
+
+
+
+
+
+流式 Tool Calling 原始输出:
+
+
+
+## GPT-4o
+
+### 简单调用指令:天气查询
+
+测试指令:指令 ①
+
+GPT-4o 和 3.5 一样,在简单调用指令中,能够达到非常不错的复合指令遵循能力。
+
+
+
+
+
+
+流式 Tool Calling 原始输出:
+
+
+
+### 复杂调用指令:文生图
+
+测试指令:指令 ②
+
+
+
+
+
+
+流式 Tool Calling 原始输出:
+
+```yml
+
+```
+
+
diff --git a/DigitalHumanWeb/drizzle.config.ts b/DigitalHumanWeb/drizzle.config.ts
new file mode 100644
index 0000000..2f6460d
--- /dev/null
+++ b/DigitalHumanWeb/drizzle.config.ts
@@ -0,0 +1,27 @@
+import * as dotenv from 'dotenv';
+import type { Config } from 'drizzle-kit';
+
+// Read the .env file if it exists, or a file specified by the
+// dotenv_config_path parameter that's passed to Node.js
+dotenv.config();
+
+let connectionString = process.env.DATABASE_URL;
+
+if (process.env.NODE_ENV === 'test') {
+ console.log('current ENV:', process.env.NODE_ENV);
+ connectionString = process.env.DATABASE_TEST_URL;
+}
+
+if (!connectionString)
+ throw new Error('`DATABASE_URL` or `DATABASE_TEST_URL` not found in environment');
+
+export default {
+ dbCredentials: {
+ url: connectionString,
+ },
+ dialect: 'postgresql',
+ out: './src/database/server/migrations',
+
+ schema: './src/database/server/schemas/lobechat',
+ strict: true,
+} satisfies Config;
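The config above selects the database connection string from the environment, with a test-mode override. A minimal sketch (a hypothetical refactor, not part of the actual file) isolating that fallback logic into a pure function:

```ts
// Hypothetical sketch of the env-fallback logic in drizzle.config.ts above:
// prefer DATABASE_TEST_URL when NODE_ENV is 'test', otherwise use DATABASE_URL,
// and fail fast when the chosen variable is unset.
type Env = Record<string, string | undefined>;

function resolveConnectionString(env: Env): string {
  let connectionString = env.DATABASE_URL;
  if (env.NODE_ENV === 'test') {
    // test mode uses DATABASE_TEST_URL exclusively — no fallback to DATABASE_URL
    connectionString = env.DATABASE_TEST_URL;
  }
  if (!connectionString) {
    throw new Error('`DATABASE_URL` or `DATABASE_TEST_URL` not found in environment');
  }
  return connectionString;
}
```

One subtlety worth noting: in test mode the config does not fall back to `DATABASE_URL`, so a test run without `DATABASE_TEST_URL` throws even when `DATABASE_URL` is set.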
diff --git a/DigitalHumanWeb/locales/ar/auth.json b/DigitalHumanWeb/locales/ar/auth.json
new file mode 100644
index 0000000..1ec1966
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "تسجيل الدخول",
+ "loginOrSignup": "تسجيل الدخول / التسجيل",
+ "profile": "الملف الشخصي",
+ "security": "الأمان",
+ "signout": "تسجيل الخروج",
+ "signup": "التسجيل"
+}
diff --git a/DigitalHumanWeb/locales/ar/chat.json b/DigitalHumanWeb/locales/ar/chat.json
new file mode 100644
index 0000000..51659ce
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "النموذج"
+ },
+ "agentDefaultMessage": "مرحبًا، أنا **{{name}}**، يمكنك بدء المحادثة معي على الفور، أو يمكنك الذهاب إلى [إعدادات المساعد]({{url}}) لإكمال معلوماتي.",
+ "agentDefaultMessageWithSystemRole": "مرحبًا، أنا **{{name}}**، {{systemRole}}، دعنا نبدأ الدردشة!",
+ "agentDefaultMessageWithoutEdit": "مرحبًا، أنا **{{name}}**، دعنا نبدأ المحادثة!",
+ "agents": "مساعد",
+ "artifact": {
+ "generating": "جاري الإنشاء",
+ "thinking": "جاري التفكير",
+ "thought": "عملية التفكير",
+ "unknownTitle": "عمل غير مسمى"
+ },
+ "backToBottom": "العودة إلى الأسفل",
+ "chatList": {
+ "longMessageDetail": "عرض التفاصيل"
+ },
+ "clearCurrentMessages": "مسح رسائل الجلسة الحالية",
+ "confirmClearCurrentMessages": "سيتم مسح رسائل الجلسة الحالية قريبًا، وبمجرد المسح لن يمكن استعادتها، يرجى تأكيد الإجراء الخاص بك",
+ "confirmRemoveSessionItemAlert": "سيتم حذف هذا المساعد قريبًا، وبمجرد الحذف لن يمكن استعادته، يرجى تأكيد الإجراء الخاص بك",
+ "confirmRemoveSessionSuccess": "تم حذف المساعد بنجاح",
+ "defaultAgent": "المساعد الافتراضي",
+ "defaultList": "القائمة الافتراضية",
+ "defaultSession": "المساعد الافتراضي",
+ "duplicateSession": {
+ "loading": "جاري النسخ...",
+ "success": "تم النسخ بنجاح",
+ "title": "{{title}} نسخة"
+ },
+ "duplicateTitle": "{{title}} نسخة",
+ "emptyAgent": "لا يوجد مساعد",
+ "historyRange": "نطاق التاريخ",
+ "inbox": {
+ "desc": "قم بتشغيل مجموعة الدماغ وأشعل شرارة التفكير. مساعدك الذكي، هنا حيث يمكنك التواصل بكل شيء",
+ "title": "دردشة عشوائية"
+ },
+ "input": {
+ "addAi": "إضافة رسالة AI",
+ "addUser": "إضافة رسالة مستخدم",
+ "more": "المزيد",
+ "send": "إرسال",
+ "sendWithCmdEnter": "اضغط {{meta}} + Enter للإرسال",
+ "sendWithEnter": "اضغط Enter للإرسال",
+ "stop": "توقف",
+ "warp": "تغيير السطر"
+ },
+ "knowledgeBase": {
+ "all": "جميع المحتويات",
+ "allFiles": "جميع الملفات",
+ "allKnowledgeBases": "جميع قواعد المعرفة",
+ "disabled": "الوضع الحالي للنشر لا يدعم محادثات قاعدة المعرفة. إذا كنت بحاجة إلى استخدامها، يرجى التبديل إلى نشر قاعدة البيانات على الخادم أو استخدام خدمة {{cloud}}.",
+ "library": {
+ "action": {
+ "add": "إضافة",
+ "detail": "تفاصيل",
+ "remove": "إزالة"
+ },
+ "title": "الملفات/قاعدة المعرفة"
+ },
+ "relativeFilesOrKnowledgeBases": "ملفات/قواعد معرفة مرتبطة",
+ "title": "قاعدة المعرفة",
+ "uploadGuide": "يمكنك عرض الملفات التي تم تحميلها في «قاعدة المعرفة»",
+ "viewMore": "عرض المزيد"
+ },
+ "messageAction": {
+ "delAndRegenerate": "حذف وإعادة الإنشاء",
+ "regenerate": "إعادة الإنشاء"
+ },
+ "newAgent": "مساعد جديد",
+ "pin": "تثبيت",
+ "pinOff": "إلغاء التثبيت",
+ "rag": {
+ "referenceChunks": "مراجع",
+ "userQuery": {
+ "actions": {
+ "delete": "حذف الاستعلام",
+ "regenerate": "إعادة توليد الاستعلام"
+ }
+ }
+ },
+ "regenerate": "إعادة الإنشاء",
+ "roleAndArchive": "الدور والأرشيف",
+ "searchAgentPlaceholder": "مساعد البحث...",
+ "sendPlaceholder": "أدخل محتوى الدردشة...",
+ "sessionGroup": {
+ "config": "إدارة المجموعات",
+ "confirmRemoveGroupAlert": "سيتم حذف هذه المجموعة قريبًا، وبعد الحذف، سيتم نقل مساعدي هذه المجموعة إلى القائمة الافتراضية، يرجى تأكيد إجراءك",
+ "createAgentSuccess": "تم إنشاء المساعد بنجاح",
+ "createGroup": "إضافة مجموعة جديدة",
+ "createSuccess": "تم الإنشاء بنجاح",
+ "creatingAgent": "جاري إنشاء المساعد...",
+ "inputPlaceholder": "الرجاء إدخال اسم المجموعة...",
+ "moveGroup": "نقل إلى مجموعة",
+ "newGroup": "مجموعة جديدة",
+ "rename": "إعادة تسمية المجموعة",
+ "renameSuccess": "تمت إعادة التسمية بنجاح",
+ "sortSuccess": "تمت إعادة ترتيب بنجاح",
+ "sorting": "جاري تحديث ترتيب المجموعة...",
+ "tooLong": "يجب أن يكون طول اسم المجموعة بين 1 و 20"
+ },
+ "shareModal": {
+ "download": "تحميل اللقطة",
+ "imageType": "نوع الصورة",
+ "screenshot": "لقطة شاشة",
+ "settings": "إعدادات التصدير",
+ "shareToShareGPT": "إنشاء رابط مشاركة ShareGPT",
+ "withBackground": "تضمين صورة الخلفية",
+ "withFooter": "تضمين تذييل",
+ "withPluginInfo": "تضمين معلومات البرنامج المساعد",
+ "withSystemRole": "تضمين دور المساعد"
+ },
+ "stt": {
+ "action": "إدخال صوتي",
+ "loading": "جارٍ التعرف...",
+ "prettifying": "جارٍ التجميل..."
+ },
+ "temp": "مؤقت",
+ "tokenDetails": {
+ "chats": "رسائل المحادثة",
+ "rest": "المتبقي",
+ "systemRole": "تعيين الدور",
+ "title": "تفاصيل الرمز",
+ "tools": "تعيين الإضافات",
+ "total": "الإجمالي",
+ "used": "المستخدم"
+ },
+ "tokenTag": {
+ "overload": "تجاوز الحد",
+ "remained": "متبقي",
+ "used": "مستخدم"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "إعادة تسمية ذكية",
+ "duplicate": "إنشاء نسخة",
+ "export": "تصدير الموضوع"
+ },
+ "checkOpenNewTopic": "هل ترغب في فتح موضوع جديد؟",
+ "checkSaveCurrentMessages": "هل ترغب في حفظ الدردشة الحالية كموضوع؟",
+ "confirmRemoveAll": "سيتم حذف جميع المواضيع قريبًا، وبمجرد الحذف لن يمكن استعادتها، يرجى التحلي بالحذر.",
+ "confirmRemoveTopic": "سيتم حذف هذا الموضوع قريبًا، وبمجرد الحذف لن يمكن استعادته، يرجى التحلي بالحذر.",
+ "confirmRemoveUnstarred": "سيتم حذف المواضيع غير المحفوظة قريبًا، وبمجرد الحذف لن يمكن استعادتها، يرجى التحلي بالحذر.",
+ "defaultTitle": "الموضوع الافتراضي",
+ "duplicateLoading": "جاري نسخ الموضوع...",
+ "duplicateSuccess": "تم نسخ الموضوع بنجاح",
+ "guide": {
+ "desc": "انقر فوق زر الإرسال الأيسر لحفظ الجلسة الحالية كموضوع تاريخي وبدء جلسة جديدة",
+ "title": "قائمة المواضيع"
+ },
+ "openNewTopic": "فتح موضوع جديد",
+ "removeAll": "حذف جميع المواضيع",
+ "removeUnstarred": "حذف المواضيع غير المحفوظة",
+ "saveCurrentMessages": "حفظ الجلسة الحالية كموضوع",
+ "searchPlaceholder": "البحث في المواضيع...",
+ "title": "الموضوع"
+ },
+ "translate": {
+ "action": "ترجمة",
+ "clear": "مسح الترجمة"
+ },
+ "tts": {
+ "action": "قراءة صوتية",
+ "clear": "مسح الصوت"
+ },
+ "updateAgent": "تحديث معلومات المساعد",
+ "upload": {
+ "action": {
+ "fileUpload": "رفع ملف",
+ "folderUpload": "رفع مجلد",
+ "imageDisabled": "النموذج الحالي لا يدعم التعرف على الصور، يرجى تغيير النموذج لاستخدامه",
+ "imageUpload": "رفع صورة",
+ "tooltip": "رفع"
+ },
+ "clientMode": {
+ "actionFiletip": "رفع ملف",
+ "actionTooltip": "رفع",
+ "disabled": "النموذج الحالي لا يدعم التعرف على الصور وتحليل الملفات، يرجى تغيير النموذج لاستخدامه"
+ },
+ "preview": {
+ "prepareTasks": "تحضير الأجزاء...",
+ "status": {
+ "pending": "يتم التحضير للتحميل...",
+ "processing": "يتم معالجة الملف..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/clerk.json b/DigitalHumanWeb/locales/ar/clerk.json
new file mode 100644
index 0000000..3702351
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "الرجوع",
+ "badge__default": "افتراضي",
+ "badge__otherImpersonatorDevice": "جهاز تمثيل آخر",
+ "badge__primary": "أساسي",
+ "badge__requiresAction": "يتطلب إجراء",
+ "badge__thisDevice": "هذا الجهاز",
+ "badge__unverified": "غير موثق",
+ "badge__userDevice": "جهاز المستخدم",
+ "badge__you": "أنت",
+ "createOrganization": {
+ "formButtonSubmit": "إنشاء منظمة",
+ "invitePage": {
+ "formButtonReset": "تخطي"
+ },
+ "title": "إنشاء منظمة"
+ },
+ "dates": {
+ "lastDay": "أمس في {{ date | timeString('en-US') }}",
+ "next6Days": "{{ date | weekday('en-US','long') }} في {{ date | timeString('en-US') }}",
+ "nextDay": "غدًا في {{ date | timeString('en-US') }}",
+ "numeric": "{{ date | numeric('en-US') }}",
+ "previous6Days": "الماضي {{ date | weekday('en-US','long') }} في {{ date | timeString('en-US') }}",
+ "sameDay": "اليوم في {{ date | timeString('en-US') }}"
+ },
+ "dividerText": "أو",
+ "footerActionLink__useAnotherMethod": "استخدام طريقة أخرى",
+ "footerPageLink__help": "المساعدة",
+ "footerPageLink__privacy": "الخصوصية",
+ "footerPageLink__terms": "البنود",
+ "formButtonPrimary": "متابعة",
+ "formButtonPrimary__verify": "التحقق",
+ "formFieldAction__forgotPassword": "هل نسيت كلمة المرور؟",
+ "formFieldError__matchingPasswords": "تتطابق كلمات المرور.",
+ "formFieldError__notMatchingPasswords": "كلمات المرور غير متطابقة.",
+ "formFieldError__verificationLinkExpired": "انتهت صلاحية رابط التحقق. يرجى طلب رابط جديد.",
+ "formFieldHintText__optional": "اختياري",
+ "formFieldHintText__slug": "الـ Slug هو معرف يمكن قراءته بواسطة الإنسان ويجب أن يكون فريدًا. غالبًا ما يُستخدم في عناوين URL.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "حذف الحساب",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "example@email.com, example2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "my-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "تمكين الدعوات التلقائية لهذا النطاق",
+ "formFieldLabel__backupCode": "رمز النسخ الاحتياطي",
+ "formFieldLabel__confirmDeletion": "تأكيد",
+ "formFieldLabel__confirmPassword": "تأكيد كلمة المرور",
+ "formFieldLabel__currentPassword": "كلمة المرور الحالية",
+ "formFieldLabel__emailAddress": "عنوان البريد الإلكتروني",
+ "formFieldLabel__emailAddress_username": "عنوان البريد الإلكتروني أو اسم المستخدم",
+ "formFieldLabel__emailAddresses": "عناوين البريد الإلكتروني",
+ "formFieldLabel__firstName": "الاسم الأول",
+ "formFieldLabel__lastName": "الاسم الأخير",
+ "formFieldLabel__newPassword": "كلمة مرور جديدة",
+ "formFieldLabel__organizationDomain": "نطاق",
+ "formFieldLabel__organizationDomainDeletePending": "حذف الدعوات والاقتراحات المعلقة",
+ "formFieldLabel__organizationDomainEmailAddress": "عنوان البريد الإلكتروني للتحقق",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "أدخل عنوان بريد إلكتروني تحت هذا النطاق لتلقي رمز والتحقق من هذا النطاق.",
+ "formFieldLabel__organizationName": "الاسم",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "اسم مفتاح الوصول",
+ "formFieldLabel__password": "كلمة المرور",
+ "formFieldLabel__phoneNumber": "رقم الهاتف",
+ "formFieldLabel__role": "الدور",
+ "formFieldLabel__signOutOfOtherSessions": "تسجيل الخروج من جميع الأجهزة الأخرى",
+ "formFieldLabel__username": "اسم المستخدم",
+ "impersonationFab": {
+ "action__signOut": "تسجيل الخروج",
+ "title": "تم تسجيل الدخول بواسطة {{identifier}}"
+ },
+ "locale": "ar",
+ "maintenanceMode": "نحن حاليًا في وضع الصيانة، ولكن لا تقلق، لن يستغرق الأمر أكثر من بضع دقائق.",
+ "membershipRole__admin": "مسؤول",
+ "membershipRole__basicMember": "عضو",
+ "membershipRole__guestMember": "ضيف",
+ "organizationList": {
+ "action__createOrganization": "إنشاء منظمة",
+ "action__invitationAccept": "الانضمام",
+ "action__suggestionsAccept": "طلب الانضمام",
+ "createOrganization": "إنشاء منظمة",
+ "invitationAcceptedLabel": "انضمام",
+ "subtitle": "لمتابعة {{applicationName}}",
+ "suggestionsAcceptedLabel": "في انتظار الموافقة",
+ "title": "اختر حسابًا",
+ "titleWithoutPersonal": "اختر منظمة"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "دعوات تلقائية",
+ "badge__automaticSuggestion": "اقتراحات تلقائية",
+ "badge__manualInvitation": "لا تسجيل تلقائي",
+ "badge__unverified": "غير موثق",
+ "createDomainPage": {
+ "subtitle": "أضف النطاق للتحقق. يمكن للمستخدمين الذين لديهم عناوين بريد إلكتروني في هذا النطاق الانضمام إلى المنظمة تلقائيًا أو طلب الانضمام.",
+ "title": "إضافة النطاق"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "تعذر إرسال الدعوات. هناك دعوات معلقة بالفعل لعناوين البريد الإلكتروني التالية: {{email_addresses}}.",
+ "formButtonPrimary__continue": "إرسال الدعوات",
+ "selectDropdown__role": "اختر الدور",
+ "subtitle": "أدخل أو الصق عناوين بريد إلكتروني واحدة أو أكثر، مفصولة بمسافات أو فواصل.",
+ "successMessage": "تم إرسال الدعوات بنجاح",
+ "title": "دعوة أعضاء جدد"
+ },
+ "membersPage": {
+ "action__invite": "دعوة",
+ "activeMembersTab": {
+ "menuAction__remove": "إزالة العضو",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "انضم",
+ "tableHeader__role": "الدور",
+ "tableHeader__user": "المستخدم"
+ },
+ "detailsTitle__emptyRow": "لا يوجد أعضاء لعرضهم",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "قم بدعوة المستخدمين عن طريق ربط نطاق البريد الإلكتروني بالمنظمة. سيتمكن أي شخص يسجل الدخول بنطاق بريد إلكتروني متطابق من الانضمام إلى المنظمة في أي وقت.",
+ "headerTitle": "دعوات تلقائية",
+ "primaryButton": "إدارة النطاقات الموثقة"
+ },
+ "table__emptyRow": "لا توجد دعوات لعرضها"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "إلغاء الدعوة",
+ "tableHeader__invited": "تمت الدعوة"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "المستخدمون الذين يسجلون الدخول بنطاق بريد إلكتروني متطابق، سيتمكنون من رؤية اقتراح لطلب الانضمام إلى منظمتك.",
+ "headerTitle": "اقتراحات تلقائية",
+ "primaryButton": "إدارة النطاقات الموثقة"
+ },
+ "menuAction__approve": "الموافقة",
+ "menuAction__reject": "رفض",
+ "tableHeader__requested": "طلب الوصول",
+ "table__emptyRow": "لا توجد طلبات لعرضها"
+ },
+ "start": {
+ "headerTitle__invitations": "دعوات",
+ "headerTitle__members": "أعضاء",
+ "headerTitle__requests": "طلبات"
+ }
+ },
+ "navbar": {
+ "description": "إدارة منظمتك.",
+ "general": "عام",
+ "members": "أعضاء",
+ "title": "المنظمة"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "اكتب \"{{organizationName}}\" أدناه للمتابعة.",
+ "messageLine1": "هل أنت متأكد أنك تريد حذف هذه المنظمة؟",
+ "messageLine2": "هذا الإجراء دائم ولا يمكن التراجع عنه.",
+ "successMessage": "لقد حذفت المنظمة.",
+ "title": "حذف المنظمة"
+ },
+ "leaveOrganization": {
+ "actionDescription": "اكتب \"{{organizationName}}\" أدناه للمتابعة.",
+ "messageLine1": "هل أنت متأكد أنك تريد مغادرة هذه المنظمة؟ ستفقد الوصول إلى هذه المنظمة وتطبيقاتها.",
+ "messageLine2": "هذا الإجراء دائم ولا يمكن التراجع عنه.",
+ "successMessage": "لقد غادرت المنظمة.",
+ "title": "مغادرة المنظمة"
+ },
+ "title": "خطر"
+ },
+ "domainSection": {
+ "menuAction__manage": "إدارة",
+ "menuAction__remove": "حذف",
+ "menuAction__verify": "التحقق",
+ "primaryButton": "إضافة النطاق",
+ "subtitle": "اسمح للمستخدمين بالانضمام إلى المنظمة تلقائيًا أو طلب الانضمام بناءً على نطاق بريد إلكتروني موثق.",
+ "title": "النطاقات الموثقة"
+ },
+ "successMessage": "تم تحديث المنظمة.",
+ "title": "تحديث الملف الشخصي"
+ },
+ "removeDomainPage": {
+ "messageLine1": "سيتم إزالة نطاق البريد الإلكتروني {{domain}}.",
+ "messageLine2": "لن يتمكن المستخدمون من الانضمام إلى المنظمة تلقائيًا بعد ذلك.",
+ "successMessage": "تمت إزالة {{domain}}.",
+ "title": "إزالة النطاق"
+ },
+ "start": {
+ "headerTitle__general": "عام",
+ "headerTitle__members": "أعضاء",
+ "profileSection": {
+ "primaryButton": "تحديث الملف الشخصي",
+ "title": "ملف المنظمة",
+ "uploadAction__title": "شعار"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "سيؤثر إزالة هذا النطاق على المستخدمين المدعوين.",
+ "removeDomainActionLabel__remove": "إزالة النطاق",
+ "removeDomainSubtitle": "قم بإزالة هذا النطاق من نطاقاتك الموثقة",
+ "removeDomainTitle": "إزالة النطاق"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "يتم دعوة المستخدمين تلقائيًا للانضمام إلى المنظمة عند تسجيلهم، ويمكنهم الانضمام في أي وقت.",
+ "automaticInvitationOption__label": "دعوات تلقائية",
+ "automaticSuggestionOption__description": "يتلقى المستخدمون اقتراحًا لطلب الانضمام، ولكن يجب أن يتمتعوا بموافقة من مسؤول قبل الانضمام إلى المنظمة.",
+ "automaticSuggestionOption__label": "اقتراحات تلقائية",
+ "calloutInfoLabel": "تؤثر تغيير وضع التسجيل فقط على المستخدمين الجدد.",
+ "calloutInvitationCountLabel": "الدعوات المعلقة المرسلة للمستخدمين: {{count}}",
+ "calloutSuggestionCountLabel": "الاقتراحات المعلقة المرسلة للمستخدمين: {{count}}",
+ "manualInvitationOption__description": "يمكن فقط دعوة المستخدمين يدويًا إلى المنظمة.",
+ "manualInvitationOption__label": "لا تسجيل تلقائي",
+ "subtitle": "اختر كيف يمكن للمستخدمين من هذا النطاق الانضمام إلى المنظمة."
+ },
+ "start": {
+ "headerTitle__danger": "خطر",
+ "headerTitle__enrollment": "خيارات التسجيل"
+ },
+ "subtitle": "تم التحقق الآن من النطاق {{domain}}. تابع باختيار وضع التسجيل.",
+ "title": "تحديث {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "أدخل رمز التحقق المرسل إلى عنوان بريدك الإلكتروني",
+ "formTitle": "رمز التحقق",
+ "resendButton": "لم تتلقى الرمز؟ إعادة الإرسال",
+ "subtitle": "يجب التحقق من النطاق {{domainName}} عبر البريد الإلكتروني.",
+ "subtitleVerificationCodeScreen": "تم إرسال رمز التحقق إلى {{emailAddress}}. أدخل الرمز للمتابعة.",
+ "title": "التحقق من النطاق"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "إنشاء منظمة",
+ "action__invitationAccept": "الانضمام",
+ "action__manageOrganization": "إدارة",
+ "action__suggestionsAccept": "طلب الانضمام",
+ "notSelected": "لم يتم تحديد أي منظمة",
+ "personalWorkspace": "الحساب الشخصي",
+ "suggestionsAcceptedLabel": "في انتظار الموافقة"
+ },
+ "paginationButton__next": "التالي",
+ "paginationButton__previous": "السابق",
+ "paginationRowText__displaying": "عرض",
+ "paginationRowText__of": "من",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "إضافة حساب",
+ "action__signOutAll": "تسجيل الخروج من جميع الحسابات",
+ "subtitle": "اختر الحساب الذي ترغب في الاستمرار به.",
+ "title": "اختر حسابًا"
+ },
+ "alternativeMethods": {
+ "actionLink": "احصل على مساعدة",
+ "actionText": "ليس لديك أحد هذه؟",
+ "blockButton__backupCode": "استخدام رمز الاحتياطي",
+ "blockButton__emailCode": "إرسال رمز بريد إلكتروني إلى {{identifier}}",
+ "blockButton__emailLink": "إرسال رابط بريد إلكتروني إلى {{identifier}}",
+ "blockButton__passkey": "تسجيل الدخول برمز الدخول",
+ "blockButton__password": "تسجيل الدخول بكلمة المرور",
+ "blockButton__phoneCode": "إرسال رمز SMS إلى {{identifier}}",
+ "blockButton__totp": "استخدام تطبيق الموثق الخاص بك",
+ "getHelp": {
+ "blockButton__emailSupport": "الدعم عبر البريد الإلكتروني",
+ "content": "إذا كنت تواجه صعوبة في تسجيل الدخول إلى حسابك، ارسل لنا بريدًا إلكترونيًا وسنعمل معك لاستعادة الوصول في أقرب وقت ممكن.",
+ "title": "احصل على مساعدة"
+ },
+ "subtitle": "هل تواجه مشاكل؟ يمكنك استخدام أي من هذه الطرق لتسجيل الدخول.",
+ "title": "استخدم طريقة أخرى"
+ },
+ "backupCodeMfa": {
+ "subtitle": "رمز الاحتياطي هو الرمز الذي حصلت عليه عند إعداد المصادقة ثنائية العامل.",
+ "title": "أدخل رمز الاحتياطي"
+ },
+ "emailCode": {
+ "formTitle": "رمز التحقق",
+ "resendButton": "لم تستلم الرمز؟ إعادة الإرسال",
+ "subtitle": "للمتابعة إلى {{applicationName}}",
+ "title": "تحقق من بريدك الإلكتروني"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "الرجاء العودة إلى التبويب الأصلي للمتابعة.",
+ "title": "انتهت صلاحية رابط التحقق هذا"
+ },
+ "failed": {
+ "subtitle": "الرجاء العودة إلى التبويب الأصلي للمتابعة.",
+ "title": "رابط التحقق هذا غير صالح"
+ },
+ "formSubtitle": "استخدم الرابط المرسل إلى بريدك الإلكتروني للتحقق",
+ "formTitle": "رابط التحقق",
+ "loading": {
+ "subtitle": "سيتم توجيهك قريبًا",
+ "title": "تسجيل الدخول..."
+ },
+ "resendButton": "لم تستلم الرابط؟ إعادة الإرسال",
+ "subtitle": "للمتابعة إلى {{applicationName}}",
+ "title": "تحقق من بريدك الإلكتروني",
+ "unusedTab": {
+ "title": "يمكنك إغلاق هذا التبويب"
+ },
+ "verified": {
+ "subtitle": "سيتم توجيهك قريبًا",
+ "title": "تم تسجيل الدخول بنجاح"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "الرجوع إلى التبويب الأصلي للمتابعة",
+ "subtitleNewTab": "الرجوع إلى التبويب الجديد للمتابعة",
+ "titleNewTab": "تم تسجيل الدخول على تبويب آخر"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "رمز إعادة تعيين كلمة المرور",
+ "resendButton": "لم تستلم الرمز؟ إعادة الإرسال",
+ "subtitle": "لإعادة تعيين كلمة المرور",
+ "subtitle_email": "أدخل أولًا الرمز المرسل إلى عنوان بريدك الإلكتروني",
+ "subtitle_phone": "أدخل أولًا الرمز المرسل إلى هاتفك",
+ "title": "إعادة تعيين كلمة المرور"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "إعادة تعيين كلمة المرور",
+ "label__alternativeMethods": "أو، قم بتسجيل الدخول باستخدام طريقة أخرى",
+ "title": "نسيت كلمة المرور؟"
+ },
+ "noAvailableMethods": {
+ "message": "لا يمكن المتابعة في عملية تسجيل الدخول. لا يوجد عامل مصادقة متاح.",
+ "subtitle": "حدث خطأ",
+ "title": "لا يمكن تسجيل الدخول"
+ },
+ "passkey": {
+ "subtitle": "استخدام رمز الدخول يؤكد أنك أنت. قد يطلب جهازك بصمة الإصبع أو الوجه أو قفل الشاشة.",
+ "title": "استخدام رمز الدخول"
+ },
+ "password": {
+ "actionLink": "استخدام طريقة أخرى",
+ "subtitle": "أدخل كلمة المرور المرتبطة بحسابك",
+ "title": "أدخل كلمة المرور"
+ },
+ "passwordPwned": {
+ "title": "تم تسريب كلمة المرور"
+ },
+ "phoneCode": {
+ "formTitle": "رمز التحقق",
+ "resendButton": "لم تستلم الرمز؟ إعادة الإرسال",
+ "subtitle": "للمتابعة إلى {{applicationName}}",
+ "title": "تحقق من هاتفك"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "رمز التحقق",
+ "resendButton": "لم تستلم الرمز؟ إعادة الإرسال",
+ "subtitle": "للمتابعة، يرجى إدخال رمز التحقق المرسل إلى هاتفك",
+ "title": "تحقق من هاتفك"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "إعادة تعيين كلمة المرور",
+ "requiredMessage": "من الضروري إعادة تعيين كلمة المرور لأسباب أمنية.",
+ "successMessage": "تم تغيير كلمة المرور بنجاح. جارٍ تسجيل الدخول، يرجى الانتظار لحظة.",
+ "title": "تعيين كلمة مرور جديدة"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "نحتاج إلى التحقق من هويتك قبل إعادة تعيين كلمة المرور."
+ },
+ "start": {
+ "actionLink": "التسجيل",
+ "actionLink__use_email": "استخدام البريد الإلكتروني",
+ "actionLink__use_email_username": "استخدام البريد الإلكتروني أو اسم المستخدم",
+ "actionLink__use_passkey": "استخدام رمز الدخول بدلاً",
+ "actionLink__use_phone": "استخدام الهاتف",
+ "actionLink__use_username": "استخدام اسم المستخدم",
+ "actionText": "ليس لديك حساب؟",
+ "subtitle": "مرحبًا! يرجى ملء التفاصيل للبدء.",
+ "title": "إنشاء حسابك"
+ },
+ "totpMfa": {
+ "formTitle": "رمز التحقق",
+ "subtitle": "للمتابعة، يرجى إدخال رمز التحقق الذي تم توليده بواسطة تطبيق الموثق الخاص بك",
+ "title": "التحقق الثنائي الخطوة"
+ }
+ },
+ "signInEnterPasswordTitle": "أدخل كلمة المرور الخاصة بك",
+ "signUp": {
+ "continue": {
+ "actionLink": "تسجيل الدخول",
+ "actionText": "هل لديك حساب بالفعل؟",
+ "subtitle": "يرجى ملء التفاصيل المتبقية للمتابعة.",
+ "title": "املأ الحقول الناقصة"
+ },
+ "emailCode": {
+ "formSubtitle": "أدخل رمز التحقق المرسل إلى عنوان بريدك الإلكتروني",
+ "formTitle": "رمز التحقق",
+ "resendButton": "لم تستلم الرمز؟ إعادة الإرسال",
+ "subtitle": "أدخل رمز التحقق المرسل إلى بريدك الإلكتروني",
+ "title": "تحقق من بريدك الإلكتروني"
+ },
+ "emailLink": {
+ "formSubtitle": "استخدم الرابط المرسل إلى عنوان بريدك الإلكتروني للتحقق",
+ "formTitle": "رابط التحقق",
+ "loading": {
+ "title": "جارٍ التسجيل..."
+ },
+ "resendButton": "لم تستلم الرابط؟ إعادة الإرسال",
+ "subtitle": "للمتابعة إلى {{applicationName}}",
+ "title": "تحقق من بريدك الإلكتروني",
+ "verified": {
+ "title": "تم التسجيل بنجاح"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "الرجوع إلى التبويب الجديد للمتابعة",
+ "subtitleNewTab": "الرجوع إلى التبويب السابق للمتابعة",
+ "title": "تم التحقق بنجاح من البريد الإلكتروني"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "أدخل رمز التحقق المرسل إلى رقم هاتفك",
+ "formTitle": "رمز التحقق",
+ "resendButton": "لم تستلم الرمز؟ إعادة الإرسال",
+ "subtitle": "أدخل رمز التحقق المرسل إلى هاتفك",
+ "title": "تحقق من هاتفك"
+ },
+ "start": {
+ "actionLink": "تسجيل الدخول",
+ "actionText": "هل لديك حساب بالفعل؟",
+ "subtitle": "مرحبًا! يرجى ملء التفاصيل للبدء.",
+ "title": "إنشاء حسابك"
+ }
+ },
+ "socialButtonsBlockButton": "المتابعة مع {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "فشل تسجيل الدخول بسبب فشل التحقق من الأمان. يرجى تحديث الصفحة للمحاولة مرة أخرى أو التواصل مع الدعم للمزيد من المساعدة.",
+ "captcha_unavailable": "فشل تسجيل الدخول بسبب فشل التحقق من الروبوت. يرجى تحديث الصفحة للمحاولة مرة أخرى أو التواصل مع الدعم للمزيد من المساعدة.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "هذا البريد الإلكتروني مستخدم. يرجى المحاولة بعنوان آخر.",
+ "form_identifier_exists__phone_number": "هذا الرقم مستخدم. يرجى المحاولة برقم آخر.",
+ "form_identifier_exists__username": "اسم المستخدم هذا مستخدم. يرجى المحاولة بآخر.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "يجب أن يكون عنوان البريد الإلكتروني صالحًا.",
+ "form_param_format_invalid__phone_number": "يجب أن يكون رقم الهاتف بتنسيق دولي صالح.",
+ "form_param_max_length_exceeded__first_name": "يجب ألا يتجاوز الاسم الأول 256 حرفًا.",
+ "form_param_max_length_exceeded__last_name": "يجب ألا يتجاوز الاسم الأخير 256 حرفًا.",
+ "form_param_max_length_exceeded__name": "يجب ألا يتجاوز الاسم 256 حرفًا.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "كلمة المرور الخاصة بك غير قوية بما فيه الكفاية.",
+ "form_password_pwned": "تم العثور على هذه كلمة المرور كجزء من اختراق ولا يمكن استخدامها، يرجى تجربة كلمة مرور أخرى بدلاً منها.",
+ "form_password_pwned__sign_in": "تم العثور على هذه كلمة المرور كجزء من اختراق ولا يمكن استخدامها، يرجى إعادة تعيين كلمة المرور الخاصة بك.",
+ "form_password_size_in_bytes_exceeded": "لقد تجاوزت كلمة المرور الحد الأقصى المسموح به من البايتات، يرجى تقصيرها أو إزالة بعض الرموز الخاصة.",
+ "form_password_validation_failed": "كلمة المرور غير صحيحة",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "لا يمكنك حذف هويتك الأخيرة.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "تم تسجيل مفتاح الوصول مسبقًا بهذا الجهاز.",
+ "passkey_not_supported": "مفاتيح الوصول غير مدعومة على هذا الجهاز.",
+ "passkey_pa_not_supported": "التسجيل يتطلب مصادق النظام ولكن الجهاز لا يدعم ذلك.",
+ "passkey_registration_cancelled": "تم إلغاء تسجيل مفتاح الوصول أو انتهت صلاحيته.",
+ "passkey_retrieval_cancelled": "تم إلغاء استرداد مفتاح الوصول أو انتهت صلاحيته.",
+ "passwordComplexity": {
+ "maximumLength": "أقل من {{length}} حرف",
+ "minimumLength": "{{length}} أو أكثر من الأحرف",
+ "requireLowercase": "حرف صغير",
+ "requireNumbers": "رقم",
+ "requireSpecialCharacter": "رمز خاص",
+ "requireUppercase": "حرف كبير",
+ "sentencePrefix": "يجب أن تحتوي كلمة المرور الخاصة بك على"
+ },
+ "phone_number_exists": "هذا الرقم مستخدم. يرجى المحاولة برقم آخر.",
+ "zxcvbn": {
+ "couldBeStronger": "كلمة المرور الخاصة بك تعمل، ولكن يمكن أن تكون أقوى. جرب إضافة المزيد من الأحرف.",
+ "goodPassword": "كلمة المرور الخاصة بك تلبي جميع المتطلبات اللازمة.",
+ "notEnough": "كلمة المرور الخاصة بك ليست قوية بما فيه الكفاية.",
+ "suggestions": {
+ "allUppercase": "قم بتحويل بعض الحروف إلى أحرف كبيرة، ولكن ليس كلها.",
+ "anotherWord": "أضف المزيد من الكلمات غير الشائعة.",
+ "associatedYears": "تجنب السنوات المرتبطة بك.",
+ "capitalization": "استخدم الحروف الكبيرة أكثر من الحرف الأول فقط.",
+ "dates": "تجنب التواريخ والسنوات المرتبطة بك.",
+ "l33t": "تجنب التبديلات التنبؤية للحروف مثل '@' بدلاً من 'a'.",
+ "longerKeyboardPattern": "استخدم أنماط لوحة المفاتيح الطويلة وغير اتجاه الكتابة عدة مرات.",
+ "noNeed": "يمكنك إنشاء كلمات مرور قوية دون استخدام رموز أو أرقام أو حروف كبيرة.",
+ "pwned": "إذا استخدمت هذه كلمة المرور في مكان آخر، يجب عليك تغييرها.",
+ "recentYears": "تجنب السنوات الحديثة.",
+ "repeated": "تجنب تكرار الكلمات والأحرف.",
+ "reverseWords": "تجنب تهجئة الكلمات الشائعة بشكل معكوس.",
+ "sequences": "تجنب تسلسلات الأحرف الشائعة.",
+ "useWords": "استخدم كلمات متعددة، ولكن تجنب العبارات الشائعة."
+ },
+ "warnings": {
+ "common": "هذه كلمة مرور شائعة الاستخدام.",
+ "commonNames": "الأسماء الشائعة سهلة التخمين.",
+ "dates": "التواريخ سهلة التخمين.",
+ "extendedRepeat": "أنماط الحروف المتكررة مثل \"abcabcabc\" سهلة التخمين.",
+ "keyPattern": "أنماط لوحة المفاتيح القصيرة سهلة التخمين.",
+ "namesByThemselves": "الأسماء الفردية أو الأسماء العائلية سهلة التخمين.",
+ "pwned": "تمت تعريض كلمة المرور الخاصة بك في اختراق على الإنترنت.",
+ "recentYears": "السنوات الحديثة سهلة التخمين.",
+ "sequences": "تسلسلات الأحرف الشائعة مثل \"abc\" سهلة التخمين.",
+ "similarToCommon": "هذا مشابه لكلمة مرور شائعة الاستخدام.",
+ "simpleRepeat": "الحروف المتكررة مثل \"aaa\" سهلة التخمين.",
+ "straightRow": "صفوف الحروف المستقيمة على لوحة المفاتيح سهلة التخمين.",
+ "topHundred": "هذه كلمة مرور تستخدم بكثرة.",
+ "topTen": "هذه كلمة مرور تستخدم بشكل كبير.",
+ "userInputs": "يجب ألا يكون هناك أي بيانات شخصية أو ذات صلة بالصفحة.",
+ "wordByItself": "الكلمات الفردية سهلة التخمين."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "إضافة حساب",
+ "action__manageAccount": "إدارة الحساب",
+ "action__signOut": "تسجيل الخروج",
+ "action__signOutAll": "تسجيل الخروج من جميع الحسابات"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "تم النسخ!",
+ "actionLabel__copy": "نسخ الكل",
+ "actionLabel__download": "تحميل .txt",
+ "actionLabel__print": "طباعة",
+ "infoText1": "سيتم تمكين رموز النسخ الاحتياطي لهذا الحساب.",
+ "infoText2": "احتفظ برموز النسخ الاحتياطي بسرية وقم بتخزينها بشكل آمن. يمكنك إعادة إنشاء رموز النسخ الاحتياطي إذا كنت تشتبه في تعرضها للخطر.",
+ "subtitle__codelist": "قم بتخزينها بشكل آمن واحتفظ بها سرية.",
+ "successMessage": "تم تمكين رموز النسخ الاحتياطي الآن. يمكنك استخدام أحد هذه الرموز لتسجيل الدخول إلى حسابك، إذا فقدت الوصول إلى جهاز المصادقة الخاص بك. يمكن استخدام كل رمز مرة واحدة فقط.",
+ "successSubtitle": "يمكنك استخدام أحد هذه الرموز لتسجيل الدخول إلى حسابك، إذا فقدت الوصول إلى جهاز المصادقة الخاص بك.",
+ "title": "إضافة التحقق برمز النسخ الاحتياطي",
+ "title__codelist": "رموز النسخ الاحتياطي"
+ },
+ "connectedAccountPage": {
+ "formHint": "حدد مزودًا للاتصال بحسابك.",
+ "formHint__noAccounts": "لا توجد مزودي حساب خارجي متاحين.",
+ "removeResource": {
+ "messageLine1": "سيتم إزالة {{identifier}} من هذا الحساب.",
+ "messageLine2": "لن تتمكن بعد الآن من استخدام هذا الحساب المتصل ولن تعمل أي ميزات تعتمد عليه.",
+ "successMessage": "تمت إزالة {{connectedAccount}} من حسابك.",
+ "title": "إزالة الحساب المتصل"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "تمت إضافة المزود إلى حسابك",
+ "title": "إضافة حساب متصل"
+ },
+ "deletePage": {
+ "actionDescription": "اكتب \"حذف الحساب\" أدناه للمتابعة.",
+ "confirm": "حذف الحساب",
+ "messageLine1": "هل أنت متأكد أنك تريد حذف حسابك؟",
+ "messageLine2": "هذا الإجراء دائم ولا يمكن التراجع عنه.",
+ "title": "حذف الحساب"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "سيتم إرسال بريد إلكتروني يحتوي على رمز التحقق إلى هذا العنوان الإلكتروني.",
+ "formSubtitle": "أدخل رمز التحقق المرسل إلى {{identifier}}",
+ "formTitle": "رمز التحقق",
+ "resendButton": "لم تستلم الرمز؟ إعادة الإرسال",
+ "successMessage": "تمت إضافة البريد الإلكتروني {{identifier}} إلى حسابك."
+ },
+ "emailLink": {
+ "formHint": "سيتم إرسال بريد إلكتروني يحتوي على رابط التحقق إلى هذا العنوان الإلكتروني.",
+ "formSubtitle": "انقر على الرابط في البريد الإلكتروني المرسل إلى {{identifier}}",
+ "formTitle": "رابط التحقق",
+ "resendButton": "لم تستلم الرابط؟ إعادة الإرسال",
+ "successMessage": "تمت إضافة البريد الإلكتروني {{identifier}} إلى حسابك."
+ },
+ "removeResource": {
+ "messageLine1": "سيتم إزالة {{identifier}} من هذا الحساب.",
+ "messageLine2": "لن تتمكن بعد الآن من تسجيل الدخول باستخدام هذا العنوان الإلكتروني.",
+ "successMessage": "تمت إزالة {{emailAddress}} من حسابك.",
+ "title": "إزالة عنوان البريد الإلكتروني"
+ },
+ "title": "إضافة عنوان بريد إلكتروني",
+ "verifyTitle": "تحقق من عنوان البريد الإلكتروني"
+ },
+ "formButtonPrimary__add": "إضافة",
+ "formButtonPrimary__continue": "متابعة",
+ "formButtonPrimary__finish": "إنهاء",
+ "formButtonPrimary__remove": "إزالة",
+ "formButtonPrimary__save": "حفظ",
+ "formButtonReset": "إلغاء",
+ "mfaPage": {
+ "formHint": "حدد طريقة للإضافة.",
+ "title": "إضافة التحقق بخطوتين"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "استخدام الرقم الحالي",
+ "primaryButton__addPhoneNumber": "إضافة رقم الهاتف",
+ "removeResource": {
+ "messageLine1": "لن يتم استقبال رموز التحقق من هذا الرقم بعد الآن عند تسجيل الدخول.",
+ "messageLine2": "قد لا يكون حسابك آمنًا. هل أنت متأكد من رغبتك في المتابعة؟",
+ "successMessage": "تمت إزالة التحقق بخطوتين عبر رمز SMS لـ {{mfaPhoneCode}}",
+ "title": "إزالة التحقق بخطوتين"
+ },
+ "subtitle__availablePhoneNumbers": "حدد رقم هاتف موجود للتسجيل في التحقق بخطوتين عبر رمز SMS أو أضف واحدًا جديدًا.",
+ "subtitle__unavailablePhoneNumbers": "لا توجد أرقام هواتف متاحة للتسجيل في التحقق بخطوتين عبر رمز SMS، يرجى إضافة واحدة جديدة.",
+ "successMessage1": "عند تسجيل الدخول، ستحتاج إلى إدخال رمز التحقق المرسل إلى هذا الرقم كخطوة إضافية.",
+ "successMessage2": "احفظ هذه الرموز الاحتياطية وقم بتخزينها في مكان آمن. إذا فقدت الوصول إلى جهاز المصادقة الخاص بك، يمكنك استخدام رموز النسخ الاحتياطي لتسجيل الدخول.",
+ "successTitle": "تم تمكين التحقق برمز SMS",
+ "title": "إضافة التحقق برمز SMS"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "مسح رمز الاستجابة السريعة بدلاً من ذلك",
+ "buttonUnableToScan__nonPrimary": "لا يمكن مسح رمز الاستجابة السريعة؟",
+ "infoText__ableToScan": "قم بإعداد طريقة تسجيل دخول جديدة في تطبيق المصادقة الخاص بك وامسح رمز الاستجابة السريعة التالي لربطه بحسابك.",
+ "infoText__unableToScan": "قم بإعداد طريقة تسجيل دخول جديدة في تطبيق المصادقة الخاص بك وأدخل المفتاح المقدم أدناه.",
+ "inputLabel__unableToScan1": "تأكد من تمكين كلمة المرور الزمنية أو كلمات المرور لمرة واحدة، ثم انهي ربط حسابك.",
+ "inputLabel__unableToScan2": "بديلًا، إذا كان جهاز المصادقة الخاص بك يدعم TOTP URIs، يمكنك أيضًا نسخ الرابط الكامل."
+ },
+ "removeResource": {
+ "messageLine1": "لن تكون هناك حاجة لرموز التحقق من هذا التطبيق بعد الآن عند تسجيل الدخول.",
+ "messageLine2": "قد لا يكون حسابك آمنًا. هل أنت متأكد من رغبتك في المتابعة؟",
+ "successMessage": "تمت إزالة التحقق بخطوتين عبر تطبيق المصادقة.",
+ "title": "إزالة التحقق بخطوتين"
+ },
+ "successMessage": "تم تمكين التحقق بخطوتين الآن. عند تسجيل الدخول، ستحتاج إلى إدخال رمز التحقق من هذا التطبيق كخطوة إضافية.",
+ "title": "إضافة تطبيق المصادقة",
+ "verifySubtitle": "أدخل رمز التحقق الذي تم إنشاؤه بواسطة تطبيق المصادقة الخاص بك",
+ "verifyTitle": "رمز التحقق"
+ },
+ "mobileButton__menu": "القائمة",
+ "navbar": {
+ "account": "الملف الشخصي",
+ "description": "إدارة معلومات حسابك.",
+ "security": "الأمان",
+ "title": "الحساب"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} سيتم إزالته من هذا الحساب.",
+ "title": "إزالة مفتاح المرور"
+ },
+ "subtitle__rename": "يمكنك تغيير اسم مفتاح المرور لتسهيل العثور عليه.",
+ "title__rename": "إعادة تسمية مفتاح المرور"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "يُوصى بتسجيل الخروج من جميع الأجهزة الأخرى التي قد تكون استخدمت كلمة المرور القديمة الخاصة بك.",
+ "readonly": "لا يمكن تحرير كلمة المرور الخاصة بك حاليًا لأنه يمكنك تسجيل الدخول فقط عبر الاتصال بالشركة.",
+ "successMessage__set": "تم تعيين كلمة المرور الخاصة بك.",
+ "successMessage__signOutOfOtherSessions": "تم تسجيل الخروج من جميع الأجهزة الأخرى.",
+ "successMessage__update": "تم تحديث كلمة المرور الخاصة بك.",
+ "title__set": "تعيين كلمة المرور",
+ "title__update": "تحديث كلمة المرور"
+ },
+ "phoneNumberPage": {
+ "infoText": "سيتم إرسال رسالة نصية تحتوي على رمز التحقق إلى هذا الرقم. قد تنطبق رسوم الرسائل والبيانات.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} سيتم إزالته من هذا الحساب.",
+ "messageLine2": "لن تتمكن بعد الآن من تسجيل الدخول باستخدام هذا الرقم.",
+ "successMessage": "تمت إزالة {{phoneNumber}} من حسابك.",
+ "title": "إزالة رقم الهاتف"
+ },
+ "successMessage": "{{identifier}} تمت إضافته إلى حسابك.",
+ "title": "إضافة رقم الهاتف",
+ "verifySubtitle": "أدخل رمز التحقق المرسل إلى {{identifier}}",
+ "verifyTitle": "تحقق من رقم الهاتف"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "الحجم الموصى به 1:1، حتى 10 ميغابايت.",
+ "imageFormDestructiveActionSubtitle": "إزالة",
+ "imageFormSubtitle": "تحميل",
+ "imageFormTitle": "صورة الملف الشخصي",
+ "readonly": "تم توفير معلومات ملفك الشخصي من خلال الاتصال بالشركة ولا يمكن تحريرها.",
+ "successMessage": "تم تحديث ملفك الشخصي.",
+ "title": "تحديث الملف الشخصي"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "تسجيل الخروج من الجهاز",
+ "title": "الأجهزة النشطة"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "حاول مرة أخرى",
+ "actionLabel__reauthorize": "التفويض الآن",
+ "destructiveActionTitle": "إزالة",
+ "primaryButton": "ربط الحساب",
+ "subtitle__reauthorize": "تم تحديث النطاقات المطلوبة، وقد تواجه قدرًا محدودًا من الوظائف. يرجى إعادة تفويض هذا التطبيق لتجنب أي مشاكل",
+ "title": "الحسابات المتصلة"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "حذف الحساب",
+ "title": "حذف الحساب"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "إزالة البريد الإلكتروني",
+ "detailsAction__nonPrimary": "تعيين كأساسي",
+ "detailsAction__primary": "اكتمال التحقق",
+ "detailsAction__unverified": "التحقق",
+ "primaryButton": "إضافة عنوان بريد إلكتروني",
+ "title": "عناوين البريد الإلكتروني"
+ },
+ "enterpriseAccountsSection": {
+ "title": "حسابات المؤسسة"
+ },
+ "headerTitle__account": "تفاصيل الملف الشخصي",
+ "headerTitle__security": "الأمان",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "إعادة إنشاء",
+ "headerTitle": "رموز النسخ الاحتياطي",
+ "subtitle__regenerate": "احصل على مجموعة جديدة من رموز النسخ الاحتياطي الآمنة. سيتم حذف رموز النسخ الاحتياطي السابقة ولا يمكن استخدامها.",
+ "title__regenerate": "إعادة إنشاء رموز النسخ الاحتياطي"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "تعيين كافتراضي",
+ "destructiveActionLabel": "إزالة"
+ },
+ "primaryButton": "إضافة التحقق من خطوتين",
+ "title": "التحقق من خطوتين",
+ "totp": {
+ "destructiveActionTitle": "إزالة",
+ "headerTitle": "تطبيق المصادقة"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "إزالة",
+ "menuAction__rename": "إعادة تسمية",
+ "title": "مفاتيح المرور"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "تعيين كلمة مرور",
+ "primaryButton__updatePassword": "تحديث كلمة المرور",
+ "title": "كلمة المرور"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "إزالة رقم الهاتف",
+ "detailsAction__nonPrimary": "تعيين كأساسي",
+ "detailsAction__primary": "اكتمال التحقق",
+ "detailsAction__unverified": "التحقق من رقم الهاتف",
+ "primaryButton": "إضافة رقم الهاتف",
+ "title": "أرقام الهواتف"
+ },
+ "profileSection": {
+ "primaryButton": "تحديث الملف الشخصي",
+ "title": "الملف الشخصي"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "تعيين اسم المستخدم",
+ "primaryButton__updateUsername": "تحديث اسم المستخدم",
+ "title": "اسم المستخدم"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "إزالة المحفظة",
+ "primaryButton": "محافظ Web3",
+ "title": "محافظ Web3"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "تم تحديث اسم المستخدم الخاص بك.",
+ "title__set": "تعيين اسم المستخدم",
+ "title__update": "تحديث اسم المستخدم"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} سيتم إزالته من هذا الحساب.",
+ "messageLine2": "لن تتمكن بعد الآن من تسجيل الدخول باستخدام هذا المحفظة web3.",
+ "successMessage": "{{web3Wallet}} تمت إزالته من حسابك.",
+ "title": "إزالة محفظة web3"
+ },
+ "subtitle__availableWallets": "حدد محفظة web3 للاتصال بحسابك.",
+ "subtitle__unavailableWallets": "لا توجد محافظ web3 متاحة.",
+ "successMessage": "تمت إضافة المحفظة إلى حسابك.",
+ "title": "إضافة محفظة web3"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/common.json b/DigitalHumanWeb/locales/ar/common.json
new file mode 100644
index 0000000..d7b55c3
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "حول",
+ "advanceSettings": "إعدادات متقدمة",
+ "alert": {
+ "cloud": {
+ "action": "تجربة مجانية",
+ "desc": "نحن نقدم {{credit}} نقطة حساب مجانية لجميع المستخدمين المسجلين، بدون الحاجة إلى تكوين معقد، فقط قم بتشغيلها، تدعم تاريخ الدردشة غير المحدود ومزامنة السحابة العالمية، والمزيد من الميزات المتقدمة بانتظار استكشافها معًا.",
+ "descOnMobile": "نحن نقدم {{credit}} نقطة حساب مجانية لجميع المستخدمين المسجلين، بدون الحاجة إلى إعدادات معقدة، فقط قم بالاستخدام.",
+ "title": "مرحبًا بك في التجربة {{name}}"
+ }
+ },
+ "appInitializing": "جارٍ تشغيل التطبيق...",
+ "autoGenerate": "توليد تلقائي",
+ "autoGenerateTooltip": "إكمال تلقائي بناءً على الكلمات المقترحة لوصف المساعد",
+ "autoGenerateTooltipDisabled": "الرجاء إدخال كلمة تلميح قبل تفعيل وظيفة الإكمال التلقائي",
+ "back": "عودة",
+ "batchDelete": "حذف دفعة",
+ "blog": "مدونة المنتجات",
+ "cancel": "إلغاء",
+ "changelog": "سجل التغييرات",
+ "close": "إغلاق",
+ "contact": "اتصل بنا",
+ "copy": "نسخ",
+ "copyFail": "فشل في النسخ",
+ "copySuccess": "تم النسخ بنجاح",
+ "dataStatistics": {
+ "messages": "رسائل",
+ "sessions": "جلسات",
+ "today": "اليوم",
+ "topics": "مواضيع"
+ },
+ "defaultAgent": "مساعد افتراضي",
+ "defaultSession": "جلسة افتراضية",
+ "delete": "حذف",
+ "document": "وثيقة الاستخدام",
+ "download": "تحميل",
+ "duplicate": "إنشاء نسخة",
+ "edit": "تحرير",
+ "export": "تصدير الإعدادات",
+ "exportType": {
+ "agent": "تصدير إعدادات المساعد",
+ "agentWithMessage": "تصدير المساعد والرسائل",
+ "all": "تصدير الإعدادات العامة وجميع بيانات المساعد",
+ "allAgent": "تصدير جميع إعدادات المساعد",
+ "allAgentWithMessage": "تصدير جميع المساعدين والرسائل",
+ "globalSetting": "تصدير الإعدادات العامة"
+ },
+ "feedback": "تقديم ملاحظات",
+ "follow": "تابعنا على {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "مشاركة ملاحظاتك الثمينة",
+ "star": "قم بإضافة نجمة على GitHub"
+ },
+ "and": "و",
+ "feedback": {
+ "action": "مشاركة الملاحظات",
+ "desc": "كل فكرة ومقترح لديك ثمين بالنسبة لنا، نحن نتطلع بشوق لمعرفة آرائك! نرحب بالتواصل معنا لتقديم ملاحظاتك حول ميزات المنتج وتجربة الاستخدام، لمساعدتنا في تحسين LobeChat بشكل أفضل.",
+ "title": "مشاركة ملاحظاتك الثمينة على GitHub"
+ },
+ "later": "لاحقًا",
+ "star": {
+ "action": "قم بإضاءة النجمة",
+ "desc": "إذا كنت تحب منتجنا وترغب في دعمنا، هل يمكنك إضافة نجمة لنا على GitHub؟ هذا الإجراء الصغير له أهمية كبيرة بالنسبة لنا، حيث يمكن أن يلهمنا لتقديم تجربة ميزات مستمرة لك.",
+ "title": "قم بإضاءة النجمة لنا على GitHub"
+ },
+ "title": "هل تحب منتجنا؟"
+ },
+ "fullscreen": "وضع كامل الشاشة",
+ "historyRange": "نطاق التاريخ",
+ "import": "استيراد الإعدادات",
+ "importModal": {
+ "error": {
+ "desc": "عذرًا، حدث استثناء أثناء عملية استيراد البيانات. يرجى المحاولة مرة أخرى، أو <1>تقديم مشكلتك1>، وسنقوم بمساعدتك على الفور في تحديد المشكلة.",
+ "title": "فشل استيراد البيانات"
+ },
+ "finish": {
+ "onlySettings": "تم استيراد إعدادات النظام بنجاح",
+ "start": "ابدأ الاستخدام",
+ "subTitle": "تم استيراد البيانات بنجاح، وقت الاستيراد {{duration}} ثانية. تفاصيل الاستيراد كالتالي:",
+ "title": "اكتمال عملية الاستيراد"
+ },
+ "loading": "جاري استيراد البيانات، يرجى الانتظار...",
+ "preparing": "جاري تجهيز وحدة استيراد البيانات...",
+ "result": {
+ "added": "تمت الإضافة بنجاح",
+ "errors": "حدثت أخطاء أثناء الاستيراد",
+ "messages": "الرسائل",
+ "sessionGroups": "مجموعات الجلسة",
+ "sessions": "الجلسات",
+ "skips": "التخطيات",
+ "topics": "المواضيع",
+ "type": "نوع البيانات"
+ },
+ "title": "استيراد البيانات",
+ "uploading": {
+ "desc": "الملف الحالي كبير نسبيًا، يتم رفعه بجد...",
+ "restTime": "الوقت المتبقي",
+ "speed": "سرعة الرفع"
+ }
+ },
+ "information": "المجتمع والمعلومات",
+ "installPWA": "تثبيت تطبيق المتصفح",
+ "lang": {
+ "ar": "العربية",
+ "bg-BG": "البلغارية",
+ "bn": "البنغالية",
+ "cs-CZ": "التشيكية",
+ "da-DK": "الدنماركية",
+ "de-DE": "الألمانية",
+ "el-GR": "اليونانية",
+ "en": "الإنجليزية",
+ "en-US": "الإنجليزية",
+ "es-ES": "الإسبانية",
+ "fi-FI": "الفنلندية",
+ "fr-FR": "الفرنسية",
+ "hi-IN": "الهندية",
+ "hu-HU": "الهنغارية",
+ "id-ID": "الإندونيسية",
+ "it-IT": "الإيطالية",
+ "ja-JP": "اليابانية",
+ "ko-KR": "الكورية",
+ "nl-NL": "الهولندية",
+ "no-NO": "النرويجية",
+ "pl-PL": "البولندية",
+ "pt-BR": "البرتغالية",
+ "pt-PT": "البرتغالية",
+ "ro-RO": "الرومانية",
+ "ru-RU": "الروسية",
+ "sk-SK": "السلوفاكية",
+ "sr-RS": "الصربية",
+ "sv-SE": "السويدية",
+ "th-TH": "التايلاندية",
+ "tr-TR": "التركية",
+ "uk-UA": "الأوكرانية",
+ "vi-VN": "الفيتنامية",
+ "zh": "الصينية المبسطة",
+ "zh-CN": "الصينية المبسطة",
+ "zh-TW": "الصينية التقليدية"
+ },
+ "layoutInitializing": "جاري تحميل التخطيط...",
+ "legal": "بيان قانوني",
+ "loading": "جارِ التحميل...",
+ "mail": {
+ "business": "شراكات تجارية",
+ "support": "الدعم عبر البريد الإلكتروني"
+ },
+ "oauth": "تسجيل الدخول SSO",
+ "officialSite": "الموقع الرسمي",
+ "ok": "موافق",
+ "password": "كلمة المرور",
+ "pin": "تثبيت في الأعلى",
+ "pinOff": "إلغاء التثبيت",
+ "privacy": "سياسة الخصوصية",
+ "regenerate": "إعادة توليد",
+ "rename": "إعادة تسمية",
+ "reset": "إعادة تعيين",
+ "retry": "إعادة المحاولة",
+ "send": "إرسال",
+ "setting": "الإعدادات",
+ "share": "مشاركة",
+ "stop": "إيقاف",
+ "sync": {
+ "actions": {
+ "settings": "إعدادات المزامنة",
+ "sync": "مزامنة فورية"
+ },
+ "awareness": {
+ "current": "الجهاز الحالي"
+ },
+ "channel": "القناة",
+ "disabled": {
+ "actions": {
+ "enable": "تمكين المزامنة السحابية",
+ "settings": "تكوين معلمات المزامنة"
+ },
+ "desc": "بيانات الجلسة الحالية تُخزن فقط في هذا المتصفح. إذا كنت بحاجة إلى مزامنة البيانات بين عدة أجهزة، يرجى تكوين وتمكين المزامنة السحابية.",
+ "title": "لم يتم تشغيل مزامنة البيانات"
+ },
+ "enabled": {
+ "title": "مزامنة البيانات"
+ },
+ "status": {
+ "connecting": "جار الاتصال",
+ "disabled": "مزامنة غير مفعلة",
+ "ready": "متصل",
+ "synced": "تمت المزامنة",
+ "syncing": "جار المزامنة",
+ "unconnected": "فشل الاتصال"
+ },
+ "title": "حالة المزامنة",
+ "unconnected": {
+ "tip": "فشل اتصال خادم الإشارة، لن يتمكن من إنشاء قناة اتصال نقطية، يرجى التحقق من الشبكة وإعادة المحاولة"
+ }
+ },
+ "tab": {
+ "chat": "الدردشة",
+ "discover": "اكتشاف",
+ "files": "ملفات",
+ "me": "أنا",
+ "setting": "الإعدادات"
+ },
+ "telemetry": {
+ "allow": "السماح",
+ "deny": "رفض",
+ "desc": "نحن نأمل في الحصول على معلومات استخدامك بشكل مجهول لمساعدتنا في تحسين LobeChat وتوفير تجربة منتج أفضل لك. يمكنك إيقاف ذلك في أي وقت من \"الإعدادات\" - \"حول\".",
+ "learnMore": "معرفة المزيد",
+ "title": "مساعدة LobeChat في التحسن"
+ },
+ "temp": "مؤقت",
+ "terms": "شروط الخدمة",
+ "updateAgent": "تحديث معلومات الوكيل",
+ "upgradeVersion": {
+ "action": "ترقية",
+ "hasNew": "يوجد تحديث متاح",
+ "newVersion": "هناك إصدار جديد متاح: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "مستخدم مجهول",
+ "billing": "إدارة الفواتير",
+ "cloud": "تجربة {{name}}",
+ "data": "تخزين البيانات",
+ "defaultNickname": "مستخدم النسخة المجتمعية",
+ "discord": "الدعم المجتمعي",
+ "docs": "وثائق الاستخدام",
+ "email": "الدعم عبر البريد الإلكتروني",
+ "feedback": "تقديم ملاحظات واقتراحات",
+ "help": "مركز المساعدة",
+ "moveGuide": "تم نقل زر الإعدادات إلى هنا",
+ "plans": "خطط الاشتراك",
+ "preview": "المعاينة",
+ "profile": "إدارة الحساب",
+ "setting": "إعدادات التطبيق",
+ "usages": "إحصاءات الاستخدام"
+ },
+ "version": "الإصدار"
+}
diff --git a/DigitalHumanWeb/locales/ar/components.json b/DigitalHumanWeb/locales/ar/components.json
new file mode 100644
index 0000000..94b98c7
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "اسحب الملفات هنا، يدعم تحميل عدة صور.",
+ "dragFileDesc": "اسحب الصور والملفات هنا، يدعم تحميل عدة صور وملفات.",
+ "dragFileTitle": "تحميل الملفات",
+ "dragTitle": "تحميل الصور"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "إضافة إلى قاعدة المعرفة",
+ "addToOtherKnowledgeBase": "إضافة إلى قاعدة معرفة أخرى",
+ "batchChunking": "تقسيم دفعي",
+ "chunking": "تقسيم",
+ "chunkingTooltip": "قم بتقسيم الملف إلى عدة كتل نصية وتحويلها إلى متجهات، يمكن استخدامها في البحث الدلالي والمحادثة حول الملفات",
+ "confirmDelete": "سيتم حذف هذا الملف، ولن يمكن استعادته بعد الحذف، يرجى تأكيد العملية",
+ "confirmDeleteMultiFiles": "سيتم حذف {{count}} ملفًا محددًا، ولن يمكن استعادته بعد الحذف، يرجى تأكيد العملية",
+ "confirmRemoveFromKnowledgeBase": "سيتم إزالة {{count}} ملفًا محددًا من قاعدة المعرفة، لا يزال بإمكانك رؤية الملفات في جميع الملفات، يرجى تأكيد العملية",
+ "copyUrl": "نسخ الرابط",
+ "copyUrlSuccess": "تم نسخ عنوان الملف بنجاح",
+ "createChunkingTask": "جارٍ التحضير...",
+ "deleteSuccess": "تم حذف الملف بنجاح",
+ "downloading": "جارٍ تحميل الملف...",
+ "removeFromKnowledgeBase": "إزالة من قاعدة المعرفة",
+ "removeFromKnowledgeBaseSuccess": "تمت إزالة الملف بنجاح"
+ },
+ "bottom": "لقد وصلت إلى النهاية",
+ "config": {
+ "showFilesInKnowledgeBase": "عرض المحتوى في قاعدة المعرفة"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "رفع ملف",
+ "folder": "رفع مجلد",
+ "knowledgeBase": "إنشاء قاعدة معرفة جديدة"
+ },
+ "or": "أو",
+ "title": "قم بسحب الملف أو المجلد هنا"
+ },
+ "title": {
+ "createdAt": "تاريخ الإنشاء",
+ "size": "الحجم",
+ "title": "ملف"
+ },
+ "total": {
+ "fileCount": "إجمالي {{count}} عنصر",
+ "selectedCount": "تم تحديد {{count}} عنصر"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "لم يتم تحويل كتل النص بالكامل إلى متجهات، مما سيؤدي إلى عدم توفر وظيفة البحث الدلالي، لتحسين جودة البحث، يرجى تحويل كتل النص إلى متجهات",
+ "error": "فشل في تحويل البيانات إلى متجهات",
+ "errorResult": "فشل في تحويل البيانات إلى متجهات، يرجى التحقق والمحاولة مرة أخرى. سبب الفشل:",
+ "processing": "يتم تحويل كتل النص إلى متجهات، يرجى الانتظار",
+ "success": "تم تحويل جميع كتل النص الحالية إلى متجهات"
+ },
+ "embeddings": "تحويل إلى متجهات",
+ "status": {
+ "error": "فشل في التقسيم",
+ "errorResult": "فشل في التقسيم، يرجى التحقق والمحاولة مرة أخرى. سبب الفشل:",
+ "processing": "جارٍ التقسيم",
+ "processingTip": "الخادم يقوم بتقسيم كتل النص، إغلاق الصفحة لا يؤثر على تقدم التقسيم"
+ }
+ }
+ },
+ "GoBack": {
+ "back": "عودة"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "نموذج مخصص، الإعداد الافتراضي يدعم الاستدعاء الوظيفي والتعرف البصري، يرجى التحقق من قدرة النموذج على القيام بذلك بناءً على الحالة الفعلية",
+ "file": "يدعم هذا النموذج قراءة وتعرف الملفات المرفوعة",
+ "functionCall": "يدعم هذا النموذج استدعاء الوظائف",
+ "tokens": "يدعم هذا النموذج حتى {{tokens}} رمزًا في جلسة واحدة",
+ "vision": "يدعم هذا النموذج التعرف البصري"
+ },
+ "removed": "هذا النموذج لم يعد متوفر في القائمة، سيتم إزالته تلقائيًا إذا تم إلغاء تحديده"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "لا توجد نماذج ممكن تمكينها، يرجى الانتقال إلى الإعدادات لتمكينها",
+ "provider": "مزود"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/discover.json b/DigitalHumanWeb/locales/ar/discover.json
new file mode 100644
index 0000000..a3ab309
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "إضافة مساعد",
+ "addAgentAndConverse": "إضافة مساعد والدردشة",
+ "addAgentSuccess": "تمت الإضافة بنجاح",
+ "conversation": {
+ "l1": "مرحبًا، أنا **{{name}}**، يمكنك أن تسألني أي سؤال وسأبذل قصارى جهدي للإجابة ~",
+ "l2": "إليك مقدمة عن قدراتي: ",
+ "l3": "لنبدأ المحادثة!"
+ },
+ "description": "مقدمة المساعد",
+ "detail": "تفاصيل",
+ "list": "قائمة المساعدين",
+ "more": "المزيد",
+ "plugins": "دمج الإضافات",
+ "recentSubmits": "آخر التحديثات",
+ "suggestions": "اقتراحات ذات صلة",
+ "systemRole": "إعدادات المساعد",
+ "try": "جرب"
+ },
+ "back": "عودة إلى الاكتشاف",
+ "category": {
+ "assistant": {
+ "academic": "أكاديمي",
+ "all": "الكل",
+ "career": "مهنة",
+ "copywriting": "كتابة نصوص",
+ "design": "تصميم",
+ "education": "تعليم",
+ "emotions": "عواطف",
+ "entertainment": "ترفيه",
+ "games": "ألعاب",
+ "general": "عام",
+ "life": "حياة",
+ "marketing": "تسويق",
+ "office": "مكتب",
+ "programming": "برمجة",
+ "translation": "ترجمة"
+ },
+ "plugin": {
+ "all": "الكل",
+ "gaming-entertainment": "ألعاب وترفيه",
+ "life-style": "أسلوب حياة",
+ "media-generate": "توليد الوسائط",
+ "science-education": "علوم وتعليم",
+ "social": "وسائل التواصل الاجتماعي",
+ "stocks-finance": "أسواق مالية",
+ "tools": "أدوات عملية",
+ "web-search": "بحث على الويب"
+ }
+ },
+ "cleanFilter": "مسح الفلتر",
+ "create": "إنشاء",
+ "createGuide": {
+ "func1": {
+ "desc1": "ادخل إلى صفحة إعداد المساعد الذي ترغب في تقديمه من خلال الإعدادات في الزاوية العليا اليمنى من نافذة المحادثة;",
+ "desc2": "انقر على زر التقديم إلى سوق المساعدين في الزاوية العليا اليمنى.",
+ "tag": "الطريقة الأولى",
+ "title": "تقديم عبر LobeChat"
+ },
+ "func2": {
+ "button": "اذهب إلى مستودع مساعدي Github",
+ "desc": "إذا كنت ترغب في إضافة مساعد إلى الفهرس، يرجى استخدام agent-template.json أو agent-template-full.json لإنشاء إدخال في دليل الإضافات، كتابة وصف قصير ووضع علامات مناسبة، ثم إنشاء طلب سحب.",
+ "tag": "الطريقة الثانية",
+ "title": "تقديم عبر Github"
+ }
+ },
+ "dislike": "لا يعجبني",
+ "filter": "تصفية",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "جميع المؤلفين",
+ "followed": "المؤلفون المتابعون",
+ "title": "نطاق المؤلفين"
+ },
+ "contentLength": "أقل طول للسياق",
+ "maxToken": {
+ "title": "تعيين الحد الأقصى للطول (Token)",
+ "unlimited": "غير محدود"
+ },
+ "other": {
+ "functionCall": "دعم استدعاء الوظائف",
+ "title": "أخرى",
+ "vision": "دعم التعرف البصري",
+ "withKnowledge": "مع قاعدة المعرفة",
+ "withTool": "مع الإضافات"
+ },
+ "pricing": "أسعار النموذج",
+ "timePeriod": {
+ "all": "كل الوقت",
+ "day": "آخر 24 ساعة",
+ "month": "آخر 30 يومًا",
+ "title": "نطاق الوقت",
+ "week": "آخر 7 أيام",
+ "year": "آخر سنة"
+ }
+ },
+ "home": {
+ "featuredAssistants": "مساعدون مميزون",
+ "featuredModels": "نماذج مميزة",
+ "featuredProviders": "مزودو نماذج مميزون",
+ "featuredTools": "إضافات مميزة",
+ "more": "اكتشف المزيد"
+ },
+ "like": "أحب",
+ "models": {
+ "chat": "بدء المحادثة",
+ "contentLength": "أقصى طول للسياق",
+ "free": "مجاني",
+ "guide": "دليل الإعداد",
+ "list": "قائمة النماذج",
+ "more": "المزيد",
+ "parameterList": {
+ "defaultValue": "القيمة الافتراضية",
+ "docs": "عرض الوثائق",
+ "frequency_penalty": {
+ "desc": "تقوم هذه الإعدادات بتعديل تكرار استخدام النموذج لكلمات معينة ظهرت بالفعل في المدخلات. القيم الأعلى تقلل من احتمال تكرار هذه الكلمات، بينما القيم السلبية تعزز استخدامها. عقوبة الكلمات لا تزداد مع زيادة عدد مرات الظهور. القيم السلبية ستشجع على تكرار الكلمات.",
+ "title": "عقوبة التكرار"
+ },
+ "max_tokens": {
+ "desc": "تحدد هذه الإعدادات الحد الأقصى لطول النص الذي يمكن أن ينتجه النموذج في رد واحد. يسمح تعيين قيمة أعلى للنموذج بإنتاج ردود أطول، بينما تحدد القيمة المنخفضة طول الردود، مما يجعلها أكثر اختصارًا. يمكن أن يساعد ضبط هذه القيمة بشكل معقول وفقًا لمختلف سيناريوهات الاستخدام في تحقيق الطول والتفاصيل المتوقعة للرد.",
+ "title": "حد الرد الواحد"
+ },
+ "presence_penalty": {
+ "desc": "تهدف هذه الإعدادات إلى التحكم في تكرار استخدام الكلمات بناءً على تكرار ظهورها في المدخلات. تحاول تقليل استخدام الكلمات التي ظهرت كثيرًا في المدخلات، حيث يتناسب تكرار استخدامها مع تكرار ظهورها. عقوبة الكلمات تزداد مع عدد مرات الظهور. القيم السلبية ستشجع على تكرار الكلمات.",
+ "title": "جدة الموضوع"
+ },
+ "range": "نطاق",
+ "temperature": {
+ "desc": "تؤثر هذه الإعدادات على تنوع استجابة النموذج. القيم المنخفضة تؤدي إلى استجابات أكثر توقعًا ونمطية، بينما القيم الأعلى تشجع على استجابات أكثر تنوعًا وغير شائعة. عندما تكون القيمة 0، يعطي النموذج نفس الاستجابة دائمًا لنفس المدخل.",
+ "title": "عشوائية"
+ },
+ "title": "معلمات النموذج",
+ "top_p": {
+ "desc": "تحدد هذه الإعدادات اختيار النموذج للكلمات ذات الاحتمالية الأعلى فقط: اختيار الكلمات التي تصل احتمالاتها التراكمية إلى P. القيم المنخفضة تجعل استجابات النموذج أكثر توقعًا، بينما الإعداد الافتراضي يسمح للنموذج بالاختيار من جميع نطاق الكلمات.",
+ "title": "عينات النواة"
+ },
+ "type": "نوع"
+ },
+ "providerInfo": {
+ "apiTooltip": "يدعم LobeChat استخدام مفتاح API مخصص لهذا المزود.",
+ "input": "سعر الإدخال",
+ "inputTooltip": "تكلفة لكل مليون Token",
+ "latency": "زمن الاستجابة",
+ "latencyTooltip": "متوسط زمن استجابة المزود لإرسال أول Token",
+ "maxOutput": "أقصى طول للإخراج",
+ "maxOutputTooltip": "العدد الأقصى من الـ Tokens الذي يمكن أن تنتجه هذه النقطة",
+ "officialTooltip": "خدمة رسمية من LobeHub",
+ "output": "سعر الإخراج",
+ "outputTooltip": "تكلفة لكل مليون Token",
+ "streamCancellationTooltip": "يدعم هذا المزود ميزة إلغاء التدفق.",
+ "throughput": "معدل النقل",
+ "throughputTooltip": "متوسط عدد Tokens المنقولة في الطلبات المتدفقة في الثانية"
+ },
+ "suggestions": "نماذج ذات صلة",
+ "supportedProviders": "مزودو الخدمة المدعومون لهذا النموذج"
+ },
+ "plugins": {
+ "community": "إضافات المجتمع",
+ "install": "تثبيت الإضافة",
+ "installed": "تم التثبيت",
+ "list": "قائمة الإضافات",
+ "meta": {
+ "description": "وصف",
+ "parameter": "معامل",
+ "title": "معامل الأداة",
+ "type": "نوع"
+ },
+ "more": "المزيد",
+ "official": "إضافات رسمية",
+ "recentSubmits": "آخر التحديثات",
+ "suggestions": "اقتراحات ذات صلة"
+ },
+ "providers": {
+ "config": "تكوين مزود الخدمة",
+ "list": "قائمة مزودي النماذج",
+ "modelCount": "{{count}} نموذج",
+ "modelSite": "وثائق النموذج",
+ "more": "المزيد",
+ "officialSite": "الموقع الرسمي",
+ "showAllModels": "عرض جميع النماذج",
+ "suggestions": "مزودو الخدمة ذوو الصلة",
+ "supportedModels": "النماذج المدعومة"
+ },
+ "search": {
+ "placeholder": "ابحث عن اسم أو كلمة مفتاحية...",
+ "result": "{{count}} نتيجة بحث حول {{keyword}}",
+ "searching": "جارٍ البحث..."
+ },
+ "sort": {
+ "mostLiked": "الأكثر إعجابًا",
+ "mostUsed": "الأكثر استخدامًا",
+ "newest": "الأحدث",
+ "oldest": "الأقدم",
+ "recommended": "موصى به"
+ },
+ "tab": {
+ "assistants": "المساعدون",
+ "home": "الصفحة الرئيسية",
+ "models": "النماذج",
+ "plugins": "الإضافات",
+ "providers": "مزودو النماذج"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/error.json b/DigitalHumanWeb/locales/ar/error.json
new file mode 100644
index 0000000..0dc4f28
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "استمر في الجلسة",
+ "desc": "{{greeting}}، يسعدني أن أواصل خدمتك. دعنا نواصل الحديث عن الموضوع الذي تحدثنا عنه مؤخرًا",
+ "title": "مرحبًا بعودتك، {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "العودة إلى الصفحة الرئيسية",
+ "desc": "حاول مرة أخرى في وقت لاحق، أو عد إلى العالم المألوف",
+ "retry": "إعادة التحميل",
+ "title": "واجهت الصفحة مشكلة ما..."
+ },
+ "fetchError": "فشل الطلب",
+ "fetchErrorDetail": "تفاصيل الخطأ",
+ "notFound": {
+ "backHome": "العودة إلى الصفحة الرئيسية",
+ "check": "يرجى التحقق من صحة عنوان URL الخاص بك",
+ "desc": "لم نتمكن من العثور على الصفحة التي تبحث عنها",
+ "title": "هل دخلت إلى مجال غير معروف؟"
+ },
+ "pluginSettings": {
+ "desc": "أكمل الإعدادات التالية لبدء استخدام هذا المكون الإضافي",
+ "title": "تكوين الإضافة {{name}}"
+ },
+ "response": {
+ "400": "عذرًا، الخادم غير قادر على فهم طلبك، يرجى التحقق من صحة معلمات الطلب الخاصة بك",
+ "401": "عذرًا، رفض الخادم طلبك، قد يكون بسبب صلاحياتك غير الكافية أو عدم تقديم التحقق من الهوية الصالحة",
+ "403": "عذرًا، رفض الخادم طلبك، ليس لديك إذن للوصول إلى هذا المحتوى",
+ "404": "عذرًا، الخادم لا يمكنه العثور على الصفحة أو المورد المطلوب، يرجى التحقق من صحة عنوان URL الخاص بك",
+ "405": "عذرًا، الخادم لا يدعم طريقة الطلب المستخدمة، يرجى التحقق من صحة طريقة الطلب الخاصة بك",
+ "406": "عذرًا، الخادم غير قادر على استكمال الطلب وفقًا لخصائص المحتوى التي طلبتها",
+ "407": "عذرًا، تحتاج إلى مصادقة الوكيل لمتابعة هذا الطلب",
+ "408": "عذرًا، تجاوز الخادم الوقت المحدد في انتظار الطلب، يرجى التحقق من اتصالك بالشبكة والمحاولة مرة أخرى",
+ "409": "عذرًا، يوجد تضارب في الطلب الذي لا يمكن معالجته، قد يكون بسبب عدم توافق حالة المورد مع الطلب",
+ "410": "عذرًا، تمت إزالة المورد الذي طلبته بشكل دائم ولا يمكن العثور عليه",
+ "411": "عذرًا، الخادم غير قادر على معالجة الطلب الذي لا يحتوي على طول محتوى صالح",
+ "412": "عذرًا، لم يتم تلبية شروط الخادم الجانبية لطلبك ولا يمكن استكمال الطلب",
+ "413": "عذرًا، حجم بيانات طلبك كبير جدًا والخادم غير قادر على معالجته",
+ "414": "عذرًا، طول عنوان URI الخاص بطلبك كبير جدًا والخادم غير قادر على معالجته",
+ "415": "عذرًا، الخادم غير قادر على معالجة تنسيق الوسائط المرفقة بالطلب",
+ "416": "عذرًا، الخادم غير قادر على تلبية نطاق الطلب الذي قدمته",
+ "417": "عذرًا، الخادم غير قادر على تلبية قيم توقعاتك",
+ "422": "عذرًا، تنسيق طلبك صحيح، ولكن لا يمكن الاستجابة له بسبب وجود أخطاء دلالية",
+ "423": "عذرًا، تم قفل المورد الذي طلبته",
+ "424": "عذرًا، بسبب فشل الطلب السابق، لا يمكن استكمال الطلب الحالي",
+ "426": "عذرًا، يتطلب الخادم ترقية عميلك إلى إصدار بروتوكول أعلى",
+ "428": "عذرًا، يتطلب الخادم شروطًا مسبقة، ويجب أن يحتوي طلبك على رؤوس الشروط الصحيحة",
+ "429": "عذرًا، طلبك كثير جدًا والخادم متعب قليلاً، يرجى المحاولة مرة أخرى لاحقًا",
+ "431": "عذرًا، حقول رأس الطلب الخاصة بك كبيرة جدًا والخادم غير قادر على معالجتها",
+ "451": "عذرًا، بسبب الأسباب القانونية، يرفض الخادم توفير هذا المورد",
+ "500": "عذرًا، يبدو أن الخادم واجه بعض الصعوبات ولا يمكنه حاليًا استكمال طلبك، يرجى المحاولة مرة أخرى لاحقًا",
+ "502": "عذرًا، يبدو أن الخادم قد ضل الطريق ولا يمكنه حاليًا تقديم الخدمة، يرجى المحاولة مرة أخرى لاحقًا",
+ "503": "عذرًا، الخادم غير قادر حاليًا على معالجة طلبك، قد يكون بسبب الحمل الزائد أو الصيانة الجارية، يرجى المحاولة مرة أخرى لاحقًا",
+ "504": "عذرًا، الخادم لم ينتظر ردًا من الخادم الأصلي، يرجى المحاولة مرة أخرى لاحقًا",
+ "AgentRuntimeError": "حدث خطأ في تشغيل نموذج Lobe اللغوي، يرجى التحقق من المعلومات التالية أو إعادة المحاولة",
+ "FreePlanLimit": "أنت حاليًا مستخدم مجاني، لا يمكنك استخدام هذه الوظيفة، يرجى الترقية إلى خطة مدفوعة للمتابعة",
+ "InvalidAccessCode": "كلمة المرور غير صحيحة أو فارغة، يرجى إدخال كلمة مرور الوصول الصحيحة أو إضافة مفتاح API مخصص",
+ "InvalidBedrockCredentials": "فشلت مصادقة Bedrock، يرجى التحقق من AccessKeyId/SecretAccessKey وإعادة المحاولة",
+ "InvalidClerkUser": "عذرًا، لم تقم بتسجيل الدخول بعد، يرجى تسجيل الدخول أو التسجيل للمتابعة",
+ "InvalidGithubToken": "رمز الوصول الشخصي لـ GitHub غير صحيح أو فارغ، يرجى التحقق من رمز وصول GitHub الشخصي والمحاولة مرة أخرى",
+ "InvalidOllamaArgs": "تكوين Ollama غير صحيح، يرجى التحقق من تكوين Ollama وإعادة المحاولة",
+ "InvalidProviderAPIKey": "{{provider}} مفتاح API غير صحيح أو فارغ، يرجى التحقق من مفتاح API {{provider}} الخاص بك وحاول مرة أخرى",
+ "LocationNotSupportError": "عذرًا، لا يدعم موقعك الحالي خدمة هذا النموذج، قد يكون ذلك بسبب قيود المنطقة أو عدم توفر الخدمة. يرجى التحقق مما إذا كان الموقع الحالي يدعم استخدام هذه الخدمة، أو محاولة استخدام معلومات الموقع الأخرى.",
+ "NoOpenAIAPIKey": "مفتاح API الخاص بـ OpenAI فارغ، يرجى إضافة مفتاح API الخاص بـ OpenAI",
+ "OllamaBizError": "خطأ في طلب خدمة Ollama، يرجى التحقق من المعلومات التالية أو إعادة المحاولة",
+ "OllamaServiceUnavailable": "خدمة Ollama غير متوفرة، يرجى التحقق من تشغيل Ollama بشكل صحيح أو إعدادات الـ Ollama للاتصال عبر النطاقات",
+ "OpenAIBizError": "طلب خدمة OpenAI خاطئ، يرجى التحقق من المعلومات التالية أو إعادة المحاولة",
+ "PluginApiNotFound": "عذرًا، لا يوجد API للإضافة في وصف الإضافة، يرجى التحقق من تطابق طريقة الطلب الخاصة بك مع API الوصف",
+ "PluginApiParamsError": "عذرًا، فشل التحقق من صحة معلمات الطلب للإضافة، يرجى التحقق من تطابق المعلمات مع معلومات الوصف",
+ "PluginFailToTransformArguments": "عذرًا، فشل تحويل معلمات استدعاء الإضافة، يرجى محاولة إعادة إنشاء رسالة المساعد أو تجربة نموذج AI ذو قدرات استدعاء أقوى",
+ "PluginGatewayError": "عذرًا، حدث خطأ في بوابة الإضافة، يرجى التحقق من تكوين بوابة الإضافة",
+ "PluginManifestInvalid": "عذرًا، فشل التحقق من صحة وصف الإضافة، يرجى التحقق من تنسيق وصف الإضافة",
+ "PluginManifestNotFound": "عذرًا، لم يتم العثور على وصف الإضافة (manifest.json) في الخادم، يرجى التحقق من صحة عنوان ملف وصف الإضافة",
+ "PluginMarketIndexInvalid": "عذرًا، فشل التحقق من صحة فهرس الإضافات، يرجى التحقق من تنسيق ملف الفهرس",
+ "PluginMarketIndexNotFound": "عذرًا، لم يتم العثور على فهرس الإضافات في الخادم، يرجى التحقق من صحة عنوان الفهرس",
+ "PluginMetaInvalid": "عذرًا، فشل التحقق من صحة بيانات الإضافة، يرجى التحقق من تنسيق بيانات الإضافة",
+ "PluginMetaNotFound": "عذرًا، لم يتم العثور على معلومات تكوين الإضافة في الفهرس",
+ "PluginOpenApiInitError": "عذرًا، فشل تهيئة عميل OpenAPI، يرجى التحقق من معلومات تكوين OpenAPI",
+ "PluginServerError": "خطأ في استجابة الخادم لطلب الإضافة، يرجى التحقق من ملف وصف الإضافة وتكوين الإضافة وتنفيذ الخادم وفقًا لمعلومات الخطأ أدناه",
+ "PluginSettingsInvalid": "تحتاج هذه الإضافة إلى تكوين صحيح قبل الاستخدام، يرجى التحقق من صحة تكوينك",
+ "ProviderBizError": "طلب خدمة {{provider}} خاطئ، يرجى التحقق من المعلومات التالية أو إعادة المحاولة",
+ "StreamChunkError": "خطأ في تحليل كتلة الرسالة لطلب التدفق، يرجى التحقق مما إذا كانت واجهة برمجة التطبيقات الحالية تتوافق مع المعايير، أو الاتصال بمزود واجهة برمجة التطبيقات الخاصة بك للاستفسار.",
+ "SubscriptionPlanLimit": "لقد استنفذت حصتك من الاشتراك، لا يمكنك استخدام هذه الوظيفة، يرجى الترقية إلى خطة أعلى أو شراء حزمة موارد للمتابعة",
+ "UnknownChatFetchError": "عذرًا، حدث خطأ غير معروف في الطلب، يرجى التحقق من المعلومات التالية أو المحاولة مرة أخرى"
+ },
+ "stt": {
+ "responseError": "فشل طلب الخدمة، يرجى التحقق من الإعدادات أو إعادة المحاولة"
+ },
+ "tts": {
+ "responseError": "فشل طلب الخدمة، يرجى التحقق من الإعدادات أو إعادة المحاولة"
+ },
+ "unlock": {
+ "addProxyUrl": "إضافة عنوان وكيل OpenAI (اختياري)",
+ "apiKey": {
+ "description": "يمكنك بدء الجلسة عن طريق إدخال مفتاح API {{name}} الخاص بك",
+ "title": "استخدام مفتاح API {{name}} المخصص"
+ },
+ "closeMessage": "إغلاق الرسالة",
+ "confirm": "تأكيد وإعادة المحاولة",
+ "oauth": {
+ "description": "قام المسؤول بتفعيل تسجيل الدخول الموحد، انقر على الزر أدناه لتسجيل الدخول وفتح التطبيق",
+ "success": "تم تسجيل الدخول بنجاح",
+ "title": "تسجيل الدخول إلى الحساب",
+ "welcome": "مرحبا بك!"
+ },
+ "password": {
+ "description": "قام المسؤول بتشفير التطبيق، قم بإدخال كلمة مرور التطبيق لفتح التطبيق. يتعين إدخال كلمة المرور مرة واحدة فقط",
+ "placeholder": "الرجاء إدخال كلمة المرور",
+ "title": "إدخال كلمة المرور لفتح التطبيق"
+ },
+ "tabs": {
+ "apiKey": "مفتاح واجهة برمجة التطبيقات المخصص",
+ "password": "كلمة المرور"
+ }
+ },
+ "upload": {
+ "desc": "التفاصيل: {{detail}}",
+ "fileOnlySupportInServerMode": "وضع النشر الحالي لا يدعم تحميل ملفات غير الصور. إذا كنت بحاجة إلى تحميل تنسيق {{ext}}، يرجى التبديل إلى نشر قاعدة البيانات على الخادم أو استخدام خدمة {{cloud}}.",
+ "networkError": "يرجى التأكد من أن اتصال الشبكة لديك يعمل بشكل صحيح، والتحقق من إعدادات تكوين خدمة تخزين الملفات عبر النطاق.",
+ "title": "فشل تحميل الملف، يرجى التحقق من الاتصال بالشبكة أو المحاولة لاحقًا",
+ "unknownError": "سبب الخطأ: {{reason}}",
+ "uploadFailed": "فشل تحميل الملف"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/file.json b/DigitalHumanWeb/locales/ar/file.json
new file mode 100644
index 0000000..0306ee6
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "إدارة ملفاتك ومكتبتك المعرفية",
+ "detail": {
+ "basic": {
+ "createdAt": "تاريخ الإنشاء",
+ "filename": "اسم الملف",
+ "size": "حجم الملف",
+ "title": "معلومات أساسية",
+ "type": "الصيغة",
+ "updatedAt": "تاريخ التحديث"
+ },
+ "data": {
+ "chunkCount": "عدد الأجزاء",
+ "embedding": {
+ "default": "لم يتم تحويله إلى متجهات بعد",
+ "error": "فشل",
+ "pending": "في انتظار البدء",
+ "processing": "جارٍ المعالجة",
+ "success": "اكتمل"
+ },
+ "embeddingStatus": "تحويل إلى متجهات"
+ }
+ },
+ "empty": "لا توجد ملفات/مجلدات تم تحميلها بعد",
+ "header": {
+ "actions": {
+ "newFolder": "إنشاء مجلد جديد",
+ "uploadFile": "رفع ملف",
+ "uploadFolder": "رفع مجلد"
+ },
+ "uploadButton": "رفع"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "سيتم حذف هذه المكتبة المعرفية، ولن يتم حذف الملفات الموجودة بها، بل ستنتقل إلى جميع الملفات. بعد حذف المكتبة المعرفية، لن يمكن استعادتها، يرجى توخي الحذر.",
+ "empty": "انقر على <1>+</1> لبدء إنشاء مكتبة معرفية"
+ },
+ "new": "إنشاء مكتبة معرفية جديدة",
+ "title": "المكتبة المعرفية"
+ },
+ "networkError": "فشل في الحصول على قاعدة المعرفة، يرجى التحقق من اتصال الشبكة ثم إعادة المحاولة",
+ "notSupportGuide": {
+ "desc": "الوضع الحالي للنشر هو وضع قاعدة بيانات العميل، ولا يمكن استخدام وظيفة إدارة الملفات. يرجى التبديل إلى <1>وضع نشر قاعدة بيانات الخادم</1>، أو استخدام <3>LobeChat Cloud</3> مباشرة.",
+ "features": {
+ "allKind": {
+ "desc": "يدعم أنواع الملفات الشائعة، بما في ذلك تنسيقات المستندات الشائعة مثل Word وPPT وExcel وPDF وTXT، بالإضافة إلى ملفات الشيفرة الشائعة مثل JS وPython.",
+ "title": "تحليل أنواع متعددة من الملفات"
+ },
+ "embeddings": {
+ "desc": "استخدام نماذج متجهات عالية الأداء لتحويل النصوص إلى متجهات، مما يتيح البحث الدلالي في محتوى الملفات.",
+ "title": "تحويل دلالي إلى متجهات"
+ },
+ "repos": {
+ "desc": "يدعم إنشاء مكتبات معرفية، ويسمح بإضافة أنواع مختلفة من الملفات، لبناء معرفتك في مجالك.",
+ "title": "المكتبة المعرفية"
+ }
+ },
+ "title": "الوضع الحالي للنشر لا يدعم إدارة الملفات"
+ },
+ "preview": {
+ "downloadFile": "تحميل الملف",
+ "unsupportedFileAndContact": "هذا التنسيق من الملفات غير مدعوم للمعاينة عبر الإنترنت، إذا كان لديك طلب للمعاينة، فلا تتردد في <1>إبلاغنا</1>"
+ },
+ "searchFilePlaceholder": "بحث عن ملف",
+ "tab": {
+ "all": "جميع الملفات",
+ "audios": "الصوتيات",
+ "documents": "المستندات",
+ "images": "الصور",
+ "videos": "الفيديوهات",
+ "websites": "المواقع"
+ },
+ "title": "الملفات",
+ "uploadDock": {
+ "body": {
+ "collapse": "طي",
+ "item": {
+ "done": "تم الرفع",
+ "error": "فشل الرفع، يرجى المحاولة مرة أخرى",
+ "pending": "جاهز للرفع...",
+ "processing": "جارٍ معالجة الملف...",
+ "restTime": "الوقت المتبقي {{time}}"
+ }
+ },
+ "totalCount": "إجمالي {{count}} عنصر",
+ "uploadStatus": {
+ "error": "حدث خطأ أثناء الرفع",
+ "pending": "في انتظار الرفع",
+ "processing": "جارٍ الرفع",
+ "success": "اكتمل الرفع",
+ "uploading": "جارٍ الرفع"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/knowledgeBase.json b/DigitalHumanWeb/locales/ar/knowledgeBase.json
new file mode 100644
index 0000000..708d894
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "تمت إضافة الملف بنجاح، <1>عرض الآن</1>",
+ "confirm": "إضافة",
+ "id": {
+ "placeholder": "يرجى اختيار قاعدة المعرفة لإضافتها",
+ "required": "يرجى اختيار قاعدة المعرفة",
+ "title": "قاعدة المعرفة المستهدفة"
+ },
+ "title": "إضافة إلى قاعدة المعرفة",
+ "totalFiles": "تم اختيار {{count}} ملف"
+ },
+ "createNew": {
+ "confirm": "إنشاء جديد",
+ "description": {
+ "placeholder": "وصف قاعدة المعرفة (اختياري)"
+ },
+ "formTitle": "المعلومات الأساسية",
+ "name": {
+ "placeholder": "اسم قاعدة المعرفة",
+ "required": "يرجى إدخال اسم قاعدة المعرفة"
+ },
+ "title": "إنشاء قاعدة معرفة جديدة"
+ },
+ "tab": {
+ "evals": "تقييمات",
+ "files": "المستندات",
+ "settings": "الإعدادات",
+ "testing": "اختبار الاسترجاع"
+ },
+ "title": "قاعدة المعرفة"
+}
diff --git a/DigitalHumanWeb/locales/ar/market.json b/DigitalHumanWeb/locales/ar/market.json
new file mode 100644
index 0000000..4ed7999
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "إضافة وكيل",
+ "addAgentAndConverse": "إضافة وكيل وبدء المحادثة",
+ "addAgentSuccess": "تمت الإضافة بنجاح",
+ "guide": {
+ "func1": {
+ "desc1": "في نافذة الدردشة، انتقل إلى صفحة إعدادات الوكيل التي ترغب في تقديمها من الزاوية اليمنى العلوية.",
+ "desc2": "انقر فوق زر تقديم إلى سوق الوكلاء في الزاوية اليمنى العلوية.",
+ "tag": "الطريقة الأولى",
+ "title": "تقديم عبر LobeChat"
+ },
+ "func2": {
+ "button": "انتقل إلى مستودع وكلاء Github",
+ "desc": "إذا كنت ترغب في إضافة الوكيل إلى الفهرس، يرجى استخدام agent-template.json أو agent-template-full.json لإنشاء إدخال في دليل plugins، وكتابة وصف موجز ووضع علامات بشكل مناسب، ثم إنشاء طلب سحب.",
+ "tag": "الطريقة الثانية",
+ "title": "تقديم عبر Github"
+ }
+ },
+ "search": {
+ "placeholder": "ابحث عن اسم الوكيل أو وصفه أو كلمات رئيسية..."
+ },
+ "sidebar": {
+ "comment": "منطقة النقاش",
+ "prompt": "كلمة تلميح",
+ "title": "تفاصيل الوكيل"
+ },
+ "submitAgent": "تقديم الوكيل",
+ "title": {
+ "allAgents": "جميع الوكلاء",
+ "recentSubmits": "الإضافات الأخيرة"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/metadata.json b/DigitalHumanWeb/locales/ar/metadata.json
new file mode 100644
index 0000000..034b748
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} يقدم لك أفضل تجربة لاستخدام ChatGPT وClaude وGemini وOLLaMA WebUI",
+ "title": "{{appName}}: أداة الذكاء الاصطناعي الشخصية، امنح نفسك دماغًا أكثر ذكاءً"
+ },
+ "discover": {
+ "assistants": {
+ "description": "إنشاء المحتوى، الكتابة، الأسئلة والأجوبة، توليد الصور، توليد الفيديو، توليد الصوت، الوكلاء الذكيون، سير العمل الآلي، تخصيص مساعد الذكاء الاصطناعي / GPTs / OLLaMA الخاص بك",
+ "title": "مساعدات الذكاء الاصطناعي"
+ },
+ "description": "إنشاء المحتوى، الكتابة، الأسئلة والأجوبة، توليد الصور، توليد الفيديو، توليد الصوت، الوكلاء الذكيون، سير العمل الآلي، تطبيقات الذكاء الاصطناعي المخصصة، تخصيص منصة تطبيقات الذكاء الاصطناعي الخاصة بك",
+ "models": {
+ "description": "استكشاف نماذج الذكاء الاصطناعي الرائجة OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "نماذج الذكاء الاصطناعي"
+ },
+ "plugins": {
+ "description": "استكشف توليد الرسوم البيانية، والأبحاث الأكاديمية، وتوليد الصور، وتوليد الفيديو، وتوليد الصوت، وأتمتة سير العمل، ودمج قدرات إضافية غنية لمساعدتك.",
+ "title": "إضافات الذكاء الاصطناعي"
+ },
+ "providers": {
+ "description": "استكشاف مزودي النماذج الرائجة OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "مزودو خدمات نماذج الذكاء الاصطناعي"
+ },
+ "search": "بحث",
+ "title": "اكتشاف"
+ },
+ "plugins": {
+ "description": "البحث، توليد الرسوم البيانية، الأكاديميات، توليد الصور، توليد الفيديو، توليد الصوت، سير العمل الآلي، خصص قدرات ToolCall الخاصة بـ ChatGPT / Claude",
+ "title": "سوق الإضافات"
+ },
+ "welcome": {
+ "description": "{{appName}} يقدم لك أفضل تجربة لاستخدام ChatGPT وClaude وGemini وOLLaMA WebUI",
+ "title": "مرحبًا بك في {{appName}}: أداة الذكاء الاصطناعي الشخصية، امنح نفسك دماغًا أكثر ذكاءً"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/migration.json b/DigitalHumanWeb/locales/ar/migration.json
new file mode 100644
index 0000000..2be0c29
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "مسح البيانات المحلية",
+ "downloadBackup": "تنزيل نسخة احتياطية للبيانات",
+ "reUpgrade": "إعادة الترقية",
+ "start": "ابدأ الاستخدام",
+ "upgrade": "ترقية فورية"
+ },
+ "clear": {
+ "confirm": "سيتم مسح البيانات المحلية (دون تأثير على الإعدادات العامة)، يرجى التأكد من تنزيل نسخة احتياطية للبيانات."
+ },
+ "description": "في الإصدار الجديد، حقق تخزين بيانات {{appName}} قفزة هائلة. لذلك، نحتاج إلى ترقية البيانات القديمة لتوفير تجربة استخدام أفضل لك.",
+ "features": {
+ "capability": {
+ "desc": "استنادًا إلى تقنية IndexedDB، تكفي لتخزين محادثاتك مدى الحياة",
+ "title": "سعة كبيرة"
+ },
+ "performance": {
+ "desc": "مليون رسالة يتم فهرستها تلقائيًا، واستجابة استعلامات البحث في مللي ثانية",
+ "title": "أداء عالي"
+ },
+ "use": {
+ "desc": "يدعم البحث عن العناوين، الأوصاف، العلامات، محتوى الرسائل وحتى نصوص الترجمة، مما يعزز كفاءة البحث اليومية بشكل كبير",
+ "title": "أسهل في الاستخدام"
+ }
+ },
+ "title": "تطور بيانات {{appName}}",
+ "upgrade": {
+ "error": {
+ "subTitle": "نعتذر، حدث خطأ أثناء عملية ترقية قاعدة البيانات. يرجى تجربة الحلول التالية: A. امسح البيانات المحلية ثم أعد استيراد البيانات الاحتياطية؛ B. انقر على زر «إعادة الترقية». إذا استمرت المشكلة، يرجى <1>تقديم مشكلة</1>، وسنساعدك في حلها في أقرب وقت ممكن.",
+ "title": "فشل ترقية قاعدة البيانات"
+ },
+ "success": {
+ "subTitle": "تمت ترقية قاعدة بيانات {{appName}} إلى أحدث إصدار، ابدأ التجربة الآن",
+ "title": "نجاح ترقية قاعدة البيانات"
+ }
+ },
+ "upgradeTip": "تستغرق عملية الترقية حوالي 10 إلى 20 ثانية، يرجى عدم إغلاق {{appName}} أثناء الترقية"
+ },
+ "migrateError": {
+ "missVersion": "البيانات المستوردة تفتقد رقم الإصدار، يرجى التحقق من الملف وإعادة المحاولة",
+ "noMigration": "لم يتم العثور على خطة هجرة تتوافق مع الإصدار الحالي، يرجى التحقق من رقم الإصدار وإعادة المحاولة. إذا استمرت المشكلة، يرجى تقديم ملاحظاتك"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/modelProvider.json b/DigitalHumanWeb/locales/ar/modelProvider.json
new file mode 100644
index 0000000..c15f1f3
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "نسخة API الخاصة بـ Azure، والتي تتبع تنسيق YYYY-MM-DD، راجع [الإصدارات الأحدث](https://learn.microsoft.com/zh-en/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "جلب القائمة",
+ "title": "Azure API Version"
+ },
+ "empty": "الرجاء إدخال معرف النموذج لإضافة أول نموذج",
+ "endpoint": {
+ "desc": "يمكن العثور على هذه القيمة في قسم 'المفاتيح والنقاط النهائية' عند فحص الموارد في بوابة Azure",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "عنوان Azure API"
+ },
+ "modelListPlaceholder": "يرجى تحديد أو إضافة نموذج OpenAI الذي قمت بنشره",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "يمكن العثور على هذه القيمة في قسم 'المفاتيح والنقاط النهائية' عند فحص الموارد في بوابة Azure. يمكن استخدام KEY1 أو KEY2",
+ "placeholder": "Azure API Key",
+ "title": "مفتاح API"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "أدخل AWS Access Key Id",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "اختبر ما إذا كان AccessKeyId / SecretAccessKey مدخلاً بشكل صحيح"
+ },
+ "region": {
+ "desc": "أدخل AWS Region",
+ "placeholder": "AWS Region",
+ "title": "AWS Region"
+ },
+ "secretAccessKey": {
+ "desc": "أدخل AWS Secret Access Key",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "إذا كنت تستخدم AWS SSO/STS، يرجى إدخال رمز جلسة AWS الخاص بك",
+ "placeholder": "رمز جلسة AWS",
+ "title": "رمز جلسة AWS (اختياري)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "منطقة خدمة مخصصة",
+ "customSessionToken": "رمز الجلسة المخصص",
+ "description": "أدخل معرف الوصول / مفتاح الوصول السري الخاص بك في AWS لبدء الجلسة. لن يتم تسجيل تكوين المصادقة الخاص بك من قبل التطبيق",
+ "title": "استخدام معلومات المصادقة الخاصة بـ Bedrock المخصصة"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "أدخل رمز الوصول الشخصي الخاص بك على Github، انقر [هنا](https://github.com/settings/tokens) لإنشاء واحد",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "اختبر ما إذا تم إدخال عنوان الوكيل بشكل صحيح",
+ "title": "فحص الاتصال"
+ },
+ "customModelName": {
+ "desc": "أضف نماذج مخصصة، استخدم الفاصلة (،) لفصل عدة نماذج",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "أسماء النماذج المخصصة"
+ },
+ "download": {
+ "desc": "يقوم Ollama بتنزيل هذا النموذج، يرجى عدم إغلاق هذه الصفحة قدر الإمكان. سيتم استئناف التنزيل من حيث توقف عند إعادة المحاولة",
+ "remainingTime": "الوقت المتبقي",
+ "speed": "سرعة التنزيل",
+ "title": "جارٍ تنزيل النموذج {{model}}"
+ },
+ "endpoint": {
+ "desc": "أدخل عنوان واجهة برمجة التطبيقات الخاص بـ Ollama، إذا لم يتم تحديده محليًا، يمكن تركه فارغًا",
+ "title": "عنوان وكيل الواجهة"
+ },
+ "setup": {
+ "cors": {
+ "description": "بسبب قيود الأمان في المتصفح، يجب تكوين الوصول عبر المواقع المختلفة لـ Ollama لاستخدامه بشكل صحيح.",
+ "linux": {
+ "env": "في القسم [Service]، أضف `Environment` وأضف متغير البيئة OLLAMA_ORIGINS:",
+ "reboot": "أعد تحميل systemd وأعد تشغيل Ollama",
+ "systemd": "استدعاء تحرير خدمة ollama في systemd:"
+ },
+ "macos": "افتح تطبيق \"Terminal\" والصق الأمر التالي، ثم اضغط على Enter للتشغيل.",
+ "reboot": "يرجى إعادة تشغيل خدمة Ollama بعد الانتهاء من التنفيذ",
+ "title": "تكوين Ollama للسماح بالوصول عبر المواقع المختلفة",
+ "windows": "على نظام Windows، انقر فوق \"لوحة التحكم\"، ثم ادخل إلى تحرير متغيرات البيئة النظامية. قم بإنشاء متغير بيئي بعنوان \"OLLAMA_ORIGINS\" لحساب المستخدم الخاص بك، واجعل قيمته * ثم انقر على \"موافق/تطبيق\" للحفظ."
+ },
+ "install": {
+ "description": "يرجى التأكد من أنك قد قمت بتشغيل Ollama، إذا لم تقم بتنزيل Ollama، يرجى زيارة الموقع الرسمي <1>للتنزيل</1>",
+ "docker": "إذا كنت تفضل استخدام Docker، يوفر Ollama أيضًا صور Docker الرسمية، يمكنك سحبها باستخدام الأمر التالي:",
+ "linux": {
+ "command": "قم بتثبيته باستخدام الأمر التالي:",
+ "manual": "أو يمكنك الرجوع إلى <1>دليل تثبيت Linux يدويًا</1> للقيام بالتثبيت بنفسك."
+ },
+ "title": "تثبيت وتشغيل تطبيق Ollama محليًا",
+ "windowsTab": "Windows (نسخة معاينة)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "إلغاء التنزيل",
+ "confirm": "تنزيل",
+ "description": "أدخل علامة نموذج Ollama الخاص بك لمتابعة الجلسة",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "جارٍ بدء التنزيل...",
+ "title": "تنزيل نموذج Ollama المحدد"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI"
+ },
+ "zhipu": {
+ "title": "Zhipu"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/models.json b/DigitalHumanWeb/locales/ar/models.json
new file mode 100644
index 0000000..2173242
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B، يقدم أداءً ممتازًا في التطبيقات الصناعية بفضل مجموعة التدريب الغنية."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B يدعم 16K توكن، ويوفر قدرة توليد لغوية فعالة وسلسة."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro كعضو مهم في سلسلة نماذج 360 AI، يلبي احتياجات معالجة النصوص المتنوعة بفعالية، ويدعم فهم النصوص الطويلة والحوار المتعدد الجولات."
+ },
+ "360gpt-turbo": {
+ "description": "يوفر 360GPT Turbo قدرات حسابية وحوارية قوية، ويتميز بفهم دلالي ممتاز وكفاءة في التوليد، مما يجعله الحل المثالي للمؤسسات والمطورين كمساعد ذكي."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K يركز على الأمان الدلالي والتوجيه المسؤول، مصمم خصيصًا لتطبيقات تتطلب مستوى عالٍ من الأمان في المحتوى، مما يضمن دقة وموثوقية تجربة المستخدم."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro هو نموذج متقدم لمعالجة اللغة الطبيعية تم إطلاقه من قبل شركة 360، يتمتع بقدرات استثنائية في توليد وفهم النصوص، خاصة في مجالات التوليد والإبداع، ويستطيع التعامل مع مهام تحويل اللغة المعقدة وأداء الأدوار."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra هو أقوى إصدار في سلسلة نماذج Spark، حيث يعزز فهم النصوص وقدرات التلخيص مع تحسين روابط البحث عبر الإنترنت. إنه حل شامل يهدف إلى تعزيز إنتاجية المكتب والاستجابة الدقيقة للاحتياجات، ويعتبر منتجًا ذكيًا رائدًا في الصناعة."
+ },
+ "Baichuan2-Turbo": {
+ "description": "يستخدم تقنية تعزيز البحث لتحقيق الربط الشامل بين النموذج الكبير والمعرفة الميدانية والمعرفة من جميع أنحاء الشبكة. يدعم تحميل مستندات PDF وWord وغيرها من المدخلات، مما يضمن الحصول على المعلومات بشكل سريع وشامل، ويقدم نتائج دقيقة واحترافية."
+ },
+ "Baichuan3-Turbo": {
+ "description": "تم تحسينه لمشاهد الاستخدام المتكررة في الشركات، مما أدى إلى تحسين كبير في الأداء وتكلفة فعالة. مقارنةً بنموذج Baichuan2، زادت قدرة الإبداع بنسبة 20%، وزادت قدرة الإجابة على الأسئلة المعرفية بنسبة 17%، وزادت قدرة التمثيل بنسبة 40%. الأداء العام أفضل من GPT3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "يمتلك نافذة سياق طويلة جدًا تصل إلى 128K، تم تحسينه لمشاهد الاستخدام المتكررة في الشركات، مما أدى إلى تحسين كبير في الأداء وتكلفة فعالة. مقارنةً بنموذج Baichuan2، زادت قدرة الإبداع بنسبة 20%، وزادت قدرة الإجابة على الأسئلة المعرفية بنسبة 17%، وزادت قدرة التمثيل بنسبة 40%. الأداء العام أفضل من GPT3.5."
+ },
+ "Baichuan4": {
+ "description": "النموذج الأول في البلاد من حيث القدرة، يتفوق على النماذج الرئيسية الأجنبية في المهام الصينية مثل الموسوعات، والنصوص الطويلة، والإبداع. كما يتمتع بقدرات متعددة الوسائط رائدة في الصناعة، ويظهر أداءً ممتازًا في العديد من معايير التقييم الموثوقة."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) هو نموذج مبتكر، مناسب لتطبيقات متعددة المجالات والمهام المعقدة."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K مزود بقدرة معالجة سياقية كبيرة، وفهم أقوى للسياق وقدرة على الاستدلال المنطقي، يدعم إدخال نصوص تصل إلى 32K توكن، مناسب لقراءة الوثائق الطويلة، وأسئلة وأجوبة المعرفة الخاصة، وغيرها من السيناريوهات."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO هو دمج متعدد النماذج مرن للغاية، يهدف إلى تقديم تجربة إبداعية ممتازة."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) هو نموذج تعليمات عالي الدقة، مناسب للحسابات المعقدة."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) يوفر مخرجات لغوية محسنة وإمكانيات تطبيق متنوعة."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "تحديث لنموذج Phi-3-mini."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "نموذج Phi-3-medium نفسه، ولكن مع حجم سياق أكبر لـ RAG أو التوجيه القليل."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "نموذج بحجم 14B، يثبت جودة أفضل من Phi-3-mini، مع التركيز على البيانات الكثيفة في التفكير عالية الجودة."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "نموذج Phi-3-mini نفسه، ولكن مع حجم سياق أكبر لـ RAG أو التوجيه القليل."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "أصغر عضو في عائلة Phi-3. مُحسّن لكل من الجودة وزمن الاستجابة المنخفض."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "نموذج Phi-3-small نفسه، ولكن مع حجم سياق أكبر لـ RAG أو التوجيه القليل."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "نموذج بحجم 7B، يثبت جودة أفضل من Phi-3-mini، مع التركيز على البيانات الكثيفة في التفكير عالية الجودة."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K مزود بقدرة معالجة سياق ضخمة، يمكنه التعامل مع معلومات سياق تصل إلى 128K، مما يجعله مثاليًا للمحتوى الطويل الذي يتطلب تحليلًا شاملًا ومعالجة علاقات منطقية طويلة الأمد، ويمكنه تقديم منطق سلس ودقيق ودعم متنوع للاقتباسات في التواصل النصي المعقد."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "كنموذج تجريبي لـ Qwen2، يستخدم Qwen1.5 بيانات ضخمة لتحقيق وظائف حوارية أكثر دقة."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) يوفر استجابة سريعة وقدرة على الحوار الطبيعي، مناسب للبيئات متعددة اللغات."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 هو نموذج لغوي عام متقدم، يدعم أنواع متعددة من التعليمات."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 هو سلسلة جديدة من نماذج اللغة الكبيرة، تهدف إلى تحسين معالجة المهام الإرشادية."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 هو سلسلة جديدة من نماذج اللغة الكبيرة، تهدف إلى تحسين معالجة المهام الإرشادية."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 هو سلسلة جديدة من نماذج اللغة الكبيرة، تتمتع بقدرات أقوى في الفهم والتوليد."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 هو سلسلة جديدة من نماذج اللغة الكبيرة، تهدف إلى تحسين معالجة المهام الإرشادية."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder يركز على كتابة الشيفرة."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math يركز على حل المشكلات في مجال الرياضيات، ويقدم إجابات احترافية للأسئلة الصعبة."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B هو إصدار مفتوح المصدر، يوفر تجربة حوار محسنة لتطبيقات الحوار."
+ },
+ "abab5.5-chat": {
+ "description": "موجه لمشاهد الإنتاجية، يدعم معالجة المهام المعقدة وتوليد النصوص بكفاءة، مناسب للتطبيقات في المجالات المهنية."
+ },
+ "abab5.5s-chat": {
+ "description": "مصمم لمشاهد الحوار باللغة الصينية، يوفر قدرة توليد حوار عالي الجودة باللغة الصينية، مناسب لمجموعة متنوعة من التطبيقات."
+ },
+ "abab6.5g-chat": {
+ "description": "مصمم للحوار متعدد اللغات، يدعم توليد حوارات عالية الجودة بالإنجليزية والعديد من اللغات الأخرى."
+ },
+ "abab6.5s-chat": {
+ "description": "مناسب لمجموعة واسعة من مهام معالجة اللغة الطبيعية، بما في ذلك توليد النصوص، وأنظمة الحوار، وغيرها."
+ },
+ "abab6.5t-chat": {
+ "description": "محسن لمشاهد الحوار باللغة الصينية، يوفر قدرة توليد حوار سلس ومتوافق مع عادات التعبير الصينية."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "نموذج استدعاء الدوال مفتوح المصدر من Fireworks، يوفر قدرة تنفيذ تعليمات ممتازة وخصائص قابلة للتخصيص."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Firefunction-v2 من شركة Fireworks هو نموذج استدعاء دوال عالي الأداء، تم تطويره بناءً على Llama-3، وتم تحسينه بشكل كبير، مناسب بشكل خاص لاستدعاء الدوال، والحوار، واتباع التعليمات."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b هو نموذج لغوي بصري، يمكنه استقبال المدخلات من الصور والنصوص، تم تدريبه على بيانات عالية الجودة، مناسب للمهام متعددة الوسائط."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "نموذج Gemma 2 9B للتعليمات، يعتمد على تقنيات Google السابقة، مناسب لمهام توليد النصوص مثل الإجابة على الأسئلة، والتلخيص، والاستدلال."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "نموذج Llama 3 70B للتعليمات، مصمم للحوار متعدد اللغات وفهم اللغة الطبيعية، أداءه يتفوق على معظم النماذج المنافسة."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "نموذج Llama 3 70B للتعليمات (نسخة HF)، يتوافق مع نتائج التنفيذ الرسمية، مناسب لمهام اتباع التعليمات عالية الجودة."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "نموذج Llama 3 8B للتعليمات، تم تحسينه للحوار والمهام متعددة اللغات، يظهر أداءً ممتازًا وفعالًا."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "نموذج Llama 3 8B للتعليمات (نسخة HF)، يتوافق مع نتائج التنفيذ الرسمية، يتمتع بتوافق عالٍ عبر المنصات."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "نموذج Llama 3.1 405B للتعليمات، يتمتع بمعلمات ضخمة، مناسب لمهام معقدة واتباع التعليمات في سيناريوهات ذات حمل عالي."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "نموذج Llama 3.1 70B للتعليمات، يوفر قدرة ممتازة على فهم اللغة الطبيعية وتوليدها، وهو الخيار المثالي لمهام الحوار والتحليل."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "نموذج Llama 3.1 8B للتعليمات، تم تحسينه للحوار متعدد اللغات، قادر على تجاوز معظم النماذج المفتوحة والمغلقة في المعايير الصناعية الشائعة."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "نموذج Mixtral MoE 8x22B للتعليمات، مع معلمات ضخمة وهيكل خبير متعدد، يدعم معالجة فعالة لمهام معقدة."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "نموذج Mixtral MoE 8x7B للتعليمات، يوفر هيكل خبير متعدد لتقديم تعليمات فعالة واتباعها."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "نموذج Mixtral MoE 8x7B للتعليمات (نسخة HF)، الأداء يتوافق مع التنفيذ الرسمي، مناسب لمجموعة متنوعة من سيناريوهات المهام الفعالة."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "نموذج MythoMax L2 13B، يجمع بين تقنيات الدمج الجديدة، بارع في السرد وأدوار الشخصيات."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "نموذج Phi 3 Vision للتعليمات، نموذج متعدد الوسائط خفيف الوزن، قادر على معالجة معلومات بصرية ونصية معقدة، يتمتع بقدرة استدلال قوية."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "نموذج StarCoder 15.5B، يدعم مهام البرمجة المتقدمة، مع تعزيز القدرة على التعامل مع لغات متعددة، مناسب لتوليد وفهم الشيفرات المعقدة."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "نموذج StarCoder 7B، تم تدريبه على أكثر من 80 لغة برمجة، يتمتع بقدرة ممتازة على ملء البرمجة وفهم السياق."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "نموذج Yi-Large، يتمتع بقدرة معالجة لغوية ممتازة، يمكن استخدامه في جميع أنواع مهام توليد وفهم اللغة."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "نموذج متعدد اللغات بحجم 398B (94B نشط)، يقدم نافذة سياق طويلة بحجم 256K، واستدعاء وظائف، وإخراج منظم، وتوليد مستند."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "نموذج متعدد اللغات بحجم 52B (12B نشط)، يقدم نافذة سياق طويلة بحجم 256K، واستدعاء وظائف، وإخراج منظم، وتوليد مستند."
+ },
+ "ai21-jamba-instruct": {
+ "description": "نموذج LLM يعتمد على Mamba، مصمم لتحقيق أفضل أداء وكفاءة من حيث الجودة والتكلفة."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet يرفع المعايير في الصناعة، حيث يتفوق على نماذج المنافسين وClaude 3 Opus، ويظهر أداءً ممتازًا في تقييمات واسعة، مع سرعة وتكلفة تتناسب مع نماذجنا المتوسطة."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku هو أسرع وأصغر نموذج من Anthropic، يوفر سرعة استجابة شبه فورية. يمكنه بسرعة الإجابة على الاستفسارات والطلبات البسيطة. سيتمكن العملاء من بناء تجربة ذكاء اصطناعي سلسة تحاكي التفاعل البشري. يمكن لـ Claude 3 Haiku معالجة الصور وإرجاع إخراج نصي، مع نافذة سياقية تبلغ 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus هو أقوى نموذج ذكاء اصطناعي من Anthropic، يتمتع بأداء متقدم في المهام المعقدة للغاية. يمكنه معالجة المطالبات المفتوحة والمشاهد غير المعروفة، مع سلاسة وفهم يشبه البشر. يعرض Claude 3 Opus حدود إمكانيات الذكاء الاصطناعي التوليدي. يمكن لـ Claude 3 Opus معالجة الصور وإرجاع إخراج نصي، مع نافذة سياقية تبلغ 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Claude 3 Sonnet من Anthropic يحقق توازنًا مثاليًا بين الذكاء والسرعة - مناسب بشكل خاص لأعباء العمل المؤسسية. يقدم أكبر فائدة بأقل من تكلفة المنافسين، وقد تم تصميمه ليكون نموذجًا موثوقًا وعالي التحمل، مناسبًا لنشر الذكاء الاصطناعي على نطاق واسع. يمكن لـ Claude 3 Sonnet معالجة الصور وإرجاع إخراج نصي، مع نافذة سياقية تبلغ 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "نموذج سريع واقتصادي وما زال قويًا للغاية، يمكنه معالجة مجموعة من المهام بما في ذلك المحادثات اليومية، وتحليل النصوص، والتلخيص، والأسئلة والأجوبة على الوثائق."
+ },
+ "anthropic.claude-v2": {
+ "description": "نموذج يظهر قدرة عالية في مجموعة واسعة من المهام، من المحادثات المعقدة وتوليد المحتوى الإبداعي إلى اتباع التعليمات التفصيلية."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "الإصدار المحدث من Claude 2، مع نافذة سياقية مضاعفة، وتحسينات في الاعتمادية ومعدل الهلوسة والدقة المستندة إلى الأدلة في الوثائق الطويلة وسياقات RAG."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku هو أسرع وأصغر نموذج من Anthropic، مصمم لتحقيق استجابة شبه فورية. يتمتع بأداء توجيهي سريع ودقيق."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus هو أقوى نموذج من Anthropic لمعالجة المهام المعقدة للغاية. يتميز بأداء ممتاز وذكاء وسلاسة وفهم."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet يقدم قدرات تتجاوز Opus وسرعة أكبر من Sonnet، مع الحفاظ على نفس السعر. يتميز Sonnet بمهارات خاصة في البرمجة وعلوم البيانات ومعالجة الصور والمهام الوكيلة."
+ },
+ "aya": {
+ "description": "Aya 23 هو نموذج متعدد اللغات أطلقته Cohere، يدعم 23 لغة، مما يسهل التطبيقات اللغوية المتنوعة."
+ },
+ "aya:35b": {
+ "description": "Aya 23 هو نموذج متعدد اللغات أطلقته Cohere، يدعم 23 لغة، مما يسهل التطبيقات اللغوية المتنوعة."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3 مصمم خصيصًا للأدوار التفاعلية والمرافقة العاطفية، يدعم ذاكرة متعددة الجولات طويلة الأمد وحوارات مخصصة، ويستخدم على نطاق واسع."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o هو نموذج ديناميكي يتم تحديثه في الوقت الحقيقي للحفاظ على أحدث إصدار. يجمع بين فهم اللغة القوي وقدرات التوليد، مما يجعله مناسبًا لمجموعة واسعة من التطبيقات، بما في ذلك خدمة العملاء والتعليم والدعم الفني."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 يوفر تقدمًا في القدرات الأساسية للمؤسسات، بما في ذلك سياق يصل إلى 200K توكن، وتقليل كبير في معدل حدوث الهلوسة في النموذج، وإشعارات النظام، وميزة اختبار جديدة: استدعاء الأدوات."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 يوفر تقدمًا في القدرات الأساسية للمؤسسات، بما في ذلك سياق يصل إلى 200K توكن، وتقليل كبير في معدل حدوث الهلوسة في النموذج، وإشعارات النظام، وميزة اختبار جديدة: استدعاء الأدوات."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet يوفر قدرات تتجاوز Opus وسرعة أكبر من Sonnet، مع الحفاظ على نفس السعر. Sonnet بارع بشكل خاص في البرمجة، وعلوم البيانات، ومعالجة الصور، ومهام الوكالة."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku هو أسرع وأصغر نموذج من Anthropic، مصمم لتحقيق استجابة شبه فورية. يتمتع بأداء توجيهي سريع ودقيق."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus هو أقوى نموذج من Anthropic لمعالجة المهام المعقدة للغاية. يظهر أداءً ممتازًا في الذكاء، والسلاسة، والفهم."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet يوفر توازنًا مثاليًا بين الذكاء والسرعة لحمولات العمل المؤسسية. يقدم أقصى فائدة بسعر أقل، موثوق ومناسب للنشر على نطاق واسع."
+ },
+ "claude-instant-1.2": {
+ "description": "نموذج Anthropic يستخدم لتوليد النصوص ذات التأخير المنخفض، يدعم توليد مئات الصفحات من النص."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 هو مساعد برمجي قوي، يدعم مجموعة متنوعة من لغات البرمجة في الإجابة الذكية وإكمال الشيفرة، مما يعزز من كفاءة التطوير."
+ },
+ "codegemma": {
+ "description": "CodeGemma هو نموذج لغوي خفيف الوزن مخصص لمهام البرمجة المختلفة، يدعم التكرار السريع والتكامل."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma هو نموذج لغوي خفيف الوزن مخصص لمهام البرمجة المختلفة، يدعم التكرار السريع والتكامل."
+ },
+ "codellama": {
+ "description": "Code Llama هو نموذج لغوي كبير يركز على توليد الشيفرة والنقاش، يجمع بين دعم مجموعة واسعة من لغات البرمجة، مناسب لبيئات المطورين."
+ },
+ "codellama:13b": {
+ "description": "Code Llama هو نموذج لغوي كبير يركز على توليد الشيفرة والنقاش، يجمع بين دعم مجموعة واسعة من لغات البرمجة، مناسب لبيئات المطورين."
+ },
+ "codellama:34b": {
+ "description": "Code Llama هو نموذج لغوي كبير يركز على توليد الشيفرة والنقاش، يجمع بين دعم مجموعة واسعة من لغات البرمجة، مناسب لبيئات المطورين."
+ },
+ "codellama:70b": {
+ "description": "Code Llama هو نموذج لغوي كبير يركز على توليد الشيفرة والنقاش، يجمع بين دعم مجموعة واسعة من لغات البرمجة، مناسب لبيئات المطورين."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 هو نموذج لغوي كبير تم تدريبه على مجموعة كبيرة من بيانات الشيفرة، مصمم لحل مهام البرمجة المعقدة."
+ },
+ "codestral": {
+ "description": "Codestral هو أول نموذج شيفرة من Mistral AI، يوفر دعمًا ممتازًا لمهام توليد الشيفرة."
+ },
+ "codestral-latest": {
+ "description": "Codestral هو نموذج توليد متقدم يركز على توليد الشيفرة، تم تحسينه لمهام الملء الوسيط وإكمال الشيفرة."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B هو نموذج مصمم للامتثال للتعليمات، والحوار، والبرمجة."
+ },
+ "cohere-command-r": {
+ "description": "نموذج توليدي قابل للتوسع يستهدف RAG واستخدام الأدوات لتمكين الذكاء الاصطناعي على نطاق الإنتاج للمؤسسات."
+ },
+ "cohere-command-r-plus": {
+ "description": "نموذج RAG محسّن من الطراز الأول مصمم للتعامل مع أحمال العمل على مستوى المؤسسات."
+ },
+ "command-r": {
+ "description": "Command R هو نموذج LLM محسن لمهام الحوار والسياقات الطويلة، مناسب بشكل خاص للتفاعل الديناميكي وإدارة المعرفة."
+ },
+ "command-r-plus": {
+ "description": "Command R+ هو نموذج لغوي كبير عالي الأداء، مصمم لمشاهد الأعمال الحقيقية والتطبيقات المعقدة."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct يوفر قدرة معالجة تعليمات موثوقة، يدعم تطبيقات متعددة الصناعات."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 يجمع بين الميزات الممتازة للإصدارات السابقة، ويعزز القدرات العامة والترميز."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B هو نموذج متقدم تم تدريبه للحوار المعقد."
+ },
+ "deepseek-chat": {
+ "description": "نموذج مفتوح المصدر الجديد الذي يجمع بين القدرات العامة وقدرات البرمجة، لا يحتفظ فقط بالقدرات الحوارية العامة لنموذج الدردشة الأصلي وقدرات معالجة الشيفرة القوية لنموذج Coder، بل يتماشى أيضًا بشكل أفضل مع تفضيلات البشر. بالإضافة إلى ذلك، حقق DeepSeek-V2.5 تحسينات كبيرة في مهام الكتابة، واتباع التعليمات، وغيرها من المجالات."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 هو نموذج شيفرة مفتوح المصدر من نوع خبير مختلط، يقدم أداءً ممتازًا في مهام الشيفرة، ويضاهي GPT4-Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 هو نموذج شيفرة مفتوح المصدر من نوع خبير مختلط، يقدم أداءً ممتازًا في مهام الشيفرة، ويضاهي GPT4-Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 هو نموذج لغوي فعال من نوع Mixture-of-Experts، مناسب لاحتياجات المعالجة الاقتصادية."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B هو نموذج تصميم الشيفرة لـ DeepSeek، يوفر قدرة توليد شيفرة قوية."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "نموذج مفتوح المصدر جديد يجمع بين القدرات العامة وقدرات البرمجة، لا يحتفظ فقط بقدرات الحوار العامة لنموذج الدردشة الأصلي وقدرات معالجة الأكواد القوية لنموذج Coder، بل يتماشى أيضًا بشكل أفضل مع تفضيلات البشر. بالإضافة إلى ذلك، حقق DeepSeek-V2.5 تحسينات كبيرة في مهام الكتابة، واتباع التعليمات، وغيرها من المجالات."
+ },
+ "emohaa": {
+ "description": "Emohaa هو نموذج نفسي، يتمتع بقدرات استشارية متخصصة، يساعد المستخدمين في فهم القضايا العاطفية."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (تعديل) يوفر أداءً مستقرًا وقابلًا للتعديل، وهو الخيار المثالي لحلول المهام المعقدة."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (تعديل) يوفر دعمًا ممتازًا متعدد الوسائط، مع التركيز على الحلول الفعالة للمهام المعقدة."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro هو نموذج ذكاء اصطناعي عالي الأداء من Google، مصمم للتوسع في مجموعة واسعة من المهام."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 هو نموذج متعدد الوسائط فعال، يدعم التوسع في التطبيقات الواسعة."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "جمني 1.5 فلاش 002 هو نموذج متعدد الوسائط فعال، يدعم توسيع التطبيقات على نطاق واسع."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 مصمم لمعالجة سيناريوهات المهام الكبيرة، ويوفر سرعة معالجة لا مثيل لها."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "جمني 1.5 فلاش 8B 0924 هو النموذج التجريبي الأحدث، حيث حقق تحسينات ملحوظة في الأداء في حالات الاستخدام النصية ومتعددة الوسائط."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 يوفر قدرات معالجة متعددة الوسائط محسّنة، مناسبة لمجموعة متنوعة من سيناريوهات المهام المعقدة."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash هو أحدث نموذج ذكاء اصطناعي متعدد الوسائط من Google، يتمتع بقدرات معالجة سريعة، ويدعم إدخال النصوص والصور والفيديو، مما يجعله مناسبًا للتوسع الفعال في مجموعة متنوعة من المهام."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 هو حل ذكاء اصطناعي متعدد الوسائط قابل للتوسع، يدعم مجموعة واسعة من المهام المعقدة."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "جمني 1.5 برو 002 هو النموذج الأحدث الجاهز للإنتاج، حيث يقدم مخرجات ذات جودة أعلى، مع تحسينات ملحوظة خاصة في الرياضيات والسياقات الطويلة والمهام البصرية."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 يوفر قدرات معالجة متعددة الوسائط ممتازة، مما يوفر مرونة أكبر لتطوير التطبيقات."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 يجمع بين أحدث تقنيات التحسين، مما يوفر قدرة معالجة بيانات متعددة الوسائط أكثر كفاءة."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro يدعم ما يصل إلى 2 مليون توكن، وهو الخيار المثالي للنماذج المتوسطة الحجم متعددة الوسائط، مناسب لدعم المهام المعقدة من جوانب متعددة."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B مناسب لمعالجة المهام المتوسطة والصغيرة، ويجمع بين الكفاءة من حيث التكلفة."
+ },
+ "gemma2": {
+ "description": "Gemma 2 هو نموذج فعال أطلقته Google، يغطي مجموعة متنوعة من سيناريوهات التطبيقات من التطبيقات الصغيرة إلى معالجة البيانات المعقدة."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B هو نموذج محسن لمهام محددة ودمج الأدوات."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 هو نموذج فعال أطلقته Google، يغطي مجموعة متنوعة من سيناريوهات التطبيقات من التطبيقات الصغيرة إلى معالجة البيانات المعقدة."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 هو نموذج فعال أطلقته Google، يغطي مجموعة متنوعة من سيناريوهات التطبيقات من التطبيقات الصغيرة إلى معالجة البيانات المعقدة."
+ },
+ "general": {
+ "description": "Spark Lite هو نموذج لغوي كبير خفيف الوزن، يتمتع بتأخير منخفض للغاية وقدرة معالجة فعالة، ومفتوح بالكامل، ويدعم وظيفة البحث عبر الإنترنت في الوقت الحقيقي. تجعل خاصية الاستجابة السريعة منه مثاليًا لتطبيقات الاستدلال على الأجهزة ذات القدرة الحاسوبية المنخفضة وتعديل النماذج، مما يوفر للمستخدمين قيمة ممتازة وتجربة ذكية، خاصة في مجالات الإجابة على الأسئلة، وتوليد المحتوى، وسيناريوهات البحث."
+ },
+ "generalv3": {
+ "description": "Spark Pro هو نموذج لغوي كبير عالي الأداء تم تحسينه للحقول المهنية، يركز على الرياضيات، والبرمجة، والطب، والتعليم، ويدعم البحث عبر الإنترنت بالإضافة إلى المكونات الإضافية المدمجة مثل الطقس والتاريخ. يظهر النموذج المحسن أداءً ممتازًا وكفاءة في الإجابة على الأسئلة المعقدة، وفهم اللغة، وإنشاء نصوص عالية المستوى، مما يجعله الخيار المثالي لتطبيقات الاستخدام المهني."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max هو الإصدار الأكثر شمولاً، يدعم البحث عبر الإنترنت والعديد من المكونات الإضافية المدمجة. تعزز قدراته الأساسية المحسنة، بالإضافة إلى إعدادات الأدوار النظامية ووظائف استدعاء الدوال، أداؤه بشكل استثنائي في مجموعة متنوعة من سيناريوهات التطبيقات المعقدة."
+ },
+ "glm-4": {
+ "description": "GLM-4 هو الإصدار القديم الذي تم إصداره في يناير 2024، وقد تم استبداله الآن بـ GLM-4-0520 الأقوى."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 هو أحدث إصدار من النموذج، مصمم للمهام المعقدة والمتنوعة، ويظهر أداءً ممتازًا."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air هو إصدار ذو قيمة عالية، يتمتع بأداء قريب من GLM-4، ويقدم سرعة عالية وسعرًا معقولًا."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX يقدم إصدارًا فعالًا من GLM-4-Air، حيث تصل سرعة الاستدلال إلى 2.6 مرة."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools هو نموذج وكيل متعدد الوظائف، تم تحسينه لدعم تخطيط التعليمات المعقدة واستدعاء الأدوات، مثل تصفح الإنترنت، وتفسير الشيفرة، وتوليد النصوص، مناسب لتنفيذ المهام المتعددة."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash هو الخيار المثالي لمعالجة المهام البسيطة، حيث يتمتع بأسرع سرعة وأفضل سعر."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long يدعم إدخالات نصية طويلة جدًا، مما يجعله مناسبًا للمهام الذاكرية ومعالجة الوثائق الكبيرة."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus كنموذج رائد ذكي، يتمتع بقدرات قوية في معالجة النصوص الطويلة والمهام المعقدة، مع تحسين شامل في الأداء."
+ },
+ "glm-4v": {
+ "description": "GLM-4V يوفر قدرات قوية في فهم الصور والاستدلال، ويدعم مجموعة متنوعة من المهام البصرية."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus يتمتع بقدرة على فهم محتوى الفيديو والصور المتعددة، مما يجعله مناسبًا للمهام متعددة الوسائط."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 يوفر قدرات معالجة متعددة الوسائط محسّنة، مناسبة لمجموعة متنوعة من سيناريوهات المهام المعقدة."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 يجمع بين أحدث تقنيات التحسين، مما يوفر قدرة معالجة بيانات متعددة الوسائط أكثر كفاءة."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 تستمر في مفهوم التصميم الخفيف والفعال."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 هو سلسلة نماذج نصية مفتوحة المصدر خفيفة الوزن من Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 هو سلسلة نماذج نصية مفتوحة المصدر خفيفة الوزن من Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) يوفر قدرة أساسية على معالجة التعليمات، مناسب للتطبيقات الخفيفة."
+ },
+ "gpt-3.5-turbo": {
+ "description": "نموذج GPT 3.5 Turbo، مناسب لمجموعة متنوعة من مهام توليد وفهم النصوص، يشير حاليًا إلى gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "نموذج GPT 3.5 Turbo، مناسب لمجموعة متنوعة من مهام توليد وفهم النصوص، يشير حاليًا إلى gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "نموذج GPT 3.5 Turbo، مناسب لمجموعة متنوعة من مهام توليد وفهم النصوص، يشير حاليًا إلى gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "نموذج GPT 3.5 Turbo، مناسب لمجموعة متنوعة من مهام توليد وفهم النصوص، يشير حاليًا إلى gpt-3.5-turbo-0125."
+ },
+ "gpt-4": {
+ "description": "يوفر GPT-4 نافذة سياقية أكبر، مما يمكنه من معالجة إدخالات نصية أطول، مما يجعله مناسبًا للمواقف التي تتطلب دمج معلومات واسعة وتحليل البيانات."
+ },
+ "gpt-4-0125-preview": {
+ "description": "نموذج GPT-4 Turbo الأحدث يتمتع بقدرات بصرية. الآن، يمكن استخدام الطلبات البصرية باستخدام نمط JSON واستدعاء الوظائف. GPT-4 Turbo هو إصدار معزز يوفر دعمًا فعالًا من حيث التكلفة للمهام متعددة الوسائط. يجد توازنًا بين الدقة والكفاءة، مما يجعله مناسبًا للتطبيقات التي تتطلب تفاعلات في الوقت الحقيقي."
+ },
+ "gpt-4-0613": {
+ "description": "يوفر GPT-4 نافذة سياقية أكبر، مما يمكنه من معالجة إدخالات نصية أطول، مما يجعله مناسبًا للمواقف التي تتطلب دمج معلومات واسعة وتحليل البيانات."
+ },
+ "gpt-4-1106-preview": {
+ "description": "نموذج GPT-4 Turbo الأحدث يتمتع بقدرات بصرية. الآن، يمكن استخدام الطلبات البصرية باستخدام نمط JSON واستدعاء الوظائف. GPT-4 Turbo هو إصدار معزز يوفر دعمًا فعالًا من حيث التكلفة للمهام متعددة الوسائط. يجد توازنًا بين الدقة والكفاءة، مما يجعله مناسبًا للتطبيقات التي تتطلب تفاعلات في الوقت الحقيقي."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "نموذج GPT-4 Turbo الأحدث يتمتع بقدرات بصرية. الآن، يمكن استخدام الطلبات البصرية باستخدام نمط JSON واستدعاء الوظائف. GPT-4 Turbo هو إصدار معزز يوفر دعمًا فعالًا من حيث التكلفة للمهام متعددة الوسائط. يجد توازنًا بين الدقة والكفاءة، مما يجعله مناسبًا للتطبيقات التي تتطلب تفاعلات في الوقت الحقيقي."
+ },
+ "gpt-4-32k": {
+ "description": "يوفر GPT-4 نافذة سياقية أكبر، مما يمكنه من معالجة إدخالات نصية أطول، مما يجعله مناسبًا للمواقف التي تتطلب دمج معلومات واسعة وتحليل البيانات."
+ },
+ "gpt-4-32k-0613": {
+ "description": "يوفر GPT-4 نافذة سياقية أكبر، مما يمكنه من معالجة إدخالات نصية أطول، مما يجعله مناسبًا للمواقف التي تتطلب دمج معلومات واسعة وتحليل البيانات."
+ },
+ "gpt-4-turbo": {
+ "description": "نموذج GPT-4 Turbo الأحدث يتمتع بقدرات بصرية. الآن، يمكن استخدام الطلبات البصرية باستخدام نمط JSON واستدعاء الوظائف. GPT-4 Turbo هو إصدار معزز يوفر دعمًا فعالًا من حيث التكلفة للمهام متعددة الوسائط. يجد توازنًا بين الدقة والكفاءة، مما يجعله مناسبًا للتطبيقات التي تتطلب تفاعلات في الوقت الحقيقي."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "نموذج GPT-4 Turbo الأحدث يتمتع بقدرات بصرية. الآن، يمكن استخدام الطلبات البصرية باستخدام نمط JSON واستدعاء الوظائف. GPT-4 Turbo هو إصدار معزز يوفر دعمًا فعالًا من حيث التكلفة للمهام متعددة الوسائط. يجد توازنًا بين الدقة والكفاءة، مما يجعله مناسبًا للتطبيقات التي تتطلب تفاعلات في الوقت الحقيقي."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "نموذج GPT-4 Turbo الأحدث يتمتع بقدرات بصرية. الآن، يمكن استخدام الطلبات البصرية باستخدام نمط JSON واستدعاء الوظائف. GPT-4 Turbo هو إصدار معزز يوفر دعمًا فعالًا من حيث التكلفة للمهام متعددة الوسائط. يجد توازنًا بين الدقة والكفاءة، مما يجعله مناسبًا للتطبيقات التي تتطلب تفاعلات في الوقت الحقيقي."
+ },
+ "gpt-4-vision-preview": {
+ "description": "نموذج GPT-4 Turbo الأحدث يتمتع بقدرات بصرية. الآن، يمكن استخدام الطلبات البصرية باستخدام نمط JSON واستدعاء الوظائف. GPT-4 Turbo هو إصدار معزز يوفر دعمًا فعالًا من حيث التكلفة للمهام متعددة الوسائط. يجد توازنًا بين الدقة والكفاءة، مما يجعله مناسبًا للتطبيقات التي تتطلب تفاعلات في الوقت الحقيقي."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o هو نموذج ديناميكي يتم تحديثه في الوقت الحقيقي للحفاظ على أحدث إصدار. يجمع بين فهم اللغة القوي وقدرات التوليد، مما يجعله مناسبًا لمجموعة واسعة من التطبيقات، بما في ذلك خدمة العملاء والتعليم والدعم الفني."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o هو نموذج ديناميكي يتم تحديثه في الوقت الحقيقي للحفاظ على أحدث إصدار. يجمع بين فهم اللغة القوي وقدرات التوليد، مما يجعله مناسبًا لمجموعة واسعة من التطبيقات، بما في ذلك خدمة العملاء والتعليم والدعم الفني."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o هو نموذج ديناميكي يتم تحديثه في الوقت الحقيقي للحفاظ على أحدث إصدار. يجمع بين فهم اللغة القوي وقدرات التوليد، مما يجعله مناسبًا لمجموعة واسعة من التطبيقات، بما في ذلك خدمة العملاء والتعليم والدعم الفني."
+ },
+ "gpt-4o-mini": {
+ "description": "نموذج GPT-4o mini هو أحدث نموذج أطلقته OpenAI بعد GPT-4 Omni، ويدعم إدخال الصور والنصوص وإخراج النصوص. كأحد نماذجهم المتقدمة الصغيرة، فهو أرخص بكثير من النماذج الرائدة الأخرى في الآونة الأخيرة، وأرخص بأكثر من 60% من GPT-3.5 Turbo. يحتفظ بذكاء متقدم مع قيمة ممتازة. حصل GPT-4o mini على 82% في اختبار MMLU، وهو حاليًا يتفوق على GPT-4 في تفضيلات الدردشة."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B هو نموذج لغوي يجمع بين الإبداع والذكاء من خلال دمج عدة نماذج رائدة."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "نموذج مفتوح المصدر مبتكر InternLM2.5، يعزز الذكاء الحواري من خلال عدد كبير من المعلمات."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 يوفر حلول حوار ذكية في عدة سيناريوهات."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "نموذج Llama 3.1 70B للتعليمات، يتمتع بـ 70B من المعلمات، قادر على تقديم أداء ممتاز في مهام توليد النصوص الكبيرة والتعليمات."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B يوفر قدرة استدلال ذكائي أقوى، مناسب للتطبيقات المعقدة، يدعم معالجة حسابية ضخمة ويضمن الكفاءة والدقة."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B هو نموذج عالي الأداء، يوفر قدرة سريعة على توليد النصوص، مما يجعله مثاليًا لمجموعة من التطبيقات التي تتطلب كفاءة كبيرة وتكلفة فعالة."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "نموذج Llama 3.1 8B للتعليمات، يتمتع بـ 8B من المعلمات، يدعم تنفيذ مهام التعليمات بكفاءة، ويوفر قدرة ممتازة على توليد النصوص."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "نموذج Llama 3.1 Sonar Huge Online، يتمتع بـ 405B من المعلمات، يدعم طول سياق حوالي 127,000 علامة، مصمم لتطبيقات دردشة معقدة عبر الإنترنت."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "نموذج Llama 3.1 Sonar Large Chat، يتمتع بـ 70B من المعلمات، يدعم طول سياق حوالي 127,000 علامة، مناسب لمهام دردشة غير متصلة معقدة."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "نموذج Llama 3.1 Sonar Large Online، يتمتع بـ 70B من المعلمات، يدعم طول سياق حوالي 127,000 علامة، مناسب لمهام دردشة عالية السعة ومتنوعة."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "نموذج Llama 3.1 Sonar Small Chat، يتمتع بـ 8B من المعلمات، مصمم للدردشة غير المتصلة، يدعم طول سياق حوالي 127,000 علامة."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "نموذج Llama 3.1 Sonar Small Online، يتمتع بـ 8B من المعلمات، يدعم طول سياق حوالي 127,000 علامة، مصمم للدردشة عبر الإنترنت، قادر على معالجة تفاعلات نصية متنوعة بكفاءة."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B يوفر قدرة معالجة معقدة لا مثيل لها، مصمم خصيصًا للمشاريع ذات المتطلبات العالية."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B يوفر أداء استدلال عالي الجودة، مناسب لمتطلبات التطبيقات متعددة السيناريوهات."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use يوفر قدرة قوية على استدعاء الأدوات، يدعم معالجة فعالة للمهام المعقدة."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use هو نموذج محسن للاستخدام الفعال للأدوات، يدعم الحسابات المتوازية السريعة."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 هو النموذج الرائد الذي أطلقته Meta، يدعم ما يصل إلى 405B من المعلمات، ويمكن تطبيقه في مجالات الحوار المعقد، والترجمة متعددة اللغات، وتحليل البيانات."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 هو النموذج الرائد الذي أطلقته Meta، يدعم ما يصل إلى 405B من المعلمات، ويمكن تطبيقه في مجالات الحوار المعقد، والترجمة متعددة اللغات، وتحليل البيانات."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 هو النموذج الرائد الذي أطلقته Meta، يدعم ما يصل إلى 405B من المعلمات، ويمكن تطبيقه في مجالات الحوار المعقد، والترجمة متعددة اللغات، وتحليل البيانات."
+ },
+ "llava": {
+ "description": "LLaVA هو نموذج متعدد الوسائط يجمع بين مشفرات بصرية وVicuna، يستخدم لفهم بصري ولغوي قوي."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B يوفر قدرة معالجة بصرية مدمجة، من خلال إدخال المعلومات البصرية لتوليد مخرجات معقدة."
+ },
+ "llava:13b": {
+ "description": "LLaVA هو نموذج متعدد الوسائط يجمع بين مشفرات بصرية وVicuna، يستخدم لفهم بصري ولغوي قوي."
+ },
+ "llava:34b": {
+ "description": "LLaVA هو نموذج متعدد الوسائط يجمع بين مشفرات بصرية وVicuna، يستخدم لفهم بصري ولغوي قوي."
+ },
+ "mathstral": {
+ "description": "MathΣtral مصمم للبحث العلمي والاستدلال الرياضي، يوفر قدرة حسابية فعالة وتفسير النتائج."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "نموذج قوي بحجم 70 مليار معلمة يتفوق في التفكير، والترميز، وتطبيقات اللغة الواسعة."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "نموذج متعدد الاستخدامات بحجم 8 مليار معلمة، مُحسّن لمهام الحوار وتوليد النصوص."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "نموذج Llama 3.1 المُعدل للتعليمات، مُحسّن لاستخدامات الحوار متعددة اللغات ويتفوق على العديد من نماذج الدردشة المفتوحة والمغلقة المتاحة في المعايير الصناعية الشائعة."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "نموذج Llama 3.1 المُعدل للتعليمات، مُحسّن لاستخدامات الحوار متعددة اللغات ويتفوق على العديد من نماذج الدردشة المفتوحة والمغلقة المتاحة في المعايير الصناعية الشائعة."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "نموذج Llama 3.1 المُعدل للتعليمات، مُحسّن لاستخدامات الحوار متعددة اللغات ويتفوق على العديد من نماذج الدردشة المفتوحة والمغلقة المتاحة في المعايير الصناعية الشائعة."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) يوفر قدرة ممتازة على معالجة اللغة وتجربة تفاعلية رائعة."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) هو نموذج دردشة قوي، يدعم احتياجات الحوار المعقدة."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) يوفر دعمًا متعدد اللغات، ويغطي مجموعة واسعة من المعرفة في المجالات."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite مناسب للبيئات التي تتطلب أداءً عاليًا وزمن استجابة منخفض."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo يوفر قدرة ممتازة على فهم اللغة وتوليدها، مناسب لأكثر المهام الحسابية تطلبًا."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite مناسب للبيئات ذات الموارد المحدودة، ويوفر أداءً متوازنًا ممتازًا."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo هو نموذج لغوي كبير عالي الأداء، يدعم مجموعة واسعة من سيناريوهات التطبيق."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B هو نموذج قوي للتدريب المسبق وضبط التعليمات."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "نموذج Llama 3.1 Turbo 405B يوفر دعمًا كبيرًا للسياق لمعالجة البيانات الكبيرة، ويظهر أداءً بارزًا في تطبيقات الذكاء الاصطناعي على نطاق واسع."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B يوفر دعمًا فعالًا للحوار متعدد اللغات."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "نموذج Llama 3.1 70B تم ضبطه بدقة، مناسب للتطبيقات ذات الحمل العالي، تم تكميمه إلى FP8 لتوفير قدرة حسابية ودقة أعلى، مما يضمن أداءً ممتازًا في السيناريوهات المعقدة."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 يوفر دعمًا متعدد اللغات، وهو واحد من النماذج الرائدة في الصناعة."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "نموذج Llama 3.1 8B يستخدم FP8 للتكميم، يدعم ما يصل إلى 131,072 علامة سياق، وهو من بين الأفضل في النماذج المفتوحة المصدر، مناسب للمهام المعقدة، ويظهر أداءً ممتازًا في العديد من المعايير الصناعية."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct تم تحسينه لمشاهد الحوار عالية الجودة، ويظهر أداءً ممتازًا في مختلف التقييمات البشرية."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct تم تحسينه لمشاهد الحوار عالية الجودة، ويظهر أداءً أفضل من العديد من النماذج المغلقة."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct هو أحدث إصدار من Meta، تم تحسينه لتوليد حوارات عالية الجودة، متجاوزًا العديد من النماذج المغلقة الرائدة."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct مصمم للحوار عالي الجودة، ويظهر أداءً بارزًا في التقييمات البشرية، مما يجعله مناسبًا بشكل خاص للمشاهد التفاعلية العالية."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct هو أحدث إصدار من Meta، تم تحسينه لمشاهد الحوار عالية الجودة، ويظهر أداءً أفضل من العديد من النماذج المغلقة الرائدة."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 يوفر دعمًا متعدد اللغات، وهو واحد من النماذج الرائدة في الصناعة في مجال التوليد."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "نموذج Meta Llama 3.1 405B Instruct هو أكبر وأقوى نموذج في مجموعة نماذج Llama 3.1 Instruct، وهو نموذج متقدم للغاية لتوليد البيانات والحوار، ويمكن استخدامه كأساس للتدريب المستمر أو التخصيص في مجالات معينة. توفر Llama 3.1 نماذج لغوية كبيرة متعددة اللغات (LLMs) وهي مجموعة من النماذج المدربة مسبقًا والمعدلة وفقًا للتعليمات، بما في ذلك أحجام 8B و70B و405B (إدخال/إخراج نصي). تم تحسين نماذج النص المعدلة وفقًا للتعليمات (8B و70B و405B) لحالات الاستخدام الحوارية متعددة اللغات، وقد تفوقت في العديد من اختبارات المعايير الصناعية الشائعة على العديد من نماذج الدردشة مفتوحة المصدر المتاحة. تم تصميم Llama 3.1 للاستخدام التجاري والبحثي في عدة لغات. نماذج النص المعدلة وفقًا للتعليمات مناسبة للدردشة الشبيهة بالمساعد، بينما يمكن للنماذج المدربة مسبقًا التكيف مع مجموعة متنوعة من مهام توليد اللغة الطبيعية. تدعم نماذج Llama 3.1 أيضًا تحسين نماذج أخرى باستخدام مخرجاتها، بما في ذلك توليد البيانات الاصطناعية والتنقيح. Llama 3.1 هو نموذج لغوي ذاتي التكرار يستخدم بنية المحولات المحسّنة. تستخدم النسخ المعدلة التعلم المعزز مع التغذية الراجعة البشرية (RLHF) لتلبية تفضيلات البشر فيما يتعلق بالمساعدة والأمان."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "الإصدار المحدث من Meta Llama 3.1 70B Instruct، يتضمن طول سياق موسع يبلغ 128K، ودعم لغات متعددة، وقدرات استدلال محسنة. توفر Llama 3.1 نماذج لغوية كبيرة متعددة اللغات (LLMs) وهي مجموعة من النماذج التوليدية المدربة مسبقًا والمعدلة للتعليمات، بما في ذلك أحجام 8B و70B و405B (إدخال/إخراج نص). تم تحسين نماذج النص المعدلة للتعليمات (8B و70B و405B) لحالات الاستخدام متعددة اللغات، وتفوقت في اختبارات المعايير الصناعية الشائعة على العديد من نماذج الدردشة مفتوحة المصدر المتاحة. تم تصميم Llama 3.1 للاستخدام التجاري والبحثي في لغات متعددة. نماذج النص المعدلة للتعليمات مناسبة للدردشة الشبيهة بالمساعد، بينما يمكن للنماذج المدربة مسبقًا التكيف مع مجموعة متنوعة من مهام توليد اللغة الطبيعية. تدعم نماذج Llama 3.1 أيضًا تحسين نماذج أخرى باستخدام مخرجات نموذجها، بما في ذلك توليد البيانات الاصطناعية والتنقيح. Llama 3.1 هو نموذج لغوي ذاتي التكرار يستخدم بنية المحولات المحسنة. تستخدم النسخ المعدلة التعلم الموجه بالإشراف (SFT) والتعلم المعزز مع التغذية الراجعة البشرية (RLHF) لتلبية تفضيلات البشر فيما يتعلق بالمساعدة والأمان."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "الإصدار المحدث من Meta Llama 3.1 8B Instruct، يتضمن طول سياق موسع يبلغ 128K، ودعم لغات متعددة، وقدرات استدلال محسنة. توفر Llama 3.1 نماذج لغوية كبيرة متعددة اللغات (LLMs) وهي مجموعة من النماذج التوليدية المدربة مسبقًا والمعدلة للتعليمات، بما في ذلك أحجام 8B و70B و405B (إدخال/إخراج نص). تم تحسين نماذج النص المعدلة للتعليمات (8B و70B و405B) لحالات الاستخدام متعددة اللغات، وتفوقت في اختبارات المعايير الصناعية الشائعة على العديد من نماذج الدردشة مفتوحة المصدر المتاحة. تم تصميم Llama 3.1 للاستخدام التجاري والبحثي في لغات متعددة. نماذج النص المعدلة للتعليمات مناسبة للدردشة الشبيهة بالمساعد، بينما يمكن للنماذج المدربة مسبقًا التكيف مع مجموعة متنوعة من مهام توليد اللغة الطبيعية. تدعم نماذج Llama 3.1 أيضًا تحسين نماذج أخرى باستخدام مخرجات نموذجها، بما في ذلك توليد البيانات الاصطناعية والتنقيح. Llama 3.1 هو نموذج لغوي ذاتي التكرار يستخدم بنية المحولات المحسنة. تستخدم النسخ المعدلة التعلم الموجه بالإشراف (SFT) والتعلم المعزز مع التغذية الراجعة البشرية (RLHF) لتلبية تفضيلات البشر فيما يتعلق بالمساعدة والأمان."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 هو نموذج لغوي كبير مفتوح (LLM) موجه للمطورين والباحثين والشركات، يهدف إلى مساعدتهم في بناء وتجربة وتوسيع أفكارهم في الذكاء الاصطناعي بشكل مسؤول. كجزء من نظام الابتكار المجتمعي العالمي، فهو مثالي لإنشاء المحتوى، والذكاء الاصطناعي الحواري، وفهم اللغة، والبحث والتطوير، وتطبيقات الأعمال."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 هو نموذج لغوي كبير مفتوح (LLM) موجه للمطورين والباحثين والشركات، يهدف إلى مساعدتهم في بناء وتجربة وتوسيع أفكارهم في الذكاء الاصطناعي بشكل مسؤول. كجزء من نظام الابتكار المجتمعي العالمي، فهو مثالي للأجهزة ذات القدرة الحاسوبية والموارد المحدودة، والأجهزة الطرفية، وأوقات التدريب الأسرع."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B هو أحدث نموذج خفيف الوزن وسريع من Microsoft AI، ويقترب أداؤه من 10 أضعاف النماذج الرائدة المفتوحة المصدر الحالية."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B هو نموذج Wizard المتقدم من Microsoft، يظهر أداءً تنافسيًا للغاية."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V هو نموذج متعدد الوسائط من الجيل الجديد تم إطلاقه بواسطة OpenBMB، ويتميز بقدرات استثنائية في التعرف على النصوص وفهم الوسائط المتعددة، ويدعم مجموعة واسعة من سيناريوهات الاستخدام."
+ },
+ "mistral": {
+ "description": "Mistral هو نموذج 7B أطلقته Mistral AI، مناسب لاحتياجات معالجة اللغة المتغيرة."
+ },
+ "mistral-large": {
+    "description": "Mistral Large هو النموذج الرائد من Mistral، يجمع بين قدرات توليد الشيفرة، والرياضيات، والاستدلال، ويدعم نافذة سياق تصل إلى 128k."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) هو نموذج لغة كبير متقدم (LLM) يتمتع بقدرات متطورة في التفكير والمعرفة والترميز."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large هو النموذج الرائد، يتفوق في المهام متعددة اللغات، والاستدلال المعقد، وتوليد الشيفرة، وهو الخيار المثالي للتطبيقات الراقية."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo تم تطويره بالتعاون بين Mistral AI وNVIDIA، وهو نموذج 12B عالي الأداء."
+ },
+ "mistral-small": {
+ "description": "يمكن استخدام Mistral Small في أي مهمة تعتمد على اللغة تتطلب كفاءة عالية وزمن استجابة منخفض."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small هو خيار فعال من حيث التكلفة وسريع وموثوق، مناسب لمهام الترجمة، والتلخيص، وتحليل المشاعر."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct معروف بأدائه العالي، مناسب لمهام لغوية متعددة."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B هو نموذج تم ضبطه حسب الطلب، يوفر إجابات محسنة للمهام."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 يوفر قدرة حسابية فعالة وفهم اللغة الطبيعية، مناسب لمجموعة واسعة من التطبيقات."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) هو نموذج لغوي كبير للغاية، يدعم احتياجات معالجة عالية جدًا."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B هو نموذج خبير مختلط مدرب مسبقًا، يستخدم لمهام النص العامة."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct هو نموذج صناعي عالي الأداء يجمع بين تحسين السرعة ودعم السياقات الطويلة."
+ },
+ "mistralai/mistral-nemo": {
+    "description": "Mistral Nemo هو نموذج ببارامترات 12B يدعم عدة لغات ويتميز بأداء برمجي عالي."
+ },
+ "mixtral": {
+ "description": "Mixtral هو نموذج خبير من Mistral AI، يتمتع بأوزان مفتوحة المصدر، ويوفر دعمًا في توليد الشيفرة وفهم اللغة."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B يوفر قدرة حسابية متوازية عالية التحمل، مناسب للمهام المعقدة."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral هو نموذج خبير من Mistral AI، يتمتع بأوزان مفتوحة المصدر، ويوفر دعمًا في توليد الشيفرة وفهم اللغة."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K هو نموذج يتمتع بقدرة معالجة سياقات طويلة جدًا، مناسب لتوليد نصوص طويلة جدًا، يلبي احتياجات المهام المعقدة، قادر على معالجة ما يصل إلى 128,000 توكن، مما يجعله مثاليًا للبحث، والأكاديميات، وتوليد الوثائق الكبيرة."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K يوفر قدرة معالجة سياقات متوسطة الطول، قادر على معالجة 32,768 توكن، مناسب بشكل خاص لتوليد مجموعة متنوعة من الوثائق الطويلة والحوار المعقد، ويستخدم في إنشاء المحتوى، وتوليد التقارير، وأنظمة الحوار."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K مصمم خصيصًا لتوليد مهام النصوص القصيرة، يتمتع بأداء معالجة فعال، قادر على معالجة 8,192 توكن، مما يجعله مثاليًا للحوار القصير، والتدوين السريع، وتوليد المحتوى السريع."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B هو إصدار مطور من Nous Hermes 2، ويحتوي على أحدث مجموعات البيانات المطورة داخليًا."
+ },
+ "o1-mini": {
+ "description": "o1-mini هو نموذج استدلال سريع وفعال من حيث التكلفة مصمم لتطبيقات البرمجة والرياضيات والعلوم. يحتوي هذا النموذج على 128K من السياق وتاريخ انتهاء المعرفة في أكتوبر 2023."
+ },
+ "o1-preview": {
+ "description": "o1 هو نموذج استدلال جديد من OpenAI، مناسب للمهام المعقدة التي تتطلب معرفة عامة واسعة. يحتوي هذا النموذج على 128K من السياق وتاريخ انتهاء المعرفة في أكتوبر 2023."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba هو نموذج لغة Mamba 2 يركز على توليد الشيفرة، ويوفر دعمًا قويًا لمهام الشيفرة المتقدمة والاستدلال."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B هو نموذج مدمج ولكنه عالي الأداء، يتفوق في معالجة الدفعات والمهام البسيطة، مثل التصنيف وتوليد النصوص، ويتميز بقدرة استدلال جيدة."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo هو نموذج 12B تم تطويره بالتعاون مع Nvidia، يوفر أداء استدلال وترميز ممتاز، سهل التكامل والاستبدال."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B هو نموذج خبير أكبر، يركز على المهام المعقدة، ويوفر قدرة استدلال ممتازة وإنتاجية أعلى."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B هو نموذج خبير نادر، يستخدم عدة معلمات لزيادة سرعة الاستدلال، مناسب لمعالجة المهام متعددة اللغات وتوليد الشيفرة."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o هو نموذج ديناميكي يتم تحديثه في الوقت الحقيقي للحفاظ على أحدث إصدار. يجمع بين قدرات الفهم اللغوي القوي والتوليد، وهو مناسب لمجموعة واسعة من سيناريوهات الاستخدام، بما في ذلك خدمة العملاء والتعليم والدعم الفني."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini هو أحدث نموذج من OpenAI تم إطلاقه بعد GPT-4 Omni، ويدعم إدخال النصوص والصور وإخراج النصوص. كأحد نماذجهم المتقدمة الصغيرة، فهو أرخص بكثير من النماذج الرائدة الأخرى في الآونة الأخيرة، وأرخص بأكثر من 60% من GPT-3.5 Turbo. يحتفظ بذكاء متقدم مع قيمة ممتازة. حصل GPT-4o mini على 82% في اختبار MMLU، وهو حاليًا يتفوق على GPT-4 في تفضيلات الدردشة."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini هو نموذج استدلال سريع وفعال من حيث التكلفة مصمم لتطبيقات البرمجة والرياضيات والعلوم. يحتوي هذا النموذج على 128K من السياق وتاريخ انتهاء المعرفة في أكتوبر 2023."
+ },
+ "openai/o1-preview": {
+ "description": "o1 هو نموذج استدلال جديد من OpenAI، مناسب للمهام المعقدة التي تتطلب معرفة عامة واسعة. يحتوي هذا النموذج على 128K من السياق وتاريخ انتهاء المعرفة في أكتوبر 2023."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B هو مكتبة نماذج لغوية مفتوحة المصدر تم تحسينها باستخدام استراتيجية \"C-RLFT (تعزيز التعلم الشرطي)\"."
+ },
+ "openrouter/auto": {
+ "description": "استنادًا إلى طول السياق، والموضوع، والتعقيد، سيتم إرسال طلبك إلى Llama 3 70B Instruct، أو Claude 3.5 Sonnet (التعديل الذاتي) أو GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 هو نموذج مفتوح خفيف الوزن أطلقته Microsoft، مناسب للتكامل الفعال واستدلال المعرفة على نطاق واسع."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 هو نموذج مفتوح خفيف الوزن أطلقته Microsoft، مناسب للتكامل الفعال واستدلال المعرفة على نطاق واسع."
+ },
+ "pixtral-12b-2409": {
+ "description": "نموذج Pixtral يظهر قدرات قوية في فهم الرسوم البيانية والصور، والإجابة على الأسئلة المتعلقة بالمستندات، والاستدلال متعدد الوسائط، واتباع التعليمات، مع القدرة على إدخال الصور بدقة طبيعية ونسبة عرض إلى ارتفاع، بالإضافة إلى معالجة عدد غير محدود من الصور في نافذة سياق طويلة تصل إلى 128K توكن."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "نموذج Qwen للبرمجة."
+ },
+ "qwen-long": {
+ "description": "نموذج Qwen العملاق للغة، يدعم سياقات نصية طويلة، بالإضافة إلى وظائف الحوار المستندة إلى الوثائق الطويلة والعديد من الوثائق."
+ },
+ "qwen-math-plus-latest": {
+ "description": "نموذج Qwen الرياضي مصمم خصيصًا لحل المسائل الرياضية."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "نموذج Qwen الرياضي مصمم خصيصًا لحل المسائل الرياضية."
+ },
+ "qwen-max-latest": {
+ "description": "نموذج لغة ضخم من Qwen بمستوى تريليونات، يدعم إدخال لغات مختلفة مثل الصينية والإنجليزية، وهو النموذج API وراء إصدار Qwen 2.5."
+ },
+ "qwen-plus-latest": {
+ "description": "نسخة محسنة من نموذج لغة Qwen الضخم، تدعم إدخال لغات مختلفة مثل الصينية والإنجليزية."
+ },
+ "qwen-turbo-latest": {
+ "description": "نموذج لغة ضخم من Qwen، يدعم إدخال لغات مختلفة مثل الصينية والإنجليزية."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "نموذج Qwen العملاق للغة البصرية يدعم طرق تفاعل مرنة، بما في ذلك الصور المتعددة، والأسئلة والأجوبة المتعددة، والإبداع."
+ },
+ "qwen-vl-max": {
+ "description": "نموذج Qwen العملاق للغة البصرية. يعزز بشكل أكبر من قدرة الاستدلال البصري والامتثال للتعليمات، ويقدم مستوى أعلى من الإدراك البصري والفهم."
+ },
+ "qwen-vl-plus": {
+ "description": "نموذج Qwen العملاق للغة البصرية المعزز. يعزز بشكل كبير من قدرة التعرف على التفاصيل والتعرف على النصوص، ويدعم دقة تصل إلى مليون بكسل وأبعاد صورة بأي نسبة."
+ },
+ "qwen-vl-v1": {
+ "description": "نموذج تم تدريبه باستخدام نموذج Qwen-7B اللغوي، مع إضافة نموذج الصور، بدقة إدخال الصور 448."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 هو سلسلة جديدة من نماذج اللغة الكبيرة، تتمتع بقدرات فهم وتوليد أقوى."
+ },
+ "qwen2": {
+ "description": "Qwen2 هو نموذج لغوي كبير من الجيل الجديد من Alibaba، يدعم أداءً ممتازًا لتلبية احتياجات التطبيقات المتنوعة."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "نموذج Qwen 2.5 مفتوح المصدر بحجم 14B."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "نموذج Qwen 2.5 مفتوح المصدر بحجم 32B."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "نموذج Qwen 2.5 مفتوح المصدر بحجم 72B."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "نموذج Qwen 2.5 مفتوح المصدر بحجم 7B."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "نسخة مفتوحة المصدر من نموذج Qwen للبرمجة."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "نسخة مفتوحة المصدر من نموذج Qwen للبرمجة."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "نموذج Qwen-Math يتمتع بقدرات قوية في حل المسائل الرياضية."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "نموذج Qwen-Math يتمتع بقدرات قوية في حل المسائل الرياضية."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "نموذج Qwen-Math يتمتع بقدرات قوية في حل المسائل الرياضية."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 هو نموذج لغوي كبير من الجيل الجديد من Alibaba، يدعم أداءً ممتازًا لتلبية احتياجات التطبيقات المتنوعة."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 هو نموذج لغوي كبير من الجيل الجديد من Alibaba، يدعم أداءً ممتازًا لتلبية احتياجات التطبيقات المتنوعة."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 هو نموذج لغوي كبير من الجيل الجديد من Alibaba، يدعم أداءً ممتازًا لتلبية احتياجات التطبيقات المتنوعة."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini هو نموذج LLM مدمج، يتفوق على GPT-3.5، ويتميز بقدرات متعددة اللغات، ويدعم الإنجليزية والكورية، ويقدم حلولًا فعالة وصغيرة الحجم."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) يوسع قدرات Solar Mini، ويركز على اللغة اليابانية، مع الحفاظ على الكفاءة والأداء الممتاز في استخدام الإنجليزية والكورية."
+ },
+ "solar-pro": {
+ "description": "Solar Pro هو نموذج LLM عالي الذكاء تم إطلاقه من قبل Upstage، يركز على قدرة اتباع التعليمات على وحدة معالجة الرسوميات الواحدة، وسجل IFEval فوق 80. حاليًا يدعم اللغة الإنجليزية، ومن المقرر إصدار النسخة الرسمية في نوفمبر 2024، مع توسيع دعم اللغات وطول السياق."
+ },
+ "step-1-128k": {
+ "description": "يوفر توازنًا بين الأداء والتكلفة، مناسب لمجموعة متنوعة من السيناريوهات."
+ },
+ "step-1-256k": {
+ "description": "يمتلك قدرة معالجة سياق طويلة جدًا، مناسب بشكل خاص لتحليل الوثائق الطويلة."
+ },
+ "step-1-32k": {
+ "description": "يدعم حوارات متوسطة الطول، مناسب لمجموعة متنوعة من تطبيقات السيناريو."
+ },
+ "step-1-8k": {
+ "description": "نموذج صغير، مناسب للمهام الخفيفة."
+ },
+ "step-1-flash": {
+ "description": "نموذج عالي السرعة، مناسب للحوار في الوقت الحقيقي."
+ },
+ "step-1v-32k": {
+ "description": "يدعم المدخلات البصرية، يعزز تجربة التفاعل متعدد الوسائط."
+ },
+ "step-1v-8k": {
+ "description": "نموذج بصري صغير، مناسب للمهام الأساسية المتعلقة بالنصوص والصور."
+ },
+ "step-2-16k": {
+ "description": "يدعم تفاعلات سياق كبيرة، مناسب لمشاهد الحوار المعقدة."
+ },
+ "taichu_llm": {
+ "description": "نموذج اللغة الكبير TaiChu يتمتع بقدرات قوية في فهم اللغة، بالإضافة إلى إنشاء النصوص، والإجابة على الأسئلة، وبرمجة الأكواد، والحسابات الرياضية، والاستدلال المنطقي، وتحليل المشاعر، وتلخيص النصوص. يجمع بشكل مبتكر بين التدريب المسبق على البيانات الضخمة والمعرفة الغنية من مصادر متعددة، من خلال تحسين تقنيات الخوارزميات باستمرار واستيعاب المعرفة الجديدة من البيانات النصية الضخمة، مما يحقق تطورًا مستمرًا في أداء النموذج. يوفر للمستخدمين معلومات وخدمات أكثر سهولة وتجربة أكثر ذكاءً."
+ },
+ "taichu_vqa": {
+ "description": "تايتشو 2.0V يجمع بين فهم الصور، ونقل المعرفة، والاستدلال المنطقي، ويظهر أداءً بارزًا في مجال الأسئلة والأجوبة النصية والصورية."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) يوفر قدرة حسابية معززة من خلال استراتيجيات فعالة وهندسة نموذجية."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) مناسب لمهام التعليمات الدقيقة، يوفر قدرة معالجة لغوية ممتازة."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 هو نموذج لغوي تقدمه Microsoft AI، يتميز بأداء ممتاز في الحوار المعقد، واللغات المتعددة، والاستدلال، والمساعدين الذكيين."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 هو نموذج لغوي تقدمه Microsoft AI، يتميز بأداء ممتاز في الحوار المعقد، واللغات المتعددة، والاستدلال، والمساعدين الذكيين."
+ },
+ "yi-large": {
+ "description": "نموذج جديد بمليارات المعلمات، يوفر قدرة قوية على الإجابة وتوليد النصوص."
+ },
+ "yi-large-fc": {
+ "description": "يدعم ويعزز قدرة استدعاء الأدوات على نموذج yi-large، مناسب لمجموعة متنوعة من سيناريوهات الأعمال التي تتطلب بناء وكيل أو سير عمل."
+ },
+ "yi-large-preview": {
+ "description": "الإصدار الأولي، يوصى باستخدام yi-large (الإصدار الجديد)."
+ },
+ "yi-large-rag": {
+ "description": "خدمة متقدمة تعتمد على نموذج yi-large القوي، تجمع بين تقنيات الاسترجاع والتوليد لتوفير إجابات دقيقة، وخدمة استرجاع المعلومات من الإنترنت في الوقت الحقيقي."
+ },
+ "yi-large-turbo": {
+ "description": "عالية الكفاءة، أداء ممتاز. يتم ضبطها بدقة عالية لتحقيق توازن بين الأداء وسرعة الاستدلال والتكلفة."
+ },
+ "yi-medium": {
+ "description": "نموذج متوسط الحجم تم تحسينه، يتمتع بقدرات متوازنة، وكفاءة عالية في التكلفة. تم تحسين قدرة اتباع التعليمات بشكل عميق."
+ },
+ "yi-medium-200k": {
+ "description": "نافذة سياق طويلة تصل إلى 200K، توفر قدرة عميقة على فهم وتوليد النصوص الطويلة."
+ },
+ "yi-spark": {
+ "description": "نموذج صغير ولكنه قوي، خفيف وسريع. يوفر قدرة معززة على العمليات الرياضية وكتابة الشيفرات."
+ },
+ "yi-vision": {
+ "description": "نموذج لمهام الرؤية المعقدة، يوفر قدرة عالية على فهم وتحليل الصور."
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/plugin.json b/DigitalHumanWeb/locales/ar/plugin.json
new file mode 100644
index 0000000..9577dad
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "متغيرات الاستدعاء",
+ "function_call": "استدعاء الدالة",
+ "off": "إيقاف التصحيح",
+ "on": "عرض معلومات استدعاء البرنامج المساعد",
+ "payload": "حمولة البرنامج المساعد",
+ "response": "الرد",
+ "tool_call": "طلب استدعاء الأداة"
+ },
+ "detailModal": {
+ "info": {
+ "description": "وصف واجهة برمجة التطبيقات",
+ "name": "اسم واجهة برمجة التطبيقات"
+ },
+ "tabs": {
+ "info": "قدرات البرنامج المساعد",
+ "manifest": "ملف التثبيت",
+ "settings": "الإعدادات"
+ },
+ "title": "تفاصيل البرنامج المساعد"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "سيتم حذف البرنامج المساعد المحلي، وبمجرد الحذف لن يمكن استعادته، هل ترغب في حذف هذا البرنامج المساعد؟",
+ "customParams": {
+ "useProxy": {
+ "label": "تثبيت عبر الوكيل (في حالة حدوث أخطاء الوصول عبر النطاقات المتقاطعة، يمكنك تجربة تفعيل هذا الخيار ثم إعادة التثبيت)"
+ }
+ },
+ "deleteSuccess": "تم حذف البرنامج المساعد بنجاح",
+ "manifest": {
+ "identifier": {
+ "desc": "العلامة المميزة للبرنامج المساعد",
+ "label": "المعرف"
+ },
+ "mode": {
+ "local": "تكوين بصري",
+ "local-tooltip": "غير مدعوم مؤقتًا",
+ "url": "رابط عبر الإنترنت"
+ },
+ "name": {
+ "desc": "عنوان البرنامج المساعد",
+ "label": "العنوان",
+ "placeholder": "محرك البحث"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "مؤلف البرنامج المساعد",
+ "label": "المؤلف"
+ },
+ "avatar": {
+ "desc": "رمز البرنامج المساعد، يمكن استخدام الرموز التعبيرية أو روابط URL",
+ "label": "الرمز"
+ },
+ "description": {
+ "desc": "وصف البرنامج المساعد",
+ "label": "الوصف",
+ "placeholder": "البحث في محركات البحث للحصول على المعلومات"
+ },
+ "formFieldRequired": "هذا الحقل مطلوب",
+ "homepage": {
+ "desc": "صفحة البداية للبرنامج المساعد",
+ "label": "الصفحة الرئيسية"
+ },
+ "identifier": {
+ "desc": "العلامة المميزة للبرنامج المساعد، سيتم التعرف عليها تلقائيًا من خلال الملف التعريفي",
+ "errorDuplicate": "تكرار العلامة المميزة مع برنامج مساعد موجود، يرجى تعديل العلامة المميزة",
+ "label": "المعرف",
+ "pattenErrorMessage": "يمكن إدخال الأحرف الإنجليزية والأرقام والرمزين - و_ فقط"
+ },
+ "manifest": {
+ "desc": "{{appName}} سيتم تثبيت الإضافة من خلال هذا الرابط",
+ "label": "ملف وصف البرنامج المساعد (Manifest) URL",
+ "preview": "معاينة الملف التعريفي",
+ "refresh": "تحديث"
+ },
+ "title": {
+ "desc": "عنوان البرنامج المساعد",
+ "label": "العنوان",
+ "placeholder": "محرك البحث"
+ }
+ },
+ "metaConfig": "تكوين معلومات البرنامج المساعد",
+    "modalDesc": "بعد إضافة البرنامج المساعد المخصص، يمكن استخدامه للتحقق من تطوير البرنامج المساعد، كما يمكن استخدامه مباشرة في الدردشة. للحصول على معلومات حول تطوير البرنامج المساعد، يرجى الرجوع إلى <1>وثائق التطوير ↗</1>",
+ "openai": {
+ "importUrl": "استيراد من رابط URL",
+ "schema": "مخطط"
+ },
+ "preview": {
+ "card": "معاينة عرض البرنامج المساعد",
+ "desc": "معاينة وصف البرنامج المساعد",
+ "title": "معاينة اسم البرنامج المساعد"
+ },
+ "save": "تثبيت البرنامج المساعد",
+ "saveSuccess": "تم حفظ إعدادات البرنامج المساعد بنجاح",
+ "tabs": {
+ "manifest": "قائمة وصف الوظائف (Manifest)",
+ "meta": "معلومات البرنامج المساعد"
+ },
+ "title": {
+ "create": "إضافة برنامج مساعد مخصص",
+ "edit": "تحرير برنامج مساعد مخصص"
+ },
+ "type": {
+ "lobe": "برنامج مساعد LobeChat",
+ "openai": "برنامج مساعد OpenAI"
+ },
+ "update": "تحديث",
+ "updateSuccess": "تم تحديث إعدادات البرنامج المساعد بنجاح"
+ },
+ "error": {
+ "fetchError": "فشل طلب الرابط المعطى للملف، يرجى التأكد من صحة الرابط والسماح بالوصول عبر النطاقات المختلفة",
+ "installError": "فشل تثبيت الإضافة {{name}}",
+ "manifestInvalid": "الملف غير مطابق للمواصفات، نتيجة التحقق: \n\n {{error}}",
+ "noManifest": "ملف الوصف غير موجود",
+ "openAPIInvalid": "فشل تحليل OpenAPI، الخطأ: \n\n {{error}}",
+ "reinstallError": "فشل تحديث الإضافة {{name}}",
+ "urlError": "الرابط لا يعيد محتوى بتنسيق JSON، يرجى التأكد من صحة الرابط"
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "مهجور",
+ "local.config": "التكوين",
+ "local.title": "مخصص"
+ }
+ },
+ "loading": {
+ "content": "جاري استدعاء الإضافة...",
+ "plugin": "جاري تشغيل الإضافة..."
+ },
+ "pluginList": "قائمة الإضافات",
+ "setting": "إعدادات الإضافة",
+ "settings": {
+ "indexUrl": {
+ "title": "فهرس السوق",
+ "tooltip": "غير مدعوم حاليا للتحرير عبر الإنترنت، يرجى ضبطه عند نشر المتغيرات البيئية"
+ },
+ "modalDesc": "بعد ضبط عنوان سوق الإضافات، يمكن استخدام سوق الإضافات المخصص",
+ "title": "ضبط سوق الإضافات"
+ },
+ "showInPortal": "يرجى الاطلاع على التفاصيل في مساحة العمل",
+ "store": {
+ "actions": {
+ "confirmUninstall": "سيتم إلغاء تثبيت الإضافة، وسيتم مسح تكوين الإضافة، يرجى تأكيد العملية",
+ "detail": "التفاصيل",
+ "install": "تثبيت",
+ "manifest": "تحرير ملف التثبيت",
+ "settings": "الإعدادات",
+ "uninstall": "إلغاء التثبيت"
+ },
+ "communityPlugin": "مجتمع ثالث",
+ "customPlugin": "مخصص",
+ "empty": "لا توجد إضافات مثبتة حاليا",
+ "installAllPlugins": "تثبيت الكل",
+ "networkError": "فشل الحصول على متجر الإضافات، يرجى التحقق من الاتصال بالشبكة وإعادة المحاولة",
+ "placeholder": "ابحث عن اسم الإضافة أو الكلمات الرئيسية...",
+ "releasedAt": "صدر في {{createdAt}}",
+ "tabs": {
+ "all": "الكل",
+ "installed": "مثبتة"
+ },
+ "title": "متجر الإضافات"
+ },
+ "unknownPlugin": "البرنامج المساعد غير معروف"
+}
diff --git a/DigitalHumanWeb/locales/ar/portal.json b/DigitalHumanWeb/locales/ar/portal.json
new file mode 100644
index 0000000..3280f17
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "القطع الأثرية",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "جزء",
+ "file": "ملف"
+ }
+ },
+ "Plugins": "ملحقات",
+ "actions": {
+ "genAiMessage": "إنشاء رسالة مساعد ذكاء اصطناعي",
+ "summary": "ملخص",
+ "summaryTooltip": "ملخص للمحتوى الحالي"
+ },
+ "artifacts": {
+ "display": {
+ "code": "رمز",
+ "preview": "معاينة"
+ },
+ "svg": {
+ "copyAsImage": "نسخ كصورة",
+ "copyFail": "فشل النسخ، سبب الخطأ: {{error}}",
+ "copySuccess": "تم نسخ الصورة بنجاح",
+ "download": {
+ "png": "تحميل كـ PNG",
+ "svg": "تحميل كـ SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "قائمة القطع الأثرية الحالية فارغة، يرجى استخدام الإضافات في الجلسة ومن ثم التحقق مرة أخرى",
+ "emptyKnowledgeList": "قائمة المعرفة الحالية فارغة، يرجى فتح قاعدة المعرفة حسب الحاجة في المحادثة قبل العرض",
+ "files": "ملفات",
+ "messageDetail": "تفاصيل الرسالة",
+ "title": "نافذة موسعة"
+}
diff --git a/DigitalHumanWeb/locales/ar/providers.json b/DigitalHumanWeb/locales/ar/providers.json
new file mode 100644
index 0000000..9daacf2
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "AI 360 هي منصة نماذج وخدمات الذكاء الاصطناعي التي أطلقتها شركة 360، تقدم مجموعة متنوعة من نماذج معالجة اللغة الطبيعية المتقدمة، بما في ذلك 360GPT2 Pro و360GPT Pro و360GPT Turbo و360GPT Turbo Responsibility 8K. تجمع هذه النماذج بين المعلمات الكبيرة والقدرات متعددة الوسائط، وتستخدم على نطاق واسع في توليد النصوص، وفهم المعاني، وأنظمة الحوار، وتوليد الشيفرات. من خلال استراتيجيات تسعير مرنة، تلبي AI 360 احتياجات المستخدمين المتنوعة، وتدعم المطورين في التكامل، مما يعزز الابتكار والتطوير في التطبيقات الذكية."
+ },
+ "anthropic": {
+ "description": "Anthropic هي شركة تركز على أبحاث وتطوير الذكاء الاصطناعي، وتقدم مجموعة من نماذج اللغة المتقدمة، مثل Claude 3.5 Sonnet وClaude 3 Sonnet وClaude 3 Opus وClaude 3 Haiku. تحقق هذه النماذج توازنًا مثاليًا بين الذكاء والسرعة والتكلفة، وتناسب مجموعة متنوعة من سيناريوهات التطبيقات، من أحمال العمل على مستوى المؤسسات إلى الاستجابات السريعة. يعتبر Claude 3.5 Sonnet أحدث نماذجها، وقد أظهر أداءً ممتازًا في العديد من التقييمات مع الحفاظ على نسبة تكلفة فعالة."
+ },
+ "azure": {
+ "description": "توفر Azure مجموعة متنوعة من نماذج الذكاء الاصطناعي المتقدمة، بما في ذلك GPT-3.5 وأحدث سلسلة GPT-4، تدعم أنواع بيانات متعددة ومهام معقدة، وتلتزم بحلول ذكاء اصطناعي آمنة وموثوقة ومستدامة."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligence هي شركة تركز على تطوير نماذج الذكاء الاصطناعي الكبيرة، حيث تظهر نماذجها أداءً ممتازًا في المهام الصينية مثل الموسوعات المعرفية ومعالجة النصوص الطويلة والإبداع. تتفوق على النماذج الرئيسية الأجنبية. كما تتمتع Baichuan Intelligence بقدرات متعددة الوسائط رائدة في الصناعة، وقد أظهرت أداءً ممتازًا في العديد من التقييمات الموثوقة. تشمل نماذجها Baichuan 4 وBaichuan 3 Turbo وBaichuan 3 Turbo 128k، وكل منها مُحسّن لمشاهد تطبيق مختلفة، مما يوفر حلولًا فعالة من حيث التكلفة."
+ },
+ "bedrock": {
+ "description": "Bedrock هي خدمة تقدمها أمازون AWS، تركز على توفير نماذج لغة ورؤية متقدمة للذكاء الاصطناعي للشركات. تشمل عائلة نماذجها سلسلة Claude من Anthropic وسلسلة Llama 3.1 من Meta، وتغطي مجموعة من الخيارات من النماذج الخفيفة إلى عالية الأداء، وتدعم مهام مثل توليد النصوص، والحوار، ومعالجة الصور، مما يجعلها مناسبة لتطبيقات الشركات بمختلف أحجامها واحتياجاتها."
+ },
+ "deepseek": {
+ "description": "DeepSeek هي شركة تركز على أبحاث وتطبيقات تقنيات الذكاء الاصطناعي، حيث يجمع نموذجها الأحدث DeepSeek-V2.5 بين قدرات الحوار العامة ومعالجة الشيفرات، وقد حقق تحسينات ملحوظة في محاذاة تفضيلات البشر، ومهام الكتابة، واتباع التعليمات."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI هي شركة رائدة في تقديم خدمات نماذج اللغة المتقدمة، تركز على استدعاء الوظائف والمعالجة متعددة الوسائط. نموذجها الأحدث Firefunction V2 مبني على Llama-3، مُحسّن لاستدعاء الوظائف، والحوار، واتباع التعليمات. يدعم نموذج اللغة البصرية FireLLaVA-13B إدخال الصور والنصوص المختلطة. تشمل النماذج البارزة الأخرى سلسلة Llama وسلسلة Mixtral، مما يوفر دعمًا فعالًا لاتباع التعليمات وتوليدها بلغات متعددة."
+ },
+ "github": {
+ "description": "مع نماذج GitHub، يمكن للمطورين أن يصبحوا مهندسي ذكاء اصطناعي ويبنون باستخدام نماذج الذكاء الاصطناعي الرائدة في الصناعة."
+ },
+ "google": {
+ "description": "سلسلة Gemini من Google هي نماذج الذكاء الاصطناعي الأكثر تقدمًا وشمولية، تم تطويرها بواسطة Google DeepMind، مصممة خصيصًا لتكون متعددة الوسائط، تدعم الفهم والمعالجة السلسة للنصوص، والشيفرات، والصور، والصوت، والفيديو. تناسب مجموعة متنوعة من البيئات، من مراكز البيانات إلى الأجهزة المحمولة، مما يعزز بشكل كبير كفاءة نماذج الذكاء الاصطناعي وانتشار استخدامها."
+ },
+ "groq": {
+ "description": "يتميز محرك الاستدلال LPU من Groq بأداء ممتاز في أحدث اختبارات المعايير لنماذج اللغة الكبيرة المستقلة (LLM)، حيث أعاد تعريف معايير حلول الذكاء الاصطناعي بسرعته وكفاءته المذهلة. Groq يمثل سرعة استدلال فورية، ويظهر أداءً جيدًا في النشر القائم على السحابة."
+ },
+ "minimax": {
+ "description": "MiniMax هي شركة تكنولوجيا الذكاء الاصطناعي العامة التي تأسست في عام 2021، تكرس جهودها للتعاون مع المستخدمين في إنشاء الذكاء. طورت MiniMax نماذج كبيرة عامة من أوضاع مختلفة، بما في ذلك نموذج نصي MoE الذي يحتوي على تريليونات من المعلمات، ونموذج صوتي، ونموذج صور. وقد أطلقت تطبيقات مثل Conch AI."
+ },
+ "mistral": {
+ "description": "تقدم Mistral نماذج متقدمة عامة ومتخصصة وبحثية، تستخدم على نطاق واسع في الاستدلال المعقد، والمهام متعددة اللغات، وتوليد الشيفرات، من خلال واجهة استدعاء الوظائف، يمكن للمستخدمين دمج وظائف مخصصة لتحقيق تطبيقات محددة."
+ },
+ "moonshot": {
+ "description": "Moonshot هي منصة مفتوحة أطلقتها شركة Beijing Dark Side Technology Co.، Ltd، تقدم مجموعة متنوعة من نماذج معالجة اللغة الطبيعية، وتغطي مجالات واسعة، بما في ذلك ولكن لا تقتصر على إنشاء المحتوى، والبحث الأكاديمي، والتوصيات الذكية، والتشخيص الطبي، وتدعم معالجة النصوص الطويلة والمهام المعقدة."
+ },
+ "novita": {
+ "description": "Novita AI هي منصة تقدم خدمات API لمجموعة متنوعة من نماذج اللغة الكبيرة وتوليد الصور بالذكاء الاصطناعي، مرنة وموثوقة وفعالة من حيث التكلفة. تدعم أحدث النماذج مفتوحة المصدر مثل Llama3 وMistral، وتوفر حلول API شاملة وسهلة الاستخدام وقابلة للتوسع تلقائيًا لتطوير تطبيقات الذكاء الاصطناعي، مما يجعلها مناسبة لنمو الشركات الناشئة في مجال الذكاء الاصطناعي."
+ },
+ "ollama": {
+ "description": "تغطي نماذج Ollama مجموعة واسعة من مجالات توليد الشيفرة، والعمليات الرياضية، ومعالجة اللغات المتعددة، والتفاعل الحواري، وتدعم احتياجات النشر على مستوى المؤسسات والتخصيص المحلي."
+ },
+ "openai": {
+ "description": "OpenAI هي مؤسسة رائدة عالميًا في أبحاث الذكاء الاصطناعي، حيث دفعت النماذج التي طورتها مثل سلسلة GPT حدود معالجة اللغة الطبيعية. تلتزم OpenAI بتغيير العديد من الصناعات من خلال حلول الذكاء الاصطناعي المبتكرة والفعالة. تتمتع منتجاتهم بأداء ملحوظ وفعالية من حيث التكلفة، وتستخدم على نطاق واسع في البحث والتجارة والتطبيقات الابتكارية."
+ },
+ "openrouter": {
+ "description": "OpenRouter هي منصة خدمة تقدم واجهات لمجموعة متنوعة من النماذج الكبيرة المتقدمة، تدعم OpenAI وAnthropic وLLaMA وغيرها، مما يجعلها مناسبة لاحتياجات التطوير والتطبيق المتنوعة. يمكن للمستخدمين اختيار النموذج والسعر الأمثل وفقًا لاحتياجاتهم، مما يعزز تجربة الذكاء الاصطناعي."
+ },
+ "perplexity": {
+ "description": "Perplexity هي شركة رائدة في تقديم نماذج توليد الحوار، تقدم مجموعة من نماذج Llama 3.1 المتقدمة، تدعم التطبيقات عبر الإنترنت وغير المتصلة، وتناسب بشكل خاص مهام معالجة اللغة الطبيعية المعقدة."
+ },
+ "qwen": {
+ "description": "Qwen هو نموذج لغة ضخم تم تطويره ذاتيًا بواسطة Alibaba Cloud، يتمتع بقدرات قوية في فهم وتوليد اللغة الطبيعية. يمكنه الإجابة على مجموعة متنوعة من الأسئلة، وكتابة المحتوى، والتعبير عن الآراء، وكتابة الشيفرات، ويؤدي دورًا في مجالات متعددة."
+ },
+ "siliconcloud": {
+ "description": "تسعى SiliconFlow إلى تسريع الذكاء الاصطناعي العام (AGI) لفائدة البشرية، من خلال تحسين كفاءة الذكاء الاصطناعي على نطاق واسع باستخدام حزمة GenAI سهلة الاستخدام وذات التكلفة المنخفضة."
+ },
+ "spark": {
+ "description": "تقدم شركة iFlytek نموذج Spark الكبير، الذي يوفر قدرات ذكاء اصطناعي قوية عبر مجالات متعددة ولغات متعددة، باستخدام تقنيات معالجة اللغة الطبيعية المتقدمة، لبناء تطبيقات مبتكرة مناسبة للأجهزة الذكية، والرعاية الصحية الذكية، والتمويل الذكي، وغيرها من السيناريوهات الرأسية."
+ },
+ "stepfun": {
+ "description": "نموذج StepFun الكبير يتمتع بقدرات متعددة الوسائط واستدلال معقد رائدة في الصناعة، ويدعم فهم النصوص الطويلة جدًا وميزات قوية لمحرك البحث الذاتي."
+ },
+ "taichu": {
+ "description": "أطلقت الأكاديمية الصينية للعلوم ومعهد ووهان للذكاء الاصطناعي نموذجًا جديدًا متعدد الوسائط، يدعم أسئلة وأجوبة متعددة الجولات، وإنشاء النصوص، وتوليد الصور، وفهم 3D، وتحليل الإشارات، ويغطي مجموعة شاملة من مهام الأسئلة والأجوبة، مع قدرات أقوى في الإدراك والفهم والإبداع، مما يوفر تجربة تفاعلية جديدة."
+ },
+ "togetherai": {
+ "description": "تسعى Together AI لتحقيق أداء رائد من خلال نماذج الذكاء الاصطناعي المبتكرة، وتقدم مجموعة واسعة من القدرات المخصصة، بما في ذلك دعم التوسع السريع وعمليات النشر البديهية، لتلبية احتياجات الشركات المتنوعة."
+ },
+ "upstage": {
+ "description": "تتخصص Upstage في تطوير نماذج الذكاء الاصطناعي لتلبية احتياجات الأعمال المتنوعة، بما في ذلك Solar LLM وDocument AI، بهدف تحقيق الذكاء الاصطناعي العام (AGI) القائم على العمل. من خلال واجهة Chat API، يمكن إنشاء وكلاء حوار بسيطين، وتدعم استدعاء الوظائف، والترجمة، والتضمين، وتطبيقات المجالات المحددة."
+ },
+ "zeroone": {
+ "description": "01.AI تركز على تقنيات الذكاء الاصطناعي في عصر الذكاء الاصطناعي 2.0، وتعزز الابتكار والتطبيقات \"الإنسان + الذكاء الاصطناعي\"، باستخدام نماذج قوية وتقنيات ذكاء اصطناعي متقدمة لتعزيز إنتاجية البشر وتحقيق تمكين التكنولوجيا."
+ },
+ "zhipu": {
+ "description": "تقدم Zhipu AI منصة مفتوحة للنماذج متعددة الوسائط ونماذج اللغة، تدعم مجموعة واسعة من سيناريوهات تطبيقات الذكاء الاصطناعي، بما في ذلك معالجة النصوص، وفهم الصور، والمساعدة في البرمجة."
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/ragEval.json b/DigitalHumanWeb/locales/ar/ragEval.json
new file mode 100644
index 0000000..dc28461
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "إنشاء جديد",
+ "description": {
+ "placeholder": "وصف مجموعة البيانات (اختياري)"
+ },
+ "name": {
+ "placeholder": "اسم مجموعة البيانات",
+ "required": "يرجى إدخال اسم مجموعة البيانات"
+ },
+ "title": "إضافة مجموعة بيانات"
+ },
+ "dataset": {
+ "addNewButton": "إنشاء مجموعة بيانات",
+ "emptyGuide": "مجموعة البيانات الحالية فارغة، يرجى إنشاء مجموعة بيانات.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "استيراد البيانات"
+ },
+ "columns": {
+ "actions": "الإجراءات",
+ "ideal": {
+ "title": "الإجابة المثالية"
+ },
+ "question": {
+ "title": "السؤال"
+ },
+ "referenceFiles": {
+ "title": "ملفات مرجعية"
+ }
+ },
+ "notSelected": "يرجى اختيار مجموعة بيانات من اليسار",
+ "title": "تفاصيل مجموعة البيانات"
+ },
+ "title": "مجموعة البيانات"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "إنشاء جديد",
+ "datasetId": {
+ "placeholder": "يرجى اختيار مجموعة بيانات التقييم الخاصة بك",
+ "required": "يرجى اختيار مجموعة بيانات التقييم"
+ },
+ "description": {
+ "placeholder": "وصف مهمة التقييم (اختياري)"
+ },
+ "name": {
+ "placeholder": "اسم مهمة التقييم",
+ "required": "يرجى إدخال اسم مهمة التقييم"
+ },
+ "title": "إضافة مهمة تقييم"
+ },
+ "addNewButton": "إنشاء تقييم",
+ "emptyGuide": "مهمة التقييم الحالية فارغة، ابدأ بإنشاء تقييم.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "تحقق من الحالة",
+ "confirmDelete": "هل تريد حذف هذه المهمة من التقييم؟",
+ "confirmRun": "هل تريد بدء التشغيل؟ بعد بدء التشغيل، سيتم تنفيذ مهمة التقييم في الخلفية بشكل غير متزامن، وإغلاق الصفحة لن يؤثر على تنفيذ المهمة غير المتزامنة.",
+ "downloadRecords": "تنزيل السجلات",
+ "retry": "إعادة المحاولة",
+ "run": "تشغيل",
+ "title": "الإجراءات"
+ },
+ "datasetId": {
+ "title": "مجموعة البيانات"
+ },
+ "name": {
+ "title": "اسم مهمة التقييم"
+ },
+ "records": {
+ "title": "عدد سجلات التقييم"
+ },
+ "referenceFiles": {
+ "title": "ملفات مرجعية"
+ },
+ "status": {
+ "error": "حدث خطأ أثناء التنفيذ",
+ "pending": "في انتظار التشغيل",
+ "processing": "جارٍ التشغيل",
+ "success": "تم التنفيذ بنجاح",
+ "title": "الحالة"
+ }
+ },
+ "title": "قائمة مهام التقييم"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/setting.json b/DigitalHumanWeb/locales/ar/setting.json
new file mode 100644
index 0000000..3d36ece
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "حول"
+ },
+ "agentTab": {
+ "chat": "تفضيلات الدردشة",
+ "meta": "معلومات المساعد",
+ "modal": "إعدادات النموذج",
+ "plugin": "إعدادات الإضافة",
+ "prompt": "تعيين الشخصية",
+ "tts": "خدمة النص إلى كلام"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "من خلال اختيار إرسال بيانات القياس عن بُعد، يمكنك مساعدتنا في تحسين تجربة المستخدم العامة لـ {{appName}}",
+ "title": "إرسال بيانات الاستخدام المجهولة"
+ },
+ "title": "تحليلات"
+ },
+ "danger": {
+ "clear": {
+ "action": "مسح الآن",
+ "confirm": "هل تؤكد مسح جميع بيانات المحادثات؟",
+ "desc": "سيتم مسح جميع بيانات الجلسة بما في ذلك المساعد والملفات والرسائل والإضافات",
+ "success": "تم مسح جميع رسائل الجلسة",
+ "title": "مسح جميع رسائل الجلسة"
+ },
+ "reset": {
+ "action": "إعادة تعيين الآن",
+ "confirm": "هل تؤكد إعادة تعيين جميع الإعدادات؟",
+ "currentVersion": "الإصدار الحالي",
+ "desc": "إعادة تعيين جميع عناصر الإعدادات إلى القيم الافتراضية",
+ "success": "تمت إعادة ضبط جميع الإعدادات",
+ "title": "إعادة تعيين جميع الإعدادات"
+ }
+ },
+ "header": {
+ "desc": "إعدادات التفضيلات والنماذج.",
+ "global": "إعدادات عامة",
+ "session": "إعدادات الجلسة",
+ "sessionDesc": "إعداد الشخصية وتفضيلات الجلسة.",
+ "sessionWithName": "إعدادات الجلسة · {{name}}",
+ "title": "إعدادات"
+ },
+ "llm": {
+ "aesGcm": "سيتم استخدام خوارزمية التشفير <1>AES-GCM</1> لتشفير مفتاحك وعنوان الوكيل",
+ "apiKey": {
+ "desc": "يرجى ملء مفتاح API الخاص بك {{name}}",
+ "placeholder": "{{name}} مفتاح API",
+ "title": "مفتاح API"
+ },
+ "checker": {
+ "button": "فحص",
+ "desc": "اختبار ما إذا كان مفتاح واجهة البرمجة وعنوان الوكيل مملوء بشكل صحيح",
+ "pass": "تمت المراقبة",
+ "title": "فحص الاتصال"
+ },
+ "customModelCards": {
+ "addNew": "إنشاء وإضافة نموذج {{id}}",
+ "config": "تكوين النموذج",
+ "confirmDelete": "سيتم حذف النموذج المخصص هذا، وبمجرد الحذف لا يمكن استعادته، يرجى التحلي بالحذر.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "الحقل الفعلي المطلوب في طلب Azure OpenAI",
+ "placeholder": "الرجاء إدخال اسم نشر النموذج في Azure",
+ "title": "اسم نشر النموذج"
+ },
+ "displayName": {
+ "placeholder": "الرجاء إدخال اسم العرض للنموذج، مثل ChatGPT، GPT-4، إلخ",
+ "title": "اسم العرض للنموذج"
+ },
+ "files": {
+ "extra": "تنفيذ تحميل الملفات الحالي هو مجرد حل Hack، ومخصص للتجربة الذاتية فقط. يرجى الانتظار حتى يتم تنفيذ القدرة الكاملة على تحميل الملفات لاحقًا",
+ "title": "دعم تحميل الملفات"
+ },
+ "functionCall": {
+ "extra": "ستفتح هذه الإعدادات فقط القدرة على استدعاء الوظائف داخل التطبيق، وما إذا كانت الوظائف مدعومة يعتمد تمامًا على النموذج نفسه، يرجى اختبار قابلية استخدام استدعاء الوظائف لهذا النموذج بنفسك",
+ "title": "دعم استدعاء الوظائف"
+ },
+ "id": {
+ "extra": "سيتم عرضه كعلامة للنموذج",
+ "placeholder": "الرجاء إدخال معرف النموذج، مثل gpt-4-turbo-preview أو claude-2.1",
+ "title": "معرف النموذج"
+ },
+ "modalTitle": "تكوين النموذج المخصص",
+ "tokens": {
+ "title": "أقصى عدد من الرموز",
+ "unlimited": "غير محدود"
+ },
+ "vision": {
+ "extra": "ستفتح هذه الإعدادات فقط القدرة على تحميل الصور داخل التطبيق، وما إذا كانت القدرة على التعرف مدعومة يعتمد تمامًا على النموذج نفسه، يرجى اختبار قابلية استخدام التعرف البصري لهذا النموذج بنفسك",
+ "title": "دعم التعرف على الصور"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "طريقة طلب العميل ستبدأ طلب الجلسة مباشرة من المتصفح، مما يمكن أن يعزز سرعة الاستجابة",
+ "title": "استخدام طريقة طلب العميل"
+ },
+ "fetcher": {
+ "fetch": "احصل على قائمة النماذج",
+ "fetching": "جاري الحصول على قائمة النماذج...",
+ "latestTime": "آخر تحديث: {{time}}",
+ "noLatestTime": "لم يتم الحصول على قائمة بعد"
+ },
+ "helpDoc": "دليل التكوين",
+ "modelList": {
+ "desc": "اختيار النموذج الذي سيتم عرضه في الجلسة، سيتم عرض النموذج المحدد في قائمة النماذج",
+ "placeholder": "الرجاء اختيار نموذج من القائمة",
+ "title": "قائمة النماذج",
+ "total": "متاح {{count}} نموذج"
+ },
+ "proxyUrl": {
+ "desc": "يجب أن يتضمن عنوان الوكيل API بالإضافة إلى العنوان الافتراضي http(s)://",
+ "title": "عنوان وكيل API"
+ },
+ "waitingForMore": "يتم <1>التخطيط لتوفير</1> المزيد من النماذج، ترقبوا المزيد"
+ },
+ "plugin": {
+ "addTooltip": "إضافة البرنامج المساعد",
+ "clearDeprecated": "مسح البرامج المساعدة الغير صالحة",
+ "empty": "لا توجد برامج مساعدة مثبتة حاليًا، نرحب بك لزيارة <1>متجر البرامج المساعدة</1> للاستكشاف",
+ "installStatus": {
+ "deprecated": "تم إلغاء التثبيت"
+ },
+ "settings": {
+ "hint": "يرجى ملء الإعدادات التالية وفقًا للوصف",
+ "title": "إعدادات البرنامج المساعد {{id}}",
+ "tooltip": "إعدادات البرنامج المساعد"
+ },
+ "store": "متجر البرامج المساعد"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "الصورة الرمزية"
+ },
+ "backgroundColor": {
+ "title": "لون الخلفية"
+ },
+ "description": {
+ "placeholder": "الرجاء إدخال وصف المساعد",
+ "title": "وصف المساعد"
+ },
+ "name": {
+ "placeholder": "الرجاء إدخال اسم المساعد",
+ "title": "الاسم"
+ },
+ "prompt": {
+ "placeholder": "الرجاء إدخال كلمة الإشارة للشخصية",
+ "title": "ضبط الشخصية"
+ },
+ "tag": {
+ "placeholder": "الرجاء إدخال العلامة",
+ "title": "العلامة"
+ },
+ "title": "معلومات المساعد"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "عند تجاوز عدد الرسائل الحالي هذا القيمة، سيتم إنشاء موضوع تلقائيًا",
+ "title": "عتبة إنشاء الموضوع التلقائي"
+ },
+ "chatStyleType": {
+ "title": "نوع نافذة الدردشة",
+ "type": {
+ "chat": "نمط المحادثة",
+ "docs": "نمط الوثائق"
+ }
+ },
+ "compressThreshold": {
+ "desc": "عندما يتجاوز عدد الرسائل التاريخية غير المضغوطة هذه القيمة، سيتم ضغطها",
+ "title": "عتبة ضغط طول الرسائل التاريخية"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "هل يجب إنشاء موضوع تلقائيًا أثناء الدردشة، يسري ذلك فقط في المواضيع المؤقتة",
+ "title": "تمكين إنشاء الموضوع تلقائيًا"
+ },
+ "enableCompressThreshold": {
+ "title": "هل تريد تمكين عتبة ضغط طول الرسائل التاريخية"
+ },
+ "enableHistoryCount": {
+ "alias": "غير محدود",
+ "limited": "يحتوي فقط على {{number}} رسالة محادثة",
+ "setlimited": "تعيين عدد الرسائل التاريخية",
+ "title": "تحديد عدد الرسائل التاريخية",
+ "unlimited": "غير محدود"
+ },
+ "historyCount": {
+ "desc": "عدد الرسائل التي يتم إرفاقها في كل طلب (تشمل الأسئلة والأجوبة الجديدة. يُحسب كل سؤال وجواب كرسالة واحدة)",
+ "title": "عدد الرسائل المرفقة"
+ },
+ "inputTemplate": {
+ "desc": "سيتم ملء أحدث رسالة من المستخدم في هذا القالب",
+ "placeholder": "القالب المُعالج مسبقًا {{text}} سيتم استبداله بالمعلومات المُدخلة في الوقت الحقيقي",
+ "title": "معالجة مُدخلات المستخدم"
+ },
+ "title": "إعدادات الدردشة"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "تمكين الحد الأقصى للردود"
+ },
+ "frequencyPenalty": {
+ "desc": "كلما زادت القيمة، زاد احتمال تقليل تكرار الكلمات",
+ "title": "عقوبة التكرار"
+ },
+ "maxTokens": {
+ "desc": "عدد الرموز الأقصى المستخدمة في التفاعل الواحد",
+ "title": "الحد الأقصى للردود"
+ },
+ "model": {
+ "desc": "{{provider}} نموذج",
+ "title": "النموذج"
+ },
+ "presencePenalty": {
+ "desc": "كلما زادت القيمة، زاد احتمال التوسع في مواضيع جديدة",
+ "title": "جديد الحديث"
+ },
+ "temperature": {
+ "desc": "كلما زادت القيمة، زادت الردود عشوائية أكثر",
+ "title": "التباين",
+ "titleWithValue": "التباين {{value}}"
+ },
+ "title": "إعدادات النموذج",
+ "topP": {
+ "desc": "مشابه للتباين ولكن لا يجب تغييره مع التباين",
+ "title": "العينة الأساسية"
+ }
+ },
+ "settingPlugin": {
+ "title": "قائمة الإضافات"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "قام المسؤول بتمكين الوصول المشفر",
+ "placeholder": "الرجاء إدخال كلمة المرور",
+ "title": "كلمة المرور"
+ },
+ "oauth": {
+ "info": {
+ "desc": "تم تسجيل الدخول",
+ "title": "معلومات الحساب"
+ },
+ "signin": {
+ "action": "تسجيل الدخول",
+ "desc": "قم بتسجيل الدخول باستخدام SSO لفتح التطبيق",
+ "title": "تسجيل الدخول إلى الحساب"
+ },
+ "signout": {
+ "action": "تسجيل الخروج",
+ "confirm": "هل ترغب في تأكيد الخروج؟",
+ "success": "تم تسجيل الخروج بنجاح"
+ }
+ },
+ "title": "إعدادات النظام"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "نموذج تحويل النص إلى كلام من OpenAI",
+ "title": "OpenAI",
+ "ttsModel": "نموذج توليد الكلام من OpenAI"
+ },
+ "showAllLocaleVoice": {
+ "desc": "إذا تم إيقافه، سيتم عرض مصادر الصوت الخاصة باللغة الحالية فقط",
+ "title": "عرض جميع مصادر الصوت للغات"
+ },
+ "stt": "إعدادات التحويل من الصوت إلى نص",
+ "sttAutoStop": {
+ "desc": "عند الإيقاف، لن يتم إيقاف تحويل الصوت إلى نص تلقائيًا، وسيتطلب الأمر النقر على زر الإيقاف يدويًا",
+ "title": "إيقاف تحويل الصوت إلى نص تلقائيًا"
+ },
+ "sttLocale": {
+ "desc": "لغة الصوت المدخلة، يمكن أن يساعد هذا الخيار في زيادة دقة تحويل الصوت إلى نص",
+ "title": "لغة تحويل الصوت إلى نص"
+ },
+ "sttService": {
+ "desc": "حيث يكون المتصفح هو خدمة التحويل الصوتي الأصلية",
+ "title": "خدمة تحويل الصوت إلى نص"
+ },
+ "title": "خدمة الصوت",
+ "tts": "إعدادات توليد الكلام",
+ "ttsService": {
+ "desc": "إذا كنت تستخدم خدمة توليد الكلام من OpenAI، يجب التأكد من تمكين خدمة نموذج OpenAI",
+ "title": "خدمة توليد الكلام"
+ },
+ "voice": {
+ "desc": "حدد صوتًا للمساعد الحالي، تختلف مصادر الصوت المدعومة بحسب خدمة توليد الكلام",
+ "preview": "معاينة الصوت",
+ "title": "مصدر توليد الكلام"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "الصورة الرمزية"
+ },
+ "fontSize": {
+ "desc": "حجم الخط لمحتوى المحادثة",
+ "marks": {
+ "normal": "عادي"
+ },
+ "title": "حجم الخط"
+ },
+ "lang": {
+ "autoMode": "متابعة النظام",
+ "title": "اللغة"
+ },
+ "neutralColor": {
+ "desc": "تخصيص درجات اللون الرمادي للاتجاهات المختلفة",
+ "title": "اللون الأحادي"
+ },
+ "primaryColor": {
+ "desc": "تخصيص لون السمة الرئيسي",
+ "title": "لون السمة"
+ },
+ "themeMode": {
+ "auto": "تلقائي",
+ "dark": "داكن",
+ "light": "فاتح",
+ "title": "السمة"
+ },
+ "title": "إعدادات السمة"
+ },
+ "submitAgentModal": {
+ "button": "تقديم المساعد",
+ "identifier": "معرف المساعد",
+ "metaMiss": "يرجى استكمال معلومات المساعد قبل التقديم، يجب أن تتضمن الاسم والوصف والعلامة",
+ "placeholder": "الرجاء إدخال معرف المساعد، يجب أن يكون فريدًا، مثل تطوير الويب",
+ "tooltips": "مشاركة في سوق المساعدين"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "أضف اسمًا للتعرف بشكل أفضل",
+ "placeholder": "الرجاء إدخال اسم الجهاز",
+ "title": "اسم الجهاز"
+ },
+ "title": "معلومات الجهاز",
+ "unknownBrowser": "متصفح غير معروف",
+ "unknownOS": "نظام التشغيل غير معروف"
+ },
+ "warning": {
+ "tip": "بعد فترة اختبار عامة طويلة، قد لا يكون تزامن WebRTC مستقرًا بما يكفي لتلبية احتياجات التزامن العامة. يرجى <1>نشر خادم الإشارة</1> بنفسك قبل الاستخدام."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "سيستخدم WebRTC هذا الاسم لإنشاء قناة مزامنة، يرجى التأكد من فرادة اسم القناة",
+ "placeholder": "الرجاء إدخال اسم قناة المزامنة",
+ "shuffle": "توليف عشوائي",
+ "title": "اسم قناة المزامنة"
+ },
+ "channelPassword": {
+ "desc": "إضافة كلمة مرور لضمان خصوصية القناة، يمكن للأجهزة الانضمام إلى القناة فقط عند إدخال كلمة المرور الصحيحة",
+ "placeholder": "الرجاء إدخال كلمة مرور قناة المزامنة",
+ "title": "كلمة مرور قناة المزامنة"
+ },
+ "desc": "اتصال البيانات النقطي الفوري يتطلب تواجد الأجهزة معًا للمزامنة",
+ "enabled": {
+ "invalid": "الرجاء ملء اسم خادم الإشارة واسم القناة المتزامنة قبل تمكينها",
+ "title": "تمكين المزامنة"
+ },
+ "signaling": {
+ "desc": "سيستخدم WebRTC هذا العنوان للتزامن",
+ "placeholder": "الرجاء إدخال عنوان خادم الإشارة",
+ "title": "خادم الإشارة"
+ },
+ "title": "WebRTC مزامنة"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "نموذج إنشاء بيانات المساعد",
+ "modelDesc": "يحدد النموذج المستخدم لإنشاء اسم المساعد ووصفه وصورته وعلامته",
+ "title": "توليد معلومات المساعد تلقائيًا"
+ },
+ "queryRewrite": {
+ "label": "نموذج إعادة صياغة الأسئلة",
+ "modelDesc": "نموذج مخصص لتحسين أسئلة المستخدمين",
+ "title": "قاعدة المعرفة"
+ },
+ "title": "مساعد النظام",
+ "topic": {
+ "label": "نموذج تسمية الموضوع",
+ "modelDesc": "يحدد النموذج المستخدم لإعادة تسمية الموضوع تلقائيًا",
+ "title": "إعادة تسمية الموضوع"
+ },
+ "translation": {
+ "label": "نموذج الترجمة",
+ "modelDesc": "النموذج المحدد للاستخدام في الترجمة",
+ "title": "إعدادات مساعد الترجمة"
+ }
+ },
+ "tab": {
+ "about": "حول",
+ "agent": "المساعد الافتراضي",
+ "common": "إعدادات عامة",
+ "experiment": "تجربة",
+ "llm": "نموذج اللغة",
+ "sync": "مزامنة السحابة",
+ "system-agent": "مساعد النظام",
+ "tts": "خدمة الكلام"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "الامتدادات المدمجة"
+ },
+ "disabled": "النموذج الحالي لا يدعم استدعاء الوظائف، ولا يمكن استخدام الإضافة",
+ "plugins": {
+ "enabled": "ممكّنة {{num}}",
+ "groupName": "الإضافات",
+ "noEnabled": "لا توجد إضافات ممكّنة حاليًا",
+ "store": "متجر الإضافات"
+ },
+ "title": "أدوات الامتداد"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/tool.json b/DigitalHumanWeb/locales/ar/tool.json
new file mode 100644
index 0000000..6876ab6
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "توليد تلقائي",
+ "downloading": "صلاحية روابط الصور المُولَّدة بواسطة DallE3 تدوم ساعة واحدة فقط، يتم تحميل الصور إلى الجهاز المحلي...",
+ "generate": "توليد",
+ "generating": "جارٍ التوليد...",
+ "images": "الصور:",
+ "prompt": "كلمة تلميح"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ar/welcome.json b/DigitalHumanWeb/locales/ar/welcome.json
new file mode 100644
index 0000000..8a88801
--- /dev/null
+++ b/DigitalHumanWeb/locales/ar/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "استيراد التكوين",
+ "market": "تسوق في السوق",
+ "start": "ابدأ الآن"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "تغيير",
+ "title": "إضافة توصيات المساعدين:"
+ },
+ "defaultMessage": "أنا مساعدك الذكي الشخصي {{appName}}، كيف يمكنني مساعدتك الآن؟\nإذا كنت بحاجة إلى مساعد أكثر احترافية أو تخصيصًا، يمكنك النقر على `+` لإنشاء مساعد مخصص",
+ "defaultMessageWithoutCreate": "أنا مساعدك الذكي الشخصي {{appName}}، كيف يمكنني مساعدتك الآن؟",
+ "qa": {
+ "q01": "ما هو LobeHub؟",
+ "q02": "ما هو {{appName}}؟",
+ "q03": "هل يوجد دعم مجتمعي لـ {{appName}}؟",
+ "q04": "ما هي الميزات التي يدعمها {{appName}}؟",
+ "q05": "كيف يمكن نشر واستخدام {{appName}}؟",
+ "q06": "كيف يتم تسعير {{appName}}؟",
+ "q07": "هل {{appName}} مجاني؟",
+ "q08": "هل هناك نسخة سحابية؟",
+ "q09": "هل يدعم نماذج اللغة المحلية؟",
+ "q10": "هل يدعم التعرف على الصور وتوليدها؟",
+ "q11": "هل يدعم تحويل النص إلى كلام والتعرف على الصوت؟",
+ "q12": "هل يدعم نظام الإضافات؟",
+ "q13": "هل يوجد سوق خاص للحصول على GPTs؟",
+ "q14": "هل يدعم مزودي خدمات الذكاء الاصطناعي المتعددين؟",
+ "q15": "ماذا يجب أن أفعل إذا واجهت مشكلة أثناء الاستخدام؟"
+ },
+ "questions": {
+ "moreBtn": "معرفة المزيد",
+ "title": "الأسئلة الشائعة:"
+ },
+ "welcome": {
+ "afternoon": "مساء الخير",
+ "morning": "صباح الخير",
+ "night": "مساء الخير",
+ "noon": "نهاراً"
+ }
+ },
+ "header": "مرحبًا بكم في الاستخدام",
+ "pickAgent": "أو اختيار قالب مساعد من القائمة التالية",
+ "skip": "تخطى الإنشاء",
+ "slogan": {
+ "desc1": "قم بتشغيل عقلك الجماعي وأشعل شرارة التفكير. مساعدك الذكي، دائمًا موجود.",
+ "desc2": "أنشئ مساعدك الأول ولنبدأ!",
+ "title": "امنح نفسك عقلاً أذكى"
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/auth.json b/DigitalHumanWeb/locales/bg-BG/auth.json
new file mode 100644
index 0000000..1d88bf0
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Вход",
+ "loginOrSignup": "Вход / Регистрация",
+ "profile": "Профил",
+ "security": "Сигурност",
+ "signout": "Изход",
+ "signup": "Регистрация"
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/chat.json b/DigitalHumanWeb/locales/bg-BG/chat.json
new file mode 100644
index 0000000..e07eb82
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Модел"
+ },
+ "agentDefaultMessage": "Здравейте, аз съм **{{name}}**, можете да започнете разговор с мен веднага или да отидете на [Настройки на асистента]({{url}}), за да попълните информацията ми.",
+ "agentDefaultMessageWithSystemRole": "Здравей, аз съм **{{name}}**, {{systemRole}}. Нека започнем да чатим!",
+ "agentDefaultMessageWithoutEdit": "Здравей, аз съм **{{name}}** и нека започнем разговора!",
+ "agents": "Асистент",
+ "artifact": {
+ "generating": "Генериране",
+ "thinking": "В процес на мислене",
+ "thought": "Процес на мислене",
+ "unknownTitle": "Неназован артефакт"
+ },
+ "backToBottom": "Върни се в началото",
+ "chatList": {
+ "longMessageDetail": "Вижте детайлите"
+ },
+ "clearCurrentMessages": "Изчисти съобщенията от текущата сесия",
+ "confirmClearCurrentMessages": "На път си да изчистиш съобщенията от текущата сесия. След като бъдат изчистени, те не могат да бъдат възстановени. Моля, потвърди действието си.",
+ "confirmRemoveSessionItemAlert": "На път си да изтриеш този агент. След като бъде изтрит, той не може да бъде възстановен. Моля, потвърди действието си.",
+ "confirmRemoveSessionSuccess": "Сесията е успешно изтрита",
+ "defaultAgent": "Агент по подразбиране",
+ "defaultList": "Списък по подразбиране",
+ "defaultSession": "Агент по подразбиране",
+ "duplicateSession": {
+ "loading": "Копиране...",
+ "success": "Копирането е успешно",
+ "title": "{{title}} Копие"
+ },
+ "duplicateTitle": "{{title}} Копие",
+ "emptyAgent": "Няма наличен асистент",
+ "historyRange": "Диапазон на историята",
+ "inbox": {
+ "desc": "Активирай мозъчния клъстер и събуди креативното мислене. Твоят виртуален агент е тук, за да общува с теб за всичко.",
+ "title": "Просто чати"
+ },
+ "input": {
+ "addAi": "Добави AI съобщение",
+ "addUser": "Добави потребителско съобщение",
+ "more": "още",
+ "send": "Изпрати",
+ "sendWithCmdEnter": "Натисни {{meta}} + Enter за да изпратиш",
+ "sendWithEnter": "Натисни Enter за да изпратиш",
+ "stop": "Спри",
+ "warp": "Нов ред"
+ },
+ "knowledgeBase": {
+ "all": "Всички съдържания",
+ "allFiles": "Всички файлове",
+ "allKnowledgeBases": "Всички знания",
+ "disabled": "Текущият режим на внедряване не поддържа разговори с база знания. Ако искате да използвате тази функция, моля, превключете на внедряване с база данни на сървъра или използвайте услугата {{cloud}}.",
+ "library": {
+ "action": {
+ "add": "Добави",
+ "detail": "Детайли",
+ "remove": "Премахни"
+ },
+ "title": "Файлове/База знания"
+ },
+ "relativeFilesOrKnowledgeBases": "Свързани файлове/бази знания",
+ "title": "База знания",
+ "uploadGuide": "Качените файлове могат да бъдат прегледани в „База знания“",
+ "viewMore": "Вижте още"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Изтрий и прегенерирай",
+ "regenerate": "Прегенерирай"
+ },
+ "newAgent": "Нов агент",
+ "pin": "Закачи",
+ "pinOff": "Откачи",
+ "rag": {
+ "referenceChunks": "Цитирани източници",
+ "userQuery": {
+ "actions": {
+ "delete": "Изтрий Query",
+ "regenerate": "Прегенерирай Query"
+ }
+ }
+ },
+ "regenerate": "Прегенерирай",
+ "roleAndArchive": "Роля и архив",
+ "searchAgentPlaceholder": "Търсач на помощ...",
+ "sendPlaceholder": "Напиши съобщението си тук...",
+ "sessionGroup": {
+ "config": "Управление на групи",
+ "confirmRemoveGroupAlert": "Тази група е на път да бъде изтрита. След изтриването, агентите в тази група ще бъдат преместени в списъка по подразбиране. Моля, потвърди действието си.",
+ "createAgentSuccess": "Асистентът е създаден успешно",
+ "createGroup": "Добави нова група",
+ "createSuccess": "Създадена успешно",
+ "creatingAgent": "Създаване на асистент...",
+ "inputPlaceholder": "Моля, въведете име на групата...",
+ "moveGroup": "Премести в група",
+ "newGroup": "Нова група",
+ "rename": "Преименувай група",
+ "renameSuccess": "Преименувана успешно",
+ "sortSuccess": "Пренареждането е успешно",
+ "sorting": "Актуализиране на подредбата на групата...",
+ "tooLong": "Дължината на името на групата трябва да бъде между 1-20 символа"
+ },
+ "shareModal": {
+ "download": "Изтегли екранна снимка",
+ "imageType": "Формат на изображението",
+ "screenshot": "Екранна снимка",
+ "settings": "Настройки за експортиране",
+ "shareToShareGPT": "Генерирай ShareGPT линк за споделяне",
+ "withBackground": "Включи фоново изображение",
+ "withFooter": "Включи долен колонтитул",
+ "withPluginInfo": "Включи информация за плъгина",
+ "withSystemRole": "Включи настройката за роля на агента"
+ },
+ "stt": {
+ "action": "Гласов вход",
+ "loading": "Разпознаване...",
+ "prettifying": "Изглаждане..."
+ },
+ "temp": "Временен",
+ "tokenDetails": {
+ "chats": "Чат съобщения",
+ "rest": "Оставащи",
+ "systemRole": "Настройки на ролята",
+ "title": "Детайли на токена",
+ "tools": "Настройки на плъгина",
+ "total": "Общо налични",
+ "used": "Общо използвани"
+ },
+ "tokenTag": {
+ "overload": "Превишен лимит",
+ "remained": "Оставащи",
+ "used": "Използвани"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Автоматично преименуване",
+ "duplicate": "Създай копие",
+ "export": "Експортирай тема"
+ },
+ "checkOpenNewTopic": "Да се отвори ли нова тема?",
+ "checkSaveCurrentMessages": "Искате ли да запазите текущата сесия като тема?",
+ "confirmRemoveAll": "На път си да изтриеш всички теми. След като бъдат изтрити, те не могат да бъдат възстановени. Моля, продължи с повишено внимание.",
+ "confirmRemoveTopic": "На път си да изтриеш тази тема. След като бъде изтрита, тя не може да бъде възстановена. Моля, продължи с повишено внимание.",
+ "confirmRemoveUnstarred": "На път си да изтриеш немаркираните теми. След като бъдат изтрити, те не могат да бъдат възстановени. Моля, продължи с повишено внимание.",
+ "defaultTitle": "Тема по подразбиране",
+ "duplicateLoading": "Копиране на темата...",
+ "duplicateSuccess": "Темата е успешно копирана",
+ "guide": {
+ "desc": "Кликни върху бутона вляво, за да запазиш текущата сесия като историческа тема и да започнеш нова сесия.",
+ "title": "Списък с теми"
+ },
+ "openNewTopic": "Отвори нова тема",
+ "removeAll": "Премахни всички теми",
+ "removeUnstarred": "Премахни немаркираните теми",
+ "saveCurrentMessages": "Запази текущата сесия като тема",
+ "searchPlaceholder": "Търсене на теми...",
+ "title": "Списък с теми"
+ },
+ "translate": {
+ "action": "Превод",
+ "clear": "Изчисти превода"
+ },
+ "tts": {
+ "action": "Текст към говор",
+ "clear": "Изчисти речта"
+ },
+ "updateAgent": "Актуализирай информацията за агента",
+ "upload": {
+ "action": {
+ "fileUpload": "Качване на файл",
+ "folderUpload": "Качване на папка",
+ "imageDisabled": "Текущият модел не поддържа визуално разпознаване, моля, превключете модела и опитайте отново",
+ "imageUpload": "Качване на изображение",
+ "tooltip": "Качване"
+ },
+ "clientMode": {
+ "actionFiletip": "Качване на файл",
+ "actionTooltip": "Качване",
+ "disabled": "Текущият модел не поддържа визуално разпознаване и анализ на файлове, моля, превключете модела и опитайте отново"
+ },
+ "preview": {
+ "prepareTasks": "Подготовка на парчета...",
+ "status": {
+ "pending": "Подготовка за качване...",
+ "processing": "Обработка на файла..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/clerk.json b/DigitalHumanWeb/locales/bg-BG/clerk.json
new file mode 100644
index 0000000..88a3d08
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Назад",
+ "badge__default": "По подразбиране",
+ "badge__otherImpersonatorDevice": "Друго устройство за имитиране",
+ "badge__primary": "Основен",
+ "badge__requiresAction": "Изисква действие",
+ "badge__thisDevice": "Това устройство",
+ "badge__unverified": "Непотвърден",
+ "badge__userDevice": "Потребителско устройство",
+ "badge__you": "Вие",
+ "createOrganization": {
+ "formButtonSubmit": "Създай организация",
+ "invitePage": {
+ "formButtonReset": "Пропусни"
+ },
+ "title": "Създаване на организация"
+ },
+ "dates": {
+ "lastDay": "Вчера в {{ date | timeString('en-US') }}",
+ "next6Days": "{{ date | weekday('en-US', 'long') }} в {{ date | timeString('en-US') }}",
+ "nextDay": "Утре в {{ date | timeString('en-US') }}",
+ "numeric": "{{ date | numeric('en-US') }}",
+ "previous6Days": "Миналата {{ date | weekday('en-US', 'long') }} в {{ date | timeString('en-US') }}",
+ "sameDay": "Днес в {{ date | timeString('en-US') }}"
+ },
+ "dividerText": "или",
+ "footerActionLink__useAnotherMethod": "Използвай друг метод",
+ "footerPageLink__help": "Помощ",
+ "footerPageLink__privacy": "Поверителност",
+ "footerPageLink__terms": "Условия",
+ "formButtonPrimary": "Продължи",
+ "formButtonPrimary__verify": "Потвърди",
+ "formFieldAction__forgotPassword": "Забравена парола?",
+ "formFieldError__matchingPasswords": "Паролите съвпадат.",
+ "formFieldError__notMatchingPasswords": "Паролите не съвпадат.",
+ "formFieldError__verificationLinkExpired": "Връзката за потвърждение изтече. Моля, поискайте нова връзка.",
+ "formFieldHintText__optional": "По избор",
+ "formFieldHintText__slug": "Slug е четим идентификатор, който трябва да бъде уникален. Често се използва в URL адреси.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Изтрий профила",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "пример@email.com, пример2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "моята-орг",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Активирай автоматични покани за този домейн",
+ "formFieldLabel__backupCode": "Резервен код",
+ "formFieldLabel__confirmDeletion": "Потвърждение",
+ "formFieldLabel__confirmPassword": "Потвърди парола",
+ "formFieldLabel__currentPassword": "Текуща парола",
+ "formFieldLabel__emailAddress": "Имейл адрес",
+ "formFieldLabel__emailAddress_username": "Имейл адрес или потребителско име",
+ "formFieldLabel__emailAddresses": "Имейл адреси",
+ "formFieldLabel__firstName": "Първо име",
+ "formFieldLabel__lastName": "Фамилия",
+ "formFieldLabel__newPassword": "Нова парола",
+ "formFieldLabel__organizationDomain": "Домейн",
+ "formFieldLabel__organizationDomainDeletePending": "Изтрийте изчакващите покани и предложения за този домейн",
+ "formFieldLabel__organizationDomainEmailAddress": "Имейл адрес за потвърждение",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Въведете имейл адрес под този домейн, за да получите код и да потвърдите този домейн.",
+ "formFieldLabel__organizationName": "Име",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Име на ключ",
+ "formFieldLabel__password": "Парола",
+ "formFieldLabel__phoneNumber": "Телефонен номер",
+ "formFieldLabel__role": "Роля",
+ "formFieldLabel__signOutOfOtherSessions": "Изход от всички други устройства",
+ "formFieldLabel__username": "Потребителско име",
+ "impersonationFab": {
+ "action__signOut": "Изход",
+ "title": "Влизане като {{identifier}}"
+ },
+ "locale": "bg-BG",
+ "maintenanceMode": "В момента извършваме поддръжка, но не се притеснявайте, не би трябвало да отнеме повече от няколко минути.",
+ "membershipRole__admin": "Администратор",
+ "membershipRole__basicMember": "Член",
+ "membershipRole__guestMember": "Гост",
+ "organizationList": {
+ "action__createOrganization": "Създай организация",
+ "action__invitationAccept": "Присъедини се",
+ "action__suggestionsAccept": "Изпрати заявка за присъединяване",
+ "createOrganization": "Създай организация",
+ "invitationAcceptedLabel": "Присъединен",
+ "subtitle": "за продължаване към {{applicationName}}",
+ "suggestionsAcceptedLabel": "Чака одобрение",
+ "title": "Изберете акаунт",
+ "titleWithoutPersonal": "Изберете организация"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Автоматични покани",
+ "badge__automaticSuggestion": "Автоматични предложения",
+ "badge__manualInvitation": "Няма автоматично записване",
+ "badge__unverified": "Непотвърден",
+ "createDomainPage": {
+ "subtitle": "Добавете домейна за потвърждение. Потребителите с имейл адреси от този домейн могат автоматично да се присъединят към организацията или да поискат присъединяване.",
+ "title": "Добавете домейн"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "Поканите не могат да бъдат изпратени. Вече има чакащи покани за следните имейл адреси: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Изпрати покани",
+ "selectDropdown__role": "Изберете роля",
+ "subtitle": "Въведете или поставете един или повече имейл адреси, разделени с интервали или запетая.",
+ "successMessage": "Поканите бяха успешно изпратени",
+ "title": "Покани нови членове"
+ },
+ "membersPage": {
+ "action__invite": "Покани",
+ "activeMembersTab": {
+ "menuAction__remove": "Премахни член",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Присъединил се",
+ "tableHeader__role": "Роля",
+ "tableHeader__user": "Потребител"
+ },
+ "detailsTitle__emptyRow": "Няма членове за показване",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Поканете потребители, като свържете имейл домейн с вашата организация. Всеки, който се регистрира със съвпадащ имейл домейн, ще може да се присъедини към организацията по всяко време.",
+ "headerTitle": "Автоматични покани",
+ "primaryButton": "Управление на потвърдени домейни"
+ },
+ "table__emptyRow": "Няма покани за показване"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Оттегли поканата",
+ "tableHeader__invited": "Поканени"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Потребителите, които се регистрират със съвпадащ имейл домейн, ще виждат предложение да поискат да се присъединят към вашата организация.",
+ "headerTitle": "Автоматични предложения",
+ "primaryButton": "Управление на потвърдени домейни"
+ },
+ "menuAction__approve": "Одобри",
+ "menuAction__reject": "Отхвърли",
+ "tableHeader__requested": "Заявено право на достъп",
+ "table__emptyRow": "Няма заявки за показване"
+ },
+ "start": {
+ "headerTitle__invitations": "Покани",
+ "headerTitle__members": "Членове",
+ "headerTitle__requests": "Заявки"
+ }
+ },
+ "navbar": {
+ "description": "Управлявайте вашата организация",
+ "general": "Общи",
+ "members": "Членове",
+ "title": "Организация"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Въведете \"{{organizationName}}\" по-долу, за да продължите.",
+ "messageLine1": "Сигурни ли сте, че искате да изтриете тази организация?",
+ "messageLine2": "Това действие е постоянно и необратимо.",
+ "successMessage": "Изтрили сте организацията.",
+ "title": "Изтрий организация"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Въведете \"{{organizationName}}\" по-долу, за да продължите.",
+ "messageLine1": "Сигурни ли сте, че искате да напуснете тази организация? Ще загубите достъп до тази организация и нейните приложения.",
+ "messageLine2": "Това действие е постоянно и необратимо.",
+ "successMessage": "Излязохте от организацията.",
+ "title": "Напусни организация"
+ },
+ "title": "Опасност"
+ },
+ "domainSection": {
+ "menuAction__manage": "Управление",
+ "menuAction__remove": "Изтрий",
+ "menuAction__verify": "Потвърди",
+ "primaryButton": "Добави домейн",
+ "subtitle": "Позволете на потребителите автоматично да се присъединяват към организацията или да поискат присъединяване въз основа на потвърден имейл домейн.",
+ "title": "Потвърдени домейни"
+ },
+ "successMessage": "Организацията беше актуализирана.",
+ "title": "Актуализиране на профила"
+ },
+ "removeDomainPage": {
+ "messageLine1": "Имейл домейнът {{domain}} ще бъде премахнат.",
+ "messageLine2": "Потребителите няма да могат автоматично да се присъединяват към организацията след това.",
+ "successMessage": "{{domain}} беше премахнат.",
+ "title": "Премахни домейн"
+ },
+ "start": {
+ "headerTitle__general": "Общи",
+ "headerTitle__members": "Членове",
+ "profileSection": {
+ "primaryButton": "Актуализиране на профила",
+ "title": "Профил на организацията",
+ "uploadAction__title": "Лого"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "Премахването на този домейн ще засегне поканените потребители.",
+ "removeDomainActionLabel__remove": "Премахни домейна",
+ "removeDomainSubtitle": "Премахни този домейн от потвърдените ви домейни",
+ "removeDomainTitle": "Премахни домейн"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Потребителите биват автоматично поканени да се присъединят към организацията при регистрация и могат да се присъединят по всяко време.",
+ "automaticInvitationOption__label": "Автоматични покани",
+ "automaticSuggestionOption__description": "Потребителите получават предложение да поискат да се присъединят, но трябва да бъдат одобрени от администратор, преди да могат да се присъединят към организацията.",
+ "automaticSuggestionOption__label": "Автоматични предложения",
+ "calloutInfoLabel": "Промяната на режима на записване ще засегне само новите потребители.",
+ "calloutInvitationCountLabel": "Чакащи покани изпратени на потребители: {{count}}",
+ "calloutSuggestionCountLabel": "Чакащи предложения изпратени на потребители: {{count}}",
+ "manualInvitationOption__description": "Потребителите могат да бъдат поканени само ръчно към организацията.",
+ "manualInvitationOption__label": "Няма автоматично записване",
+ "subtitle": "Изберете как потребителите от този домейн могат да се присъединят към организацията."
+ },
+ "start": {
+ "headerTitle__danger": "Опасност",
+ "headerTitle__enrollment": "Опции за записване"
+ },
+ "subtitle": "Домейнът {{domain}} вече е потвърден. Продължете, като изберете режим на записване.",
+ "title": "Актуализиране на {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Въведете кода за потвърждение, изпратен на вашия имейл адрес",
+ "formTitle": "Код за потвърждение",
+ "resendButton": "Не сте получили код? Изпрати отново",
+ "subtitle": "Домейнът {{domainName}} трябва да бъде потвърден чрез имейл.",
+ "subtitleVerificationCodeScreen": "Беше изпратен код за потвърждение на {{emailAddress}}. Въведете кода, за да продължите.",
+ "title": "Потвърди домейн"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Създай организация",
+ "action__invitationAccept": "Присъедини се",
+ "action__manageOrganization": "Управление",
+ "action__suggestionsAccept": "Изпрати заявка за присъединяване",
+ "notSelected": "Не е избрана организация",
+ "personalWorkspace": "Личен акаунт",
+ "suggestionsAcceptedLabel": "Чака одобрение"
+ },
+ "paginationButton__next": "Следващ",
+ "paginationButton__previous": "Предишен",
+ "paginationRowText__displaying": "Показване на",
+ "paginationRowText__of": "от",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Добавяне на акаунт",
+ "action__signOutAll": "Изход от всички акаунти",
+ "subtitle": "Изберете акаунта, с който искате да продължите.",
+ "title": "Изберете акаунт"
+ },
+ "alternativeMethods": {
+ "actionLink": "Получете помощ",
+ "actionText": "Нямате нито един от тях?",
+ "blockButton__backupCode": "Използвайте резервен код",
+ "blockButton__emailCode": "Изпратете код на имейл до {{identifier}}",
+ "blockButton__emailLink": "Изпратете връзка на имейл до {{identifier}}",
+ "blockButton__passkey": "Влезте с вашия ключ за достъп",
+ "blockButton__password": "Влезте с паролата си",
+ "blockButton__phoneCode": "Изпратете SMS код до {{identifier}}",
+ "blockButton__totp": "Използвайте вашето приложение за аутентикация",
+ "getHelp": {
+ "blockButton__emailSupport": "Имейл поддръжка",
+ "content": "Ако имате затруднения при влизане в профила си, пишете ни имейл и ще работим с вас, за да възстановим достъпа възможно най-бързо.",
+ "title": "Получете помощ"
+ },
+ "subtitle": "Имате проблеми? Можете да използвате някой от тези методи за влизане.",
+ "title": "Използвайте друг метод"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Вашият резервен код е този, който сте получили при настройване на двустепенната аутентикация.",
+ "title": "Въведете резервен код"
+ },
+ "emailCode": {
+ "formTitle": "Код за потвърждение",
+ "resendButton": "Не сте получили код? Изпратете отново",
+ "subtitle": "за да продължите към {{applicationName}}",
+ "title": "Проверете имейла си"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Върнете се към оригиналния раздел, за да продължите.",
+ "title": "Тази връзка за потвърждение е изтекла"
+ },
+ "failed": {
+ "subtitle": "Върнете се към оригиналния раздел, за да продължите.",
+ "title": "Тази връзка за потвърждение е невалидна"
+ },
+ "formSubtitle": "Използвайте връзката за потвърждение изпратена на вашия имейл",
+ "formTitle": "Връзка за потвърждение",
+ "loading": {
+ "subtitle": "Ще бъдете пренасочени скоро",
+ "title": "Влизане..."
+ },
+ "resendButton": "Не сте получили връзка? Изпратете отново",
+ "subtitle": "за да продължите към {{applicationName}}",
+ "title": "Проверете имейла си",
+ "unusedTab": {
+ "title": "Можете да затворите този раздел"
+ },
+ "verified": {
+ "subtitle": "Ще бъдете пренасочени скоро",
+ "title": "Успешно влязохте"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Върнете се към оригиналния раздел, за да продължите",
+ "subtitleNewTab": "Върнете се към новоотворения раздел, за да продължите",
+ "titleNewTab": "Влязохте в друг раздел"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Код за нулиране на парола",
+ "resendButton": "Не сте получили код? Изпратете отново",
+ "subtitle": "за да нулирате паролата си",
+ "subtitle_email": "Първо въведете кода, изпратен на вашия имейл адрес",
+ "subtitle_phone": "Първо въведете кода, изпратен на вашия телефон",
+ "title": "Нулиране на парола"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Нулиране на паролата си",
+ "label__alternativeMethods": "Или влезте с друг метод",
+ "title": "Забравена парола?"
+ },
+ "noAvailableMethods": {
+ "message": "Не можем да продължим с влизането. Няма наличен аутентикационен фактор.",
+ "subtitle": "Възникна грешка",
+ "title": "Не може да се влезе"
+ },
+ "passkey": {
+ "subtitle": "Използването на вашия ключ за достъп потвърждава, че сте вие. Вашето устройство може да поиска вашия пръстов отпечатък, лице или заключване на екрана.",
+ "title": "Използвайте вашия ключ за достъп"
+ },
+ "password": {
+ "actionLink": "Използвайте друг метод",
+ "subtitle": "Въведете паролата, свързана с вашия акаунт",
+ "title": "Въведете паролата си"
+ },
+ "passwordPwned": {
+ "title": "Паролата е компрометирана"
+ },
+ "phoneCode": {
+ "formTitle": "Код за потвърждение",
+ "resendButton": "Не сте получили код? Изпратете отново",
+ "subtitle": "за да продължите към {{applicationName}}",
+ "title": "Проверете телефона си"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Код за потвърждение",
+ "resendButton": "Не сте получили код? Изпратете отново",
+ "subtitle": "За да продължите, моля въведете кода за потвърждение, изпратен на телефона ви",
+ "title": "Проверете телефона си"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Нулиране на паролата",
+ "requiredMessage": "От съображения за сигурност е необходимо да нулирате паролата си.",
+ "successMessage": "Паролата ви беше успешно променена. Извършва се вход, моля изчакайте момент.",
+ "title": "Задайте нова парола"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "Трябва да потвърдим вашата самоличност преди да нулираме паролата ви."
+ },
+ "start": {
+ "actionLink": "Регистрирайте се",
+ "actionLink__use_email": "Използвайте имейл",
+ "actionLink__use_email_username": "Използвайте имейл или потребителско име",
+ "actionLink__use_passkey": "Използвайте ключа си за достъп",
+ "actionLink__use_phone": "Използвайте телефона",
+ "actionLink__use_username": "Използвайте потребителско име",
+ "actionText": "Нямате акаунт?",
+ "subtitle": "Добре дошли отново! Моля, влезте, за да продължите",
+ "title": "Влезте в {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "Код за потвърждение",
+ "subtitle": "За да продължите, моля въведете кода за потвърждение, генериран от вашето приложение за аутентикация",
+ "title": "Двустепенна верификация"
+ }
+ },
+ "signInEnterPasswordTitle": "Въведете паролата си",
+ "signUp": {
+ "continue": {
+ "actionLink": "Влезте",
+ "actionText": "Вече имате акаунт?",
+ "subtitle": "Моля, попълнете оставащите данни, за да продължите",
+ "title": "Попълнете липсващите полета"
+ },
+ "emailCode": {
+ "formSubtitle": "Въведете кода за потвърждение, изпратен на вашия имейл адрес",
+ "formTitle": "Код за потвърждение",
+ "resendButton": "Не сте получили код? Изпратете отново",
+ "subtitle": "Въведете кода за потвърждение, изпратен на вашия имейл",
+ "title": "Потвърдете вашия имейл"
+ },
+ "emailLink": {
+ "formSubtitle": "Използвайте връзката за потвърждение, изпратена на вашия имейл адрес",
+ "formTitle": "Връзка за потвърждение",
+ "loading": {
+ "title": "Регистриране..."
+ },
+ "resendButton": "Не сте получили връзка? Изпратете отново",
+ "subtitle": "за да продължите към {{applicationName}}",
+ "title": "Потвърдете вашия имейл",
+ "verified": {
+ "title": "Успешно се регистрирахте"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Върнете се към новоотворения раздел, за да продължите",
+ "subtitleNewTab": "Върнете се към предишния раздел, за да продължите",
+ "title": "Успешно потвърден имейл"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Въведете кода за потвърждение, изпратен на вашия телефонен номер",
+ "formTitle": "Код за потвърждение",
+ "resendButton": "Не сте получили код? Изпратете отново",
+ "subtitle": "Въведете кода за потвърждение, изпратен на вашия телефон",
+ "title": "Потвърдете вашия телефон"
+ },
+ "start": {
+ "actionLink": "Влезте",
+ "actionText": "Вече имате акаунт?",
+ "subtitle": "Добре дошли! Моля, попълнете данните, за да започнете",
+ "title": "Създайте вашия акаунт"
+ }
+ },
+ "socialButtonsBlockButton": "Продължете с {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "Регистрацията не беше успешна поради неуспешни проверки за сигурност. Моля, опитайте отново като презаредите страницата или се свържете с поддръжката за повече помощ.",
+ "captcha_unavailable": "Регистрацията не беше успешна поради неуспешна валидация на бот. Моля, опитайте отново като презаредите страницата или се свържете с поддръжката за повече помощ.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Този имейл адрес е зает. Моля, опитайте с друг.",
+ "form_identifier_exists__phone_number": "Този телефонен номер е зает. Моля, опитайте с друг.",
+ "form_identifier_exists__username": "Това потребителско име е заето. Моля, опитайте с друго.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "Имейл адресът трябва да бъде валиден имейл адрес.",
+ "form_param_format_invalid__phone_number": "Телефонният номер трябва да бъде валиден в международен формат.",
+ "form_param_max_length_exceeded__first_name": "Първото име не трябва да надвишава 256 знака.",
+ "form_param_max_length_exceeded__last_name": "Фамилията не трябва да надвишава 256 знака.",
+ "form_param_max_length_exceeded__name": "Името не трябва да надвишава 256 знака.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Паролата ви не е достатъчно силна.",
+ "form_password_pwned": "Тази парола е открита като част от нарушение и не може да се използва, моля опитайте с друга парола.",
+ "form_password_pwned__sign_in": "Тази парола е открита като част от нарушение и не може да се използва, моля нулирайте паролата си.",
+ "form_password_size_in_bytes_exceeded": "Паролата ви надвишава максималния допустим брой байтове, моля, съкратете я или премахнете някои специални знаци.",
+ "form_password_validation_failed": "Невалидна парола",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "Не можете да изтриете последната си идентификация.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Вече е регистриран ключ за достъп на това устройство.",
+ "passkey_not_supported": "Ключовете за достъп не се поддържат на това устройство.",
+ "passkey_pa_not_supported": "Регистрацията изисква платформен аутентикатор, но устройството не го поддържа.",
+ "passkey_registration_cancelled": "Регистрацията на ключа за достъп беше отменена или изтече.",
+ "passkey_retrieval_cancelled": "Проверката на ключа за достъп беше отменена или изтече.",
+ "passwordComplexity": {
+ "maximumLength": "по-малко от {{length}} знака",
+ "minimumLength": "{{length}} или повече знака",
+ "requireLowercase": "малка буква",
+ "requireNumbers": "число",
+ "requireSpecialCharacter": "специален символ",
+ "requireUppercase": "главна буква",
+ "sentencePrefix": "Паролата ви трябва да съдържа"
+ },
+ "phone_number_exists": "Този телефонен номер е зает. Моля, опитайте с друг.",
+ "zxcvbn": {
+ "couldBeStronger": "Паролата ви работи, но може да бъде по-силна. Опитайте да добавите повече знаци.",
+ "goodPassword": "Паролата ви отговаря на всички необходими изисквания.",
+ "notEnough": "Паролата ви не е достатъчно силна.",
+ "suggestions": {
+ "allUppercase": "Напишете някои букви с главни, но не всички.",
+ "anotherWord": "Добавете повече думи, които не са толкова обичайни.",
+ "associatedYears": "Избягвайте години, които са свързани с вас.",
+ "capitalization": "Напишете с главни повече от първата буква.",
+ "dates": "Избягвайте дати и години, които са свързани с вас.",
+ "l33t": "Избягвайте предсказуеми замествания на букви като '@' за 'а'.",
+ "longerKeyboardPattern": "Използвайте по-дълги шаблони на клавиатурата и променяйте посоката на набиране няколко пъти.",
+ "noNeed": "Можете да създадете силни пароли без да използвате символи, числа или главни букви.",
+ "pwned": "Ако използвате тази парола някъде другаде, трябва да я промените.",
+ "recentYears": "Избягвайте скорошни години.",
+ "repeated": "Избягвайте повтарящи се думи и знаци.",
+ "reverseWords": "Избягвайте обратни написания на обичайни думи.",
+ "sequences": "Избягвайте обичайни последователности на знаци.",
+ "useWords": "Използвайте няколко думи, но избягвайте обичайни фрази."
+ },
+ "warnings": {
+ "common": "Това е често използвана парола.",
+ "commonNames": "Обичайните имена и фамилии са лесни за отгатване.",
+ "dates": "Датите са лесни за отгатване.",
+ "extendedRepeat": "Повтарящи се шаблони на символи като \"abcabcabc\" са лесни за отгатване.",
+ "keyPattern": "Кратките шаблони на клавиатурата са лесни за отгатване.",
+ "namesByThemselves": "Единичните имена или фамилии са лесни за отгатване.",
+ "pwned": "Вашата парола е била разкрита при изтичане на данни в интернет.",
+ "recentYears": "Скорошните години са лесни за отгатване.",
+ "sequences": "Обичайните последователности на знаци като \"abc\" са лесни за отгатване.",
+ "similarToCommon": "Това е подобно на често използвана парола.",
+ "simpleRepeat": "Повтарящи се символи като \"aaa\" са лесни за отгатване.",
+ "straightRow": "Правите редове на клавишите на клавиатурата са лесни за отгатване.",
+ "topHundred": "Това е често използвана парола.",
+ "topTen": "Това е много използвана парола.",
+ "userInputs": "Паролата не трябва да съдържа лични данни или данни, свързани със сайта.",
+ "wordByItself": "Единичните думи са лесни за отгатване."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Добави акаунт",
+ "action__manageAccount": "Управлявай акаунта",
+ "action__signOut": "Изход",
+ "action__signOutAll": "Изход от всички акаунти"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Копирано!",
+ "actionLabel__copy": "Копиране на всички",
+ "actionLabel__download": "Изтегли .txt",
+ "actionLabel__print": "Принтиране",
+ "infoText1": "Резервните кодове ще бъдат активирани за този акаунт.",
+ "infoText2": "Пазете резервните кодове в тайна и ги съхранявайте на сигурно място. Можете да генерирате нови резервни кодове, ако подозирате, че са компрометирани.",
+ "subtitle__codelist": "Съхранявайте ги на сигурно място и ги пазете в тайна.",
+ "successMessage": "Резервните кодове вече са активирани. Можете да използвате един от тях, за да влезете в своя акаунт, ако загубите достъпа до устройството за удостоверяване. Всеки код може да бъде използван само веднъж.",
+ "successSubtitle": "Можете да използвате един от тези кодове, за да влезете в своя акаунт, ако загубите достъпа до устройството за удостоверяване.",
+ "title": "Добавяне на потвърждение с резервен код",
+ "title__codelist": "Резервни кодове"
+ },
+ "connectedAccountPage": {
+ "formHint": "Изберете доставчик, за да свържете своя акаунт.",
+ "formHint__noAccounts": "Няма налични външни доставчици на акаунти.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} ще бъде премахнат от този акаунт.",
+ "messageLine2": "Вече няма да можете да използвате този свързан акаунт и всички зависими функции няма да работят повече.",
+ "successMessage": "{{connectedAccount}} е премахнат от вашия акаунт.",
+ "title": "Премахване на свързан акаунт"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "Доставчикът е добавен към вашия акаунт",
+ "title": "Добавяне на свързан акаунт"
+ },
+ "deletePage": {
+ "actionDescription": "Въведете \"Изтриване на акаунт\" по-долу, за да продължите.",
+ "confirm": "Изтриване на акаунт",
+ "messageLine1": "Сигурни ли сте, че искате да изтриете своя акаунт?",
+ "messageLine2": "Това действие е постоянно и необратимо.",
+ "title": "Изтриване на акаунт"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "На този имейл адрес ще бъде изпратен имейл с код за потвърждение.",
+ "formSubtitle": "Въведете кода за потвърждение, изпратен на {{identifier}}",
+ "formTitle": "Код за потвърждение",
+ "resendButton": "Не сте получили код? Изпрати отново",
+ "successMessage": "Имейлът {{identifier}} е добавен към вашия акаунт."
+ },
+ "emailLink": {
+ "formHint": "На този имейл адрес ще бъде изпратен имейл с връзка за потвърждение.",
+ "formSubtitle": "Кликнете върху връзката за потвърждение в имейла, изпратен на {{identifier}}",
+ "formTitle": "Връзка за потвърждение",
+ "resendButton": "Не сте получили връзка? Изпрати отново",
+ "successMessage": "Имейлът {{identifier}} е добавен към вашия акаунт."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} ще бъде премахнат от този акаунт.",
+ "messageLine2": "Вече няма да можете да влезете, използвайки този имейл адрес.",
+ "successMessage": "{{emailAddress}} е премахнат от вашия акаунт.",
+ "title": "Премахване на имейл адрес"
+ },
+ "title": "Добавяне на имейл адрес",
+ "verifyTitle": "Потвърждение на имейл адрес"
+ },
+ "formButtonPrimary__add": "Добавяне",
+ "formButtonPrimary__continue": "Продължи",
+ "formButtonPrimary__finish": "Завърши",
+ "formButtonPrimary__remove": "Премахване",
+ "formButtonPrimary__save": "Запазване",
+ "formButtonReset": "Отказ",
+ "mfaPage": {
+ "formHint": "Изберете метод за добавяне.",
+ "title": "Добавяне на двустепенна верификация"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Използвайте съществуващ номер",
+ "primaryButton__addPhoneNumber": "Добавяне на телефонен номер",
+ "removeResource": {
+ "messageLine1": "{{identifier}} няма да получава повече кодове за потвърждение при влизане.",
+ "messageLine2": "Вашият акаунт може да не е толкова защитен. Сигурни ли сте, че искате да продължите?",
+ "successMessage": "Двустепенната верификация с код по SMS е премахната за {{mfaPhoneCode}}",
+ "title": "Премахване на двустепенна верификация"
+ },
+ "subtitle__availablePhoneNumbers": "Изберете съществуващ телефонен номер, за да се регистрирате за двустепенна верификация с код по SMS или добавете нов.",
+ "subtitle__unavailablePhoneNumbers": "Няма налични телефонни номера за регистрация за двустепенна верификация с код по SMS, моля добавете нов.",
+ "successMessage1": "При влизане ще трябва да въведете код за потвърждение, изпратен на този телефонен номер като допълнителна стъпка.",
+ "successMessage2": "Запазете тези резервни кодове и ги съхранявайте на сигурно място. Ако загубите достъпа до устройството за удостоверяване, можете да използвате резервни кодове, за да влезете.",
+ "successTitle": "Активиране на верификация с код по SMS",
+ "title": "Добавяне на верификация с код по SMS"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "Сканиране на QR код вместо това",
+ "buttonUnableToScan__nonPrimary": "Не може да се сканира QR код?",
+ "infoText__ableToScan": "Настройте нов метод за влизане във вашия аутентикатор и сканирайте следния QR код, за да го свържете с вашия акаунт.",
+ "infoText__unableToScan": "Настройте нов метод за влизане във вашия аутентикатор и въведете ключа, предоставен по-долу.",
+ "inputLabel__unableToScan1": "Уверете се, че времевите или еднократни пароли са активирани, след което завършете свързването на вашия акаунт.",
+ "inputLabel__unableToScan2": "Алтернативно, ако вашият аутентикатор поддържа TOTP URIs, можете също да копирате целия URI."
+ },
+ "removeResource": {
+ "messageLine1": "Кодовете за потвърждение от този аутентикатор вече няма да са необходими при влизане.",
+ "messageLine2": "Вашият акаунт може да не е толкова защитен. Сигурни ли сте, че искате да продължите?",
+ "successMessage": "Двустепенната верификация чрез приложение за аутентикация е премахната.",
+ "title": "Премахване на двустепенна верификация"
+ },
+ "successMessage": "Двустепенната верификация вече е активирана. При влизане ще трябва да въведете код за потвърждение от този аутентикатор като допълнителна стъпка.",
+ "title": "Добавяне на приложение за аутентикация",
+ "verifySubtitle": "Въведете кода за потвърждение, генериран от вашия аутентикатор",
+ "verifyTitle": "Код за потвърждение"
+ },
+ "mobileButton__menu": "Меню",
+ "navbar": {
+ "account": "Профил",
+ "description": "Управлявайте информацията за вашия профил.",
+ "security": "Сигурност",
+ "title": "Профил"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} ще бъде премахнат от този профил.",
+ "title": "Премахни ключа за достъп"
+ },
+ "subtitle__rename": "Можете да промените името на ключа за достъп, за да го намирате по-лесно.",
+ "title__rename": "Преименувай ключа за достъп"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "Препоръчва се да излезете от всички други устройства, които може да са използвали старата ви парола.",
+ "readonly": "В момента не можете да редактирате паролата си, защото можете да влизате само чрез корпоративната връзка.",
+ "successMessage__set": "Паролата ви е зададена.",
+ "successMessage__signOutOfOtherSessions": "Излязохте от всички други устройства.",
+ "successMessage__update": "Паролата ви е актуализирана.",
+ "title__set": "Задайте парола",
+ "title__update": "Актуализирайте паролата"
+ },
+ "phoneNumberPage": {
+ "infoText": "Ще бъде изпратено съобщение с код за потвърждение на този телефонен номер. Могат да се прилагат такси за съобщения и данни.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} ще бъде премахнат от този профил.",
+ "messageLine2": "Вече няма да можете да влезете с този телефонен номер.",
+ "successMessage": "{{phoneNumber}} е премахнат от вашия профил.",
+ "title": "Премахни телефонен номер"
+ },
+ "successMessage": "{{identifier}} е добавен към вашия профил.",
+ "title": "Добави телефонен номер",
+ "verifySubtitle": "Въведете кода за потвърждение, изпратен на {{identifier}}",
+ "verifyTitle": "Потвърди телефонния номер"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Препоръчителен размер 1:1, до 10MB.",
+ "imageFormDestructiveActionSubtitle": "Премахни",
+ "imageFormSubtitle": "Качи",
+ "imageFormTitle": "Профилна снимка",
+ "readonly": "Информацията за вашия профил е предоставена от корпоративната връзка и не може да бъде редактирана.",
+ "successMessage": "Профилът ви е актуализиран.",
+ "title": "Актуализирай профила"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Изход от устройството",
+ "title": "Активни устройства"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Опитайте отново",
+ "actionLabel__reauthorize": "Оторизирайте сега",
+ "destructiveActionTitle": "Премахни",
+ "primaryButton": "Свържете акаунт",
+ "subtitle__reauthorize": "Необходимите обхвати бяха актуализирани и може да имате ограничена функционалност. Моля, оторизирайте повторно това приложение, за да избегнете проблеми.",
+ "title": "Свързани акаунти"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Изтрий акаунта",
+ "title": "Изтрий акаунта"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "Премахни имейл",
+ "detailsAction__nonPrimary": "Задай като основен",
+ "detailsAction__primary": "Завърши верификацията",
+ "detailsAction__unverified": "Верифицирай",
+ "primaryButton": "Добави имейл адрес",
+ "title": "Имейл адреси"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Корпоративни акаунти"
+ },
+ "headerTitle__account": "Детайли за профила",
+ "headerTitle__security": "Сигурност",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Регенерирай",
+ "headerTitle": "Резервни кодове",
+ "subtitle__regenerate": "Получете нов комплект сигурни резервни кодове. Предишните резервни кодове ще бъдат изтрити и няма да могат да бъдат използвани.",
+ "title__regenerate": "Регенериране на резервни кодове"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Задай като основен",
+ "destructiveActionLabel": "Премахни"
+ },
+ "primaryButton": "Добави двустепенна верификация",
+ "title": "Двустепенна верификация",
+ "totp": {
+ "destructiveActionTitle": "Премахни",
+ "headerTitle": "Приложение за автентикация"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Премахни",
+ "menuAction__rename": "Преименувай",
+ "title": "Ключове за достъп"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Задай парола",
+ "primaryButton__updatePassword": "Актуализирай паролата",
+ "title": "Парола"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Премахни телефонен номер",
+ "detailsAction__nonPrimary": "Задай като основен",
+ "detailsAction__primary": "Завърши верификацията",
+ "detailsAction__unverified": "Верифицирай телефонния номер",
+ "primaryButton": "Добави телефонен номер",
+ "title": "Телефонни номера"
+ },
+ "profileSection": {
+ "primaryButton": "Актуализирай профила",
+ "title": "Профил"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Задай потребителско име",
+ "primaryButton__updateUsername": "Актуализирай потребителското име",
+ "title": "Потребителско име"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Премахни портфейла",
+ "primaryButton": "Web3 портфейли",
+ "title": "Web3 портфейли"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Потребителското ви име е актуализирано.",
+ "title__set": "Задайте потребителско име",
+ "title__update": "Актуализирайте потребителското име"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} ще бъде премахнат от този профил.",
+ "messageLine2": "Вече няма да можете да влезете с този web3 портфейл.",
+ "successMessage": "{{web3Wallet}} е премахнат от вашия профил.",
+ "title": "Премахни web3 портфейла"
+ },
+ "subtitle__availableWallets": "Изберете web3 портфейл, за да се свържете с вашия профил.",
+ "subtitle__unavailableWallets": "Няма налични web3 портфейли.",
+ "successMessage": "Портфейлът е добавен към вашия профил.",
+ "title": "Добави web3 портфейл"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/common.json b/DigitalHumanWeb/locales/bg-BG/common.json
new file mode 100644
index 0000000..c56c072
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "Относно",
+ "advanceSettings": "Разширени настройки",
+ "alert": {
+ "cloud": {
+ "action": "Безплатен пробен период",
+ "desc": "Предлагаме на всички регистрирани потребители {{credit}} безплатни изчислителни точки, без необходимост от сложна конфигурация, готови за употреба, поддържащи неограничен обем на историята на разговорите и глобална синхронизация в облака. Очакват ви още много напреднали функции за изследване.",
+ "descOnMobile": "Предлагаме {{credit}} безплатни изчислителни точки на всички регистрирани потребители, без необходимост от сложна конфигурация, готови за употреба.",
+ "title": "Добре дошли в {{name}}"
+ }
+ },
+ "appInitializing": "Приложението се стартира...",
+ "autoGenerate": "Автоматично генериране",
+ "autoGenerateTooltip": "Автоматично генериране на описание на агент въз основа на подкани",
+ "autoGenerateTooltipDisabled": "Моля, попълнете подсказката, за да използвате функцията за автоматично допълване",
+ "back": "Назад",
+ "batchDelete": "Пакетно изтриване",
+ "blog": "Продуктов блог",
+ "cancel": "Отказ",
+ "changelog": "Дневник на промените",
+ "close": "Затвори",
+ "contact": "Свържете се с нас",
+ "copy": "Копирай",
+ "copyFail": "Копирането не е успешно",
+ "copySuccess": "Копирано успешно",
+ "dataStatistics": {
+ "messages": "Съобщения",
+ "sessions": "Сесии",
+ "today": "Днес",
+ "topics": "Теми"
+ },
+ "defaultAgent": "Агент по подразбиране",
+ "defaultSession": "Агент по подразбиране",
+ "delete": "Изтрий",
+ "document": "Ръководство за потребителя",
+ "download": "Изтегляне",
+ "duplicate": "Създай дубликат",
+ "edit": "Редактирай",
+ "export": "Експортирай конфигурация",
+ "exportType": {
+ "agent": "Експортирай настройките на агента",
+ "agentWithMessage": "Експортирай агент и съобщения",
+ "all": "Експортирай глобални настройки и всички данни на агента",
+ "allAgent": "Експортирай всички настройки на агента",
+ "allAgentWithMessage": "Експортирай всички агенти и съобщения",
+ "globalSetting": "Експортирай глобални настройки"
+ },
+ "feedback": "Обратна връзка",
+ "follow": "Следете ни на {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Споделете ценните си съвети",
+ "star": "Добавете звезда в GitHub"
+ },
+ "and": "и",
+ "feedback": {
+ "action": "Споделете обратна връзка",
+ "desc": "Всички ваши идеи и предложения са от изключително значение за нас и нетърпеливо очакваме да чуем мнението ви! Не се колебайте да се свържете с нас, за да споделите отзиви за функциите на продукта и потребителския опит, които да ни помогнат да направим LobeChat още по-добър.",
+ "title": "Споделете ценните си отзиви в GitHub"
+ },
+ "later": "По-късно",
+ "star": {
+ "action": "Осветете звездата",
+ "desc": "Ако обичате нашия продукт и искате да ни подкрепите, можете ли да ни добавите звезда в GitHub? Този малък жест е от огромно значение за нас и ни мотивира да продължим да ви предоставяме уникално преживяване.",
+ "title": "Осветете звездата за нас в GitHub"
+ },
+ "title": "Харесвате нашия продукт?"
+ },
+ "fullscreen": "Цял екран",
+ "historyRange": "Диапазон на историята",
+ "import": "Импортирай конфигурация",
+ "importModal": {
+ "error": {
+ "desc": "Съжаляваме, възникна грешка по време на процеса на импорт на данни. Моля, опитайте отново да ги импортирате или <1>подайте проблем</1>, за да можем веднага да помогнем с отстраняването му.",
+ "title": "Грешка при импортиране на данни"
+ },
+ "finish": {
+ "onlySettings": "Системните настройки са импортирани успешно",
+ "start": "Започни да използваш",
+ "subTitle": "Данните са импортирани успешно, отне {{duration}} секунди. Подробностите за импортирането са както следва:",
+ "title": "Импортирането на данни е завършено"
+ },
+ "loading": "Импортиране на данни, моля изчакайте...",
+ "preparing": "Подготовка на модула за импорт на данни...",
+ "result": {
+ "added": "Импортирани успешно",
+ "errors": "Грешки при импортиране",
+ "messages": "Съобщения",
+ "sessionGroups": "Групи",
+ "sessions": "Агенти",
+ "skips": "Пропуснати дубликати",
+ "topics": "Теми",
+ "type": "Тип данни"
+ },
+ "title": "Импортирай данни",
+ "uploading": {
+ "desc": "Текущият файл е голям и се качва...",
+ "restTime": "Оставащо време",
+ "speed": "Скорост на качване"
+ }
+ },
+ "information": "Общност и информация",
+ "installPWA": "Инсталиране на PWA",
+ "lang": {
+ "ar": "Арабски",
+ "bg-BG": "български",
+ "bn": "Бенгалски",
+ "cs-CZ": "Чешки",
+ "da-DK": "Датски",
+ "de-DE": "Немски",
+ "el-GR": "Гръцки",
+ "en": "Английски",
+ "en-US": "Английски",
+ "es-ES": "Испански",
+ "fi-FI": "Финландски",
+ "fr-FR": "Френски",
+ "hi-IN": "Хинди",
+ "hu-HU": "Унгарски",
+ "id-ID": "Индонезийски",
+ "it-IT": "Италиански",
+ "ja-JP": "Японски",
+ "ko-KR": "Корейски",
+ "nl-NL": "Холандски",
+ "no-NO": "Норвежки",
+ "pl-PL": "Полски",
+ "pt-BR": "Португалски",
+ "pt-PT": "Португалски",
+ "ro-RO": "Румънски",
+ "ru-RU": "Руски",
+ "sk-SK": "Словашки",
+ "sr-RS": "Сръбски",
+ "sv-SE": "Шведски",
+ "th-TH": "Тайландски",
+ "tr-TR": "Турски",
+ "uk-UA": "Украински",
+ "vi-VN": "Виетнамски",
+ "zh": "Опростен китайски",
+ "zh-CN": "Опростен китайски",
+ "zh-TW": "Традиционен китайски"
+ },
+ "layoutInitializing": "Инициализиране на оформлението...",
+ "legal": "Правно уведомление",
+ "loading": "Зареждане...",
+ "mail": {
+ "business": "Бизнес сътрудничество",
+ "support": "Поддръжка по имейл"
+ },
+ "oauth": "SSO Вход",
+ "officialSite": "Официален сайт",
+ "ok": "Добре",
+ "password": "Парола",
+ "pin": "Закачи",
+ "pinOff": "Откачи",
+ "privacy": "Политика за поверителност",
+ "regenerate": "Прегенерирай",
+ "rename": "Преименувай",
+ "reset": "Нулирай",
+ "retry": "Опитай отново",
+ "send": "Изпрати",
+ "setting": "Настройки",
+ "share": "Сподели",
+ "stop": "Спри",
+ "sync": {
+ "actions": {
+ "settings": "Синхронизирай настройките",
+ "sync": "Синхронизирай сега"
+ },
+ "awareness": {
+ "current": "Текущо устройство"
+ },
+ "channel": "Канал",
+ "disabled": {
+ "actions": {
+ "enable": "Активирай синхронизиране в облака",
+ "settings": "Настройки за синхронизиране"
+ },
+ "desc": "Данните от текущата сесия се съхраняват само в този браузър. Ако трябва да синхронизирате данни между няколко устройства, моля, конфигурирайте и активирайте синхронизирането в облака.",
+ "title": "Синхронизирането на данни е деактивирано"
+ },
+ "enabled": {
+ "title": "Синхронизирането на данни е активирано"
+ },
+ "status": {
+ "connecting": "Свързване",
+ "disabled": "Синхронизирането е деактивирано",
+ "ready": "Свързан",
+ "synced": "Синхронизиран",
+ "syncing": "Синхронизиране",
+ "unconnected": "Неуспешна връзка"
+ },
+ "title": "Състояние на синхронизиране",
+ "unconnected": {
+ "tip": "Връзката със сървъра за сигнализация е неуспешна и не може да бъде установен канал за комуникация между партньори. Моля, проверете мрежата и опитайте отново."
+ }
+ },
+ "tab": {
+ "chat": "Чат",
+ "discover": "Открий",
+ "files": "Файлове",
+ "me": "Аз",
+ "setting": "Настройки"
+ },
+ "telemetry": {
+ "allow": "Разреши",
+ "deny": "Откажи",
+ "desc": "Бихме искали да събираме анонимно информация за използването, за да ни помогнете да подобрим LobeChat и да ви предоставим по-добро изживяване с продукта. Можете да деактивирате това по всяко време в Настройки - Относно.",
+ "learnMore": "Научете повече",
+ "title": "Помогнете на LobeChat да бъде по-добър"
+ },
+ "temp": "Временен",
+ "terms": "Условия за ползване",
+ "updateAgent": "Актуализирай информацията за агента",
+ "upgradeVersion": {
+ "action": "Надстрой",
+ "hasNew": "Налична е нова актуализация",
+ "newVersion": "Налична е нова версия: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Анонимен потребител",
+ "billing": "Управление на сметките",
+ "cloud": "Изпробвайте {{name}}",
+ "data": "Съхранение на данни",
+ "defaultNickname": "Потребител на общността",
+ "discord": "Поддръжка на общността",
+ "docs": "Документация",
+ "email": "Поддръжка по имейл",
+ "feedback": "Обратна връзка и предложения",
+ "help": "Център за помощ",
+ "moveGuide": "Бутонът за настройки е преместен тук",
+ "plans": "Планове за абонамент",
+ "preview": "Преглед",
+ "profile": "Управление на профила",
+ "setting": "Настройки на приложението",
+ "usages": "Статистика за използване"
+ },
+ "version": "Версия"
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/components.json b/DigitalHumanWeb/locales/bg-BG/components.json
new file mode 100644
index 0000000..37ba01e
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Плъзнете файлове тук, поддържа качване на множество изображения.",
+ "dragFileDesc": "Плъзнете изображения и файлове тук, поддържа качване на множество изображения и файлове.",
+ "dragFileTitle": "Качване на файл",
+ "dragTitle": "Качване на изображение"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Добави в базата знания",
+ "addToOtherKnowledgeBase": "Добави в друга база знания",
+ "batchChunking": "Партидно разделяне",
+ "chunking": "Разделяне",
+ "chunkingTooltip": "Разделете файла на множество текстови блокове и ги векторизирайте, за да се използват за семантично търсене и диалог с файла",
+ "confirmDelete": "Ще изтриете този файл. След изтриването му няма да може да бъде възстановен. Моля, потвърдете действието си.",
+ "confirmDeleteMultiFiles": "Ще изтриете избраните {{count}} файла. След изтриването им няма да могат да бъдат възстановени. Моля, потвърдете действието си.",
+ "confirmRemoveFromKnowledgeBase": "Ще премахнете избраните {{count}} файла от базата знания. След премахването им файловете все още могат да бъдат видяни в списъка с всички файлове. Моля, потвърдете действието си.",
+ "copyUrl": "Копирай линк",
+ "copyUrlSuccess": "Адресът на файла е копиран успешно",
+ "createChunkingTask": "Подготовка...",
+ "deleteSuccess": "Файлът е изтрит успешно",
+ "downloading": "Изтегляне на файла...",
+ "removeFromKnowledgeBase": "Премахни от базата знания",
+ "removeFromKnowledgeBaseSuccess": "Файлът е премахнат успешно"
+ },
+ "bottom": "Достигнахте края",
+ "config": {
+ "showFilesInKnowledgeBase": "Покажи съдържанието в базата знания"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Качи файл",
+ "folder": "Качи папка",
+ "knowledgeBase": "Създай нова база знания"
+ },
+ "or": "или",
+ "title": "Плъзнете файл или папка тук"
+ },
+ "title": {
+ "createdAt": "Дата на създаване",
+ "size": "Размер",
+ "title": "Файл"
+ },
+ "total": {
+ "fileCount": "Общо {{count}} елемента",
+ "selectedCount": "Избрани {{count}} елемента"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Текстовите блокове все още не са напълно векторизирани, което ще доведе до недостъпност на семантичното търсене. За подобряване на качеството на търсенето, моля, векторизирайте текстовите блокове.",
+ "error": "Неуспешна векторизация",
+ "errorResult": "Неуспешна векторизация, моля проверете и опитайте отново. Причина за неуспеха:",
+ "processing": "Текстовите блокове се векторизират, моля, бъдете търпеливи.",
+ "success": "Текущите текстови блокове са напълно векторизирани."
+ },
+ "embeddings": "Векторизация",
+ "status": {
+ "error": "Разделянето е неуспешно",
+ "errorResult": "Разделянето е неуспешно, моля, проверете и опитайте отново. Причина за неуспеха:",
+ "processing": "Разделяне на блокове",
+ "processingTip": "Сървърът в момента разделя текстовите блокове, затварянето на страницата не влияе на напредъка на разделянето."
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Назад"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Потребителски модел, който по подразбиране поддържа извикване на функции и визуално разпознаване; моля, потвърдете наличието на тези възможности според реалните условия",
+ "file": "Този модел поддържа качване на файлове и разпознаване",
+ "functionCall": "Този модел поддържа извикване на функции (Function Call)",
+ "tokens": "Този модел поддържа до {{tokens}} токена за една сесия",
+ "vision": "Този модел поддържа визуално разпознаване"
+ },
+ "removed": "Този модел не се намира в списъка. Ако бъде отменен изборът, той ще бъде автоматично премахнат."
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "Няма активирани модели, моля, посетете настройките и ги активирайте",
+ "provider": "Доставчик"
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/discover.json b/DigitalHumanWeb/locales/bg-BG/discover.json
new file mode 100644
index 0000000..b3e924c
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Добави асистент",
+ "addAgentAndConverse": "Добави асистент и започни разговор",
+ "addAgentSuccess": "Успешно добавен",
+ "conversation": {
+ "l1": "Здравей, аз съм **{{name}}**, можеш да ми зададеш всякакви въпроси и ще се постарая да ти отговоря ~",
+ "l2": "Ето какви са моите способности: ",
+ "l3": "Нека започнем разговора!"
+ },
+ "description": "Представяне на асистента",
+ "detail": "Детайли",
+ "list": "Списък с асистенти",
+ "more": "Още",
+ "plugins": "Интегрирани плъгини",
+ "recentSubmits": "Наскоро обновено",
+ "suggestions": "Свързани предложения",
+ "systemRole": "Настройки на асистента",
+ "try": "Опитай"
+ },
+ "back": "Назад към Открий",
+ "category": {
+ "assistant": {
+ "academic": "Академичен",
+ "all": "Всички",
+ "career": "Кариерен",
+ "copywriting": "Копирайтинг",
+ "design": "Дизайн",
+ "education": "Образование",
+ "emotions": "Емоции",
+ "entertainment": "Развлечение",
+ "games": "Игри",
+ "general": "Общ",
+ "life": "Живот",
+ "marketing": "Маркетинг",
+ "office": "Офис",
+ "programming": "Програмиране",
+ "translation": "Превод"
+ },
+ "plugin": {
+ "all": "Всички",
+ "gaming-entertainment": "Игри и развлечения",
+ "life-style": "Начин на живот",
+ "media-generate": "Генериране на медии",
+ "science-education": "Наука и образование",
+ "social": "Социални медии",
+ "stocks-finance": "Акции и финанси",
+ "tools": "Практически инструменти",
+ "web-search": "Уеб търсене"
+ }
+ },
+ "cleanFilter": "Изчисти филтъра",
+ "create": "Създай",
+ "createGuide": {
+ "func1": {
+ "desc1": "Влез в настройките на асистента, който искаш да добавиш, чрез иконата в горния десен ъгъл на прозореца за разговор;",
+ "desc2": "Натисни бутона за добавяне на асистент в горния десен ъгъл.",
+ "tag": "Метод 1",
+ "title": "Добавяне чрез LobeChat"
+ },
+ "func2": {
+ "button": "Отиди на хранилището на асистенти в Github",
+ "desc": "Ако искаш да добавиш асистент в индекса, създай запис с agent-template.json или agent-template-full.json в директорията plugins, напиши кратко описание и подходящи тагове, след което създай pull request.",
+ "tag": "Метод 2",
+ "title": "Добавяне чрез Github"
+ }
+ },
+ "dislike": "Не харесвам",
+ "filter": "Филтър",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Всички автори",
+ "followed": "Следвани автори",
+ "title": "Обхват на авторите"
+ },
+ "contentLength": "Минимална дължина на контекста",
+ "maxToken": {
+ "title": "Настрой максимална дължина (Token)",
+ "unlimited": "Без ограничения"
+ },
+ "other": {
+ "functionCall": "Поддържа извикване на функции",
+ "title": "Други",
+ "vision": "Поддържа визуално разпознаване",
+ "withKnowledge": "С включена база знания",
+ "withTool": "С включени инструменти"
+ },
+ "pricing": "Цени на модела",
+ "timePeriod": {
+ "all": "Всички времена",
+ "day": "Последни 24 часа",
+ "month": "Последни 30 дни",
+ "title": "Обхват на времето",
+ "week": "Последни 7 дни",
+ "year": "Последна година"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Препоръчани асистенти",
+ "featuredModels": "Препоръчани модели",
+ "featuredProviders": "Препоръчани доставчици на модели",
+ "featuredTools": "Препоръчани инструменти",
+ "more": "Открий повече"
+ },
+ "like": "Харесвам",
+ "models": {
+ "chat": "Започни разговор",
+ "contentLength": "Максимална дължина на контекста",
+ "free": "Безплатно",
+ "guide": "Ръководство за конфигурация",
+ "list": "Списък на моделите",
+ "more": "Още",
+ "parameterList": {
+ "defaultValue": "Стойност по подразбиране",
+ "docs": "Прегледай документацията",
+ "frequency_penalty": {
+ "desc": "Тази настройка регулира честотата на повторно използване на думи, които вече са се появили във входа. По-високите стойности намаляват вероятността за повторение, а отрицателните имат обратен ефект. Наказанието за думи не се увеличава с увеличаване на честотата на появата. Отрицателните стойности насърчават повторното използване на думи.",
+ "title": "Наказание за честота"
+ },
+ "max_tokens": {
+ "desc": "Тази настройка определя максималната дължина, която моделът може да генерира в един отговор. По-високата стойност позволява на модела да генерира по-дълги отговори, докато по-ниската стойност ограничава дължината на отговора, правейки го по-кратък. Разумното регулиране на тази стойност в зависимост от различните приложения може да помогне за постигане на желаната дължина и детайлност на отговора.",
+ "title": "Ограничение за един отговор"
+ },
+ "presence_penalty": {
+ "desc": "Тази настройка е предназначена да контролира повторното използване на думи в зависимост от честотата на появата им във входа. Тя се стреми да използва по-рядко думите, които се появяват често, като степента на наказание е пропорционална на честотата на появата. Наказанието за думи нараства с увеличаване на честотата на появата. Отрицателните стойности насърчават повторното използване на думи.",
+ "title": "Свежест на темата"
+ },
+ "range": "Обхват",
+ "temperature": {
+ "desc": "Тази настройка влияе на разнообразието на отговорите на модела. По-ниски стойности водят до по-предсказуеми и типични отговори, докато по-високи стойности насърчават по-разнообразни и необичайни отговори. Когато стойността е 0, моделът винаги дава един и същ отговор на даден вход.",
+ "title": "Случайност"
+ },
+ "title": "Параметри на модела",
+ "top_p": {
+ "desc": "Тази настройка ограничава избора на модела до определен процент от най-вероятните думи: избират се само тези думи, чиято кумулативна вероятност достига P. По-ниски стойности правят отговорите на модела по-предсказуеми, докато настройката по подразбиране позволява на модела да избира от целия обхват на думите.",
+ "title": "Ядро на пробата"
+ },
+ "type": "Тип"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat поддържа използването на персонализиран API ключ за този доставчик.",
+ "input": "Цена на входа",
+ "inputTooltip": "Цена на всеки милион Token",
+ "latency": "Забавяне",
+ "latencyTooltip": "Средно време за отговор на доставчика за изпращане на първия Token",
+ "maxOutput": "Максимална дължина на изхода",
+ "maxOutputTooltip": "Максимален брой Token, които тази крайна точка може да генерира",
+ "officialTooltip": "Официална услуга на LobeHub",
+ "output": "Цена на изхода",
+ "outputTooltip": "Цена на всеки милион Token",
+ "streamCancellationTooltip": "Този доставчик поддържа функция за анулиране на потока.",
+ "throughput": "Пропускателна способност",
+ "throughputTooltip": "Среден брой Token, предавани на секунда за поточни заявки"
+ },
+ "suggestions": "Свързани модели",
+ "supportedProviders": "Доставчици, поддържащи този модел"
+ },
+ "plugins": {
+ "community": "Обществени плъгини",
+ "install": "Инсталирай плъгин",
+ "installed": "Инсталиран",
+ "list": "Списък с плъгини",
+ "meta": {
+ "description": "Описание",
+ "parameter": "Параметър",
+ "title": "Инструментални параметри",
+ "type": "Тип"
+ },
+ "more": "Още",
+ "official": "Официални плъгини",
+ "recentSubmits": "Наскоро обновени",
+ "suggestions": "Свързани предложения"
+ },
+ "providers": {
+ "config": "Конфигуриране на доставчици",
+ "list": "Списък на доставчиците на модели",
+ "modelCount": "{{count}} модела",
+ "modelSite": "Документация на моделите",
+ "more": "Още",
+ "officialSite": "Официален сайт",
+ "showAllModels": "Покажи всички модели",
+ "suggestions": "Свързани доставчици",
+ "supportedModels": "Поддържани модели"
+ },
+ "search": {
+ "placeholder": "Търсене по име, описание или ключови думи...",
+ "result": "{{count}} резултата за {{keyword}}",
+ "searching": "Търсене..."
+ },
+ "sort": {
+ "mostLiked": "Най-харесвани",
+ "mostUsed": "Най-използвани",
+ "newest": "От ново към старо",
+ "oldest": "От старо към ново",
+ "recommended": "Препоръчани"
+ },
+ "tab": {
+ "assistants": "Асистенти",
+ "home": "Начална страница",
+ "models": "Модели",
+ "plugins": "Плъгини",
+ "providers": "Доставчици на модели"
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/error.json b/DigitalHumanWeb/locales/bg-BG/error.json
new file mode 100644
index 0000000..252b539
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Продължи сесията",
+ "desc": "{{greeting}}, радваме се да продължим да ви обслужваме. Нека продължим оттам, докъдето стигнахме.",
+ "title": "Добре дошли отново, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Върни се в началото",
+ "desc": "Опитайте отново по-късно или се върнете в познатия свят",
+ "retry": "Опитай отново",
+ "title": "Страницата се сблъска с проблем..."
+ },
+ "fetchError": "Грешка при извличане",
+ "fetchErrorDetail": "Подробности за грешката",
+ "notFound": {
+ "backHome": "Върни се в началото",
+ "check": "Моля, проверете дали URL адресът е правилен",
+ "desc": "Не можем да намерим страницата, която търсите",
+ "title": "Влезли сте в неизвестна територия?"
+ },
+ "pluginSettings": {
+ "desc": "Попълнете следната конфигурация, за да започнете да използвате този плъгин",
+ "title": "Настройки на плъгина {{name}}"
+ },
+ "response": {
+ "400": "Съжаляваме, сървърът не разбира заявката ви. Моля, уверете се, че параметрите на заявката ви са правилни.",
+ "401": "Съжаляваме, сървърът отхвърли заявката ви, вероятно поради недостатъчни разрешения или невалидна автентификация.",
+ "403": "Съжаляваме, сървърът отхвърли заявката ви. Нямате разрешение за достъп до това съдържание.",
+ "404": "Съжаляваме, сървърът не може да намери страницата или ресурса, който сте поискали. Моля, уверете се, че URL адресът ви е правилен.",
+ "405": "Съжаляваме, сървърът не поддържа метода на заявка, който използвате. Моля, уверете се, че методът на заявка е правилен.",
+ "406": "Съжаляваме, сървърът не може да изпълни заявката въз основа на характеристиките на поисканото от вас съдържание",
+ "407": "Съжаляваме, трябва да удостоверите прокси сървъра, преди да продължите с тази заявка",
+ "408": "Съжаляваме, сървърът изтече, докато чака заявката, моля, проверете мрежовата си връзка и опитайте отново",
+ "409": "Съжаляваме, заявката не може да бъде обработена поради конфликт, вероятно защото състоянието на ресурса е несъвместимо със заявката",
+ "410": "Съжаляваме, ресурсът, който сте поискали, е премахнат за постоянно и не може да бъде намерен",
+ "411": "Съжаляваме, сървърът не може да обработи заявката без валидна дължина на съдържанието",
+ "412": "Съжаляваме, заявката ви не отговаря на условията на сървъра и не може да бъде изпълнена",
+ "413": "Съжаляваме, данните от заявката ви са твърде големи, за да бъдат обработени от сървъра",
+ "414": "Съжаляваме, URI на вашата заявка е твърде дълъг, за да бъде обработен от сървъра",
+ "415": "Съжаляваме, сървърът не може да обработи медийния формат, прикачен към заявката",
+ "416": "Съжаляваме, сървърът не може да удовлетвори обхвата на вашата заявка",
+ "417": "Съжаляваме, сървърът не може да отговори на очакванията ви",
+ "422": "Съжаляваме, заявката ви е в правилния формат, но поради семантични грешки не може да бъде отговорено",
+ "423": "Съжаляваме, ресурсът, който сте поискали, е заключен",
+ "424": "Съжаляваме, текущата заявка не може да бъде изпълнена поради неуспех на предишна заявка",
+ "426": "Съжаляваме, сървърът изисква вашият клиент да бъде надстроен до по-висока версия на протокола",
+ "428": "Съжаляваме, сървърът изисква предварително условие и заявката ви трябва да съдържа правилното условно заглавие",
+ "429": "Съжаляваме, заявката ви е твърде честа и сървърът е малко уморен. Моля, опитайте отново по-късно.",
+ "431": "Съжаляваме, полетата на заглавието на вашата заявка са твърде големи, за да бъдат обработени от сървъра",
+ "451": "Съжаляваме, сървърът отказва да предостави този ресурс поради правни причини",
+ "500": "Съжаляваме, изглежда сървърът има някои затруднения и временно не може да изпълни заявката ви. Моля, опитайте отново по-късно.",
+ "502": "Съжаляваме, изглежда сървърът е изгубен и временно не може да предостави услуга. Моля, опитайте отново по-късно.",
+ "503": "Съжаляваме, сървърът в момента не може да обработи заявката ви, вероятно поради претоварване или поддръжка. Моля, опитайте отново по-късно.",
+ "504": "Съжаляваме, сървърът не получи отговор от сървъра нагоре по веригата. Моля, опитайте отново по-късно.",
+ "AgentRuntimeError": "Грешка в изпълнителната среда на езиковия модел Lobe. Моля, отстранете проблема или опитайте отново въз основа на следната информация.",
+ "FreePlanLimit": "В момента сте потребител на безплатен план и не можете да използвате тази функционалност. Моля, надстройте до платен план, за да продължите да я използвате.",
+ "InvalidAccessCode": "Невалиден или празен код за достъп. Моля, въведете правилния код за достъп или добавете персонализиран API ключ.",
+ "InvalidBedrockCredentials": "Удостоверяването на Bedrock е неуспешно. Моля, проверете AccessKeyId/SecretAccessKey и опитайте отново.",
+ "InvalidClerkUser": "Съжаляваме, в момента не сте влезли в профила си. Моля, първо влезте или се регистрирайте, за да продължите.",
+ "InvalidGithubToken": "Личният токен за достъп до GitHub е неправилен или празен. Моля, проверете GitHub Personal Access Token и опитайте отново.",
+ "InvalidOllamaArgs": "Невалидна конфигурация на Ollama, моля, проверете конфигурацията на Ollama и опитайте отново",
+ "InvalidProviderAPIKey": "{{provider}} API ключ е невалиден или липсва, моля проверете {{provider}} API ключа и опитайте отново",
+ "LocationNotSupportError": "Съжаляваме, вашето текущо местоположение не поддържа тази услуга на модела. Това може да се дължи на регионални ограничения или на недостъпност на услугата. Моля, потвърдете дали текущото местоположение поддържа използването на тази услуга или опитайте да използвате друго местоположение.",
+ "NoOpenAIAPIKey": "API ключът на OpenAI е празен, моля, добавете персонализиран API ключ на OpenAI",
+ "OllamaBizError": "Грешка при заявка към услугата Ollama, моля, отстранете неизправностите или опитайте отново въз основа на следната информация",
+ "OllamaServiceUnavailable": "Услугата Ollama не е налична. Моля, проверете дали Ollama работи правилно и дали конфигурацията за крос-домейн достъп е зададена коректно.",
+ "OpenAIBizError": "Грешка в услугата на OpenAI, моля проверете следната информация или опитайте отново",
+ "PluginApiNotFound": "Съжаляваме, API не съществува в манифеста на плъгина. Моля, проверете дали методът на вашата заявка съвпада с API на манифеста на плъгина",
+ "PluginApiParamsError": "Съжаляваме, проверката на входния параметър за заявката на плъгина е неуспешна. Моля, проверете дали входните параметри съвпадат с описанието на API",
+ "PluginFailToTransformArguments": "Съжаляваме, аргументите за извикване на плъгина не можаха да бъдат преобразувани. Моля, опитайте да генерирате отново съобщението на помощника или опитайте с по-мощен AI модел с поддръжка на Tools Calling.",
+ "PluginGatewayError": "Съжаляваме, възникна грешка с шлюза на плъгина. Моля, проверете дали конфигурацията на шлюза на плъгина е правилна.",
+ "PluginManifestInvalid": "Съжаляваме, проверката на манифеста на плъгина е неуспешна. Моля, проверете дали форматът на манифеста е правилен",
+ "PluginManifestNotFound": "Съжаляваме, сървърът не можа да намери файла на манифеста на плъгина (manifest.json). Моля, проверете дали адресът на файла на манифеста на плъгина е правилен",
+ "PluginMarketIndexInvalid": "Съжаляваме, проверката на индекса на плъгина е неуспешна. Моля, проверете дали форматът на индексния файл е правилен",
+ "PluginMarketIndexNotFound": "Съжаляваме, сървърът не можа да намери индекса на плъгина. Моля, проверете дали адресът на индекса е правилен",
+ "PluginMetaInvalid": "Съжаляваме, проверката на метаданните на плъгина е неуспешна. Моля, проверете дали форматът на метаданните на плъгина е правилен",
+ "PluginMetaNotFound": "Съжаляваме, плъгинът не е намерен в индекса. Моля, проверете информацията за конфигурацията на плъгина в индекса",
+ "PluginOpenApiInitError": "Съжаляваме, клиентът на OpenAPI не успя да се инициализира. Моля, проверете дали информацията за конфигурацията на OpenAPI е правилна.",
+ "PluginServerError": "Заявката към сървъра на плъгина върна грешка. Моля, проверете файла на манифеста на плъгина, конфигурацията на плъгина или изпълнението на сървъра въз основа на информацията за грешката по-долу",
+ "PluginSettingsInvalid": "Този плъгин трябва да бъде конфигуриран правилно, преди да може да се използва. Моля, проверете дали конфигурацията ви е правилна",
+ "ProviderBizError": "Грешка в услугата на {{provider}}, моля проверете следната информация или опитайте отново",
+ "StreamChunkError": "Грешка при парсирането на съобщение от потокова заявка. Моля, проверете дали текущият API интерфейс отговаря на стандартите или се свържете с вашия доставчик на API за консултация.",
+ "SubscriptionPlanLimit": "Изчерпали сте вашия абонаментен лимит и не можете да използвате тази функционалност. Моля, надстройте до по-висок план или закупете допълнителни ресурси, за да продължите да я използвате.",
+ "UnknownChatFetchError": "Съжаляваме, възникна неизвестна грешка при заявката. Моля, проверете информацията по-долу или опитайте отново."
+ },
+ "stt": {
+ "responseError": "Заявката за услуга е неуспешна, моля, проверете конфигурацията или опитайте отново"
+ },
+ "tts": {
+ "responseError": "Заявката за услуга е неуспешна, моля, проверете конфигурацията или опитайте отново"
+ },
+ "unlock": {
+ "addProxyUrl": "Добавете URL адрес на OpenAI прокси (по избор)",
+ "apiKey": {
+ "description": "Въведете вашия {{name}} API ключ, за да започнете сесия",
+ "title": "Използване на персонализиран {{name}} API ключ"
+ },
+ "closeMessage": "Затвори съобщението",
+ "confirm": "Потвърди и опитай отново",
+ "oauth": {
+ "description": "Администраторът е активирал унифицирано удостоверяване за вход. Щракнете върху бутона по-долу, за да влезете и отключите приложението.",
+ "success": "Входът е успешен",
+ "title": "Влезте в акаунта си",
+ "welcome": "Добре дошли!"
+ },
+ "password": {
+ "description": "Криптирането на приложението е активирано от администратора. Въведете паролата за приложението, за да отключите приложението. Паролата трябва да бъде попълнена само веднъж.",
+ "placeholder": "Моля, въведете парола",
+ "title": "Въведете парола, за да отключите приложението"
+ },
+ "tabs": {
+ "apiKey": "Персонализиран API ключ",
+ "password": "Парола"
+ }
+ },
+ "upload": {
+ "desc": "Подробности: {{detail}}",
+ "fileOnlySupportInServerMode": "Текущият режим на разполагане не поддържа качване на файлове, различни от изображения. Ако искате да качите формат {{ext}}, моля, превключете на разполагане с база данни на сървъра или използвайте услугата {{cloud}}.",
+ "networkError": "Моля, уверете се, че вашата мрежа работи нормално и проверете дали конфигурацията за крос-домейн на услугата за съхранение на файлове е правилна.",
+ "title": "Неуспешно качване на файл, моля проверете интернет връзката или опитайте по-късно",
+ "unknownError": "Причина за грешка: {{reason}}",
+ "uploadFailed": "Неуспешно качване на файла."
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/file.json b/DigitalHumanWeb/locales/bg-BG/file.json
new file mode 100644
index 0000000..d101736
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Управлявайте вашите файлове и база знания",
+ "detail": {
+ "basic": {
+ "createdAt": "Дата на създаване",
+ "filename": "Име на файла",
+ "size": "Размер на файла",
+ "title": "Основна информация",
+ "type": "Формат",
+ "updatedAt": "Дата на актуализация"
+ },
+ "data": {
+ "chunkCount": "Брой части",
+ "embedding": {
+ "default": "Все още не е векторизирано",
+ "error": "Неуспех",
+ "pending": "В очакване на стартиране",
+ "processing": "В процес на обработка",
+ "success": "Завършено"
+ },
+ "embeddingStatus": "Векторизация"
+ }
+ },
+ "empty": "Няма качени файлове/папки",
+ "header": {
+ "actions": {
+ "newFolder": "Нова папка",
+ "uploadFile": "Качване на файл",
+ "uploadFolder": "Качване на папка"
+ },
+ "uploadButton": "Качване"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Сигурни ли сте, че искате да изтриете тази база знания? Файловете в нея няма да бъдат изтрити, а ще бъдат преместени в общите файлове. След изтриването на базата знания, тя не може да бъде възстановена, моля, действайте внимателно.",
+ "empty": "Кликнете <1>+1>, за да започнете създаването на база знания"
+ },
+ "new": "Нова база знания",
+ "title": "База знания"
+ },
+ "networkError": "Неуспешно получаване на базата от знания, моля, проверете интернет връзката и опитайте отново",
+ "notSupportGuide": {
+ "desc": "Текущият инстанс е в режим на клиентска база данни и не поддържа функцията за управление на файлове. Моля, превключете на <1>режим на сървърна база данни1> или използвайте директно <3>LobeChat Cloud3>",
+ "features": {
+ "allKind": {
+ "desc": "Поддържа основни файлови формати, включително Word, PPT, Excel, PDF, TXT и други често срещани документи, както и основни кодови файлове като JS и Python",
+ "title": "Разширена поддръжка на файлови формати"
+ },
+ "embeddings": {
+ "desc": "Използва високопроизводителни векторни модели за векторизация на текстови части, позволявайки семантично търсене на съдържанието на файловете",
+ "title": "Семантична векторизация"
+ },
+ "repos": {
+ "desc": "Поддържа създаване на база знания и позволява добавяне на различни типове файлове, за да изградите собствена област на знание",
+ "title": "База знания"
+ }
+ },
+ "title": "Текущият режим на инсталация не поддържа управление на файлове"
+ },
+ "preview": {
+ "downloadFile": "Изтеглете файла",
+ "unsupportedFileAndContact": "Този формат на файла не поддържа онлайн преглед. Ако имате нужда от преглед, моля, <1>свържете се с нас1>."
+ },
+ "searchFilePlaceholder": "Търсене на файл",
+ "tab": {
+ "all": "Всички файлове",
+ "audios": "Аудио",
+ "documents": "Документи",
+ "images": "Снимки",
+ "videos": "Видеа",
+ "websites": "Уебсайтове"
+ },
+ "title": "Файлове",
+ "uploadDock": {
+ "body": {
+ "collapse": "Скрий",
+ "item": {
+ "done": "Качено",
+ "error": "Качването не успя, моля, опитайте отново",
+ "pending": "Подготовка за качване...",
+ "processing": "Обработка на файла...",
+ "restTime": "Остава {{time}}"
+ }
+ },
+ "totalCount": "Общо {{count}} елемента",
+ "uploadStatus": {
+ "error": "Грешка при качване",
+ "pending": "В очакване на качване",
+ "processing": "Качва се",
+ "success": "Качването е завършено",
+ "uploading": "Качва се"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/knowledgeBase.json b/DigitalHumanWeb/locales/bg-BG/knowledgeBase.json
new file mode 100644
index 0000000..eb667a7
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "Файлът беше добавен успешно, <1>прегледайте веднага1>",
+ "confirm": "Добави",
+ "id": {
+ "placeholder": "Моля, изберете знание база за добавяне",
+ "required": "Моля, изберете знание база",
+ "title": "Целева знание база"
+ },
+ "title": "Добавяне към знание база",
+ "totalFiles": "Избрани са {{count}} файла"
+ },
+ "createNew": {
+ "confirm": "Създай нов",
+ "description": {
+ "placeholder": "Описание на знание базата (по избор)"
+ },
+ "formTitle": "Основна информация",
+ "name": {
+ "placeholder": "Име на знание базата",
+ "required": "Моля, попълнете името на знание базата"
+ },
+ "title": "Създаване на нова знание база"
+ },
+ "tab": {
+ "evals": "Оценки",
+ "files": "Документи",
+ "settings": "Настройки",
+ "testing": "Тест за извикване"
+ },
+ "title": "Знание база"
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/market.json b/DigitalHumanWeb/locales/bg-BG/market.json
new file mode 100644
index 0000000..820de03
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Добави агент",
+ "addAgentAndConverse": "Добави агент и започни разговор",
+ "addAgentSuccess": "Успешно добавен",
+ "guide": {
+ "func1": {
+ "desc1": "Влезте в страницата с настройки, която искате да изпратите на асистента, като щракнете върху иконата за настройки в горния десен ъгъл на прозореца за чат.",
+ "desc2": "Щракнете върху бутона „Изпращане към пазара на асистенти“ в горния десен ъгъл.",
+ "tag": "Метод 1",
+ "title": "Изпращане чрез LobeChat"
+ },
+ "func2": {
+ "button": "Отидете в хранилището на Github Assistant",
+ "desc": "Ако искате да добавите асистента към индекса, създайте запис в директорията plugins, като използвате agent-template.json или agent-template-full.json, напишете кратко описание и подходящи тагове и след това създайте заявка за изтегляне.",
+ "tag": "Метод 2",
+ "title": "Изпращане чрез Github"
+ }
+ },
+ "search": {
+ "placeholder": "Търсене на име на агент, описание или ключови думи..."
+ },
+ "sidebar": {
+ "comment": "Коментари",
+ "prompt": "Подкани",
+ "title": "Подробности за агента"
+ },
+ "submitAgent": "Изпрати агент",
+ "title": {
+ "allAgents": "Всички агенти",
+ "recentSubmits": "Последни изпратени"
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/metadata.json b/DigitalHumanWeb/locales/bg-BG/metadata.json
new file mode 100644
index 0000000..8c12d30
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} ви предлага най-доброто изживяване с ChatGPT, Claude, Gemini и OLLaMA WebUI",
+ "title": "{{appName}}: Личен AI инструмент за ефективност, дайте си по-умен мозък"
+ },
+ "discover": {
+ "assistants": {
+ "description": "Създаване на съдържание, копиране, въпроси и отговори, генериране на изображения, генериране на видео, генериране на глас, интелигентни агенти, автоматизирани работни потоци, персонализирайте своя собствен AI / GPTs / OLLaMA интелигентен асистент",
+ "title": "AI асистенти"
+ },
+ "description": "Създаване на съдържание, копиране, въпроси и отговори, генериране на изображения, генериране на видео, генериране на глас, интелигентни агенти, автоматизирани работни потоци, персонализирани AI приложения, персонализирайте своя собствена AI работна станция",
+ "models": {
+ "description": "Изследвайте основните AI модели OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "AI модели"
+ },
+ "plugins": {
+ "description": "Търсете графични генератори, академични ресурси, генератори на изображения, генератори на видео, генератори на глас и автоматизирани работни потоци, за да интегрирате богати възможности за плъгини във вашия асистент.",
+ "title": "AI плъгини"
+ },
+ "providers": {
+ "description": "Изследвайте основните доставчици на модели OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "Доставчици на AI модели"
+ },
+ "search": "Търсене",
+ "title": "Открий"
+ },
+ "plugins": {
+ "description": "Търсене, генериране на графики, академични изследвания, генериране на изображения, генериране на видео, генериране на глас, автоматизирани работни потоци, персонализирайте ToolCall плъгините на ChatGPT / Claude",
+ "title": "Пазар на плъгини"
+ },
+ "welcome": {
+ "description": "{{appName}} ви предлага най-доброто изживяване с ChatGPT, Claude, Gemini и OLLaMA WebUI",
+ "title": "Добре дошли в {{appName}}: Личен AI инструмент за ефективност, дайте си по-умен мозък"
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/migration.json b/DigitalHumanWeb/locales/bg-BG/migration.json
new file mode 100644
index 0000000..2582a6e
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Изчисти локални данни",
+ "downloadBackup": "Изтегли резервно копие на данните",
+ "reUpgrade": "Повторно надстройване",
+ "start": "Започни",
+ "upgrade": "Надстрой"
+ },
+ "clear": {
+ "confirm": "На път сте да изчистите локалните данни (глобалните настройки няма да бъдат засегнати). Моля, потвърдете, че сте изтеглили резервно копие на данните."
+ },
+ "description": "В новата версия, данните на {{appName}} направиха огромен скок. Затова ще обновим старите данни, за да ти предоставим по-добро потребителско изживяване.",
+ "features": {
+ "capability": {
+ "desc": "Базирано на технологията IndexedDB, достатъчно за съхранение на всички съобщения от живота ти",
+ "title": "Голяма вместимост"
+ },
+ "performance": {
+ "desc": "Автоматично индексиране на милиони съобщения, с отговори на запитвания в милисекунди",
+ "title": "Висока производителност"
+ },
+ "use": {
+ "desc": "Поддържа търсене по заглавие, описание, етикети, съдържание на съобщения и дори преведени текстове, значително повишавайки ефективността на ежедневното търсене",
+ "title": "По-лесен за употреба"
+ }
+ },
+ "title": "Еволюция на данните на {{appName}}",
+ "upgrade": {
+ "error": {
+ "subTitle": "Много съжаляваме, но по време на процеса на обновление на базата данни възникна проблем. Моля, опитайте следните решения: A. Изчистете локалните данни и след това импортирайте резервното копие отново; B. Кликнете върху бутона „Презареждане на обновлението“.
Ако все още имате проблеми, моля <1>подайте запитване1>, ние ще се свържем с вас възможно най-скоро.",
+ "title": "Неуспешно обновление на базата данни"
+ },
+ "success": {
+ "subTitle": "Базата данни на {{appName}} вече е обновена до последната версия, започнете да я използвате веднага!",
+ "title": "Успешно обновление на базата данни"
+ }
+ },
+ "upgradeTip": "Обновлението обикновено отнема 10~20 секунди, моля, не затваряйте {{appName}} по време на процеса на обновление."
+ },
+ "migrateError": {
+ "missVersion": "В импортираните данни липсва номер на версия. Моля, проверете файла и опитайте отново.",
+ "noMigration": "Не е намерено решение за мигриране за текущата версия. Моля, проверете номера на версията и опитайте отново. Ако проблемът продължава, моля, изпратете заявка за обратна връзка."
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/modelProvider.json b/DigitalHumanWeb/locales/bg-BG/modelProvider.json
new file mode 100644
index 0000000..a3f7446
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Версия на Azure API, в формат YYYY-MM-DD, вижте [най-новата версия](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Извличане на списък",
+ "title": "Версия на Azure API"
+ },
+ "empty": "Моля, въведете идентификатор на модела, за да добавите първия модел",
+ "endpoint": {
+ "desc": "Тази стойност може да бъде намерена в раздела „Ключове и крайни точки“ при проверка на ресурсите от портала на Azure",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Адрес на Azure API"
+ },
+ "modelListPlaceholder": "Изберете или добавете моделите на OpenAI, които сте разгърнали",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Тази стойност може да бъде намерена в раздела „Ключове и крайни точки“ при проверка на ресурсите от портала на Azure. Можете да използвате KEY1 или KEY2",
+ "placeholder": "Azure API Key",
+ "title": "API ключ"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Въведете AWS Access Key Id",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "Тестване дали AccessKeyId / SecretAccessKey са попълнени правилно"
+ },
+ "region": {
+ "desc": "Въведете AWS Region",
+ "placeholder": "AWS Region",
+ "title": "AWS Region"
+ },
+ "secretAccessKey": {
+ "desc": "Въведете AWS Secret Access Key",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "Ако използвате AWS SSO/STS, моля, въведете вашия AWS Session Token",
+ "placeholder": "AWS Session Token",
+ "title": "AWS Session Token (по избор)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Персонализиран регион за услуги",
+ "customSessionToken": "Персонализиран токен за сесия",
+ "description": "Въведете вашия AWS AccessKeyId / SecretAccessKey, за да започнете сесия. Приложението няма да запази вашата удостоверителна конфигурация",
+ "title": "Използване на персонализирана информация за удостоверяване на Bedrock"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Въведете вашия GitHub PAT, кликнете [тук](https://github.com/settings/tokens), за да създадете",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Тестване дали адресът на прокси е попълнен правилно",
+ "title": "Проверка на свързаност"
+ },
+ "customModelName": {
+ "desc": "Добавяне на персонализирани модели, използвайте запетая (,) за разделяне на множество модели",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Имена на персонализирани модели"
+ },
+ "download": {
+ "desc": "Ollama is downloading the model. Please try not to close this page. It will resume from where it left off if you restart the download.",
+ "remainingTime": "Remaining Time",
+ "speed": "Download Speed",
+ "title": "Downloading model {{model}}"
+ },
+ "endpoint": {
+ "desc": "Въведете адрес на Ollama интерфейсния прокси, оставете празно, ако локално не е указано специално",
+ "title": "Адрес на прокси интерфейс"
+ },
+ "setup": {
+ "cors": {
+ "description": "Заради ограниченията за сигурност в браузъра, трябва да конфигурирате кросдомейн за Ollama, за да работи правилно.",
+ "linux": {
+ "env": "Добавете `Environment` в раздела [Service], като добавите променливата на средата OLLAMA_ORIGINS:",
+ "reboot": "Презаредете systemd и рестартирайте Ollama",
+ "systemd": "Извикайте systemd за редактиране на услугата ollama:"
+ },
+ "macos": "Моля, отворете приложението „Терминал“ и поставете следната команда, след което натиснете Enter",
+ "reboot": "Моля, рестартирайте услугата Ollama след приключване на изпълнението",
+ "title": "Конфигуриране на Ollama за позволяване на кросдомейн достъп",
+ "windows": "На Windows кликнете върху „Контролен панел“, влезте в редактиране на системните променливи. Създайте нова променлива на средата с име „OLLAMA_ORIGINS“, стойност * и кликнете „ОК/Приложи“, за да запазите промените"
+ },
+ "install": {
+ "description": "Моля, потвърдете, че сте активирали Ollama. Ако не сте го изтеглили, моля посетете <1>официалния сайт1> на Ollama.",
+ "docker": "Ако предпочитате да използвате Docker, Ollama предлага официален Docker образ, който можете да изтеглите с помощта на следната команда:",
+ "linux": {
+ "command": "Инсталирайте чрез следната команда:",
+ "manual": "Или може да се обадите на <1>Ръководство за ръчна инсталация на Linux1> и да инсталирате ръчно"
+ },
+ "title": "Инсталиране и стартиране на приложението Ollama локално",
+ "windowsTab": "Windows (преглед)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Cancel Download",
+ "confirm": "Download",
+ "description": "Enter your Ollama model tag to continue the session",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Starting download...",
+ "title": "Download specified Ollama model"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Зероуан Всичко"
+ },
+ "zhipu": {
+ "title": "Интелигентен албум"
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/models.json b/DigitalHumanWeb/locales/bg-BG/models.json
new file mode 100644
index 0000000..4f40d01
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B предлага отлични резултати в индустриалните приложения с богат набор от обучителни примери."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B поддържа 16K токена, предоставяйки ефективни и плавни способности за генериране на език."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro, като важен член на серията AI модели на 360, отговаря на разнообразни приложения на естествения език с ефективни способности за обработка на текст, поддържайки разбиране на дълги текстове и многостепенни диалози."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo предлага мощни изчислителни и диалогови способности, с отлична семантична разбираемост и ефективност на генериране, идеално решение за интелигентни асистенти за предприятия и разработчици."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K акцентира на семантичната безопасност и отговорността, проектиран специално за приложения с високи изисквания за безопасност на съдържанието, осигурявайки точност и стабилност на потребителското изживяване."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro е усъвършенстван модел за обработка на естествен език, пуснат от компания 360, с изключителни способности за генериране и разбиране на текст, особено в областта на генерирането и творчеството, способен да обработва сложни езикови трансформации и ролеви игри."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra е най-мощната версия в серията Starfire, която подобрява разбирането и обобщаването на текстовото съдържание, докато надгражда свързаните търсения. Това е всестранно решение за повишаване на производителността в офиса и точно отговаряне на нуждите, водещо в индустрията интелигентно решение."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Използва технологии за подобряване на търсенето, за да свърже голям модел с областни знания и знания от интернет. Поддържа качване на различни документи като PDF, Word и вход на уебсайтове, с бърз и цялостен достъп до информация, предоставяйки точни и професионални резултати."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Оптимизиран за често срещани корпоративни сценарии, с значително подобрени резултати и висока цена-качество. В сравнение с модела Baichuan2, генерирането на съдържание е увеличено с 20%, отговорите на знания с 17%, а способността за ролеви игри с 40%. Общите резултати са по-добри от тези на GPT3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "С 128K свръхдълъг контекстен прозорец, оптимизиран за често срещани корпоративни сценарии, с значително подобрени резултати и висока цена-качество. В сравнение с модела Baichuan2, генерирането на съдържание е увеличено с 20%, отговорите на знания с 17%, а способността за ролеви игри с 40%. Общите резултати са по-добри от тези на GPT3.5."
+ },
+ "Baichuan4": {
+ "description": "Моделът е с най-добри способности в страната, надминаващ чуждестранните водещи модели в задачи като енциклопедични знания, дълги текстове и генериране на съдържание. Също така притежава водещи в индустрията мултимодални способности и отлични резултати в множество авторитетни тестови стандарти."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) е иновативен модел, подходящ за приложения в множество области и сложни задачи."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K е конфигуриран с голяма способност за обработка на контекст, по-силно разбиране на контекста и логическо разсъждение, поддържа текстов вход от 32K токена, подходящ за четене на дълги документи, частни въпроси и отговори и други сценарии."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO е високо гъвкава многомоделна комбинация, предназначена да предостави изключителен креативен опит."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) е модел с висока точност за инструкции, подходящ за сложни изчисления."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) предлага оптимизирани езикови изходи и разнообразни възможности за приложение."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Обновление на модела Phi-3-mini."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "Същият модел Phi-3-medium, но с по-голям размер на контекста за RAG или малко подканване."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "Модел с 14B параметри, предлагащ по-добро качество от Phi-3-mini, с акцент върху висококачествени, плътни на разсъждения данни."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "Същият модел Phi-3-mini, но с по-голям размер на контекста за RAG или малко подканване."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "Най-малкият член на семейството Phi-3. Оптимизиран както за качество, така и за ниска латентност."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "Същият модел Phi-3-small, но с по-голям размер на контекста за RAG или малко подканване."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "Модел с 7B параметри, предлагащ по-добро качество от Phi-3-mini, с акцент върху висококачествени, плътни на разсъждения данни."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K е конфигуриран с изключителна способност за обработка на контекст, способен да обработва до 128K контекстна информация, особено подходящ за дълги текстове, изискващи цялостен анализ и дългосрочни логически връзки, предоставяйки плавна и последователна логика и разнообразна поддръжка на цитати в сложни текстови комуникации."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Като тестова версия на Qwen2, Qwen1.5 използва големи данни за постигане на по-точни диалогови функции."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) предлага бързи отговори и естествени диалогови способности, подходящи за многоезични среди."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 е напреднал универсален езиков модел, поддържащ множество типове инструкции."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 е нова серия от големи езикови модели, проектирана да оптимизира обработката на инструкции."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 е нова серия от големи езикови модели, проектирана да оптимизира обработката на инструкции."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 е нова серия от големи езикови модели с по-силни способности за разбиране и генериране."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 е нова серия от големи езикови модели, проектирана да оптимизира обработката на инструкции."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder се фокусира върху писането на код."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math се фокусира върху решаването на математически проблеми, предоставяйки професионални отговори на трудни задачи."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B е отворен код версия, предоставяща оптимизирано изживяване в разговорните приложения."
+ },
+ "abab5.5-chat": {
+ "description": "Насочена към производствени сценарии, поддържаща обработка на сложни задачи и ефективно генериране на текст, подходяща за професионални приложения."
+ },
+ "abab5.5s-chat": {
+ "description": "Специално проектирана за диалогови сценарии на китайски, предлагаща висококачествено генериране на диалози на китайски, подходяща за множество приложения."
+ },
+ "abab6.5g-chat": {
+ "description": "Специално проектирана за многоезични диалогови системи, поддържаща висококачествено генериране на диалози на английски и много други езици."
+ },
+ "abab6.5s-chat": {
+ "description": "Подходяща за широк спектър от задачи за обработка на естествен език, включително генериране на текст, диалогови системи и др."
+ },
+ "abab6.5t-chat": {
+ "description": "Оптимизирана за диалогови сценарии на китайски, предлагаща плавно и съответстващо на китайските изразни навици генериране на диалози."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Fireworks отворен модел за извикване на функции, предлагащ отлични способности за изпълнение на инструкции и отворени, персонализируеми характеристики."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Fireworks компанията представя Firefunction-v2, модел за извикване на функции с изключителна производителност, разработен на базата на Llama-3 и оптимизиран за функции, диалози и следване на инструкции."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b е визуален езиков модел, който може да приема изображения и текстови входове, обучен с висококачествени данни, подходящ за мултимодални задачи."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Gemma 2 9B модел за инструкции, базиран на предишната технология на Google, подходящ за отговори на въпроси, обобщения и разсъждения в множество текстови генериращи задачи."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Llama 3 70B модел за инструкции, специално оптимизиран за многоезични диалози и разбиране на естествен език, с производителност, превъзхождаща повечето конкурентни модели."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Llama 3 70B модел за инструкции (HF версия), с резултати, съвпадащи с официалната реализация, подходящ за висококачествени задачи за следване на инструкции."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Llama 3 8B модел за инструкции, оптимизиран за диалози и многоезични задачи, с изключителна производителност и ефективност."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Llama 3 8B модел за инструкции (HF версия), с резултати, съвпадащи с официалната реализация, предлагаща висока последователност и съвместимост между платформите."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Llama 3.1 405B модел за инструкции, с огромен брой параметри, подходящ за сложни задачи и следване на инструкции в сценарии с високо натоварване."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Llama 3.1 70B модел за инструкции, предлагащ изключителни способности за разбиране и генериране на естествен език, идеален за диалогови и аналитични задачи."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Llama 3.1 8B модел за инструкции, оптимизиран за многоезични диалози, способен да надмине повечето отворени и затворени модели на общи индустриални стандарти."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Mixtral MoE 8x22B модел за инструкции, с голям брой параметри и архитектура с множество експерти, осигуряваща всестранна поддръжка за ефективна обработка на сложни задачи."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Mixtral MoE 8x7B модел за инструкции, архитектура с множество експерти, предлагаща ефективно следване и изпълнение на инструкции."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Mixtral MoE 8x7B модел за инструкции (HF версия), с производителност, съвпадаща с официалната реализация, подходящ за множество ефективни сценарии."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "MythoMax L2 13B модел, комбиниращ новаторски технологии за интеграция, специализиран в разказване на истории и ролеви игри."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Phi 3 Vision модел за инструкции, лек мултимодален модел, способен да обработва сложна визуална и текстова информация, с високи способности за разсъждение."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "StarCoder 15.5B модел, поддържащ напреднали програмни задачи, с подобрени многоезични способности, подходящ за сложна генерация и разбиране на код."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "StarCoder 7B модел, обучен за над 80 програмни езика, с отлични способности за попълване на код и разбиране на контекста."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Yi-Large модел, предлагащ изключителни способности за многоезична обработка, подходящ за различни задачи по генериране и разбиране на език."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "Многоезичен модел с 398B параметри (94B активни), предлагащ контекстен прозорец с дължина 256K, извикване на функции, структурирани изходи и генериране на основа."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "Многоезичен модел с 52B параметри (12B активни), предлагащ контекстен прозорец с дължина 256K, извикване на функции, структурирани изходи и генериране на основа."
+ },
+ "ai21-jamba-instruct": {
+ "description": "Модел на базата на Mamba, проектиран за постигане на най-добри резултати, качество и ефективност на разходите."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet повишава индустриалните стандарти, с производителност, надвишаваща конкурентните модели и Claude 3 Opus, с отлични резултати в широки оценки, като същевременно предлага скорост и разходи на нашите модели от средно ниво."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku е най-бързият и компактен модел на Anthropic, предлагащ почти мигновена скорост на отговор. Той може бързо да отговаря на прости запитвания и заявки. Клиентите ще могат да изградят безпроблемно AI изживяване, имитиращо човешко взаимодействие. Claude 3 Haiku може да обработва изображения и да връща текстови изходи, с контекстуален прозорец от 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus е най-мощният AI модел на Anthropic, с най-съвременна производителност при високо сложни задачи. Той може да обработва отворени подсказки и непознати сценарии, с отлична плавност и човешко разбиране. Claude 3 Opus демонстрира предимствата на генериращия AI. Claude 3 Opus може да обработва изображения и да връща текстови изходи, с контекстуален прозорец от 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Claude 3 Sonnet на Anthropic постига идеален баланс между интелигентност и скорост - особено подходящ за корпоративни работни натоварвания. Той предлага максимална полезност на цена под конкурентите и е проектиран да бъде надежден и издръжлив основен модел, подходящ за мащабируеми AI внедрения. Claude 3 Sonnet може да обработва изображения и да връща текстови изходи, с контекстуален прозорец от 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Бърз, икономичен и все пак много способен модел, който може да обработва редица задачи, включително ежедневни разговори, текстов анализ, обобщение и въпроси и отговори на документи."
+ },
+ "anthropic.claude-v2": {
+ "description": "Моделът на Anthropic демонстрира висока способност в широк спектър от задачи, от сложни разговори и генериране на креативно съдържание до следване на подробни инструкции."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "Актуализирана версия на Claude 2, с двойно по-голям контекстуален прозорец и подобрения в надеждността, процента на халюцинации и точността, основана на доказателства, в контексти с дълги документи и RAG."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku е най-бързият и компактен модел на Anthropic, проектиран за почти мигновени отговори. Той предлага бърза и точна насочена производителност."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus е най-мощният модел на Anthropic, предназначен за обработка на изключително сложни задачи. Той се отличава с изключителна производителност, интелигентност, гладкост и разбиране."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet предлага способности, надхвърлящи Opus, и по-бърза скорост в сравнение с Sonnet, като същевременно запазва същата цена. Sonnet е особено силен в програмирането, науката за данни, визуалната обработка и агентските задачи."
+ },
+ "aya": {
+ "description": "Aya 23 е многоезичен модел, представен от Cohere, поддържащ 23 езика и предоставящ удобство за многоезични приложения."
+ },
+ "aya:35b": {
+ "description": "Aya 23 е многоезичен модел, представен от Cohere, поддържащ 23 езика и предоставящ удобство за многоезични приложения."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3 е проектиран за ролеви игри и емоционално придружаване, поддържащ дълга памет през множество реплики и персонализиран диалог, с широко приложение."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o е динамичен модел, който се актуализира в реално време, за да поддържа най-новата версия. Той комбинира мощно разбиране на езика и генериране на текст, подходящ за мащабни приложения, включително обслужване на клиенти, образование и техническа поддръжка."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 предлага напредък в ключовите способности за бизнеса, включително водещи в индустрията 200K токена контекст, значително намаляване на честотата на илюзии на модела, системни подсказки и нова тестова функция: извикване на инструменти."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 предлага напредък в ключовите способности за бизнеса, включително водещи в индустрията 200K токена контекст, значително намаляване на честотата на илюзии на модела, системни подсказки и нова тестова функция: извикване на инструменти."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet предлага способности, надминаващи Opus и по-бърза скорост от Sonnet, като същевременно поддържа същата цена. Sonnet е особено силен в програмирането, науката за данни, визуалната обработка и задачи с агенти."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku е най-бързият и компактен модел на Anthropic, проектиран за почти мигновени отговори. Той предлага бърза и точна насочена производителност."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus е най-мощният модел на Anthropic за обработка на високо сложни задачи. Той показва изключителна производителност, интелигентност, гладкост и разбиране."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet предлага идеален баланс между интелигентност и скорост за корпоративни работни натоварвания. Той предлага максимална полезност на по-ниска цена, надежден и подходящ за мащабно внедряване."
+ },
+ "claude-instant-1.2": {
+ "description": "Моделът на Anthropic е предназначен за ниска латентност и висока производителност на текстовото генериране, поддържащ генерирането на стотици страници текст."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 е мощен AI помощник за програмиране, който поддържа интелигентни въпроси и отговори и автоматично допълване на код за различни програмни езици, повишавайки ефективността на разработката."
+ },
+ "codegemma": {
+ "description": "CodeGemma е лек езиков модел, специализиран в различни програмни задачи, поддържащ бърза итерация и интеграция."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma е лек езиков модел, специализиран в различни програмни задачи, поддържащ бърза итерация и интеграция."
+ },
+ "codellama": {
+ "description": "Code Llama е LLM, фокусиран върху генерирането и обсъждането на код, комбиниращ широк спектър от поддръжка на програмни езици, подходящ за среда на разработчици."
+ },
+ "codellama:13b": {
+ "description": "Code Llama е LLM, фокусиран върху генерирането и обсъждането на код, комбиниращ широк спектър от поддръжка на програмни езици, подходящ за среда на разработчици."
+ },
+ "codellama:34b": {
+ "description": "Code Llama е LLM, фокусиран върху генерирането и обсъждането на код, комбиниращ широк спектър от поддръжка на програмни езици, подходящ за среда на разработчици."
+ },
+ "codellama:70b": {
+ "description": "Code Llama е LLM, фокусиран върху генерирането и обсъждането на код, комбиниращ широк спектър от поддръжка на програмни езици, подходящ за среда на разработчици."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 е голям езиков модел, обучен на основата на обширни кодови данни, специално проектиран за решаване на сложни програмни задачи."
+ },
+ "codestral": {
+ "description": "Codestral е първият кодов модел на Mistral AI, предоставящ отлична поддръжка за задачи по генериране на код."
+ },
+ "codestral-latest": {
+ "description": "Codestral е авангарден генеративен модел, фокусиран върху генерирането на код, оптимизиран за междинно попълване и задачи за допълване на код."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B е модел, проектиран за следване на инструкции, диалози и програмиране."
+ },
+ "cohere-command-r": {
+ "description": "Command R е мащабируем генеративен модел, насочен към RAG и използване на инструменти, за да позволи AI на производствено ниво за предприятия."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ е модел, оптимизиран за RAG, проектиран да се справя с натоварвания на ниво предприятие."
+ },
+ "command-r": {
+ "description": "Command R е LLM, оптимизиран за диалогови и дълги контекстуални задачи, особено подходящ за динамично взаимодействие и управление на знания."
+ },
+ "command-r-plus": {
+ "description": "Command R+ е високопроизводителен голям езиков модел, проектиран за реални бизнес сценарии и сложни приложения."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct предлага висока надеждност в обработката на инструкции, поддържаща приложения в множество индустрии."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 обединява отличителните характеристики на предишните версии, подобрявайки общите и кодиращите способности."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B е напреднал модел, обучен за диалози с висока сложност."
+ },
+ "deepseek-chat": {
+ "description": "Новопуснатият отворен модел, който съчетава общи и кодови способности, не само запазва общата диалогова способност на оригиналния Chat модел и мощната способност за обработка на код на Coder модела, но също така по-добре се съгласува с човешките предпочитания. Освен това, DeepSeek-V2.5 постигна значителни подобрения в писателските задачи, следването на инструкции и много други области."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 е отворен хибриден експертен кодов модел, който се представя отлично в кодовите задачи, сравним с GPT4-Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 е отворен хибриден експертен кодов модел, който се представя отлично в кодовите задачи, сравним с GPT4-Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 е ефективен модел на Mixture-of-Experts, подходящ за икономически ефективни нужди от обработка."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B е кодовият модел на DeepSeek, предоставящ мощни способности за генериране на код."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Новопуснатият отворен модел, който съчетава общи и кодови способности, не само запазва общата диалогова способност на оригиналния Chat модел и мощната способност за обработка на код на Coder модела, но също така по-добре се съобразява с човешките предпочитания. Освен това, DeepSeek-V2.5 постигна значителни подобрения в задачи по писане, следване на инструкции и много други."
+ },
+ "emohaa": {
+ "description": "Emohaa е психологически модел с професионални консултантски способности, помагащ на потребителите да разберат емоционалните проблеми."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Тунинг) предлага стабилна и настройваема производителност, идеален избор за решения на сложни задачи."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Тунинг) предлага отлична поддръжка на многомодални данни, фокусирайки се върху ефективното решаване на сложни задачи."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro е високопроизводителен AI модел на Google, проектиран за разширяване на широк спектър от задачи."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 е ефективен многомодален модел, който поддържа разширяване на широк спектър от приложения."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002 е ефективен мултимодален модел, който поддържа разширения за широко приложение."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 е проектиран за обработка на мащабни задачи, предлагащ ненадмината скорост на обработка."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924 е най-новият експериментален модел, който показва значителни подобрения в производителността както в текстови, така и в мултимодални приложения."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 предлага оптимизирани многомодални обработващи способности, подходящи за множество сложни задачи."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash е най-новият многомодален AI модел на Google, който предлага бърза обработка и поддържа текстови, изображенчески и видео входове, подходящ за ефективно разширяване на множество задачи."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 е разширяемо многомодално AI решение, което поддържа широк спектър от сложни задачи."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002 е най-новият модел, готов за производство, който предлага по-високо качество на изхода, особено в математически, дълги контексти и визуални задачи."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 предлага отлични способности за обработка на многомодални данни, предоставяйки по-голяма гъвкавост за разработка на приложения."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 комбинира най-новите оптимизационни технологии, предоставяйки по-ефективни способности за обработка на многомодални данни."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro поддържа до 2 милиона токена и е идеален избор за среден многомодален модел, подходящ за многостранна поддръжка на сложни задачи."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B е подходяща за обработка на средни и малки задачи, съчетавайки икономичност и ефективност."
+ },
+ "gemma2": {
+ "description": "Gemma 2 е ефективен модел, представен от Google, обхващащ множество приложения от малки до сложни обработки на данни."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B е модел, оптимизиран за специфични задачи и интеграция на инструменти."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 е ефективен модел, представен от Google, обхващащ множество приложения от малки до сложни обработки на данни."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 е ефективен модел, представен от Google, обхващащ множество приложения от малки до сложни обработки на данни."
+ },
+ "general": {
+ "description": "Spark Lite е лек голям езиков модел с изключително ниска латентност и висока ефективност, напълно безплатен и отворен, поддържащ функция за търсене в реално време. Неговата бърза реакция го прави отличен за приложения с ниска изчислителна мощ и фино настройване на модела, предоставяйки на потребителите отлична цена-качество и интелигентно изживяване, особено в области като отговори на знания, генериране на съдържание и търсене."
+ },
+ "generalv3": {
+ "description": "Spark Pro е високопроизводителен голям езиков модел, оптимизиран за професионални области, фокусирайки се върху математика, програмиране, медицина, образование и др., и поддържа свързано търсене и вградени плъгини за времето, датата и др. Оптимизираният модел показва отлични резултати и висока производителност в сложни отговори на знания, разбиране на езика и високо ниво на текстово генериране, което го прави идеален избор за професионални приложения."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max е най-пълната версия, поддържаща свързано търсене и множество вградени плъгини. Неговите напълно оптимизирани основни способности, системни роли и функции за извикване на функции осигуряват изключителни резултати в различни сложни приложения."
+ },
+ "glm-4": {
+ "description": "GLM-4 е старата флагманска версия, пусната през януари 2024 г., която в момента е заменена от по-силната GLM-4-0520."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 е най-новата версия на модела, проектирана за високо сложни и разнообразни задачи, с отлични резултати."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air е икономичен вариант, с производителност близка до GLM-4, предлагаща бързина и достъпна цена."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX предлага ефективна версия на GLM-4-Air, със скорост на извеждане до 2.6 пъти по-висока."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools е многофункционален интелигентен модел, оптимизиран за поддръжка на сложни инструкции и извиквания на инструменти, като уеб браузинг, обяснение на код и генериране на текст, подходящ за изпълнение на множество задачи."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash е идеалният избор за обработка на прости задачи, с най-бърза скорост и най-добра цена."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long поддържа извеждане на много дълги текстове, подходящ за задачи, свързани с памет и обработка на големи документи."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus, като флагман с висока интелигентност, разполага с мощни способности за обработка на дълги текстове и сложни задачи, с цялостно подобрена производителност."
+ },
+ "glm-4v": {
+ "description": "GLM-4V предлага мощни способности за разбиране и разсъждение на изображения, като поддържа множество визуални задачи."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus разполага с разбиране на видео съдържание и множество изображения, подходящ за мултимодални задачи."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 предлага оптимизирани мултимодални обработващи способности, подходящи за различни сложни задачи."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 комбинира най-новите оптимизационни технологии, предоставяйки по-ефективни способности за обработка на мултимодални данни."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 продължава концепцията за лекота и ефективност."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 е серия от леки отворени текстови модели на Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 е лека отворена текстова моделна серия на Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) предлага основни способности за обработка на инструкции, подходящи за леки приложения."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo, подходящ за различни задачи по генериране и разбиране на текст, в момента сочи към gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo, подходящ за различни задачи по генериране и разбиране на текст, в момента сочи към gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo, подходящ за различни задачи по генериране и разбиране на текст, в момента сочи към gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo, подходящ за различни задачи по генериране и разбиране на текст, в момента сочи към gpt-3.5-turbo-0125."
+ },
+ "gpt-4": {
+ "description": "GPT-4 предлага по-голям контекстуален прозорец, способен да обработва по-дълги текстови входове, подходящ за сценарии, изискващи интеграция на обширна информация и анализ на данни."
+ },
+ "gpt-4-0125-preview": {
+ "description": "Най-новият модел GPT-4 Turbo разполага с визуални функции. Сега визуалните заявки могат да се използват с JSON формат и извиквания на функции. GPT-4 Turbo е подобрена версия, която предлага икономически ефективна поддръжка за мултимодални задачи. Той намира баланс между точност и ефективност, подходящ за приложения, изискващи взаимодействие в реално време."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4 предлага по-голям контекстуален прозорец, способен да обработва по-дълги текстови входове, подходящ за сценарии, изискващи интеграция на обширна информация и анализ на данни."
+ },
+ "gpt-4-1106-preview": {
+ "description": "Най-новият модел GPT-4 Turbo разполага с визуални функции. Сега визуалните заявки могат да се използват с JSON формат и извиквания на функции. GPT-4 Turbo е подобрена версия, която предлага икономически ефективна поддръжка за мултимодални задачи. Той намира баланс между точност и ефективност, подходящ за приложения, изискващи взаимодействие в реално време."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "Най-новият модел GPT-4 Turbo разполага с визуални функции. Сега визуалните заявки могат да се използват с JSON формат и извиквания на функции. GPT-4 Turbo е подобрена версия, която предлага икономически ефективна поддръжка за мултимодални задачи. Той намира баланс между точност и ефективност, подходящ за приложения, изискващи взаимодействие в реално време."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4 предлага по-голям контекстуален прозорец, способен да обработва по-дълги текстови входове, подходящ за сценарии, изискващи интеграция на обширна информация и анализ на данни."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4 предлага по-голям контекстуален прозорец, способен да обработва по-дълги текстови входове, подходящ за сценарии, изискващи интеграция на обширна информация и анализ на данни."
+ },
+ "gpt-4-turbo": {
+ "description": "Най-новият модел GPT-4 Turbo разполага с визуални функции. Сега визуалните заявки могат да се използват с JSON формат и извиквания на функции. GPT-4 Turbo е подобрена версия, която предлага икономически ефективна поддръжка за мултимодални задачи. Той намира баланс между точност и ефективност, подходящ за приложения, изискващи взаимодействие в реално време."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "Най-новият модел GPT-4 Turbo разполага с визуални функции. Сега визуалните заявки могат да се използват с JSON формат и извиквания на функции. GPT-4 Turbo е подобрена версия, която предлага икономически ефективна поддръжка за мултимодални задачи. Той намира баланс между точност и ефективност, подходящ за приложения, изискващи взаимодействие в реално време."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "Най-новият модел GPT-4 Turbo разполага с визуални функции. Сега визуалните заявки могат да се използват с JSON формат и извиквания на функции. GPT-4 Turbo е подобрена версия, която предлага икономически ефективна поддръжка за мултимодални задачи. Той намира баланс между точност и ефективност, подходящ за приложения, изискващи взаимодействие в реално време."
+ },
+ "gpt-4-vision-preview": {
+ "description": "Най-новият модел GPT-4 Turbo разполага с визуални функции. Сега визуалните заявки могат да се използват с JSON формат и извиквания на функции. GPT-4 Turbo е подобрена версия, която предлага икономически ефективна поддръжка за мултимодални задачи. Той намира баланс между точност и ефективност, подходящ за приложения, изискващи взаимодействие в реално време."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o е динамичен модел, който се актуализира в реално време, за да поддържа най-новата версия. Той комбинира мощно разбиране на езика и генериране на текст, подходящ за мащабни приложения, включително обслужване на клиенти, образование и техническа поддръжка."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o е динамичен модел, който се актуализира в реално време, за да поддържа най-новата версия. Той комбинира мощно разбиране на езика и генериране на текст, подходящ за мащабни приложения, включително обслужване на клиенти, образование и техническа поддръжка."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o е динамичен модел, който се актуализира в реално време, за да поддържа най-новата версия. Той комбинира мощно разбиране на езика и генериране на текст, подходящ за мащабни приложения, включително обслужване на клиенти, образование и техническа поддръжка."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini е най-новият модел на OpenAI, след GPT-4 Omni, който поддържа текстово и визуално въвеждане и генерира текст. Като най-напредналият им малък модел, той е значително по-евтин от другите нови модели и е с над 60% по-евтин от GPT-3.5 Turbo. Запазва най-съвременната интелигентност, като същевременно предлага значителна стойност за парите. GPT-4o mini получи 82% на теста MMLU и в момента е с по-висок рейтинг от GPT-4 по предпочитания за чат."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B е езиков модел, който комбинира креативност и интелигентност, обединявайки множество водещи модели."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "Иновативният отворен модел InternLM2.5 повишава интелигентността на диалога чрез голям брой параметри."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 предлага интелигентни решения за диалог в множество сценарии."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct модел, с 70B параметри, способен да предоставя изключителна производителност в задачи за генериране на текст и инструкции."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B предлага по-мощни способности за разсъждение на AI, подходящи за сложни приложения, поддържащи множество изчислителни обработки и осигуряващи ефективност и точност."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B е модел с висока производителност, предлагащ бързи способности за генериране на текст, особено подходящ за приложения, изискващи мащабна ефективност и икономичност."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct модел, с 8B параметри, поддържащ ефективно изпълнение на задачи по следване на инструкции, предлагащ качествени способности за генериране на текст."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Llama 3.1 Sonar Huge Online модел, с 405B параметри, поддържащ контекстова дължина от около 127,000 токена, проектиран за сложни онлайн чат приложения."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Llama 3.1 Sonar Large Chat модел, с 70B параметри, поддържащ контекстова дължина от около 127,000 токена, подходящ за сложни офлайн чат задачи."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Llama 3.1 Sonar Large Online модел, с 70B параметри, поддържащ контекстова дължина от около 127,000 токена, подходящ за чат задачи с висок капацитет и разнообразие."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Llama 3.1 Sonar Small Chat модел, с 8B параметри, проектиран за офлайн чат, поддържащ контекстова дължина от около 127,000 токена."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Llama 3.1 Sonar Small Online модел, с 8B параметри, поддържащ контекстова дължина от около 127,000 токена, проектиран за онлайн чат, способен да обработва ефективно различни текстови взаимодействия."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B предлага ненадмината способност за обработка на сложност, проектирана за високи изисквания."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B предлага качествени способности за разсъждение, подходящи за множество приложения."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use предлага мощни способности за извикване на инструменти, поддържащи ефективна обработка на сложни задачи."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use е модел, оптимизиран за ефективна употреба на инструменти, поддържащ бързо паралелно изчисление."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 е водещ модел, представен от Meta, поддържащ до 405B параметри, приложим в области като сложни диалози, многоезичен превод и анализ на данни."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 е водещ модел, представен от Meta, поддържащ до 405B параметри, приложим в области като сложни диалози, многоезичен превод и анализ на данни."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 е водещ модел, представен от Meta, поддържащ до 405B параметри, приложим в области като сложни диалози, многоезичен превод и анализ на данни."
+ },
+ "llava": {
+ "description": "LLaVA е многомодален модел, комбиниращ визуален кодер и Vicuna, предназначен за мощно визуално и езиково разбиране."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B предлага интегрирани способности за визуална обработка, генерирайки сложни изходи чрез визуална информация."
+ },
+ "llava:13b": {
+ "description": "LLaVA е многомодален модел, комбиниращ визуален кодер и Vicuna, предназначен за мощно визуално и езиково разбиране."
+ },
+ "llava:34b": {
+ "description": "LLaVA е многомодален модел, комбиниращ визуален кодер и Vicuna, предназначен за мощно визуално и езиково разбиране."
+ },
+ "mathstral": {
+ "description": "MathΣtral е проектиран за научни изследвания и математически разсъждения, предоставяйки ефективни изчислителни способности и интерпретация на резултати."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Мощен модел с 70 милиарда параметри, отличаващ се в разсъждения, кодиране и широки езикови приложения."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Универсален модел с 8 милиарда параметри, оптимизиран за диалогови и текстови генериращи задачи."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Моделите на Llama 3.1, настроени за инструкции, са оптимизирани за многоезични диалогови случаи на употреба и надминават много от наличните модели с отворен код и затворени чат модели на общи индустриални стандарти."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Моделите на Llama 3.1, настроени за инструкции, са оптимизирани за многоезични диалогови случаи на употреба и надминават много от наличните модели с отворен код и затворени чат модели на общи индустриални стандарти."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Моделите на Llama 3.1, настроени за инструкции, са оптимизирани за многоезични диалогови случаи на употреба и надминават много от наличните модели с отворен код и затворени чат модели на общи индустриални стандарти."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) предлага отлични способности за обработка на език и изключителен интерактивен опит."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) е мощен чат модел, поддържащ сложни изисквания за диалог."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) предлага многоезична поддръжка, обхващаща богати области на знание."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite е подходящ за среди, изискващи висока производителност и ниска латентност."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo предлага изключителни способности за разбиране и генериране на език, подходящи за най-строги изчислителни задачи."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite е подходящ за среди с ограничени ресурси, предлагащи отличен баланс на производителност."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo е високоефективен голям езиков модел, поддържащ широк спектър от приложения."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B е мощен модел за предварително обучение и настройка на инструкции."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "405B Llama 3.1 Turbo моделът предлага огромна контекстова поддръжка за обработка на големи данни, с изключителна производителност в приложения с изкуствен интелект с много голям мащаб."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B предлага ефективна поддръжка за многоезични диалози."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Llama 3.1 70B моделът е прецизно настроен за приложения с високо натоварване, квантован до FP8, осигурявайки по-ефективна изчислителна мощ и точност, гарантиращи изключителна производителност в сложни сценарии."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 предлага многоезична поддръжка и е един от водещите генеративни модели в индустрията."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Llama 3.1 8B моделът използва FP8 квантоване, поддържа до 131,072 контекстови токена и е сред най-добрите отворени модели, подходящи за сложни задачи, с производителност, превъзхождаща много индустриални стандарти."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct е оптимизирана за висококачествени диалогови сценарии и показва отлични резултати в различни човешки оценки."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct е оптимизирана за висококачествени диалогови сценарии, с представяне, надминаващо много затворени модели."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct е най-новата версия на Meta, оптимизирана за генериране на висококачествени диалози, надминаваща много водещи затворени модели."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct е проектиран за висококачествени диалози и показва отлични резултати в човешките оценки, особено подходящ за сценарии с висока интерактивност."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct е най-новата версия, пусната от Meta, оптимизирана за висококачествени диалогови сценарии, с представяне, надминаващо много водещи затворени модели."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 предлага поддръжка на множество езици и е един от водещите генеративни модели в индустрията."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct е най-голямата и най-мощната версия на модела Llama 3.1 Instruct. Това е високо напреднал модел за диалогово разсъждение и генериране на синтетични данни, който може да се използва и като основа за професионално продължително предварително обучение или фино настройване в специфични области. Многоезичният голям езиков модел (LLMs), предоставен от Llama 3.1, е набор от предварително обучени, коригирани по инструкции генеративни модели, включително размери 8B, 70B и 405B (текстов вход/изход). Текстовите модели, коригирани по инструкции (8B, 70B, 405B), са оптимизирани за многоезични диалогови случаи и надминават много налични отворени чат модели в общи индустриални бенчмаркове. Llama 3.1 е проектиран за търговски и изследователски цели на множество езици. Моделите, коригирани по инструкции, са подходящи за чатове, подобни на асистенти, докато предварително обучените модели могат да се адаптират към различни задачи за генериране на естествен език. Моделите на Llama 3.1 също поддържат използването на изхода на модела за подобряване на други модели, включително генериране на синтетични данни и рафиниране. Llama 3.1 е саморегресивен езиков модел, използващ оптимизирана трансформаторна архитектура. Коригираните версии използват супервизирано фино настройване (SFT) и обучение с човешка обратна връзка (RLHF), за да отговорят на предпочитанията на хората за полезност и безопасност."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "Актуализирана версия на Meta Llama 3.1 70B Instruct, включваща разширен контекстуален прозорец от 128K, многоезичност и подобрени способности за разсъждение. Многоезичният голям езиков модел (LLMs) на Llama 3.1 е набор от предварително обучени, коригирани за инструкции генериращи модели, включващи размери 8B, 70B и 405B (текстово въвеждане/изход). Текстовите модели, коригирани за инструкции (8B, 70B, 405B), са оптимизирани за многоезични диалогови случаи и надминават много налични отворени чат модели в общи индустриални бенчмаркове. Llama 3.1 е проектиран за търговски и изследователски цели на множество езици. Текстовите модели, коригирани за инструкции, са подходящи за чат, подобен на асистент, докато предварително обучените модели могат да се адаптират за различни задачи по генериране на естествен език. Моделите на Llama 3.1 също поддържат използването на изхода на модела за подобряване на други модели, включително генериране на синтетични данни и рафиниране. Llama 3.1 е саморегресивен езиков модел, използващ оптимизирана архитектура на трансформатор. Коригираните версии използват наблюдавано фино настройване (SFT) и обучение с подсилване с човешка обратна връзка (RLHF), за да отговорят на предпочитанията на хората за полезност и безопасност."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "Актуализирана версия на Meta Llama 3.1 8B Instruct, включваща разширен контекстуален прозорец от 128K, многоезичност и подобрени способности за разсъждение. Многоезичният голям езиков модел (LLMs) на Llama 3.1 е набор от предварително обучени, коригирани за инструкции генериращи модели, включващи размери 8B, 70B и 405B (текстово въвеждане/изход). Текстовите модели, коригирани за инструкции (8B, 70B, 405B), са оптимизирани за многоезични диалогови случаи и надминават много налични отворени чат модели в общи индустриални бенчмаркове. Llama 3.1 е проектиран за търговски и изследователски цели на множество езици. Текстовите модели, коригирани за инструкции, са подходящи за чат, подобен на асистент, докато предварително обучените модели могат да се адаптират за различни задачи по генериране на естествен език. Моделите на Llama 3.1 също поддържат използването на изхода на модела за подобряване на други модели, включително генериране на синтетични данни и рафиниране. Llama 3.1 е саморегресивен езиков модел, използващ оптимизирана архитектура на трансформатор. Коригираните версии използват наблюдавано фино настройване (SFT) и обучение с подсилване с човешка обратна връзка (RLHF), за да отговорят на предпочитанията на хората за полезност и безопасност."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 е отворен голям езиков модел (LLM), насочен към разработчици, изследователи и предприятия, предназначен да им помогне да изградят, експериментират и отговорно разширят своите идеи за генеративен ИИ. Като част от основната система на глобалната общност за иновации, той е особено подходящ за създаване на съдържание, диалогов ИИ, разбиране на езика, научноизследователска и развойна дейност и бизнес приложения."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 е отворен голям езиков модел (LLM), насочен към разработчици, изследователи и предприятия, предназначен да им помогне да изградят, експериментират и отговорно разширят своите идеи за генеративен ИИ. Като част от основната система на глобалната общност за иновации, той е особено подходящ за устройства с ограничени изчислителни ресурси и по-бързо време за обучение."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B е най-новият бърз и лек модел на Microsoft AI, с производителност, близка до тази на водещи отворени модели с десетократно по-голям размер."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B е най-напредналият Wizard модел на Microsoft AI, показващ изключителна конкурентоспособност."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V е новото поколение мултимодален голям модел, представен от OpenBMB, който притежава изключителни способности за OCR разпознаване и мултимодално разбиране, поддържащ широк спектър от приложения."
+ },
+ "mistral": {
+ "description": "Mistral е 7B модел, представен от Mistral AI, подходящ за променливи нужди в обработката на език."
+ },
+ "mistral-large": {
+ "description": "Mistral Large е флагманският модел на Mistral, комбиниращ способности за генериране на код, математика и разсъждение, поддържащ контекстен прозорец от 128k."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) е напреднал голям езиков модел (LLM) с най-съвременни способности за разсъждение, знание и кодиране."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large е флагманският модел, специализиран в многоезични задачи, сложни разсъждения и генериране на код, идеален за висококачествени приложения."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo е 12B модел, разработен в сътрудничество между Mistral AI и NVIDIA, предлагащ ефективна производителност."
+ },
+ "mistral-small": {
+ "description": "Mistral Small може да се използва за всяка езикова задача, която изисква висока ефективност и ниска латентност."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small е икономически ефективен, бърз и надежден вариант, подходящ за случаи на употреба като превод, резюме и анализ на настроението."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct е известен с високата си производителност, подходящ за множество езикови задачи."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B е модел с фино настройване по заявка, предлагащ оптимизирани отговори за задачи."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 предлага ефективна изчислителна мощ и разбиране на естествения език, подходяща за широк спектър от приложения."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) е супер голям езиков модел, поддържащ изключително високи изисквания за обработка."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B е предварително обучен модел на разредени смесени експерти, предназначен за универсални текстови задачи."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct е високопроизводителен индустриален стандартен модел, оптимизиран за бързина и поддръжка на дълги контексти."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo е модел с 12B параметри, предлагащ многоезична поддръжка и висока производителност."
+ },
+ "mixtral": {
+ "description": "Mixtral е експертен модел на Mistral AI, с отворени тегла, предоставящ поддръжка в генерирането на код и разбиране на езика."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B предлага висока толерантност на грешки при паралелно изчисление, подходяща за сложни задачи."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral е експертен модел на Mistral AI, с отворени тегла, предоставящ поддръжка в генерирането на код и разбиране на езика."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K е модел с изключителна способност за обработка на дълги контексти, подходящ за генериране на много дълги текстове, отговарящи на сложни изисквания за генериране, способен да обработва до 128,000 токена, особено подходящ за научни изследвания, академични приложения и генериране на големи документи."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K предлага средна дължина на контекста, способен да обработва 32,768 токена, особено подходящ за генериране на различни дълги документи и сложни диалози, използван в области като създаване на съдържание, генериране на отчети и диалогови системи."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K е проектиран за генериране на кратки текстови задачи, с ефективна производителност, способен да обработва 8,192 токена, особено подходящ за кратки диалози, бележки и бързо генериране на съдържание."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B е обновена версия на Nous Hermes 2, включваща най-новите вътрешно разработени набори от данни."
+ },
+ "o1-mini": {
+ "description": "o1-mini е бърз и икономичен модел за изводи, проектиран за приложения в програмирането, математиката и науката. Моделът разполага с контекст от 128K и дата на знание до октомври 2023."
+ },
+ "o1-preview": {
+ "description": "o1 е новият модел за изводи на OpenAI, подходящ за сложни задачи, изискващи обширни общи знания. Моделът разполага с контекст от 128K и дата на знание до октомври 2023."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba е модел на езика Mamba 2, специализиран в генерирането на код, предоставящ мощна поддръжка за напреднали кодови и разсъждателни задачи."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B е компактен, но високопроизводителен модел, специализиран в обработка на партиди и прости задачи, като класификация и генериране на текст, с добра способност за разсъждение."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo е 12B модел, разработен в сътрудничество с Nvidia, предлагащ отлични способности за разсъждение и кодиране, лесен за интеграция и замяна."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B е по-голям експертен модел, фокусиран върху сложни задачи, предлагащ отлични способности за разсъждение и по-висока производителност."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B е рядък експертен модел, който използва множество параметри за увеличаване на скоростта на разсъждение, подходящ за обработка на многоезични и кодови генериращи задачи."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o е динамичен модел, който се актуализира в реално време, за да поддържа най-новата версия. Той комбинира мощни способности за разбиране и генериране на език, подходящи за мащабни приложения, включително обслужване на клиенти, образование и техническа поддръжка."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini е най-новият модел на OpenAI, пуснат след GPT-4 Omni, който поддържа вход и изход на текст и изображения. Като най-напредналият им малък модел, той е значително по-евтин от другите нови модели и е с над 60% по-евтин от GPT-3.5 Turbo. Запазва най-съвременната интелигентност, като предлага значителна стойност за парите. GPT-4o mini получи 82% на теста MMLU и в момента е с по-висок рейтинг от GPT-4 в предпочитанията за чат."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini е бърз и икономичен модел за изводи, проектиран за приложения в програмирането, математиката и науката. Моделът разполага с контекст от 128K и дата на знание до октомври 2023."
+ },
+ "openai/o1-preview": {
+ "description": "o1 е новият модел за изводи на OpenAI, подходящ за сложни задачи, изискващи обширни общи знания. Моделът разполага с контекст от 128K и дата на знание до октомври 2023."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B е отворен езиков модел, прецизно настроен с помощта на стратегията „C-RLFT (условно подсилващо обучение)“."
+ },
+ "openrouter/auto": {
+ "description": "В зависимост от дължината на контекста, темата и сложността, вашето запитване ще бъде изпратено до Llama 3 70B Instruct, Claude 3.5 Sonnet (саморегулиращ се) или GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 е лек отворен модел, представен от Microsoft, подходящ за ефективна интеграция и мащабно разсъждение върху знания."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 е лек отворен модел, представен от Microsoft, подходящ за ефективна интеграция и мащабно разсъждение върху знания."
+ },
+ "pixtral-12b-2409": {
+ "description": "Моделът Pixtral демонстрира силни способности в задачи като разбиране на графики и изображения, отговори на документи, многомодално разсъждение и следване на инструкции, способен да приема изображения с естествено разрешение и съотношение на страните, както и да обработва произволен брой изображения в контекстен прозорец с дължина до 128K токена."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "Моделът на кода Qwen."
+ },
+ "qwen-long": {
+ "description": "Qwen е мащабен езиков модел, който поддържа дълги текстови контексти и диалогови функции, базирани на дълги документи и множество документи."
+ },
+ "qwen-math-plus-latest": {
+ "description": "Математическият модел Qwen е специално проектиран за решаване на математически задачи."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "Математическият модел Qwen е специално проектиран за решаване на математически задачи."
+ },
+ "qwen-max-latest": {
+ "description": "Qwen Max е езиков модел с мащаб от стотици милиарди параметри, който поддържа вход на различни езици, включително китайски и английски. В момента е основният API модел зад версията на продукта Qwen 2.5."
+ },
+ "qwen-plus-latest": {
+ "description": "Разширената версия на Qwen Turbo е мащабен езиков модел, който поддържа вход на различни езици, включително китайски и английски."
+ },
+ "qwen-turbo-latest": {
+ "description": "Моделът на езика Qwen Turbo е мащабен езиков модел, който поддържа вход на различни езици, включително китайски и английски."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Qwen VL поддържа гъвкави интерактивни методи, включително множество изображения, многократни въпроси и отговори, творчество и др."
+ },
+ "qwen-vl-max": {
+ "description": "Qwen е мащабен визуален езиков модел. В сравнение с подобрената версия допълнително са повишени способностите за визуално разсъждение и следване на инструкции, предоставяйки по-високо ниво на визуално възприятие и познание."
+ },
+ "qwen-vl-plus": {
+ "description": "Qwen е подобрена версия на мащабния визуален езиков модел. Значително подобрена способност за разпознаване на детайли и текст, поддържа изображения с резолюция над един милион пиксела и произволни съотношения на страните."
+ },
+ "qwen-vl-v1": {
+ "description": "Инициализиран с езиковия модел Qwen-7B, добавя визуален модел; предварително обучен с входна резолюция 448."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 е нова серия от големи езикови модели с по-силни способности за разбиране и генериране."
+ },
+ "qwen2": {
+ "description": "Qwen2 е новото поколение голям езиков модел на Alibaba, предлагащ отлична производителност за разнообразни приложения."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "Модел с мащаб 14B, отворен за обществеността от Qwen 2.5."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "Модел с мащаб 32B, отворен за обществеността от Qwen 2.5."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "Модел с мащаб 72B, отворен за обществеността от Qwen 2.5."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "Модел с мащаб 7B, отворен за обществеността от Qwen 2.5."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "Отворената версия на модела на кода Qwen."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "Отворената версия на модела на кода Qwen."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Моделът Qwen-Math притежава силни способности за решаване на математически задачи."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Моделът Qwen-Math притежава силни способности за решаване на математически задачи."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Моделът Qwen-Math притежава силни способности за решаване на математически задачи."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 е новото поколение голям езиков модел на Alibaba, предлагащ отлична производителност за разнообразни приложения."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 е новото поколение голям езиков модел на Alibaba, предлагащ отлична производителност за разнообразни приложения."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 е новото поколение голям езиков модел на Alibaba, предлагащ отлична производителност за разнообразни приложения."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini е компактен LLM, с производителност над GPT-3.5, предлагащ мощни многоезични способности, поддържащ английски и корейски, предоставяйки ефективно и компактно решение."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) разширява възможностите на Solar Mini, фокусирайки се върху японския език, като същевременно поддържа висока ефективност и отлична производителност на английски и корейски."
+ },
+ "solar-pro": {
+ "description": "Solar Pro е високоинтелигентен LLM, пуснат от Upstage, фокусиран върху способността за следване на инструкции с един GPU, с IFEval оценка над 80. В момента поддържа английски, а официалната версия е планирана за пускане през ноември 2024 г., с разширена поддръжка на езици и дължина на контекста."
+ },
+ "step-1-128k": {
+ "description": "Баланс между производителност и разходи, подходящ за общи сценарии."
+ },
+ "step-1-256k": {
+ "description": "Обработка на свръхдълъг контекст, особено подходяща за анализ на дълги документи."
+ },
+ "step-1-32k": {
+ "description": "Поддържа диалози със средна дължина, подходящи за множество приложения."
+ },
+ "step-1-8k": {
+ "description": "Малък модел, подходящ за леки задачи."
+ },
+ "step-1-flash": {
+ "description": "Бърз модел, подходящ за диалози в реално време."
+ },
+ "step-1v-32k": {
+ "description": "Поддържа визуални входове, подобрявайки мултимодалното взаимодействие."
+ },
+ "step-1v-8k": {
+ "description": "Малък визуален модел, подходящ за основни текстово-визуални задачи."
+ },
+ "step-2-16k": {
+ "description": "Поддържа взаимодействия с голям мащаб на контекста, подходящи за сложни диалогови сценарии."
+ },
+ "taichu_llm": {
+ "description": "Моделът на езика TaiChu е с изключителни способности за разбиране на езика, текстово генериране, отговори на знания, програмиране, математически изчисления, логическо разсъждение, анализ на емоции, резюмиране на текст и др. Иновативно комбинира предварително обучение с големи данни и разнообразни източници на знания, чрез непрекъснато усъвършенстване на алгоритмичните технологии и усвояване на нови знания от масивни текстови данни, за да осигури на потребителите по-удобна информация и услуги, както и по-интелигентно изживяване."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V обединява способности за разбиране на изображения, прехвърляне на знания, логическо обяснение и др., и се представя отлично в областта на въпросите и отговорите на текст и изображения."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) предлага подобрена изчислителна мощ чрез ефективни стратегии и архитектура на модела."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) е подходящ за прецизни задачи с инструкции, предлагащи отлични способности за обработка на език."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 е езиков модел, предоставен от Microsoft AI, който се отличава в сложни диалози, многоезичност, разсъждение и интелигентни асистенти."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 е езиков модел, предоставен от Microsoft AI, който се отличава в сложни диалози, многоезичност, разсъждение и интелигентни асистенти."
+ },
+ "yi-large": {
+ "description": "Новият модел със сто милиарда параметри предлага изключителни способности за отговори и генериране на текст."
+ },
+ "yi-large-fc": {
+ "description": "Поддържа и усилва способностите за извикване на инструменти на базата на модела yi-large, подходящ за различни бизнес сценарии, изискващи изграждане на агенти или работни потоци."
+ },
+ "yi-large-preview": {
+ "description": "Начална версия, препоръчва се да се използва yi-large (новата версия)."
+ },
+ "yi-large-rag": {
+ "description": "Висококачествена услуга, базирана на мощния модел yi-large, комбинираща технологии за извличане и генериране, предлагаща точни отговори и услуги за търсене на информация в реално време."
+ },
+ "yi-large-turbo": {
+ "description": "Изключително съотношение цена-качество и отлична производителност. Балансирано прецизно настройване според производителността, скоростта на разсъжденията и разходите."
+ },
+ "yi-medium": {
+ "description": "Модел със среден размер, обновен и прецизно настроен, с балансирани способности и отлично съотношение цена-качество."
+ },
+ "yi-medium-200k": {
+ "description": "200K свръхдълъг контекстов прозорец, предлагащ дълбочинно разбиране и генериране на дълги текстове."
+ },
+ "yi-spark": {
+ "description": "Малък и мощен, лек и бърз модел. Предлага подобрени способности за математически операции и писане на код."
+ },
+ "yi-vision": {
+ "description": "Модел за сложни визуални задачи, предлагащ висока производителност за разбиране и анализ на изображения."
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/plugin.json b/DigitalHumanWeb/locales/bg-BG/plugin.json
new file mode 100644
index 0000000..6b3ded8
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Аргументи",
+ "function_call": "Извикване на функция",
+ "off": "Изключи отстраняване на грешки",
+ "on": "Преглед на информацията за извикване на плъгина",
+ "payload": "полезно натоварване",
+ "response": "Отговор",
+ "tool_call": "заявка за инструмент"
+ },
+ "detailModal": {
+ "info": {
+ "description": "Описание на API",
+ "name": "Име на API"
+ },
+ "tabs": {
+ "info": "Възможности на плъгина",
+ "manifest": "Инсталационен файл",
+ "settings": "Настройки"
+ },
+ "title": "Подробности за плъгина"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Сигурни ли сте, че искате да изтриете този локален плъгин? След като бъде изтрит, той не може да бъде възстановен.",
+ "customParams": {
+ "useProxy": {
+ "label": "Инсталиране чрез прокси (ако срещате грешки при достъп от различен произход, опитайте да активирате тази опция и да преинсталирате)"
+ }
+ },
+ "deleteSuccess": "Плъгинът е изтрит успешно",
+ "manifest": {
+ "identifier": {
+ "desc": "Уникалният идентификатор на плъгина",
+ "label": "Идентификатор"
+ },
+ "mode": {
+ "local": "Визуална конфигурация",
+ "local-tooltip": "Визуалната конфигурация не се поддържа в момента",
+ "url": "Онлайн връзка"
+ },
+ "name": {
+ "desc": "Заглавието на плъгина",
+ "label": "Заглавие",
+ "placeholder": "Търсачка"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "Авторът на плъгина",
+ "label": "Автор"
+ },
+ "avatar": {
+ "desc": "Иконата на плъгина, може да бъде емоджи или URL адрес",
+ "label": "Икона"
+ },
+ "description": {
+ "desc": "Описанието на плъгина",
+ "label": "Описание",
+ "placeholder": "Получаване на информация от търсачки"
+ },
+ "formFieldRequired": "Това поле е задължително",
+ "homepage": {
+ "desc": "Началната страница на плъгина",
+ "label": "Начална страница"
+ },
+ "identifier": {
+ "desc": "Уникалният идентификатор на плъгина, поддържа само буквено-цифрови символи, тире - и долна черта _",
+ "errorDuplicate": "Идентификаторът вече се използва от друг плъгин, моля, променете идентификатора",
+ "label": "Идентификатор",
+ "pattenErrorMessage": "Разрешени са само буквено-цифрови символи, тире - и долна черта _"
+ },
+ "manifest": {
+ "desc": "{{appName}} ще инсталира приставката чрез тази връзка",
+ "label": "URL адрес на описанието на плъгина (Manifest)",
+ "preview": "Преглед на манифеста",
+ "refresh": "Опресняване"
+ },
+ "title": {
+ "desc": "Заглавието на плъгина",
+ "label": "Заглавие",
+ "placeholder": "Търсачка"
+ }
+ },
+ "metaConfig": "Конфигурация на метаданните на плъгина",
+ "modalDesc": "След като добавите персонализиран плъгин, той може да се използва за проверка на разработката на плъгина или директно в сесията. Моля, вижте <1>документацията за разработка ↗</1> за разработка на плъгини.",
+ "openai": {
+ "importUrl": "Импортиране от URL връзка",
+ "schema": "Схема"
+ },
+ "preview": {
+ "card": "Преглед на дисплея на плъгина",
+ "desc": "Преглед на описанието на плъгина",
+ "title": "Преглед на името на плъгина"
+ },
+ "save": "Инсталирай плъгина",
+ "saveSuccess": "Настройките на плъгина са запазени успешно",
+ "tabs": {
+ "manifest": "Манифест на описанието на функцията (Manifest)",
+ "meta": "Метаданни на плъгина"
+ },
+ "title": {
+ "create": "Добави персонализиран плъгин",
+ "edit": "Редактирай персонализиран плъгин"
+ },
+ "type": {
+ "lobe": "Плъгин на LobeChat",
+ "openai": "Плъгин на OpenAI"
+ },
+ "update": "Актуализирай",
+ "updateSuccess": "Настройките на плъгина са актуализирани успешно"
+ },
+ "error": {
+ "fetchError": "Неуспешно извличане на връзката на манифеста. Моля, уверете се, че връзката е валидна и позволява достъп от различен произход.",
+ "installError": "Инсталирането на плъгина {{name}} е неуспешно",
+ "manifestInvalid": "Манифестът не отговаря на спецификацията. Резултат от проверката: \n\n {{error}}",
+ "noManifest": "Файлът на манифеста не съществува",
+ "openAPIInvalid": "Неуспешно анализиране на OpenAPI. Грешка: \n\n {{error}}",
+ "reinstallError": "Неуспешно опресняване на плъгина {{name}}",
+ "urlError": "Връзката не върна съдържание във формат JSON. Моля, уверете се, че е валидна връзка."
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Остарял",
+ "local.config": "Конфигурация",
+ "local.title": "Локален"
+ }
+ },
+ "loading": {
+ "content": "Извикване на плъгин...",
+ "plugin": "Плъгинът работи..."
+ },
+ "pluginList": "Списък с плъгини",
+ "setting": "Настройки на плъгина",
+ "settings": {
+ "indexUrl": {
+ "title": "Индекс на пазара",
+ "tooltip": "Редактирането не се поддържа в момента"
+ },
+ "modalDesc": "След като конфигурирате адреса на пазара на плъгини, можете да използвате персонализиран пазар на плъгини",
+ "title": "Конфигуриране на пазара на плъгини"
+ },
+ "showInPortal": "Моля, вижте подробностите в работното пространство",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Плъгинът е на път да бъде деинсталиран. След деинсталирането конфигурацията на плъгина ще бъде изчистена. Моля, потвърдете операцията си.",
+ "detail": "Подробности",
+ "install": "Инсталирай",
+ "manifest": "Редактирай инсталационния файл",
+ "settings": "Настройки",
+ "uninstall": "Деинсталирай"
+ },
+ "communityPlugin": "От трети страни",
+ "customPlugin": "Персонализиран плъгин",
+ "empty": "Все още няма инсталирани плъгини",
+ "installAllPlugins": "Инсталирай всички",
+ "networkError": "Неуспешно извличане на магазина за плъгини. Моля, проверете мрежовата си връзка и опитайте отново",
+ "placeholder": "Търсене на име на плъгин, описание или ключова дума...",
+ "releasedAt": "Издаден на {{createdAt}}",
+ "tabs": {
+ "all": "Всички",
+ "installed": "Инсталирани"
+ },
+ "title": "Магазин за плъгини"
+ },
+ "unknownPlugin": "Неизвестен плъгин"
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/portal.json b/DigitalHumanWeb/locales/bg-BG/portal.json
new file mode 100644
index 0000000..da254d0
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Артефакти",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Част",
+ "file": "Файл"
+ }
+ },
+ "Plugins": "Плъгини",
+ "actions": {
+ "genAiMessage": "Създаване на съобщение на помощника",
+ "summary": "Обобщение",
+ "summaryTooltip": "Обобщение на текущото съдържание"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Код",
+ "preview": "Преглед"
+ },
+ "svg": {
+ "copyAsImage": "Копирай като изображение",
+ "copyFail": "Копирането не успя, причина за грешката: {{error}}",
+ "copySuccess": "Изображението е копирано успешно",
+ "download": {
+ "png": "Изтегли като PNG",
+ "svg": "Изтегли като SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "Списъкът с текущите артефакти е празен. Моля, използвайте плъгини в разговора и след това проверете отново.",
+ "emptyKnowledgeList": "Текущият списък с познания е празен. Моля, активирайте базата данни на познанията по време на сесията, за да я прегледате.",
+ "files": "файлове",
+ "messageDetail": "Детайли на съобщението",
+ "title": "Разширен прозорец"
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/providers.json b/DigitalHumanWeb/locales/bg-BG/providers.json
new file mode 100644
index 0000000..3dc565a
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI е платформа за AI модели и услуги на компанията 360, която предлага множество напреднали модели за обработка на естествен език, включително 360GPT2 Pro, 360GPT Pro, 360GPT Turbo и 360GPT Turbo Responsibility 8K. Тези модели комбинират голям брой параметри и мултимодални способности, широко използвани в текстово генериране, семантично разбиране, диалогови системи и генериране на код. Чрез гъвкава ценова стратегия, 360 AI отговаря на разнообразни потребителски нужди, поддържайки интеграция за разработчици и насърчавайки иновации и развитие на интелигентни приложения."
+ },
+ "anthropic": {
+ "description": "Anthropic е компания, специализирана в изследвания и разработка на изкуствен интелект, предлагаща набор от напреднали езикови модели, като Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus и Claude 3 Haiku. Тези модели постигат идеален баланс между интелигентност, скорост и разходи, подходящи за различни приложения, от корпоративни натоварвания до бързи отговори. Claude 3.5 Sonnet, като най-новия им модел, показва отлични резултати в множество оценки, като същевременно запазва отлично съотношение цена-качество."
+ },
+ "azure": {
+ "description": "Azure предлага разнообразие от напреднали AI модели, включително GPT-3.5 и най-новата серия GPT-4, поддържащи различни типове данни и сложни задачи, с акцент върху безопасни, надеждни и устойчиви AI решения."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligence е компания, специализирана в разработката на големи модели за изкуствен интелект, чиито модели показват отлични резултати в китайски задачи, свързани с енциклопедии, обработка на дълги текстове и генериране на съдържание, надминавайки основните чуждестранни модели. Baichuan Intelligence също така притежава индустриално водещи мултимодални способности, показвайки отлични резултати в множество авторитетни оценки. Моделите им включват Baichuan 4, Baichuan 3 Turbo и Baichuan 3 Turbo 128k, оптимизирани за различни приложения, предлагащи решения с отлично съотношение цена-качество."
+ },
+ "bedrock": {
+ "description": "Bedrock е услуга, предоставяна от Amazon AWS, фокусирана върху предоставянето на напреднали AI езикови и визуални модели за предприятия. Семейството на моделите включва серията Claude на Anthropic, серията Llama 3.1 на Meta и други, обхващащи разнообразие от опции от леки до високо производителни, поддържащи текстово генериране, диалог, обработка на изображения и много други задачи, подходящи за различни мащаби и нужди на бизнес приложения."
+ },
+ "deepseek": {
+ "description": "DeepSeek е компания, специализирана в изследвания и приложения на технологии за изкуствен интелект, чийто най-нов модел DeepSeek-V2.5 комбинира способности за общи диалози и обработка на код, постигайки значителни подобрения в съответствието с човешките предпочитания, писателските задачи и следването на инструкции."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI е водещ доставчик на напреднали езикови модели, фокусирайки се върху извикване на функции и мултимодална обработка. Най-новият им модел Firefunction V2, базиран на Llama-3, е оптимизиран за извикване на функции, диалози и следване на инструкции. Визуалният езиков модел FireLLaVA-13B поддържа смесени входове от изображения и текст. Други забележителни модели включват серията Llama и серията Mixtral, предлагащи ефективна поддръжка за многоезично следване на инструкции и генериране."
+ },
+ "github": {
+ "description": "С моделите на GitHub разработчиците могат да станат AI инженери и да изграждат с водещите AI модели в индустрията."
+ },
+ "google": {
+ "description": "Серията Gemini на Google е най-напредналият и универсален AI модел, разработен от Google DeepMind, проектиран за мултимодално разбиране и обработка на текст, код, изображения, аудио и видео. Подходящ за различни среди, от центрове за данни до мобилни устройства, значително увеличава ефективността и приложимостта на AI моделите."
+ },
+ "groq": {
+ "description": "Инженерният двигател LPU на Groq показва изключителни резултати в последните независими тестове на големи езикови модели (LLM), преосмисляйки стандартите за AI решения с невероятната си скорост и ефективност. Groq е еталон за мигновена скорост на изводите, демонстрирайки добро представяне в облачни внедрявания."
+ },
+ "minimax": {
+ "description": "MiniMax е компания за универсален изкуствен интелект, основана през 2021 г., която се стреми да създаде интелигентност заедно с потребителите. MiniMax е разработила различни универсални големи модели, включително текстови модели с трилион параметри, модели за глас и модели за изображения. Също така е пуснала приложения като Conch AI."
+ },
+ "mistral": {
+ "description": "Mistral предлага напреднали универсални, професионални и изследователски модели, широко използвани в сложни разсъждения, многоезични задачи, генериране на код и др. Чрез интерфейси за извикване на функции, потребителите могат да интегрират персонализирани функции за специфични приложения."
+ },
+ "moonshot": {
+ "description": "Moonshot е отворена платформа, представена от Beijing Dark Side Technology Co., Ltd., предлагаща множество модели за обработка на естествен език, с широко приложение, включително, но не само, създаване на съдържание, академични изследвания, интелигентни препоръки, медицинска диагностика и др., поддържаща обработка на дълги текстове и сложни генериращи задачи."
+ },
+ "novita": {
+ "description": "Novita AI е платформа, предлагаща API услуги за множество големи езикови модели и генериране на AI изображения, гъвкава, надеждна и икономически ефективна. Поддържа най-новите отворени модели, като Llama3 и Mistral, и предлага цялостни, потребителски приятелски и автоматично разширяеми API решения за разработка на генеративни AI приложения, подходящи за бързото развитие на AI стартъпи."
+ },
+ "ollama": {
+ "description": "Моделите, предоставени от Ollama, обхващат широк спектър от области, включително генериране на код, математически операции, многоезично обработване и диалогова интеракция, отговарящи на разнообразните нужди на предприятията и локализирани внедрявания."
+ },
+ "openai": {
+ "description": "OpenAI е водеща световна изследователска институция в областта на изкуствения интелект, чиито модели, като серията GPT, напредват в границите на обработката на естествен език. OpenAI се стреми да трансформира множество индустрии чрез иновации и ефективни AI решения. Продуктите им предлагат значителна производителност и икономичност, широко използвани в изследвания, бизнес и иновационни приложения."
+ },
+ "openrouter": {
+ "description": "OpenRouter е платформа за услуги, предлагаща интерфейси за множество авангардни големи модели, поддържащи OpenAI, Anthropic, LLaMA и много други, подходяща за разнообразни нужди от разработка и приложение. Потребителите могат гъвкаво да избират оптималния модел и цена в зависимост от собствените си нужди, подобрявайки AI опита."
+ },
+ "perplexity": {
+ "description": "Perplexity е водещ доставчик на модели за генериране на диалози, предлагащ множество напреднали модели Llama 3.1, поддържащи онлайн и офлайн приложения, особено подходящи за сложни задачи по обработка на естествен език."
+ },
+ "qwen": {
+ "description": "Qwen е самостоятелно разработен свръхголям езиков модел на Alibaba Cloud, с мощни способности за разбиране и генериране на естествен език. Може да отговаря на различни въпроси, да създава текстово съдържание, да изразява мнения и да пише код, играейки роля в множество области."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow се стреми да ускори AGI, за да бъде от полза за човечеството, повишавайки ефективността на мащабния AI чрез лесен за използване и икономически изгоден GenAI стек."
+ },
+ "spark": {
+ "description": "Spark на iFlytek предлага мощни AI способности в множество области и езици, използвайки напреднали технологии за обработка на естествен език, за изграждане на иновационни приложения, подходящи за интелигентни устройства, интелигентно здравеопазване, интелигентни финанси и други вертикални сцени."
+ },
+ "stepfun": {
+ "description": "StepFun предлага индустриално водещи мултимодални и сложни разсъждения, поддържащи разбиране на свръхдълги текстове и мощни функции за самостоятелно планиране на търсене."
+ },
+ "taichu": {
+ "description": "Институтът по автоматизация на Китайската академия на науките и Институтът по изкуствен интелект в Ухан представят ново поколение мултимодални големи модели, поддържащи многократни въпроси и отговори, текстово създаване, генериране на изображения, 3D разбиране, анализ на сигнали и др., с по-силни способности за познание, разбиране и създаване, предоставяйки ново взаимодействие."
+ },
+ "togetherai": {
+ "description": "Together AI се стреми да постигне водеща производителност чрез иновационни AI модели, предлагащи широки възможности за персонализация, включително бърза поддръжка за разширяване и интуитивни процеси на внедряване, отговарящи на разнообразните нужди на предприятията."
+ },
+ "upstage": {
+ "description": "Upstage се фокусира върху разработването на AI модели за различни бизнес нужди, включително Solar LLM и документен AI, с цел постигане на човешки универсален интелект (AGI). Създава прости диалогови агенти чрез Chat API и поддържа извикване на функции, превод, вграждане и специфични приложения."
+ },
+ "zeroone": {
+ "description": "01.AI се фокусира върху технологии за изкуствен интелект от ерата на AI 2.0, активно насърчавайки иновации и приложения на \"човек + изкуствен интелект\", използвайки мощни модели и напреднали AI технологии за повишаване на производителността на човека и реализиране на технологично овластяване."
+ },
+ "zhipu": {
+ "description": "Zhipu AI предлага отворена платформа за мултимодални и езикови модели, поддържащи широк спектър от AI приложения, включително обработка на текст, разбиране на изображения и помощ при програмиране."
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/ragEval.json b/DigitalHumanWeb/locales/bg-BG/ragEval.json
new file mode 100644
index 0000000..1162af3
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Създаване",
+ "description": {
+ "placeholder": "Описание на набора от данни (по избор)"
+ },
+ "name": {
+ "placeholder": "Име на набора от данни",
+ "required": "Моля, попълнете името на набора от данни"
+ },
+ "title": "Добавяне на набор от данни"
+ },
+ "dataset": {
+ "addNewButton": "Създаване на набор от данни",
+ "emptyGuide": "Текущият набор от данни е празен, моля, създайте нов набор от данни.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Импорт на данни"
+ },
+ "columns": {
+ "actions": "Действия",
+ "ideal": {
+ "title": "Очакван отговор"
+ },
+ "question": {
+ "title": "Въпрос"
+ },
+ "referenceFiles": {
+ "title": "Референтни файлове"
+ }
+ },
+ "notSelected": "Моля, изберете набор от данни отляво",
+ "title": "Детайли на набора от данни"
+ },
+ "title": "Набор от данни"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Създаване",
+ "datasetId": {
+ "placeholder": "Моля, изберете вашия набор от данни за оценка",
+ "required": "Моля, изберете набор от данни за оценка"
+ },
+ "description": {
+ "placeholder": "Описание на задачата за оценка (по избор)"
+ },
+ "name": {
+ "placeholder": "Име на задачата за оценка",
+ "required": "Моля, попълнете името на задачата за оценка"
+ },
+ "title": "Добавяне на задача за оценка"
+ },
+ "addNewButton": "Създаване на оценка",
+ "emptyGuide": "Текущата задача за оценка е празна, започнете да създавате оценка.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Проверка на статуса",
+ "confirmDelete": "Наистина ли искате да изтриете тази оценка?",
+ "confirmRun": "Наистина ли искате да стартирате? След стартиране, задачата за оценка ще се изпълнява асинхронно на заден план, затварянето на страницата няма да повлияе на изпълнението на асинхронната задача.",
+ "downloadRecords": "Изтегляне на оценки",
+ "retry": "Опитай отново",
+ "run": "Стартиране",
+ "title": "Действия"
+ },
+ "datasetId": {
+ "title": "Набор от данни"
+ },
+ "name": {
+ "title": "Име на задачата за оценка"
+ },
+ "records": {
+ "title": "Брой оценки"
+ },
+ "referenceFiles": {
+ "title": "Референтни файлове"
+ },
+ "status": {
+ "error": "Грешка при изпълнение",
+ "pending": "В очакване на изпълнение",
+ "processing": "Изпълнява се",
+ "success": "Успешно изпълнение",
+ "title": "Статус"
+ }
+ },
+ "title": "Списък с задачи за оценка"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/setting.json b/DigitalHumanWeb/locales/bg-BG/setting.json
new file mode 100644
index 0000000..65b74e6
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "Относно"
+ },
+ "agentTab": {
+ "chat": "Предпочитания за чат",
+ "meta": "Информация за асистента",
+ "modal": "Настройки на модела",
+ "plugin": "Настройки на добавката",
+ "prompt": "Настройки на ролята",
+ "tts": "Гласова услуга"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "Чрез избора на изпращане на телеметрични данни, можете да ни помогнете да подобрим общото потребителско изживяване на {{appName}}",
+ "title": "Изпращане на анонимни данни за използване"
+ },
+ "title": "Анализи"
+ },
+ "danger": {
+ "clear": {
+ "action": "Изчисти сега",
+ "confirm": "Потвърдете изчистването на всички данни от чата?",
+ "desc": "Това ще изчисти всички данни от сесията, включително агент, файлове, съобщения, плъгини и др.",
+ "success": "Всички съобщения от сесията са изчистени",
+ "title": "Изчисти всички съобщения от сесията"
+ },
+ "reset": {
+ "action": "Нулирай сега",
+ "confirm": "Потвърдете нулирането на всички настройки?",
+ "currentVersion": "Текуща версия",
+ "desc": "Нулирайте всички настройки до стойностите по подразбиране",
+ "success": "Всички настройки са нулирани успешно",
+ "title": "Нулиране на всички настройки"
+ }
+ },
+ "header": {
+ "desc": "Предпочитания и настройки на модела.",
+ "global": "Глобални настройки",
+ "session": "Настройки на сесията",
+ "sessionDesc": "Задаване на роля и предпочитания за сесия.",
+ "sessionWithName": "Настройки на сесията · {{name}}",
+ "title": "Настройки"
+ },
+ "llm": {
+ "aesGcm": "Вашият ключ и адрес на агента ще бъдат криптирани с алгоритъма за криптиране <1>AES-GCM</1>",
+ "apiKey": {
+ "desc": "Моля, въведете вашия {{name}} API ключ",
+ "placeholder": "{{name}} API ключ",
+ "title": "API ключ"
+ },
+ "checker": {
+ "button": "Провери",
+ "desc": "Проверете дали API ключът и адресът на прокси сървъра са попълнени правилно",
+ "pass": "Проверката е успешна",
+ "title": "Проверка на свързаността"
+ },
+ "customModelCards": {
+ "addNew": "Създайте и добавете модел {{id}}",
+ "config": "Конфигуриране на модела",
+ "confirmDelete": "Ще бъде изтрит този персонализиран модел и няма да може да бъде възстановен. Моля, бъдете внимателни.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Полето, използвано за искане на реално име на разгърнатия модел в Azure OpenAI",
+ "placeholder": "Моля, въведете името на разгърнатия модел в Azure",
+ "title": "Име на разгърнатия модел"
+ },
+ "displayName": {
+ "placeholder": "Моля, въведете името за показване на модела, като например ChatGPT, GPT-4 и други",
+ "title": "Име за показване на модела"
+ },
+ "files": {
+ "extra": "Текущата функция за качване на файлове е само един хак, предназначен за лични опити. Пълната функционалност за качване на файлове ще бъде налична в бъдеще.",
+ "title": "Поддръжка на качване на файлове"
+ },
+ "functionCall": {
+ "extra": "Тази конфигурация ще активира само функцията за извикване на функции в приложението. Поддръжката на извиквания на функции зависи изцяло от самия модел, моля, тествайте наличността на функцията за извикване на функции на този модел.",
+ "title": "Поддръжка на извикване на функции"
+ },
+ "id": {
+ "extra": "Ще бъде използван като етикет на модела",
+ "placeholder": "Моля, въведете идентификатор на модела, като например gpt-4-turbo-preview или claude-2.1",
+ "title": "Идентификатор на модела"
+ },
+ "modalTitle": "Конфигурация на персонализиран модел",
+ "tokens": {
+ "title": "Максимален брой токени",
+ "unlimited": "неограничен"
+ },
+ "vision": {
+ "extra": "Тази конфигурация ще активира само настройките за качване на изображения в приложението. Поддръжката на разпознаване зависи изцяло от самия модел, моля, тествайте наличността на визуалната разпознаваемост на този модел.",
+ "title": "Поддръжка на разпознаване на изображения"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "Режимът на заявка от клиента стартира заявката директно от браузъра, което може да увеличи скоростта на отговора",
+ "title": "Използване на режим на заявка от клиента"
+ },
+ "fetcher": {
+ "fetch": "Изтегляне на списъка с модели",
+ "fetching": "Изтегляне на списъка с модели...",
+ "latestTime": "Последно актуализирано: {{time}}",
+ "noLatestTime": "В момента няма наличен списък"
+ },
+ "helpDoc": "Настройки за документация",
+ "modelList": {
+ "desc": "Изберете модел, който да се показва по време на разговор. Избраният модел ще бъде показан в списъка с модели.",
+ "placeholder": "Моля, изберете модел от списъка",
+ "title": "Списък с модели",
+ "total": "Общо {{count}} налични модела"
+ },
+ "proxyUrl": {
+ "desc": "Включващ адреса по подразбиране, трябва да включва http(s)://",
+ "title": "Адрес на API прокси"
+ },
+ "waitingForMore": "Още модели са <1>планирани да бъдат добавени</1>, очаквайте"
+ },
+ "plugin": {
+ "addTooltip": "Персонализиран плъгин",
+ "clearDeprecated": "Премахване на остарели плъгини",
+ "empty": "Все още няма инсталирани плъгини, не се колебайте да разгледате <1>магазина за плъгини</1>",
+ "installStatus": {
+ "deprecated": "Деинсталиран"
+ },
+ "settings": {
+ "hint": "Моля, попълнете следните конфигурации въз основа на описанието",
+ "title": "Конфигурация на плъгина {{id}}",
+ "tooltip": "Конфигурация на плъгина"
+ },
+ "store": "Магазин за плъгини"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Аватар"
+ },
+ "backgroundColor": {
+ "title": "Цвят на фона"
+ },
+ "description": {
+ "placeholder": "Въведете описание на агента",
+ "title": "Описание на агента"
+ },
+ "name": {
+ "placeholder": "Въведете име на агента",
+ "title": "Име"
+ },
+ "prompt": {
+ "placeholder": "Въведете дума за подкана за роля",
+ "title": "Настройка на ролята"
+ },
+ "tag": {
+ "placeholder": "Въведете таг",
+ "title": "Таг"
+ },
+ "title": "Информация за агента"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Автоматично създайте тема, когато текущият брой съобщения надвиши тази стойност",
+ "title": "Праг на съобщенията"
+ },
+ "chatStyleType": {
+ "title": "Стил на прозореца за чат",
+ "type": {
+ "chat": "Режим на разговор",
+ "docs": "Режим на документ"
+ }
+ },
+ "compressThreshold": {
+ "desc": "Когато некомпресираните съобщения в историята надвишат тази стойност, ще се приложи компресия",
+ "title": "Праг на компресия на дължината на съобщенията в историята"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Дали да се създава автоматично тема по време на разговора, ефективно само във временни теми",
+ "title": "Автоматично създаване на тема"
+ },
+ "enableCompressThreshold": {
+ "title": "Активиране на прага на компресия на дължината на съобщенията в историята"
+ },
+ "enableHistoryCount": {
+ "alias": "Неограничен",
+ "limited": "Включете само {{number}} съобщения от разговора",
+ "setlimited": "Задайте ограничение за използване на брой исторически съобщения",
+ "title": "Ограничаване на броя на съобщенията в историята",
+ "unlimited": "Неограничен брой съобщения в историята"
+ },
+ "historyCount": {
+ "desc": "Брой исторически съобщения, носени с всяка заявка",
+ "title": "Брой прикачени съобщения в историята"
+ },
+ "inputTemplate": {
+ "desc": "Последното съобщение на потребителя ще бъде попълнено в този шаблон",
+ "placeholder": "Шаблонът за предварителна обработка {{text}} ще бъде заменен с информация за въвеждане в реално време",
+ "title": "Предварителна обработка на потребителския вход"
+ },
+ "title": "Настройки на чата"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Активиране на ограничението за максимален брой токени"
+ },
+ "frequencyPenalty": {
+ "desc": "Колкото по-висока е стойността, толкова по-вероятно е да се намалят повтарящите се думи",
+ "title": "Наказание за честота"
+ },
+ "maxTokens": {
+ "desc": "Максималният брой токени, използвани за всяко взаимодействие",
+ "title": "Ограничение за максимален брой токени"
+ },
+ "model": {
+ "desc": "{{provider}} модел",
+ "title": "Модел"
+ },
+ "presencePenalty": {
+ "desc": "Колкото по-висока е стойността, толкова по-вероятно е да се разшири до нови теми",
+ "title": "Свежест на темата"
+ },
+ "temperature": {
+ "desc": "Колкото по-висока е стойността, толкова по-случаен е отговорът",
+ "title": "Случайност",
+ "titleWithValue": "Случайност {{value}}"
+ },
+ "title": "Настройки на модела",
+ "topP": {
+ "desc": "Подобно на случайността, но не се променя заедно със случайността",
+ "title": "Top P вземане на проби"
+ }
+ },
+ "settingPlugin": {
+ "title": "Списък с плъгини"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "Достъпът с криптиране е активиран от администратора",
+ "placeholder": "Въведете парола за достъп",
+ "title": "Парола за достъп"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Влязъл",
+ "title": "Информация за акаунта"
+ },
+ "signin": {
+ "action": "Вход",
+ "desc": "Влезте с SSO, за да отключите приложението",
+ "title": "Влезте в акаунта си"
+ },
+ "signout": {
+ "action": "Изход",
+ "confirm": "Потвърдете излизането?",
+ "success": "Изходът е успешен"
+ }
+ },
+ "title": "Системни настройки"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "Модел за преобразуване на реч в текст на OpenAI",
+ "title": "OpenAI",
+ "ttsModel": "Модел за преобразуване на текст в реч на OpenAI"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Ако е затворено, ще се показват само гласове на текущия език",
+ "title": "Показване на всички локални гласове"
+ },
+ "stt": "Настройки за разпознаване на реч",
+ "sttAutoStop": {
+ "desc": "Когато е затворено, разпознаването на реч няма да приключи автоматично и изисква ръчно щракване, за да спре",
+ "title": "Автоматично спиране на разпознаването на реч"
+ },
+ "sttLocale": {
+ "desc": "Езикът на въвеждане на реч, тази опция може да подобри точността на разпознаването на реч",
+ "title": "Език за разпознаване на реч"
+ },
+ "sttService": {
+ "desc": "Където „браузър“ е родната услуга за разпознаване на реч на браузъра",
+ "title": "Услуга за разпознаване на реч"
+ },
+ "title": "Услуга за реч",
+ "tts": "Настройки за преобразуване на текст в реч",
+ "ttsService": {
+ "desc": "Ако използвате услугата за преобразуване на текст в реч на OpenAI, уверете се, че услугата на модела OpenAI е активирана",
+ "title": "Услуга за преобразуване на текст в реч"
+ },
+ "voice": {
+ "desc": "Изберете глас за текущия агент, различните TTS услуги поддържат различни гласове",
+ "preview": "Преглед на гласа",
+ "title": "Глас за преобразуване на текст в реч"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Аватар"
+ },
+ "fontSize": {
+ "desc": "Размер на шрифта за съдържанието на чата",
+ "marks": {
+ "normal": "Нормален"
+ },
+ "title": "Размер на шрифта"
+ },
+ "lang": {
+ "autoMode": "Следвай системата",
+ "title": "Език"
+ },
+ "neutralColor": {
+ "desc": "Персонализиран неутрален цвят за различни цветови тенденции",
+ "title": "Неутрален цвят"
+ },
+ "primaryColor": {
+ "desc": "Персонализиран основен цвят на темата",
+ "title": "Основен цвят"
+ },
+ "themeMode": {
+ "auto": "Автоматично",
+ "dark": "Тъмен",
+ "light": "Светъл",
+ "title": "Тема"
+ },
+ "title": "Настройки на темата"
+ },
+ "submitAgentModal": {
+ "button": "Изпрати агент",
+ "identifier": "Идентификатор на агент",
+ "metaMiss": "Моля, попълнете информацията за агента, преди да го изпратите. Тя трябва да включва име, описание и тагове",
+ "placeholder": "Въведете уникален идентификатор за агента, напр. web-development",
+ "tooltips": "Споделяне на пазара на агенти"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Добавете име за лесна идентификация",
+ "placeholder": "Въведете име на устройството",
+ "title": "Име на устройството"
+ },
+ "title": "Информация за устройството",
+ "unknownBrowser": "Неизвестен браузър",
+ "unknownOS": "Неизвестна операционна система"
+ },
+ "warning": {
+ "tip": "След дълъг период на обществено тестване, синхронизацията на WebRTC може да не бъде стабилна за общите изисквания за синхронизация на данни. Моля, <1>инсталирайте сигналния сървър</1> и го използвайте след това."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC ще използва това име, за да създаде канал за синхронизиране. Уверете се, че името на канала е уникално.",
+ "placeholder": "Въведете име на канал за синхронизиране",
+ "shuffle": "Генерирай произволно",
+ "title": "Име на канал за синхронизиране"
+ },
+ "channelPassword": {
+ "desc": "Добавете парола, за да осигурите поверителност на канала. Само устройства с правилната парола могат да се присъединят към канала.",
+ "placeholder": "Въведете парола за канал за синхронизиране",
+ "title": "Парола за канал за синхронизиране"
+ },
+ "desc": "Комуникацията на данни в реално време между партньори изисква всички устройства да бъдат онлайн за синхронизиране.",
+ "enabled": {
+ "invalid": "Моля, попълнете адреса на сигналния сървър и името на синхронизиращия канал, преди да го активирате.",
+ "title": "Активиране на синхронизиране"
+ },
+ "signaling": {
+ "desc": "WebRTC ще използва този адрес за синхронизация",
+ "placeholder": "Моля, въведете адреса на сигналния сървър",
+ "title": "Сигнален сървър"
+ },
+ "title": "WebRTC синхронизиране"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Модел за генериране на помощни метаданни",
+ "modelDesc": "Модел, определен за генериране на име, описание, профилна снимка и етикети на помощник",
+ "title": "Автоматично генериране на информация за помощник"
+ },
+ "queryRewrite": {
+ "label": "Модел за пренаписване на запитвания",
+ "modelDesc": "Определя модел за оптимизиране на запитванията на потребителите",
+ "title": "База знания"
+ },
+ "title": "Системен асистент",
+ "topic": {
+ "label": "Модел за именуване на теми",
+ "modelDesc": "Модел, определен за автоматично преименуване на теми",
+ "title": "Автоматично именуване на теми"
+ },
+ "translation": {
+ "label": "Модел за превод",
+ "modelDesc": "Определя модела, използван за превод",
+ "title": "Настройки на преводния асистент"
+ }
+ },
+ "tab": {
+ "about": "Относно",
+ "agent": "Агент по подразбиране",
+ "common": "Общи настройки",
+ "experiment": "Експеримент",
+ "llm": "Езиков модел",
+ "sync": "Синхронизиране в облака",
+ "system-agent": "Системен асистент",
+ "tts": "Текст към реч"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Вградени"
+ },
+ "disabled": "Текущият модел не поддържа извиквания на функции и не може да използва плъгина",
+ "plugins": {
+ "enabled": "Активирани: {{num}}",
+ "groupName": "Плъгини",
+ "noEnabled": "Няма активирани плъгини",
+ "store": "Магазин за плъгини"
+ },
+ "title": "Инструменти за разширение"
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/tool.json b/DigitalHumanWeb/locales/bg-BG/tool.json
new file mode 100644
index 0000000..0f7f661
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Автоматично генериране",
+ "downloading": "Връзките към изображенията, генерирани от DALL·E3, са валидни само за 1 час, кеширане на изображенията локално...",
+ "generate": "Генерирай",
+ "generating": "Генериране...",
+ "images": "Изображения:",
+ "prompt": "подсказка"
+ }
+}
diff --git a/DigitalHumanWeb/locales/bg-BG/welcome.json b/DigitalHumanWeb/locales/bg-BG/welcome.json
new file mode 100644
index 0000000..30a0add
--- /dev/null
+++ b/DigitalHumanWeb/locales/bg-BG/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Импортирай конфигурация",
+ "market": "Пазар",
+ "start": "Започни сега"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Смени",
+ "title": "Препоръчване на нови асистенти:"
+ },
+ "defaultMessage": "Аз съм вашият личен интелигентен асистент {{appName}}. Как мога да ви помогна сега?\nАко имате нужда от по-професионален или персонализиран асистент, можете да кликнете на `+`, за да създадете персонализиран асистент.",
+ "defaultMessageWithoutCreate": "Аз съм вашият личен интелигентен асистент {{appName}}. Как мога да ви помогна сега?",
+ "qa": {
+ "q01": "Какво е LobeHub?",
+ "q02": "Какво е {{appName}}?",
+ "q03": "Има ли общностна поддръжка за {{appName}}?",
+ "q04": "Какви функции поддържа {{appName}}?",
+ "q05": "Как да инсталирам и използвам {{appName}}?",
+ "q06": "Каква е ценовата политика на {{appName}}?",
+ "q07": "Дали {{appName}} е безплатен?",
+ "q08": "Има ли облачна версия на услугата?",
+ "q09": "Поддържа ли локални езикови модели?",
+ "q10": "Поддържа ли разпознаване и генериране на изображения?",
+ "q11": "Поддържа ли синтез на реч и разпознаване на реч?",
+ "q12": "Поддържа ли система за плъгини?",
+ "q13": "Има ли собствен пазар за получаване на GPTs?",
+ "q14": "Поддържа ли различни доставчици на AI услуги?",
+ "q15": "Какво да направя, ако срещна проблеми при използването?"
+ },
+ "questions": {
+ "moreBtn": "Научи повече",
+ "title": "Често задавани въпроси:"
+ },
+ "welcome": {
+ "afternoon": "Добър ден",
+ "morning": "Добро утро",
+ "night": "Добър вечер",
+ "noon": "Добър ден"
+ }
+ },
+ "header": "Добре дошли",
+ "pickAgent": "Или изберете от следните шаблони на агенти",
+ "skip": "Пропусни създаването",
+ "slogan": {
+ "desc1": "Проправяйки път на новата ера на мислене и създаване. Създаден за вас, Супер индивида.",
+ "desc2": "Създайте първия си агент и нека започнем~",
+ "title": "Отключете свръхсилата на мозъка си"
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/auth.json b/DigitalHumanWeb/locales/de-DE/auth.json
new file mode 100644
index 0000000..7d6a7d8
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Anmelden",
+ "loginOrSignup": "Anmelden / Registrieren",
+ "profile": "Profil",
+ "security": "Sicherheit",
+ "signout": "Abmelden",
+ "signup": "Registrieren"
+}
diff --git a/DigitalHumanWeb/locales/de-DE/chat.json b/DigitalHumanWeb/locales/de-DE/chat.json
new file mode 100644
index 0000000..891721b
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Modell"
+ },
+ "agentDefaultMessage": "Hallo, ich bin **{{name}}**. Du kannst sofort mit mir sprechen oder zu den [Assistenteneinstellungen]({{url}}) gehen, um meine Informationen zu vervollständigen.",
+ "agentDefaultMessageWithSystemRole": "Hallo, ich bin **{{name}}**, {{systemRole}}. Lass uns chatten!",
+ "agentDefaultMessageWithoutEdit": "Hallo, ich bin **{{name}}**. Lassen Sie uns ins Gespräch kommen!",
+ "agents": "Assistent",
+ "artifact": {
+ "generating": "Wird generiert",
+ "thinking": "Denken",
+ "thought": "Denkprozess",
+ "unknownTitle": "Unbenanntes Werk"
+ },
+ "backToBottom": "Zurück zum Ende",
+ "chatList": {
+ "longMessageDetail": "Details anzeigen"
+ },
+ "clearCurrentMessages": "Aktuelle Nachrichten löschen",
+ "confirmClearCurrentMessages": "Möchtest du wirklich die aktuellen Nachrichten löschen? Diese Aktion kann nicht rückgängig gemacht werden.",
+ "confirmRemoveSessionItemAlert": "Möchtest du diesen Assistenten wirklich löschen? Diese Aktion kann nicht rückgängig gemacht werden.",
+ "confirmRemoveSessionSuccess": "Assistent wurde erfolgreich entfernt",
+ "defaultAgent": "Standardassistent",
+ "defaultList": "Standardliste",
+ "defaultSession": "Standardassistent",
+ "duplicateSession": {
+ "loading": "Kopieren läuft...",
+ "success": "Kopieren erfolgreich",
+ "title": "{{title}} Kopie"
+ },
+ "duplicateTitle": "{{title}} Kopie",
+ "emptyAgent": "Kein Assistent verfügbar",
+ "historyRange": "Verlaufsbereich",
+ "inbox": {
+ "desc": "Aktiviere das Gehirncluster und entfache den Funken des Denkens. Dein intelligenter Assistent, der mit dir über alles kommuniziert.",
+ "title": "Lass uns plaudern"
+ },
+ "input": {
+ "addAi": "Fügen Sie eine AI-Nachricht hinzu",
+ "addUser": "Fügen Sie eine Benutzer-Nachricht hinzu",
+ "more": "Mehr",
+ "send": "Senden",
+ "sendWithCmdEnter": "Mit {{meta}} + Eingabetaste senden",
+ "sendWithEnter": "Mit Eingabetaste senden",
+ "stop": "Stoppen",
+ "warp": "Zeilenumbruch"
+ },
+ "knowledgeBase": {
+ "all": "Alle Inhalte",
+ "allFiles": "Alle Dateien",
+ "allKnowledgeBases": "Alle Wissensdatenbanken",
+ "disabled": "Der aktuelle Bereitstellungsmodus unterstützt keine Dialoge mit der Wissensdatenbank. Bitte wechseln Sie zur Bereitstellung mit einer Serverdatenbank oder nutzen Sie den {{cloud}}-Dienst.",
+ "library": {
+ "action": {
+ "add": "Hinzufügen",
+ "detail": "Details",
+ "remove": "Entfernen"
+ },
+ "title": "Datei/Wissensdatenbank"
+ },
+ "relativeFilesOrKnowledgeBases": "Verwandte Dateien/Wissensdatenbanken",
+ "title": "Wissensdatenbank",
+ "uploadGuide": "Hochgeladene Dateien können in der „Wissensdatenbank“ eingesehen werden.",
+ "viewMore": "Mehr anzeigen"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Löschen und neu generieren",
+ "regenerate": "Neu generieren"
+ },
+ "newAgent": "Neuer Assistent",
+ "pin": "Anheften",
+ "pinOff": "Anheften aufheben",
+ "rag": {
+ "referenceChunks": "Referenzstücke",
+ "userQuery": {
+ "actions": {
+ "delete": "Abfrage löschen",
+ "regenerate": "Abfrage neu generieren"
+ }
+ }
+ },
+ "regenerate": "Neu generieren",
+ "roleAndArchive": "Rolle und Archiv",
+ "searchAgentPlaceholder": "Suchassistent...",
+ "sendPlaceholder": "Chat-Nachricht eingeben...",
+ "sessionGroup": {
+ "config": "Gruppenkonfiguration",
+ "confirmRemoveGroupAlert": "Die Gruppe wird bald gelöscht. Nach dem Löschen werden die Assistenten in die Standardliste verschoben. Bitte bestätigen Sie Ihre Aktion.",
+ "createAgentSuccess": "Assistent erfolgreich erstellt",
+ "createGroup": "Neue Gruppe erstellen",
+ "createSuccess": "Erstellung erfolgreich",
+ "creatingAgent": "Assistent wird erstellt...",
+ "inputPlaceholder": "Geben Sie den Gruppennamen ein...",
+ "moveGroup": "In Gruppe verschieben",
+ "newGroup": "Neue Gruppe",
+ "rename": "Gruppe umbenennen",
+ "renameSuccess": "Umbenennung erfolgreich",
+ "sortSuccess": "Sortierung erfolgreich aktualisiert",
+ "sorting": "Gruppensortierung wird aktualisiert...",
+ "tooLong": "Gruppenname muss zwischen 1 und 20 Zeichen lang sein"
+ },
+ "shareModal": {
+ "download": "Screenshot herunterladen",
+ "imageType": "Bildformat",
+ "screenshot": "Screenshot",
+ "settings": "Exporteinstellungen",
+ "shareToShareGPT": "ShareGPT-Link generieren",
+ "withBackground": "Mit Hintergrundbild",
+ "withFooter": "Mit Fußzeile",
+ "withPluginInfo": "Mit Plugin-Informationen",
+ "withSystemRole": "Mit Assistentenrolle"
+ },
+ "stt": {
+ "action": "Spracheingabe",
+ "loading": "Erkenne...",
+ "prettifying": "Verschönern..."
+ },
+ "temp": "Temporär",
+ "tokenDetails": {
+ "chats": "Chats",
+ "rest": "Verbleibend",
+ "systemRole": "Systemrolle",
+ "title": "Kontextdetails",
+ "tools": "Werkzeuge",
+ "total": "Insgesamt",
+ "used": "Verwendet"
+ },
+ "tokenTag": {
+ "overload": "Überlastung",
+ "remained": "Verbleibend",
+ "used": "Verwendet"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Intelligent umbenennen",
+ "duplicate": "Kopie erstellen",
+ "export": "Thema exportieren"
+ },
+ "checkOpenNewTopic": "Soll ein neues Thema eröffnet werden?",
+ "checkSaveCurrentMessages": "Möchten Sie die aktuelle Konversation als Thema speichern?",
+ "confirmRemoveAll": "Möchtest du wirklich alle Themen löschen? Diese Aktion kann nicht rückgängig gemacht werden.",
+ "confirmRemoveTopic": "Möchtest du dieses Thema wirklich löschen? Diese Aktion kann nicht rückgängig gemacht werden.",
+ "confirmRemoveUnstarred": "Möchtest du die nicht markierten Themen wirklich löschen? Diese Aktion kann nicht rückgängig gemacht werden.",
+ "defaultTitle": "Standardthema",
+ "duplicateLoading": "Thema wird kopiert...",
+ "duplicateSuccess": "Thema erfolgreich kopiert",
+ "guide": {
+ "desc": "Klicken Sie auf die Schaltfläche links, um das aktuelle Gespräch als historisches Thema zu speichern und eine neue Gesprächsrunde zu starten",
+ "title": "Themenliste"
+ },
+ "openNewTopic": "Neues Thema öffnen",
+ "removeAll": "Alle Themen löschen",
+ "removeUnstarred": "Nicht markierte Themen löschen",
+ "saveCurrentMessages": "Aktuelle Unterhaltung als Thema speichern",
+ "searchPlaceholder": "Themen durchsuchen...",
+ "title": "Themenliste"
+ },
+ "translate": {
+ "action": "Übersetzen",
+ "clear": "Übersetzung löschen"
+ },
+ "tts": {
+ "action": "Sprachausgabe",
+ "clear": "Sprachausgabe löschen"
+ },
+ "updateAgent": "Assistenteninformationen aktualisieren",
+ "upload": {
+ "action": {
+ "fileUpload": "Datei hochladen",
+ "folderUpload": "Ordner hochladen",
+ "imageDisabled": "Das aktuelle Modell unterstützt keine visuelle Erkennung. Bitte wechseln Sie das Modell, um diese Funktion zu nutzen.",
+ "imageUpload": "Bild hochladen",
+ "tooltip": "Hochladen"
+ },
+ "clientMode": {
+ "actionFiletip": "Datei hochladen",
+ "actionTooltip": "Hochladen",
+ "disabled": "Das aktuelle Modell unterstützt keine visuelle Erkennung und Dateianalyse. Bitte wechseln Sie das Modell, um diese Funktionen zu nutzen."
+ },
+ "preview": {
+ "prepareTasks": "Vorbereitung der Teile...",
+ "status": {
+ "pending": "Vorbereitung des Uploads...",
+ "processing": "Datei wird verarbeitet..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/clerk.json b/DigitalHumanWeb/locales/de-DE/clerk.json
new file mode 100644
index 0000000..d1b54ae
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Zurück",
+ "badge__default": "Standard",
+ "badge__otherImpersonatorDevice": "Anderes Impersonator-Gerät",
+ "badge__primary": "Primär",
+ "badge__requiresAction": "Erfordert Handlung",
+ "badge__thisDevice": "Dieses Gerät",
+ "badge__unverified": "Nicht verifiziert",
+ "badge__userDevice": "Benutzergerät",
+ "badge__you": "Du",
+ "createOrganization": {
+ "formButtonSubmit": "Organisation erstellen",
+ "invitePage": {
+ "formButtonReset": "Überspringen"
+ },
+ "title": "Organisation erstellen"
+ },
+ "dates": {
+ "lastDay": "Gestern um {{ date | timeString('de-DE') }}",
+ "next6Days": "{{ date | weekday('de-DE', 'long') }} um {{ date | timeString('de-DE') }}",
+ "nextDay": "Morgen um {{ date | timeString('de-DE') }}",
+ "numeric": "{{ date | numeric('de-DE') }}",
+ "previous6Days": "Letzten {{ date | weekday('de-DE', 'long') }} um {{ date | timeString('de-DE') }}",
+ "sameDay": "Heute um {{ date | timeString('de-DE') }}"
+ },
+ "dividerText": "oder",
+ "footerActionLink__useAnotherMethod": "Andere Methode verwenden",
+ "footerPageLink__help": "Hilfe",
+ "footerPageLink__privacy": "Datenschutz",
+ "footerPageLink__terms": "Nutzungsbedingungen",
+ "formButtonPrimary": "Weiter",
+ "formButtonPrimary__verify": "Überprüfen",
+ "formFieldAction__forgotPassword": "Passwort vergessen?",
+ "formFieldError__matchingPasswords": "Passwörter stimmen überein.",
+ "formFieldError__notMatchingPasswords": "Passwörter stimmen nicht überein.",
+ "formFieldError__verificationLinkExpired": "Der Verifizierungslink ist abgelaufen. Bitte fordern Sie einen neuen Link an.",
+ "formFieldHintText__optional": "Optional",
+ "formFieldHintText__slug": "Ein Slug ist eine menschenlesbare ID, die eindeutig sein muss. Wird oft in URLs verwendet.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Konto löschen",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "beispiel@email.com, beispiel2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "meine-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Automatische Einladungen für diese Domain aktivieren",
+ "formFieldLabel__backupCode": "Backup-Code",
+ "formFieldLabel__confirmDeletion": "Bestätigung",
+ "formFieldLabel__confirmPassword": "Passwort bestätigen",
+ "formFieldLabel__currentPassword": "Aktuelles Passwort",
+ "formFieldLabel__emailAddress": "E-Mail-Adresse",
+ "formFieldLabel__emailAddress_username": "E-Mail-Adresse oder Benutzername",
+ "formFieldLabel__emailAddresses": "E-Mail-Adressen",
+ "formFieldLabel__firstName": "Vorname",
+ "formFieldLabel__lastName": "Nachname",
+ "formFieldLabel__newPassword": "Neues Passwort",
+ "formFieldLabel__organizationDomain": "Domain",
+ "formFieldLabel__organizationDomainDeletePending": "Ausstehende Einladungen und Vorschläge löschen",
+ "formFieldLabel__organizationDomainEmailAddress": "Verifizierungs-E-Mail-Adresse",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Geben Sie eine E-Mail-Adresse unter dieser Domain ein, um einen Code zu erhalten und diese Domain zu verifizieren.",
+ "formFieldLabel__organizationName": "Name",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Name des Passkeys",
+ "formFieldLabel__password": "Passwort",
+ "formFieldLabel__phoneNumber": "Telefonnummer",
+ "formFieldLabel__role": "Rolle",
+ "formFieldLabel__signOutOfOtherSessions": "Abmelden von allen anderen Geräten",
+ "formFieldLabel__username": "Benutzername",
+ "impersonationFab": {
+ "action__signOut": "Abmelden",
+ "title": "Angemeldet als {{identifier}}"
+ },
+ "locale": "de-DE",
+ "maintenanceMode": "Wir führen derzeit Wartungsarbeiten durch, aber keine Sorge, es sollte nicht länger als ein paar Minuten dauern.",
+ "membershipRole__admin": "Admin",
+ "membershipRole__basicMember": "Mitglied",
+ "membershipRole__guestMember": "Gast",
+ "organizationList": {
+ "action__createOrganization": "Organisation erstellen",
+ "action__invitationAccept": "Beitreten",
+ "action__suggestionsAccept": "Anfrage zum Beitritt",
+ "createOrganization": "Organisation erstellen",
+ "invitationAcceptedLabel": "Beigetreten",
+ "subtitle": "um mit {{applicationName}} fortzufahren",
+ "suggestionsAcceptedLabel": "Ausstehende Genehmigung",
+ "title": "Wähle einen Account",
+ "titleWithoutPersonal": "Wähle eine Organisation"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Automatische Einladungen",
+ "badge__automaticSuggestion": "Automatische Vorschläge",
+ "badge__manualInvitation": "Keine automatische Einschreibung",
+ "badge__unverified": "Nicht verifiziert",
+ "createDomainPage": {
+ "subtitle": "Füge die Domain zur Verifizierung hinzu. Benutzer mit E-Mail-Adressen in dieser Domain können der Organisation automatisch beitreten oder eine Beitrittsanfrage stellen.",
+ "title": "Domain hinzufügen"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "Die Einladungen konnten nicht gesendet werden. Es gibt bereits ausstehende Einladungen für die folgenden E-Mail-Adressen: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Einladungen senden",
+ "selectDropdown__role": "Rolle auswählen",
+ "subtitle": "Gib eine oder mehrere E-Mail-Adressen ein oder füge sie ein, getrennt durch Leerzeichen oder Kommas.",
+ "successMessage": "Einladungen erfolgreich gesendet",
+ "title": "Neue Mitglieder einladen"
+ },
+ "membersPage": {
+ "action__invite": "Einladen",
+ "activeMembersTab": {
+ "menuAction__remove": "Mitglied entfernen",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Beigetreten",
+ "tableHeader__role": "Rolle",
+ "tableHeader__user": "Benutzer"
+ },
+ "detailsTitle__emptyRow": "Keine Mitglieder zum Anzeigen",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Laden Sie Benutzer ein, indem Sie eine E-Mail-Domain mit Ihrer Organisation verbinden. Jeder, der sich mit einer passenden E-Mail-Domain anmeldet, kann jederzeit der Organisation beitreten.",
+ "headerTitle": "Automatische Einladungen",
+ "primaryButton": "Verifizierte Domains verwalten"
+ },
+ "table__emptyRow": "Keine Einladungen zum Anzeigen"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Einladung widerrufen",
+ "tableHeader__invited": "Eingeladen"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Benutzer, die sich mit einer passenden E-Mail-Domain anmelden, erhalten einen Vorschlag, um eine Beitrittsanfrage für Ihre Organisation zu stellen.",
+ "headerTitle": "Automatische Vorschläge",
+ "primaryButton": "Verifizierte Domains verwalten"
+ },
+ "menuAction__approve": "Genehmigen",
+ "menuAction__reject": "Abweisen",
+ "tableHeader__requested": "Zugriff angefragt",
+ "table__emptyRow": "Keine Anfragen zum Anzeigen"
+ },
+ "start": {
+ "headerTitle__invitations": "Einladungen",
+ "headerTitle__members": "Mitglieder",
+ "headerTitle__requests": "Anfragen"
+ }
+ },
+ "navbar": {
+ "description": "Verwalten Sie Ihre Organisation.",
+ "general": "Allgemein",
+ "members": "Mitglieder",
+ "title": "Organisation"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Geben Sie unten \"{{organizationName}}\" ein, um fortzufahren.",
+ "messageLine1": "Sind Sie sicher, dass Sie diese Organisation löschen möchten?",
+ "messageLine2": "Diese Aktion ist endgültig und nicht rückgängig zu machen.",
+ "successMessage": "Sie haben die Organisation gelöscht.",
+ "title": "Organisation löschen"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Geben Sie unten \"{{organizationName}}\" ein, um fortzufahren.",
+ "messageLine1": "Sind Sie sicher, dass Sie diese Organisation verlassen möchten? Sie verlieren den Zugriff auf diese Organisation und ihre Anwendungen.",
+ "messageLine2": "Diese Aktion ist endgültig und nicht rückgängig zu machen.",
+ "successMessage": "Sie haben die Organisation verlassen.",
+ "title": "Organisation verlassen"
+ },
+ "title": "Gefahr"
+ },
+ "domainSection": {
+ "menuAction__manage": "Verwalten",
+ "menuAction__remove": "Löschen",
+ "menuAction__verify": "Verifizieren",
+ "primaryButton": "Domain hinzufügen",
+ "subtitle": "Ermöglichen Sie Benutzern, automatisch der Organisation beizutreten oder basierend auf einer verifizierten E-Mail-Domain eine Beitrittsanfrage zu stellen.",
+ "title": "Verifizierte Domains"
+ },
+ "successMessage": "Die Organisation wurde aktualisiert.",
+ "title": "Profil aktualisieren"
+ },
+ "removeDomainPage": {
+ "messageLine1": "Die E-Mail-Domain {{domain}} wird entfernt.",
+ "messageLine2": "Benutzer können sich nach diesem Schritt nicht mehr automatisch der Organisation anschließen.",
+ "successMessage": "{{domain}} wurde entfernt.",
+ "title": "Domain entfernen"
+ },
+ "start": {
+ "headerTitle__general": "Allgemein",
+ "headerTitle__members": "Mitglieder",
+ "profileSection": {
+ "primaryButton": "Profil aktualisieren",
+ "title": "Organisationsprofil",
+ "uploadAction__title": "Logo hochladen"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "Das Entfernen dieser Domain wird sich auf eingeladene Benutzer auswirken.",
+ "removeDomainActionLabel__remove": "Domain entfernen",
+ "removeDomainSubtitle": "Entfernen Sie diese Domain aus Ihren verifizierten Domains",
+ "removeDomainTitle": "Domain entfernen"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Benutzer werden automatisch eingeladen, der Organisation beizutreten, wenn sie sich anmelden und können jederzeit beitreten.",
+ "automaticInvitationOption__label": "Automatische Einladungen",
+ "automaticSuggestionOption__description": "Benutzer erhalten einen Vorschlag, um eine Beitrittsanfrage zu stellen, müssen jedoch von einem Admin genehmigt werden, bevor sie der Organisation beitreten können.",
+ "automaticSuggestionOption__label": "Automatische Vorschläge",
+ "calloutInfoLabel": "Die Änderung des Einschreibemodus betrifft nur neue Benutzer.",
+ "calloutInvitationCountLabel": "Ausstehende Einladungen an Benutzer gesendet: {{count}}",
+ "calloutSuggestionCountLabel": "Ausstehende Vorschläge an Benutzer gesendet: {{count}}",
+ "manualInvitationOption__description": "Benutzer können nur manuell zur Organisation eingeladen werden.",
+ "manualInvitationOption__label": "Keine automatische Einschreibung",
+ "subtitle": "Wählen Sie aus, wie Benutzer aus dieser Domain der Organisation beitreten können."
+ },
+ "start": {
+ "headerTitle__danger": "Gefahr",
+ "headerTitle__enrollment": "Einschreibemöglichkeiten"
+ },
+ "subtitle": "Die Domain {{domain}} ist jetzt verifiziert. Fahren Sie fort, indem Sie den Einschreibemodus auswählen.",
+ "title": "{{domain}} aktualisieren"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Geben Sie den Verifizierungscode ein, der an Ihre E-Mail-Adresse gesendet wurde.",
+ "formTitle": "Verifizierungscode",
+ "resendButton": "Code nicht erhalten? Erneut senden",
+ "subtitle": "Die Domain {{domainName}} muss per E-Mail verifiziert werden.",
+ "subtitleVerificationCodeScreen": "Ein Verifizierungscode wurde an {{emailAddress}} gesendet. Geben Sie den Code ein, um fortzufahren.",
+ "title": "Domain verifizieren"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Organisation erstellen",
+ "action__invitationAccept": "Beitreten",
+ "action__manageOrganization": "Verwalten",
+ "action__suggestionsAccept": "Anfrage zum Beitritt",
+ "notSelected": "Keine Organisation ausgewählt",
+ "personalWorkspace": "Persönliches Konto",
+ "suggestionsAcceptedLabel": "Ausstehende Genehmigung"
+ },
+ "paginationButton__next": "Weiter",
+ "paginationButton__previous": "Zurück",
+ "paginationRowText__displaying": "Anzeige",
+ "paginationRowText__of": "von",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Konto hinzufügen",
+ "action__signOutAll": "Aus allen Konten abmelden",
+ "subtitle": "Wählen Sie das Konto aus, mit dem Sie fortfahren möchten.",
+ "title": "Konto auswählen"
+ },
+ "alternativeMethods": {
+ "actionLink": "Hilfe erhalten",
+ "actionText": "Keins davon? ",
+ "blockButton__backupCode": "Backup-Code verwenden",
+ "blockButton__emailCode": "E-Mail-Code an {{identifier}} senden",
+ "blockButton__emailLink": "Link an {{identifier}} senden",
+ "blockButton__passkey": "Mit Ihrem Passkey anmelden",
+ "blockButton__password": "Mit Ihrem Passwort anmelden",
+ "blockButton__phoneCode": "SMS-Code an {{identifier}} senden",
+ "blockButton__totp": "Authenticator-App verwenden",
+ "getHelp": {
+ "blockButton__emailSupport": "E-Mail-Support",
+ "content": "Wenn Sie Probleme beim Anmelden haben, senden Sie uns eine E-Mail, und wir werden mit Ihnen zusammenarbeiten, um den Zugriff so schnell wie möglich wiederherzustellen.",
+ "title": "Hilfe erhalten"
+ },
+ "subtitle": "Probleme? Sie können eine dieser Methoden verwenden, um sich anzumelden.",
+ "title": "Eine andere Methode verwenden"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Ihr Backup-Code ist der, den Sie bei der Einrichtung der Zwei-Faktor-Authentifizierung erhalten haben.",
+ "title": "Backup-Code eingeben"
+ },
+ "emailCode": {
+ "formTitle": "Verifizierungscode",
+ "resendButton": "Code nicht erhalten? Erneut senden",
+ "subtitle": "Weiter zu {{applicationName}}",
+ "title": "E-Mail überprüfen"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Zurück zum Original-Tab, um fortzufahren.",
+ "title": "Dieser Bestätigungslink ist abgelaufen"
+ },
+ "failed": {
+ "subtitle": "Zurück zum Original-Tab, um fortzufahren.",
+ "title": "Dieser Bestätigungslink ist ungültig"
+ },
+ "formSubtitle": "Verwenden Sie den Bestätigungslink, der an Ihre E-Mail gesendet wurde",
+ "formTitle": "Bestätigungslink",
+ "loading": {
+ "subtitle": "Sie werden bald weitergeleitet",
+ "title": "Anmelden..."
+ },
+ "resendButton": "Link nicht erhalten? Erneut senden",
+ "subtitle": "Weiter zu {{applicationName}}",
+ "title": "E-Mail überprüfen",
+ "unusedTab": {
+ "title": "Sie können diesen Tab schließen"
+ },
+ "verified": {
+ "subtitle": "Sie werden bald weitergeleitet",
+ "title": "Erfolgreich angemeldet"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Zurück zum Original-Tab, um fortzufahren",
+ "subtitleNewTab": "Zur neu geöffneten Registerkarte zurückkehren, um fortzufahren",
+ "titleNewTab": "Auf anderer Registerkarte angemeldet"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Passwort zurücksetzen",
+ "resendButton": "Code nicht erhalten? Erneut senden",
+ "subtitle": "Zum Zurücksetzen Ihres Passworts",
+ "subtitle_email": "Geben Sie zuerst den an Ihre E-Mail-Adresse gesendeten Code ein",
+ "subtitle_phone": "Geben Sie zuerst den an Ihr Telefon gesendeten Code ein",
+ "title": "Passwort zurücksetzen"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Passwort zurücksetzen",
+ "label__alternativeMethods": "Oder melden Sie sich mit einer anderen Methode an",
+ "title": "Passwort vergessen?"
+ },
+ "noAvailableMethods": {
+ "message": "Anmeldung nicht möglich. Es gibt keinen verfügbaren Authentifizierungsfaktor.",
+ "subtitle": "Ein Fehler ist aufgetreten",
+ "title": "Anmeldung nicht möglich"
+ },
+ "passkey": {
+ "subtitle": "Die Verwendung Ihres Passkeys bestätigt, dass Sie es sind. Ihr Gerät kann nach Ihrem Fingerabdruck, Gesicht oder Bildschirmsperre fragen.",
+ "title": "Ihren Passkey verwenden"
+ },
+ "password": {
+ "actionLink": "Andere Methode verwenden",
+ "subtitle": "Geben Sie das Passwort für Ihr Konto ein",
+ "title": "Geben Sie Ihr Passwort ein"
+ },
+ "passwordPwned": {
+ "title": "Passwort kompromittiert"
+ },
+ "phoneCode": {
+ "formTitle": "Verifizierungscode",
+ "resendButton": "Code nicht erhalten? Erneut senden",
+ "subtitle": "Weiter zu {{applicationName}}",
+ "title": "Telefon überprüfen"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Verifizierungscode",
+ "resendButton": "Code nicht erhalten? Erneut senden",
+ "subtitle": "Geben Sie den Verifizierungscode ein, der an Ihr Telefon gesendet wurde",
+ "title": "Telefon überprüfen"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Passwort zurücksetzen",
+ "requiredMessage": "Aus Sicherheitsgründen ist es erforderlich, Ihr Passwort zurückzusetzen.",
+ "successMessage": "Ihr Passwort wurde erfolgreich geändert. Wir melden Sie an, bitte warten Sie einen Moment.",
+ "title": "Neues Passwort festlegen"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "Wir müssen Ihre Identität überprüfen, bevor wir Ihr Passwort zurücksetzen."
+ },
+ "start": {
+ "actionLink": "Registrieren",
+ "actionLink__use_email": "E-Mail verwenden",
+ "actionLink__use_email_username": "E-Mail oder Benutzernamen verwenden",
+ "actionLink__use_passkey": "Stattdessen Passkey verwenden",
+ "actionLink__use_phone": "Telefon verwenden",
+ "actionLink__use_username": "Benutzernamen verwenden",
+ "actionText": "Sie haben noch kein Konto?",
+ "subtitle": "Willkommen zurück! Bitte melden Sie sich an, um fortzufahren",
+ "title": "Anmelden bei {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "Verifizierungscode",
+ "subtitle": "Geben Sie den Verifizierungscode ein, den Ihre Authenticator-App generiert",
+ "title": "Zwei-Faktor-Verifizierung"
+ }
+ },
+ "signInEnterPasswordTitle": "Geben Sie Ihr Passwort ein",
+ "signUp": {
+ "continue": {
+ "actionLink": "Anmelden",
+ "actionText": "Haben Sie bereits ein Konto?",
+ "subtitle": "Bitte füllen Sie die fehlenden Details aus, um fortzufahren.",
+ "title": "Fehlende Felder ausfüllen"
+ },
+ "emailCode": {
+ "formSubtitle": "Geben Sie den an Ihre E-Mail-Adresse gesendeten Verifizierungscode ein",
+ "formTitle": "Verifizierungscode",
+ "resendButton": "Code nicht erhalten? Erneut senden",
+ "subtitle": "Geben Sie den an Ihre E-Mail-Adresse gesendeten Verifizierungscode ein",
+ "title": "E-Mail überprüfen"
+ },
+ "emailLink": {
+ "formSubtitle": "Verwenden Sie den an Ihre E-Mail-Adresse gesendeten Bestätigungslink",
+ "formTitle": "Bestätigungslink",
+ "loading": {
+ "title": "Anmelden..."
+ },
+ "resendButton": "Link nicht erhalten? Erneut senden",
+ "subtitle": "Weiter zu {{applicationName}}",
+ "title": "E-Mail überprüfen",
+ "verified": {
+ "title": "Erfolgreich angemeldet"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Zur neu geöffneten Registerkarte zurückkehren, um fortzufahren",
+ "subtitleNewTab": "Zur vorherigen Registerkarte zurückkehren, um fortzufahren",
+ "title": "E-Mail erfolgreich bestätigt"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Geben Sie den an Ihre Telefonnummer gesendeten Verifizierungscode ein",
+ "formTitle": "Verifizierungscode",
+ "resendButton": "Code nicht erhalten? Erneut senden",
+ "subtitle": "Geben Sie den an Ihre Telefonnummer gesendeten Verifizierungscode ein",
+ "title": "Telefon überprüfen"
+ },
+ "start": {
+ "actionLink": "Anmelden",
+ "actionText": "Haben Sie bereits ein Konto?",
+ "subtitle": "Willkommen! Bitte füllen Sie die Details aus, um zu beginnen.",
+ "title": "Konto erstellen"
+ }
+ },
+ "socialButtonsBlockButton": "Weiter mit {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "Die Anmeldung war aufgrund fehlgeschlagener Sicherheitsüberprüfungen nicht erfolgreich. Bitte aktualisieren Sie die Seite, um es erneut zu versuchen, oder wenden Sie sich an den Support für weitere Unterstützung.",
+ "captcha_unavailable": "Die Anmeldung war aufgrund fehlgeschlagener Bot-Überprüfungen nicht erfolgreich. Bitte aktualisieren Sie die Seite, um es erneut zu versuchen, oder wenden Sie sich an den Support für weitere Unterstützung.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Diese E-Mail-Adresse ist bereits vergeben. Bitte versuchen Sie es mit einer anderen.",
+ "form_identifier_exists__phone_number": "Diese Telefonnummer ist bereits vergeben. Bitte versuchen Sie es mit einer anderen.",
+ "form_identifier_exists__username": "Dieser Benutzername ist bereits vergeben. Bitte versuchen Sie es mit einem anderen.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "Die E-Mail-Adresse muss eine gültige E-Mail-Adresse sein.",
+ "form_param_format_invalid__phone_number": "Die Telefonnummer muss im gültigen internationalen Format sein.",
+ "form_param_max_length_exceeded__first_name": "Der Vorname darf 256 Zeichen nicht überschreiten.",
+ "form_param_max_length_exceeded__last_name": "Der Nachname darf 256 Zeichen nicht überschreiten.",
+ "form_param_max_length_exceeded__name": "Der Name darf 256 Zeichen nicht überschreiten.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Ihr Passwort ist nicht stark genug.",
+ "form_password_pwned": "Dieses Passwort wurde bei einem Datenleck gefunden und kann nicht verwendet werden. Bitte verwenden Sie stattdessen ein anderes Passwort.",
+ "form_password_pwned__sign_in": "Dieses Passwort wurde bei einem Datenleck gefunden und kann nicht verwendet werden. Bitte setzen Sie Ihr Passwort zurück.",
+ "form_password_size_in_bytes_exceeded": "Ihr Passwort hat die zulässige Anzahl von Bytes überschritten. Bitte kürzen Sie es oder entfernen Sie einige Sonderzeichen.",
+ "form_password_validation_failed": "Falsches Passwort",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "Sie können Ihre letzte Identifikation nicht löschen.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Ein Passkey ist bereits mit diesem Gerät registriert.",
+ "passkey_not_supported": "Passkeys werden auf diesem Gerät nicht unterstützt.",
+ "passkey_pa_not_supported": "Die Registrierung erfordert einen Plattformauthentifikator, aber das Gerät unterstützt dies nicht.",
+ "passkey_registration_cancelled": "Die Passkey-Registrierung wurde abgebrochen oder ist abgelaufen.",
+ "passkey_retrieval_cancelled": "Die Passkey-Verifizierung wurde abgebrochen oder ist abgelaufen.",
+ "passwordComplexity": {
+ "maximumLength": "weniger als {{length}} Zeichen",
+ "minimumLength": "{{length}} oder mehr Zeichen",
+ "requireLowercase": "ein Kleinbuchstabe",
+ "requireNumbers": "eine Zahl",
+ "requireSpecialCharacter": "ein Sonderzeichen",
+ "requireUppercase": "ein Großbuchstabe",
+ "sentencePrefix": "Ihr Passwort muss enthalten"
+ },
+ "phone_number_exists": "Diese Telefonnummer ist bereits vergeben. Bitte versuchen Sie es mit einer anderen.",
+ "zxcvbn": {
+ "couldBeStronger": "Ihr Passwort funktioniert, könnte aber stärker sein. Versuchen Sie, mehr Zeichen hinzuzufügen.",
+ "goodPassword": "Ihr Passwort erfüllt alle erforderlichen Anforderungen.",
+ "notEnough": "Ihr Passwort ist nicht stark genug.",
+ "suggestions": {
+ "allUppercase": "Verwenden Sie Großbuchstaben, aber nicht ausschließlich.",
+ "anotherWord": "Fügen Sie weitere Wörter hinzu, die weniger gebräuchlich sind.",
+ "associatedYears": "Vermeiden Sie Jahre, die mit Ihnen in Verbindung stehen.",
+ "capitalization": "Verwenden Sie mehr als nur den ersten Buchstaben in Großbuchstaben.",
+ "dates": "Vermeiden Sie Daten und Jahre, die mit Ihnen in Verbindung stehen.",
+ "l33t": "Vermeiden Sie vorhersehbare Buchstabenersetzungen wie '@' für 'a'.",
+ "longerKeyboardPattern": "Verwenden Sie längere Tastaturmuster und ändern Sie die Schreibrichtung mehrmals.",
+ "noNeed": "Sie können starke Passwörter erstellen, ohne Symbole, Zahlen oder Großbuchstaben zu verwenden.",
+ "pwned": "Wenn Sie dieses Passwort auch anderswo verwenden, sollten Sie es ändern.",
+ "recentYears": "Vermeiden Sie aktuelle Jahre.",
+ "repeated": "Vermeiden Sie wiederholte Wörter und Zeichen.",
+ "reverseWords": "Vermeiden Sie umgekehrte Schreibweisen von gebräuchlichen Wörtern.",
+ "sequences": "Vermeiden Sie gebräuchliche Zeichenfolgen.",
+ "useWords": "Verwenden Sie mehrere Wörter, aber vermeiden Sie gebräuchliche Phrasen."
+ },
+ "warnings": {
+ "common": "Dies ist ein häufig verwendetes Passwort.",
+ "commonNames": "Gemeinsame Namen und Nachnamen sind leicht zu erraten.",
+ "dates": "Daten sind leicht zu erraten.",
+ "extendedRepeat": "Wiederholte Zeichenmuster wie \"abcabcabc\" sind leicht zu erraten.",
+ "keyPattern": "Kurze Tastaturmuster sind leicht zu erraten.",
+ "namesByThemselves": "Einzelne Namen oder Nachnamen sind leicht zu erraten.",
+ "pwned": "Ihr Passwort wurde bei einem Datenleck im Internet offengelegt.",
+ "recentYears": "Aktuelle Jahre sind leicht zu erraten.",
+ "sequences": "Gebräuchliche Zeichenfolgen wie \"abc\" sind leicht zu erraten.",
+ "similarToCommon": "Dies ähnelt einem häufig verwendeten Passwort.",
+ "simpleRepeat": "Wiederholte Zeichen wie \"aaa\" sind leicht zu erraten.",
+ "straightRow": "Gerade Tastenreihen auf Ihrer Tastatur sind leicht zu erraten.",
+ "topHundred": "Dies ist ein häufig verwendetes Passwort.",
+ "topTen": "Dies ist ein stark verwendetes Passwort.",
+ "userInputs": "Es sollten keine persönlichen oder seitenbezogenen Daten enthalten sein.",
+ "wordByItself": "Einzelne Wörter sind leicht zu erraten."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Konto hinzufügen",
+ "action__manageAccount": "Konto verwalten",
+ "action__signOut": "Abmelden",
+ "action__signOutAll": "Aus allen Konten abmelden"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Kopiert!",
+ "actionLabel__copy": "Alle kopieren",
+ "actionLabel__download": "Herunterladen .txt",
+ "actionLabel__print": "Drucken",
+ "infoText1": "Backup-Codes werden für dieses Konto aktiviert.",
+ "infoText2": "Bewahren Sie die Backup-Codes geheim auf und speichern Sie sie sicher. Sie können Backup-Codes neu generieren, wenn Sie vermuten, dass sie kompromittiert wurden.",
+ "subtitle__codelist": "Speichern Sie sie sicher und halten Sie sie geheim.",
+ "successMessage": "Backup-Codes sind jetzt aktiviert. Sie können einen davon verwenden, um sich in Ihr Konto einzuloggen, wenn Sie den Zugriff auf Ihr Authentifizierungsgerät verlieren. Jeder Code kann nur einmal verwendet werden.",
+ "successSubtitle": "Sie können einen davon verwenden, um sich in Ihr Konto einzuloggen, wenn Sie den Zugriff auf Ihr Authentifizierungsgerät verlieren.",
+ "title": "Backup-Code-Verifizierung hinzufügen",
+ "title__codelist": "Backup-Codes"
+ },
+ "connectedAccountPage": {
+ "formHint": "Wählen Sie einen Anbieter aus, um Ihr Konto zu verbinden.",
+ "formHint__noAccounts": "Es sind keine externen Kontenanbieter verfügbar.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} wird von diesem Konto entfernt.",
+ "messageLine2": "Sie können dieses verbundene Konto nicht mehr verwenden, und alle abhängigen Funktionen funktionieren nicht mehr.",
+ "successMessage": "{{connectedAccount}} wurde von Ihrem Konto entfernt.",
+ "title": "Verbundenes Konto entfernen"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "Der Anbieter wurde Ihrem Konto hinzugefügt",
+ "title": "Verbundenes Konto hinzufügen"
+ },
+ "deletePage": {
+ "actionDescription": "Geben Sie unten \"Konto löschen\" ein, um fortzufahren.",
+ "confirm": "Konto löschen",
+ "messageLine1": "Möchten Sie Ihr Konto wirklich löschen?",
+ "messageLine2": "Diese Aktion ist dauerhaft und nicht rückgängig zu machen.",
+ "title": "Konto löschen"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "Eine E-Mail mit einem Verifizierungscode wird an diese E-Mail-Adresse gesendet.",
+ "formSubtitle": "Geben Sie den Verifizierungscode ein, der an {{identifier}} gesendet wurde.",
+ "formTitle": "Verifizierungscode",
+ "resendButton": "Code nicht erhalten? Erneut senden",
+ "successMessage": "Die E-Mail {{identifier}} wurde Ihrem Konto hinzugefügt."
+ },
+ "emailLink": {
+ "formHint": "Eine E-Mail mit einem Verifizierungslink wird an diese E-Mail-Adresse gesendet.",
+ "formSubtitle": "Klicken Sie auf den Verifizierungslink in der E-Mail, die an {{identifier}} gesendet wurde.",
+ "formTitle": "Verifizierungslink",
+ "resendButton": "Link nicht erhalten? Erneut senden",
+ "successMessage": "Die E-Mail {{identifier}} wurde Ihrem Konto hinzugefügt."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} wird von diesem Konto entfernt.",
+ "messageLine2": "Sie können sich nicht mehr mit dieser E-Mail-Adresse anmelden.",
+ "successMessage": "{{emailAddress}} wurde von Ihrem Konto entfernt.",
+ "title": "E-Mail-Adresse entfernen"
+ },
+ "title": "E-Mail-Adresse hinzufügen",
+ "verifyTitle": "E-Mail-Adresse verifizieren"
+ },
+ "formButtonPrimary__add": "Hinzufügen",
+ "formButtonPrimary__continue": "Weiter",
+ "formButtonPrimary__finish": "Fertig",
+ "formButtonPrimary__remove": "Entfernen",
+ "formButtonPrimary__save": "Speichern",
+ "formButtonReset": "Abbrechen",
+ "mfaPage": {
+ "formHint": "Wählen Sie eine Methode zum Hinzufügen aus.",
+ "title": "Zweistufige Verifizierung hinzufügen"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Bestehende Nummer verwenden",
+ "primaryButton__addPhoneNumber": "Telefonnummer hinzufügen",
+ "removeResource": {
+ "messageLine1": "{{identifier}} erhält keine Verifizierungscodes mehr beim Anmelden.",
+ "messageLine2": "Ihr Konto ist möglicherweise nicht mehr so sicher. Möchten Sie wirklich fortfahren?",
+ "successMessage": "Zweistufige Verifizierung per SMS-Code wurde für {{mfaPhoneCode}} entfernt",
+ "title": "Zweistufige Verifizierung entfernen"
+ },
+ "subtitle__availablePhoneNumbers": "Wählen Sie eine vorhandene Telefonnummer aus, um sich für die zweistufige Verifizierung per SMS-Code zu registrieren, oder fügen Sie eine neue hinzu.",
+ "subtitle__unavailablePhoneNumbers": "Es sind keine verfügbaren Telefonnummern zur Registrierung für die zweistufige Verifizierung per SMS-Code vorhanden. Bitte fügen Sie eine neue hinzu.",
+ "successMessage1": "Beim Anmelden müssen Sie einen Verifizierungscode eingeben, der an diese Telefonnummer gesendet wird.",
+ "successMessage2": "Speichern Sie diese Backup-Codes und bewahren Sie sie an einem sicheren Ort auf. Wenn Sie den Zugriff auf Ihr Authentifizierungsgerät verlieren, können Sie Backup-Codes zum Einloggen verwenden.",
+ "successTitle": "SMS-Code-Verifizierung aktiviert",
+ "title": "SMS-Code-Verifizierung hinzufügen"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "QR-Code stattdessen scannen",
+ "buttonUnableToScan__nonPrimary": "Kann QR-Code nicht scannen?",
+ "infoText__ableToScan": "Richten Sie eine neue Anmelde-Methode in Ihrer Authenticator-App ein und scannen Sie den folgenden QR-Code, um ihn mit Ihrem Konto zu verknüpfen.",
+ "infoText__unableToScan": "Richten Sie eine neue Anmelde-Methode in Ihrem Authenticator ein und geben Sie den unten bereitgestellten Schlüssel ein.",
+ "inputLabel__unableToScan1": "Stellen Sie sicher, dass Zeitbasierte oder Einmalpasswörter aktiviert sind, und beenden Sie dann die Verknüpfung Ihres Kontos.",
+ "inputLabel__unableToScan2": "Alternativ können Sie, wenn Ihr Authenticator TOTP-URIs unterstützt, auch die vollständige URI kopieren."
+ },
+ "removeResource": {
+ "messageLine1": "Verifizierungscodes von diesem Authenticator sind beim Anmelden nicht mehr erforderlich.",
+ "messageLine2": "Ihr Konto ist möglicherweise nicht mehr so sicher. Möchten Sie wirklich fortfahren?",
+ "successMessage": "Zweistufige Verifizierung über Authenticator-App wurde entfernt.",
+ "title": "Zweistufige Verifizierung entfernen"
+ },
+ "successMessage": "Zweistufige Verifizierung ist jetzt aktiviert. Beim Anmelden müssen Sie einen Verifizierungscode von diesem Authenticator als zusätzlichen Schritt eingeben.",
+ "title": "Authenticator-App hinzufügen",
+ "verifySubtitle": "Geben Sie den Verifizierungscode ein, der von Ihrem Authenticator generiert wurde.",
+ "verifyTitle": "Verifizierungscode"
+ },
+ "mobileButton__menu": "Menü",
+ "navbar": {
+ "account": "Profil",
+ "description": "Verwalten Sie Ihre Kontoinformationen.",
+ "security": "Sicherheit",
+ "title": "Konto"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} wird von diesem Konto entfernt.",
+ "title": "Passkey entfernen"
+ },
+ "subtitle__rename": "Sie können den Passkey-Namen ändern, um ihn leichter zu finden.",
+ "title__rename": "Passkey umbenennen"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "Es wird empfohlen, sich von allen anderen Geräten abzumelden, die Ihr altes Passwort verwendet haben.",
+ "readonly": "Ihr Passwort kann derzeit nicht bearbeitet werden, da Sie sich nur über die Unternehmensverbindung anmelden können.",
+ "successMessage__set": "Ihr Passwort wurde festgelegt.",
+ "successMessage__signOutOfOtherSessions": "Alle anderen Geräte wurden abgemeldet.",
+ "successMessage__update": "Ihr Passwort wurde aktualisiert.",
+ "title__set": "Passwort festlegen",
+ "title__update": "Passwort aktualisieren"
+ },
+ "phoneNumberPage": {
+ "infoText": "Eine SMS mit einem Bestätigungscode wird an diese Telefonnummer gesendet. Es können Nachrichten- und Datengebühren anfallen.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} wird von diesem Konto entfernt.",
+ "messageLine2": "Sie können sich nicht mehr mit dieser Telefonnummer anmelden.",
+ "successMessage": "{{phoneNumber}} wurde von Ihrem Konto entfernt.",
+ "title": "Telefonnummer entfernen"
+ },
+ "successMessage": "{{identifier}} wurde Ihrem Konto hinzugefügt.",
+ "title": "Telefonnummer hinzufügen",
+ "verifySubtitle": "Geben Sie den Bestätigungscode ein, der an {{identifier}} gesendet wurde.",
+ "verifyTitle": "Telefonnummer überprüfen"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Empfohlene Größe 1:1, bis zu 10 MB.",
+ "imageFormDestructiveActionSubtitle": "Entfernen",
+ "imageFormSubtitle": "Hochladen",
+ "imageFormTitle": "Profilbild",
+ "readonly": "Ihre Profilinformationen wurden von der Unternehmensverbindung bereitgestellt und können nicht bearbeitet werden.",
+ "successMessage": "Ihr Profil wurde aktualisiert.",
+ "title": "Profil aktualisieren"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Vom Gerät abmelden",
+ "title": "Aktive Geräte"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Erneut versuchen",
+ "actionLabel__reauthorize": "Jetzt autorisieren",
+ "destructiveActionTitle": "Entfernen",
+ "primaryButton": "Konto verbinden",
+ "subtitle__reauthorize": "Die erforderlichen Berechtigungen wurden aktualisiert, und Sie könnten eine eingeschränkte Funktionalität erleben. Bitte autorisieren Sie diese Anwendung erneut, um Probleme zu vermeiden.",
+ "title": "Verbundene Konten"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Konto löschen",
+ "title": "Konto löschen"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "E-Mail entfernen",
+ "detailsAction__nonPrimary": "Als primär festlegen",
+ "detailsAction__primary": "Verifizierung abschließen",
+ "detailsAction__unverified": "Verifizieren",
+ "primaryButton": "E-Mail-Adresse hinzufügen",
+ "title": "E-Mail-Adressen"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Unternehmenskonten"
+ },
+ "headerTitle__account": "Profilinformationen",
+ "headerTitle__security": "Sicherheit",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Neu generieren",
+ "headerTitle": "Backup-Codes",
+ "subtitle__regenerate": "Erhalten Sie einen neuen Satz sicherer Backup-Codes. Vorherige Backup-Codes werden gelöscht und können nicht mehr verwendet werden.",
+ "title__regenerate": "Backup-Codes neu generieren"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Als Standard festlegen",
+ "destructiveActionLabel": "Entfernen"
+ },
+ "primaryButton": "Zweistufige Verifizierung hinzufügen",
+ "title": "Zweistufige Verifizierung",
+ "totp": {
+ "destructiveActionTitle": "Entfernen",
+ "headerTitle": "Authentifizierungsanwendung"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Entfernen",
+ "menuAction__rename": "Umbenennen",
+ "title": "Passwörter"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Passwort festlegen",
+ "primaryButton__updatePassword": "Passwort aktualisieren",
+ "title": "Passwort"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Telefonnummer entfernen",
+ "detailsAction__nonPrimary": "Als primär festlegen",
+ "detailsAction__primary": "Verifizierung abschließen",
+ "detailsAction__unverified": "Telefonnummer verifizieren",
+ "primaryButton": "Telefonnummer hinzufügen",
+ "title": "Telefonnummern"
+ },
+ "profileSection": {
+ "primaryButton": "Profil aktualisieren",
+ "title": "Profil"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Benutzernamen festlegen",
+ "primaryButton__updateUsername": "Benutzernamen aktualisieren",
+ "title": "Benutzername"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Wallet entfernen",
+ "primaryButton": "Web3-Wallets",
+ "title": "Web3-Wallets"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Ihr Benutzername wurde aktualisiert.",
+ "title__set": "Benutzername festlegen",
+ "title__update": "Benutzername aktualisieren"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} wird von diesem Konto entfernt.",
+ "messageLine2": "Sie können sich nicht mehr mit diesem Web3-Wallet anmelden.",
+ "successMessage": "{{web3Wallet}} wurde von Ihrem Konto entfernt.",
+ "title": "Web3-Wallet entfernen"
+ },
+ "subtitle__availableWallets": "Wählen Sie ein Web3-Wallet aus, um es mit Ihrem Konto zu verbinden.",
+ "subtitle__unavailableWallets": "Es sind keine verfügbaren Web3-Wallets vorhanden.",
+ "successMessage": "Das Wallet wurde Ihrem Konto hinzugefügt.",
+ "title": "Web3-Wallet hinzufügen"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/common.json b/DigitalHumanWeb/locales/de-DE/common.json
new file mode 100644
index 0000000..4ca51c5
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "Über",
+ "advanceSettings": "Erweiterte Einstellungen",
+ "alert": {
+ "cloud": {
+ "action": "Kostenlose Testversion",
+ "desc": "Wir bieten allen registrierten Benutzern {{credit}} kostenlose Berechnungspunkte an, ohne komplizierte Konfiguration, sofort einsatzbereit, unterstützt unbegrenzte Chat-Verlaufsprotokolle und globale Cloud-Synchronisierung. Entdecken Sie gemeinsam weitere fortschrittliche Funktionen.",
+ "descOnMobile": "Wir bieten allen registrierten Benutzern {{credit}} kostenlose Rechenpunkte, die ohne komplizierte Konfiguration sofort einsatzbereit sind.",
+ "title": "Willkommen bei {{name}}"
+ }
+ },
+ "appInitializing": "Anwendung wird gestartet...",
+ "autoGenerate": "Automatisch generieren",
+ "autoGenerateTooltip": "Assistentenbeschreibung automatisch auf Basis von Vorschlägen vervollständigen",
+ "autoGenerateTooltipDisabled": "Bitte geben Sie einen Hinweis ein, um die automatische Vervollständigung zu aktivieren",
+ "back": "Zurück",
+ "batchDelete": "Massenlöschung",
+ "blog": "Produkt-Blog",
+ "cancel": "Abbrechen",
+ "changelog": "Änderungsprotokoll",
+ "close": "Schließen",
+ "contact": "Kontakt",
+ "copy": "Kopieren",
+ "copyFail": "Kopieren fehlgeschlagen",
+ "copySuccess": "Kopieren erfolgreich",
+ "dataStatistics": {
+ "messages": "Nachrichten",
+ "sessions": "Sitzungen",
+ "today": "Heute",
+ "topics": "Themen"
+ },
+ "defaultAgent": "Standardassistent",
+ "defaultSession": "Standardassistent",
+ "delete": "Löschen",
+ "document": "Dokumentation",
+ "download": "Herunterladen",
+ "duplicate": "Duplikat erstellen",
+ "edit": "Bearbeiten",
+ "export": "Exportieren",
+ "exportType": {
+ "agent": "Assistenteneinstellungen exportieren",
+ "agentWithMessage": "Assistent und Nachrichten exportieren",
+ "all": "Globale Einstellungen und alle Assistentendaten exportieren",
+ "allAgent": "Alle Assistenteneinstellungen exportieren",
+ "allAgentWithMessage": "Alle Assistenten und Nachrichten exportieren",
+ "globalSetting": "Globale Einstellungen exportieren"
+ },
+ "feedback": "Feedback und Vorschläge",
+ "follow": "Folge uns auf {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Teilen Sie uns Ihr wertvolles Feedback mit",
+ "star": "Geben Sie uns auf GitHub einen Stern"
+ },
+ "and": "und",
+ "feedback": {
+ "action": "Feedback teilen",
+ "desc": "Jede Ihrer Ideen und Vorschläge sind für uns von unschätzbarem Wert. Wir sind gespannt auf Ihre Meinung! Kontaktieren Sie uns gerne, um Feedback zu Produktfunktionen und Benutzererfahrungen zu geben und helfen Sie uns, LobeChat weiter zu verbessern.",
+ "title": "Teilen Sie uns Ihr wertvolles Feedback auf GitHub mit"
+ },
+ "later": "Später",
+ "star": {
+ "action": "Stern hinzufügen",
+ "desc": "Wenn Sie unser Produkt mögen und uns unterstützen möchten, könnten Sie uns auf GitHub einen Stern geben? Diese kleine Geste bedeutet uns viel und motiviert uns, Ihnen weiterhin besondere Erlebnisse zu bieten.",
+ "title": "Geben Sie uns auf GitHub einen Stern"
+ },
+ "title": "Mögen Sie unser Produkt?"
+ },
+ "fullscreen": "Vollbildmodus",
+ "historyRange": "Verlaufsbereich",
+ "import": "Importieren",
+ "importModal": {
+ "error": {
+ "desc": "Es tut uns sehr leid, aber beim Importieren der Daten ist ein Fehler aufgetreten. Bitte versuchen Sie es erneut oder <1>senden Sie uns eine Anfrage1>, damit wir das Problem umgehend für Sie lösen können.",
+ "title": "Datenimport fehlgeschlagen"
+ },
+ "finish": {
+ "onlySettings": "Systemeinstellungen erfolgreich importiert",
+ "start": "Starten",
+ "subTitle": "Daten erfolgreich importiert. Dauer: {{duration}} Sekunden. Details des Imports:",
+ "title": "Import abgeschlossen"
+ },
+ "loading": "Daten werden importiert. Bitte haben Sie einen Moment Geduld...",
+ "preparing": "Vorbereitung für den Datenimport läuft...",
+ "result": {
+ "added": "Erfolgreich importiert",
+ "errors": "Fehler beim Import",
+ "messages": "Nachrichten",
+ "sessionGroups": "Sitzungsgruppen",
+ "sessions": "Assistenten",
+ "skips": "Übersprungen (doppelt)",
+ "topics": "Themen",
+ "type": "Datentyp"
+ },
+ "title": "Daten importieren",
+ "uploading": {
+ "desc": "Die Datei ist momentan zu groß und wird mit Hochdruck hochgeladen...",
+ "restTime": "Verbleibende Zeit",
+ "speed": "Upload-Geschwindigkeit"
+ }
+ },
+ "information": "Community und Informationen",
+ "installPWA": "Installiere die Browser-App",
+ "lang": {
+ "ar": "Arabisch",
+ "bg-BG": "Bulgarisch",
+ "bn": "Bengalisch",
+ "cs-CZ": "Tschechisch",
+ "da-DK": "Dänisch",
+ "de-DE": "Deutsch",
+ "el-GR": "Griechisch",
+ "en": "Englisch",
+ "en-US": "Englisch",
+ "es-ES": "Spanisch",
+ "fi-FI": "Finnisch",
+ "fr-FR": "Französisch",
+ "hi-IN": "Hindi",
+ "hu-HU": "Ungarisch",
+ "id-ID": "Indonesisch",
+ "it-IT": "Italienisch",
+ "ja-JP": "Japanisch",
+ "ko-KR": "Koreanisch",
+ "nl-NL": "Niederländisch",
+ "no-NO": "Norwegisch",
+ "pl-PL": "Polnisch",
+ "pt-BR": "Portugiesisch",
+ "pt-PT": "Portugiesisch",
+ "ro-RO": "Rumänisch",
+ "ru-RU": "Russisch",
+ "sk-SK": "Slowakisch",
+ "sr-RS": "Serbisch",
+ "sv-SE": "Schwedisch",
+ "th-TH": "Thailändisch",
+ "tr-TR": "Türkisch",
+ "uk-UA": "Ukrainisch",
+ "vi-VN": "Vietnamesisch",
+ "zh": "Chinesisch",
+ "zh-CN": "Chinesisch (vereinfacht)",
+ "zh-TW": "Chinesisch (traditionell)"
+ },
+ "layoutInitializing": "Layout wird geladen...",
+ "legal": "Rechtliches",
+ "loading": "Laden...",
+ "mail": {
+ "business": "Geschäftliche Zusammenarbeit",
+ "support": "E-Mail-Support"
+ },
+ "oauth": "SSO-Anmeldung",
+ "officialSite": "Offizielle Website",
+ "ok": "OK",
+ "password": "Passwort",
+ "pin": "Anheften",
+ "pinOff": "Anheften aufheben",
+ "privacy": "Datenschutzrichtlinie",
+ "regenerate": "Neu generieren",
+ "rename": "Umbenennen",
+ "reset": "Zurücksetzen",
+ "retry": "Erneut versuchen",
+ "send": "Senden",
+ "setting": "Einstellung",
+ "share": "Teilen",
+ "stop": "Stoppen",
+ "sync": {
+ "actions": {
+ "settings": "Sync Einstellungen",
+ "sync": "Jetzt Syncen"
+ },
+ "awareness": {
+ "current": "Aktuelles Gerät"
+ },
+ "channel": "Kanal",
+ "disabled": {
+ "actions": {
+ "enable": "Cloud Sync Aktivieren",
+ "settings": "Sync Einstellungen"
+ },
+ "desc": "Momentane Session Daten ist nur im Brwoser gespeichert. Wenn Sie Daten über mehrere Geräte syncen möchten, konfigurieren und aktivieren Sie bitte cloud sync.",
+ "title": "Daten Sync Deaktiviert"
+ },
+ "enabled": {
+ "title": "Daten Sync Aktiviert"
+ },
+ "status": {
+ "connecting": "Verbinden",
+ "disabled": "Sync Deaktiviert",
+ "ready": "Verbunden",
+ "synced": "Synchronisiert",
+ "syncing": "Synchronisierung",
+ "unconnected": "Verbindung gescheitert"
+ },
+ "title": "Sync Status",
+ "unconnected": {
+ "tip": "Die Verbindung zum Signalisierungsserver ist fehlgeschlagen, und der Peer-to-Peer-Kommunikationskanal kann nicht hergestellt werden. Bitte überprüfen Sie das Netzwerk und versuchen Sie es erneut."
+ }
+ },
+ "tab": {
+ "chat": "Chat",
+ "discover": "Entdecken",
+ "files": "Dateien",
+ "me": "Ich",
+ "setting": "Einstellung"
+ },
+ "telemetry": {
+ "allow": "Erlauben",
+ "deny": "Verweigern",
+ "desc": "Wir möchten anonyme Nutzungsdaten sammeln, um uns bei der Verbesserung von LobeChat zu helfen und dir ein besseres Produkterlebnis zu bieten. Du kannst dies jederzeit in den „Einstellungen“ - „Über“ deaktivieren.",
+ "learnMore": "Mehr erfahren",
+ "title": "Hilf LobeChat, besser zu werden"
+ },
+ "temp": "Temporär",
+ "terms": "Nutzungsbedingungen",
+ "updateAgent": "Assistentenprofil aktualisieren",
+ "upgradeVersion": {
+ "action": "Aktualisieren",
+ "hasNew": "Neue Version verfügbar",
+ "newVersion": "Neue Version verfügbar: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Anonymer Benutzer",
+ "billing": "Abrechnung verwalten",
+ "cloud": "Erleben Sie {{name}}",
+ "data": "Daten speichern",
+ "defaultNickname": "Community User",
+ "discord": "Community-Support",
+ "docs": "Dokumentation",
+ "email": "E-Mail-Support",
+ "feedback": "Feedback und Vorschläge",
+ "help": "Hilfezentrum",
+ "moveGuide": "Die Einstellungen wurden hierher verschoben.",
+ "plans": "Abonnementpläne",
+ "preview": "Vorschau",
+ "profile": "Kontoverwaltung",
+ "setting": "App-Einstellungen",
+ "usages": "Nutzungsstatistiken"
+ },
+ "version": "Version"
+}
diff --git a/DigitalHumanWeb/locales/de-DE/components.json b/DigitalHumanWeb/locales/de-DE/components.json
new file mode 100644
index 0000000..0775a66
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Ziehen Sie Dateien hierher, um mehrere Bilder hochzuladen.",
+ "dragFileDesc": "Ziehen Sie Bilder und Dateien hierher, um mehrere Bilder und Dateien hochzuladen.",
+ "dragFileTitle": "Dateien hochladen",
+ "dragTitle": "Bilder hochladen"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Zur Wissensdatenbank hinzufügen",
+ "addToOtherKnowledgeBase": "Zur anderen Wissensdatenbank hinzufügen",
+ "batchChunking": "Batch-Zerteilung",
+ "chunking": "Zerteilung",
+ "chunkingTooltip": "Teilen Sie die Datei in mehrere Textblöcke und vektorisieren Sie sie, um sie für die semantische Suche und Dateidialoge zu verwenden.",
+ "confirmDelete": "Die Datei wird gelöscht. Nach dem Löschen kann sie nicht wiederhergestellt werden. Bitte bestätigen Sie Ihre Aktion.",
+ "confirmDeleteMultiFiles": "Die ausgewählten {{count}} Dateien werden gelöscht. Nach dem Löschen können sie nicht wiederhergestellt werden. Bitte bestätigen Sie Ihre Aktion.",
+ "confirmRemoveFromKnowledgeBase": "Die ausgewählten {{count}} Dateien werden aus der Wissensdatenbank entfernt. Die Dateien sind weiterhin in allen Dateien sichtbar. Bitte bestätigen Sie Ihre Aktion.",
+ "copyUrl": "Link kopieren",
+ "copyUrlSuccess": "Dateiadresse erfolgreich kopiert",
+ "createChunkingTask": "Wird vorbereitet...",
+ "deleteSuccess": "Datei erfolgreich gelöscht",
+ "downloading": "Datei wird heruntergeladen...",
+ "removeFromKnowledgeBase": "Aus der Wissensdatenbank entfernen",
+ "removeFromKnowledgeBaseSuccess": "Datei erfolgreich entfernt"
+ },
+ "bottom": "Das Ende ist erreicht",
+ "config": {
+ "showFilesInKnowledgeBase": "In der Wissensdatenbank angezeigte Inhalte"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Datei hochladen",
+ "folder": "Ordner hochladen",
+ "knowledgeBase": "Neue Wissensdatenbank erstellen"
+ },
+ "or": "oder",
+ "title": "Ziehen Sie Dateien oder Ordner hierher"
+ },
+ "title": {
+ "createdAt": "Erstellungsdatum",
+ "size": "Größe",
+ "title": "Datei"
+ },
+ "total": {
+ "fileCount": "Insgesamt {{count}} Elemente",
+ "selectedCount": "Ausgewählt {{count}} Elemente"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Textblöcke sind noch nicht vollständig vektorisiert, was die Funktion der semantischen Suche beeinträchtigen kann. Um die Suchqualität zu verbessern, vektorisieren Sie die Textblöcke.",
+ "error": "Vektorisierung fehlgeschlagen",
+ "errorResult": "Vektorisierung fehlgeschlagen, bitte überprüfen Sie und versuchen Sie es erneut. Grund für das Scheitern:",
+ "processing": "Textblöcke werden vektorisiert, bitte haben Sie Geduld.",
+ "success": "Alle aktuellen Textblöcke sind vektorisiert."
+ },
+ "embeddings": "Vektorisierung",
+ "status": {
+ "error": "Zerteilung fehlgeschlagen",
+ "errorResult": "Zerteilung fehlgeschlagen, bitte überprüfen Sie und versuchen Sie es erneut. Grund für das Scheitern:",
+ "processing": "Zerteilung läuft",
+ "processingTip": "Der Server zerteilt die Textblöcke. Das Schließen der Seite hat keinen Einfluss auf den Zerteilungsfortschritt."
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Zurück"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Benutzerdefiniertes Modell, standardmäßig unterstützt es sowohl Funktionsaufrufe als auch visuelle Erkennung. Bitte überprüfen Sie die Verfügbarkeit dieser Fähigkeiten basierend auf den tatsächlichen Gegebenheiten.",
+ "file": "Dieses Modell unterstützt das Hochladen von Dateien und deren Erkennung.",
+ "functionCall": "Dieses Modell unterstützt Funktionsaufrufe.",
+ "tokens": "Dieses Modell unterstützt maximal {{tokens}} Tokens pro Sitzung.",
+ "vision": "Dieses Modell unterstützt die visuelle Erkennung."
+ },
+ "removed": "Das Modell wurde aus der Liste entfernt. Wenn Sie die Auswahl aufheben, wird es automatisch entfernt."
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "Kein aktiviertes Modell. Bitte gehen Sie zu den Einstellungen, um es zu aktivieren.",
+ "provider": "Anbieter"
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/discover.json b/DigitalHumanWeb/locales/de-DE/discover.json
new file mode 100644
index 0000000..4102d2d
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Assistent hinzufügen",
+ "addAgentAndConverse": "Assistent hinzufügen und chatten",
+ "addAgentSuccess": "Erfolgreich hinzugefügt",
+ "conversation": {
+ "l1": "Hallo, ich bin **{{name}}**, du kannst mir jede Frage stellen, ich werde mein Bestes tun, um dir zu antworten ~",
+ "l2": "Hier sind meine Fähigkeiten: ",
+ "l3": "Lass uns das Gespräch beginnen!"
+ },
+ "description": "Assistentenbeschreibung",
+ "detail": "Details",
+ "list": "Assistentenliste",
+ "more": "Mehr",
+ "plugins": "Integrations-Plugins",
+ "recentSubmits": "Neueste Aktualisierungen",
+ "suggestions": "Ähnliche Empfehlungen",
+ "systemRole": "Assistenteneinstellungen",
+ "try": "Ausprobieren"
+ },
+ "back": "Zurück zur Entdeckung",
+ "category": {
+ "assistant": {
+ "academic": "Akademisch",
+ "all": "Alle",
+ "career": "Karriere",
+ "copywriting": "Texterstellung",
+ "design": "Design",
+ "education": "Bildung",
+ "emotions": "Emotionen",
+ "entertainment": "Unterhaltung",
+ "games": "Spiele",
+ "general": "Allgemein",
+ "life": "Leben",
+ "marketing": "Marketing",
+ "office": "Büro",
+ "programming": "Programmierung",
+ "translation": "Übersetzung"
+ },
+ "plugin": {
+ "all": "Alle",
+ "gaming-entertainment": "Gaming & Unterhaltung",
+ "life-style": "Lebensstil",
+ "media-generate": "Medienerstellung",
+ "science-education": "Wissenschaft & Bildung",
+ "social": "Soziale Medien",
+ "stocks-finance": "Aktien & Finanzen",
+ "tools": "Praktische Werkzeuge",
+ "web-search": "Websuche"
+ }
+ },
+ "cleanFilter": "Filter zurücksetzen",
+ "create": "Erstellen",
+ "createGuide": {
+ "func1": {
+ "desc1": "Gehe im Chatfenster über die Einstellungen in der oberen rechten Ecke zur Seite, auf der du deinen Assistenten einreichen möchtest;",
+ "desc2": "Klicke auf die Schaltfläche 'In den Assistentenmarkt einreichen' in der oberen rechten Ecke.",
+ "tag": "Methode Eins",
+ "title": "Einreichung über LobeChat"
+ },
+ "func2": {
+ "button": "Gehe zum Github-Assistenten-Repository",
+ "desc": "Wenn du den Assistenten im Index hinzufügen möchtest, erstelle einen Eintrag mit agent-template.json oder agent-template-full.json im plugins-Verzeichnis, schreibe eine kurze Beschreibung und markiere sie entsprechend, und erstelle dann eine Pull-Anfrage.",
+ "tag": "Methode Zwei",
+ "title": "Einreichung über Github"
+ }
+ },
+ "dislike": "Nicht mögen",
+ "filter": "Filter",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Alle Autoren",
+ "followed": "Folgte Autoren",
+ "title": "Autorenbereich"
+ },
+ "contentLength": "Minimale Kontextlänge",
+ "maxToken": {
+ "title": "Maximale Länge festlegen (Token)",
+ "unlimited": "Unbegrenzt"
+ },
+ "other": {
+ "functionCall": "Funktionaufrufe unterstützen",
+ "title": "Sonstiges",
+ "vision": "Visuelle Erkennung unterstützen",
+ "withKnowledge": "Mit Wissensdatenbank",
+ "withTool": "Mit Plugin"
+ },
+ "pricing": "Modellpreise",
+ "timePeriod": {
+ "all": "Alle Zeiten",
+ "day": "Letzte 24 Stunden",
+ "month": "Letzte 30 Tage",
+ "title": "Zeitraum",
+ "week": "Letzte 7 Tage",
+ "year": "Letztes Jahr"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Empfohlene Assistenten",
+ "featuredModels": "Empfohlene Modelle",
+ "featuredProviders": "Empfohlene Modellanbieter",
+ "featuredTools": "Empfohlene Plugins",
+ "more": "Mehr entdecken"
+ },
+ "like": "Mögen",
+ "models": {
+ "chat": "Gespräch starten",
+ "contentLength": "Maximale Kontextlänge",
+ "free": "Kostenlos",
+ "guide": "Konfigurationsanleitung",
+ "list": "Modellliste",
+ "more": "Mehr",
+ "parameterList": {
+ "defaultValue": "Standardwert",
+ "docs": "Dokumentation ansehen",
+ "frequency_penalty": {
+ "desc": "Diese Einstellung passt die Häufigkeit an, mit der das Modell bestimmte Wörter, die bereits im Input erschienen sind, wiederverwendet. Höhere Werte verringern die Wahrscheinlichkeit dieser Wiederholung, während negative Werte den gegenteiligen Effekt haben. Die Wortstrafe erhöht sich nicht mit der Anzahl der Vorkommen. Negative Werte fördern die Wiederverwendung von Wörtern.",
+ "title": "Häufigkeitsstrafe"
+ },
+ "max_tokens": {
+ "desc": "Diese Einstellung definiert die maximale Länge, die das Modell in einer einzelnen Antwort generieren kann. Höhere Werte ermöglichen es dem Modell, längere Antworten zu generieren, während niedrigere Werte die Länge der Antwort einschränken und sie prägnanter machen. Eine angemessene Anpassung dieses Wertes je nach Anwendungsfall kann helfen, die gewünschte Länge und Detailgenauigkeit der Antwort zu erreichen.",
+ "title": "Begrenzung der Antwortlänge"
+ },
+ "presence_penalty": {
+ "desc": "Diese Einstellung soll die Wiederverwendung von Wörtern basierend auf deren Häufigkeit im Input steuern. Sie versucht, weniger häufig Wörter zu verwenden, die im Input häufig vorkommen, wobei die Verwendungshäufigkeit proportional zur Häufigkeit ist. Die Wortstrafe erhöht sich mit der Anzahl der Vorkommen. Negative Werte fördern die Wiederverwendung von Wörtern.",
+ "title": "Themenfrische"
+ },
+ "range": "Bereich",
+ "temperature": {
+ "desc": "Diese Einstellung beeinflusst die Vielfalt der Antworten des Modells. Niedrigere Werte führen zu vorhersehbareren und typischen Antworten, während höhere Werte zu vielfältigeren und weniger häufigen Antworten anregen. Wenn der Wert auf 0 gesetzt wird, gibt das Modell für einen bestimmten Input immer die gleiche Antwort.",
+ "title": "Zufälligkeit"
+ },
+ "title": "Modellparameter",
+ "top_p": {
+ "desc": "Diese Einstellung beschränkt die Auswahl des Modells auf die Wörter mit der höchsten Wahrscheinlichkeit, die einen bestimmten Anteil erreichen: Es werden nur die Wörter ausgewählt, deren kumulative Wahrscheinlichkeit P erreicht. Niedrigere Werte machen die Antworten des Modells vorhersehbarer, während die Standardeinstellung dem Modell erlaubt, aus dem gesamten Wortschatz auszuwählen.",
+ "title": "Kernsampling"
+ },
+ "type": "Typ"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat unterstützt die Verwendung eines benutzerdefinierten API-Schlüssels für diesen Anbieter.",
+ "input": "Eingabepreis",
+ "inputTooltip": "Kosten pro Million Token",
+ "latency": "Latenz",
+ "latencyTooltip": "Durchschnittliche Antwortzeit des Anbieters für das erste Token",
+ "maxOutput": "Maximale Ausgabelänge",
+ "maxOutputTooltip": "Maximale Anzahl von Tokens, die dieser Endpunkt generieren kann",
+ "officialTooltip": "Offizieller LobeHub-Dienst",
+ "output": "Ausgabepreis",
+ "outputTooltip": "Kosten pro Million Token",
+ "streamCancellationTooltip": "Dieser Anbieter unterstützt die Stream-Abbruchfunktion.",
+ "throughput": "Durchsatz",
+ "throughputTooltip": "Durchschnittliche Anzahl von Tokens, die pro Sekunde bei Stream-Anfragen übertragen werden"
+ },
+ "suggestions": "Verwandte Modelle",
+ "supportedProviders": "Anbieter, die dieses Modell unterstützen"
+ },
+ "plugins": {
+ "community": "Community-Plugins",
+ "install": "Plugin installieren",
+ "installed": "Installiert",
+ "list": "Plugin-Liste",
+ "meta": {
+ "description": "Beschreibung",
+ "parameter": "Parameter",
+ "title": "Werkzeugparameter",
+ "type": "Typ"
+ },
+ "more": "Mehr",
+ "official": "Offizielle Plugins",
+ "recentSubmits": "Neueste Aktualisierungen",
+ "suggestions": "Ähnliche Empfehlungen"
+ },
+ "providers": {
+ "config": "Anbieter konfigurieren",
+ "list": "Liste der Modellanbieter",
+ "modelCount": "{{count}} Modelle",
+ "modelSite": "Modell-Dokumentation",
+ "more": "Mehr",
+ "officialSite": "Offizielle Webseite",
+ "showAllModels": "Alle Modelle anzeigen",
+ "suggestions": "Verwandte Anbieter",
+ "supportedModels": "Unterstützte Modelle"
+ },
+ "search": {
+ "placeholder": "Suche nach Namen, Beschreibung oder Schlüsselwörtern...",
+ "result": "{{count}} Ergebnisse zu {{keyword}}",
+ "searching": "Suche läuft..."
+ },
+ "sort": {
+ "mostLiked": "Am meisten gemocht",
+ "mostUsed": "Am häufigsten verwendet",
+ "newest": "Neueste zuerst",
+ "oldest": "Älteste zuerst",
+ "recommended": "Empfohlen"
+ },
+ "tab": {
+ "assistants": "Assistenten",
+ "home": "Startseite",
+ "models": "Modelle",
+ "plugins": "Plugins",
+ "providers": "Modellanbieter"
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/error.json b/DigitalHumanWeb/locales/de-DE/error.json
new file mode 100644
index 0000000..8b6a66b
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Weitermachen",
+ "desc": "{{greeting}}, es freut mich, dass ich dir weiterhelfen kann. Lass uns das Gespräch fortsetzen.",
+ "title": "Willkommen zurück, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Zurück zur Startseite",
+ "desc": "Versuchen Sie es später erneut oder kehren Sie in die bekannte Welt zurück",
+ "retry": "Erneut laden",
+ "title": "Ein Problem ist aufgetreten auf der Seite.."
+ },
+ "fetchError": "Anforderung fehlgeschlagen",
+ "fetchErrorDetail": "Fehlerdetails",
+ "notFound": {
+ "backHome": "Zurück zur Startseite",
+ "check": "Bitte überprüfen Sie, ob Ihre URL korrekt ist.",
+ "desc": "Die von Ihnen gesuchte Seite konnte nicht gefunden werden.",
+ "title": "In unbekanntes Gebiet geraten?"
+ },
+ "pluginSettings": {
+ "desc": "Führen Sie die folgende Konfiguration durch, um das Plugin zu verwenden",
+ "title": "{{name}} Plugin-Konfiguration"
+ },
+ "response": {
+ "400": "Entschuldigung, der Server versteht Ihre Anfrage nicht. Bitte überprüfen Sie die Richtigkeit Ihrer Anfrageparameter",
+ "401": "Entschuldigung, der Server hat Ihre Anfrage abgelehnt. Möglicherweise aufgrund unzureichender Berechtigungen oder fehlender gültiger Authentifizierung",
+ "403": "Entschuldigung, der Server hat Ihre Anfrage abgelehnt. Sie haben keine Berechtigung, auf diesen Inhalt zuzugreifen",
+ "404": "Entschuldigung, der Server konnte die angeforderte Seite oder Ressource nicht finden. Bitte überprüfen Sie die Richtigkeit Ihrer URL",
+ "405": "Entschuldigung, der Server unterstützt die von Ihnen verwendete Anfragemethode nicht. Bitte überprüfen Sie die Richtigkeit Ihrer Anfragemethode",
+ "406": "Entschuldigung, der Server kann die Anfrage aufgrund der Eigenschaften des angeforderten Inhalts nicht erfüllen",
+ "407": "Entschuldigung, Sie müssen sich zuerst authentifizieren, um mit dieser Anfrage fortzufahren",
+ "408": "Entschuldigung, der Server hat beim Warten auf die Anfrage eine Zeitüberschreitung. Bitte überprüfen Sie Ihre Netzwerkverbindung und versuchen Sie es erneut",
+ "409": "Entschuldigung, die Anfrage konnte aufgrund eines Konflikts nicht verarbeitet werden, möglicherweise weil der Zustand der Ressource nicht mit der Anfrage kompatibel ist",
+ "410": "Entschuldigung, die angeforderte Ressource wurde dauerhaft entfernt und kann nicht gefunden werden",
+ "411": "Entschuldigung, der Server kann die Anfrage ohne gültige Inhaltslänge nicht verarbeiten",
+ "412": "Entschuldigung, Ihre Anfrage erfüllt die Bedingungen auf Serverseite nicht und kann nicht abgeschlossen werden",
+ "413": "Entschuldigung, Ihre Anfragedaten sind zu groß und können vom Server nicht verarbeitet werden",
+ "414": "Entschuldigung, die URI Ihrer Anfrage ist zu lang und kann vom Server nicht verarbeitet werden",
+ "415": "Entschuldigung, der Server kann das angeforderte Medienformat nicht verarbeiten",
+ "416": "Entschuldigung, der Server kann Ihren Anforderungen nicht entsprechen",
+ "417": "Entschuldigung, der Server kann Ihre Erwartungen nicht erfüllen",
+ "422": "Entschuldigung, Ihre Anfrage ist syntaktisch korrekt, aber aufgrund semantischer Fehler kann nicht geantwortet werden",
+ "423": "Entschuldigung, die angeforderte Ressource ist gesperrt",
+ "424": "Entschuldigung, aufgrund eines früheren Fehlers kann die aktuelle Anfrage nicht abgeschlossen werden",
+ "426": "Entschuldigung, der Server verlangt, dass Ihr Client auf eine höhere Protokollversion aktualisiert wird",
+ "428": "Entschuldigung, der Server verlangt Voraussetzungen und fordert, dass Ihre Anfrage die richtigen Bedingungsköpfe enthält",
+ "429": "Entschuldigung, Ihre Anfrage ist zu häufig. Der Server ist etwas überlastet. Bitte versuchen Sie es später erneut",
+ "431": "Entschuldigung, der Header Ihrer Anfrage ist zu groß und kann vom Server nicht verarbeitet werden",
+ "451": "Entschuldigung, aus rechtlichen Gründen verweigert der Server die Bereitstellung dieser Ressource",
+ "500": "Entschuldigung, der Server hat anscheinend einige Schwierigkeiten und kann Ihre Anfrage vorübergehend nicht bearbeiten. Bitte versuchen Sie es später erneut",
+ "502": "Entschuldigung, der Server scheint die Orientierung verloren zu haben und kann vorübergehend keinen Service bereitstellen. Bitte versuchen Sie es später erneut",
+ "503": "Entschuldigung, der Server kann Ihre Anfrage derzeit nicht verarbeiten. Möglicherweise aufgrund von Überlastung oder Wartungsarbeiten. Bitte versuchen Sie es später erneut",
+ "504": "Entschuldigung, der Server hat keine Antwort vom Upstream-Server erhalten. Bitte versuchen Sie es später erneut",
+ "AgentRuntimeError": "Es ist ein Fehler bei der Ausführung des Lobe-Sprachmodells aufgetreten. Bitte überprüfen Sie die folgenden Informationen oder versuchen Sie es erneut.",
+ "FreePlanLimit": "Sie sind derzeit ein kostenloser Benutzer und können diese Funktion nicht nutzen. Bitte aktualisieren Sie auf ein kostenpflichtiges Abonnement, um fortzufahren.",
+ "InvalidAccessCode": "Das Passwort ist ungültig oder leer. Bitte geben Sie das richtige Zugangspasswort ein oder fügen Sie einen benutzerdefinierten API-Schlüssel hinzu.",
+ "InvalidBedrockCredentials": "Die Bedrock-Authentifizierung ist fehlgeschlagen. Bitte überprüfen Sie AccessKeyId/SecretAccessKey und versuchen Sie es erneut.",
+ "InvalidClerkUser": "Entschuldigung, du bist derzeit nicht angemeldet. Bitte melde dich an oder registriere ein Konto, um fortzufahren.",
+ "InvalidGithubToken": "Der persönliche Zugriffstoken für Github ist ungültig oder leer. Bitte überprüfen Sie den persönlichen Zugriffstoken für Github und versuchen Sie es erneut.",
+ "InvalidOllamaArgs": "Ollama-Konfiguration ist ungültig. Bitte überprüfen Sie die Ollama-Konfiguration und versuchen Sie es erneut.",
+ "InvalidProviderAPIKey": "{{provider}} API-Schlüssel ist ungültig oder leer. Bitte überprüfen Sie den {{provider}} API-Schlüssel und versuchen Sie es erneut.",
+ "LocationNotSupportError": "Entschuldigung, Ihr Standort unterstützt diesen Modellservice möglicherweise aufgrund von regionalen Einschränkungen oder nicht aktivierten Diensten nicht. Bitte überprüfen Sie, ob der aktuelle Standort die Verwendung dieses Dienstes unterstützt, oder versuchen Sie, andere Standortinformationen zu verwenden.",
+ "NoOpenAIAPIKey": "Der OpenAI-API-Schlüssel ist leer. Bitte fügen Sie einen benutzerdefinierten OpenAI-API-Schlüssel hinzu",
+ "OllamaBizError": "Fehler bei der Anforderung des Ollama-Dienstes. Bitte überprüfen Sie die folgenden Informationen oder versuchen Sie es erneut.",
+ "OllamaServiceUnavailable": "Der Ollama-Dienst ist nicht verfügbar. Bitte überprüfen Sie, ob Ollama ordnungsgemäß ausgeführt wird und ob die CORS-Konfiguration von Ollama korrekt ist.",
+ "OpenAIBizError": "Fehler bei der Anforderung des OpenAI-Dienstes. Bitte überprüfen Sie die folgenden Informationen oder versuchen Sie es erneut.",
+ "PluginApiNotFound": "Entschuldigung, das API des Plugins im Plugin-Manifest existiert nicht. Bitte überprüfen Sie, ob Ihre Anfragemethode mit dem Plugin-Manifest-API übereinstimmt",
+ "PluginApiParamsError": "Entschuldigung, die Eingabeüberprüfung der Plugin-Anfrage ist fehlgeschlagen. Bitte überprüfen Sie, ob die Eingabe mit den API-Beschreibungsinformationen übereinstimmt",
+ "PluginFailToTransformArguments": "Es tut uns leid, die Plugin-Aufrufargumente konnten nicht transformiert werden. Bitte versuchen Sie, die Assistentennachricht erneut zu generieren, oder wechseln Sie zu einem leistungsstärkeren AI-Modell mit Tools Calling-Fähigkeiten und versuchen Sie es erneut.",
+ "PluginGatewayError": "Entschuldigung, es ist ein Fehler im Plugin-Gateway aufgetreten. Bitte überprüfen Sie die Plugin-Gateway-Konfiguration auf Richtigkeit",
+ "PluginManifestInvalid": "Entschuldigung, das Manifest des Plugins hat die Überprüfung nicht bestanden. Bitte überprüfen Sie das Format des Manifests",
+ "PluginManifestNotFound": "Entschuldigung, der Server konnte das Manifest (manifest.json) des Plugins nicht finden. Bitte überprüfen Sie die Adresse der Plugin-Beschreibungsdatei",
+ "PluginMarketIndexInvalid": "Entschuldigung, die Plugin-Marktindexüberprüfung ist fehlgeschlagen. Bitte überprüfen Sie das Format der Indexdatei",
+ "PluginMarketIndexNotFound": "Entschuldigung, der Server konnte den Plugin-Marktindex nicht finden. Bitte überprüfen Sie die Indexadresse auf Richtigkeit",
+ "PluginMetaInvalid": "Entschuldigung, die Metadaten des Plugins haben die Überprüfung nicht bestanden. Bitte überprüfen Sie das Format der Plugin-Metadaten",
+ "PluginMetaNotFound": "Entschuldigung, das Plugin wurde im Index nicht gefunden. Bitte überprüfen Sie die Konfigurationsinformationen des Plugins im Index",
+ "PluginOpenApiInitError": "Entschuldigung, die Initialisierung des OpenAPI-Clients ist fehlgeschlagen. Bitte überprüfen Sie die Konfigurationsinformationen des OpenAPI auf Richtigkeit",
+ "PluginServerError": "Fehler bei der Serveranfrage des Plugins. Bitte überprüfen Sie die Fehlerinformationen unten in Ihrer Plugin-Beschreibungsdatei, Plugin-Konfiguration oder Serverimplementierung",
+ "PluginSettingsInvalid": "Das Plugin muss korrekt konfiguriert werden, um verwendet werden zu können. Bitte überprüfen Sie Ihre Konfiguration auf Richtigkeit",
+ "ProviderBizError": "Fehler bei der Anforderung des {{provider}}-Dienstes. Bitte überprüfen Sie die folgenden Informationen oder versuchen Sie es erneut.",
+ "StreamChunkError": "Fehler beim Parsen des Nachrichtenchunks der Streaming-Anfrage. Bitte überprüfen Sie, ob die aktuelle API-Schnittstelle den Standards entspricht, oder wenden Sie sich an Ihren API-Anbieter.",
+ "SubscriptionPlanLimit": "Ihr Abonnementkontingent wurde aufgebraucht und Sie können diese Funktion nicht nutzen. Bitte aktualisieren Sie auf ein höheres Abonnement oder kaufen Sie ein Ressourcenpaket, um fortzufahren.",
+ "UnknownChatFetchError": "Es tut uns leid, es ist ein unbekannter Anforderungsfehler aufgetreten. Bitte überprüfen Sie die folgenden Informationen oder versuchen Sie es erneut."
+ },
+ "stt": {
+ "responseError": "Serviceanfrage fehlgeschlagen. Bitte überprüfen Sie die Konfiguration oder versuchen Sie es erneut"
+ },
+ "tts": {
+ "responseError": "Serviceanfrage fehlgeschlagen. Bitte überprüfen Sie die Konfiguration oder versuchen Sie es erneut"
+ },
+ "unlock": {
+ "addProxyUrl": "Fügen Sie die OpenAI-Proxy-URL hinzu (optional)",
+ "apiKey": {
+ "description": "Geben Sie Ihren {{name}} API-Schlüssel ein, um die Sitzung zu starten.",
+ "title": "Verwenden Sie Ihren benutzerdefinierten {{name}} API-Schlüssel"
+ },
+ "closeMessage": "Hinweis schließen",
+ "confirm": "Bestätigen und erneut versuchen",
+ "oauth": {
+ "description": "Der Administrator hat die einheitliche Anmeldeauthentifizierung aktiviert. Klicken Sie unten auf die Schaltfläche, um sich anzumelden und die App zu entsperren.",
+ "success": "Anmeldung erfolgreich",
+ "title": "Anmelden",
+ "welcome": "Willkommen!"
+ },
+ "password": {
+ "description": "Der Administrator hat die App-Verschlüsselung aktiviert. Gib das App-Passwort ein, um die App zu entsperren. Das Passwort muss nur einmal eingegeben werden.",
+ "placeholder": "Passwort eingeben",
+ "title": "App entsperren durch Passworteingabe"
+ },
+ "tabs": {
+ "apiKey": "Benutzerdefinierter API-Schlüssel",
+ "password": "Passwort"
+ }
+ },
+ "upload": {
+ "desc": "Details: {{detail}}",
+ "fileOnlySupportInServerMode": "Der aktuelle Bereitstellungsmodus unterstützt das Hochladen von Nicht-Bilddateien nicht. Um Dateien im {{ext}}-Format hochzuladen, wechseln Sie bitte zum Serverdatenbank-Bereitstellungsmodus oder verwenden Sie den {{cloud}}-Dienst.",
+ "networkError": "Bitte überprüfen Sie, ob Ihre Internetverbindung stabil ist, und prüfen Sie die Cross-Origin-Konfiguration des Dateispeicherdienstes.",
+ "title": "Dateiupload fehlgeschlagen. Bitte überprüfen Sie Ihre Netzwerkverbindung und versuchen Sie es später erneut.",
+ "unknownError": "Fehlerursache: {{reason}}",
+ "uploadFailed": "Der Datei-Upload ist fehlgeschlagen."
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/file.json b/DigitalHumanWeb/locales/de-DE/file.json
new file mode 100644
index 0000000..eccf342
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Verwalten Sie Ihre Dateien und Wissensdatenbank",
+ "detail": {
+ "basic": {
+ "createdAt": "Erstellungszeit",
+ "filename": "Dateiname",
+ "size": "Dateigröße",
+ "title": "Grundinformationen",
+ "type": "Format",
+ "updatedAt": "Aktualisierungszeit"
+ },
+ "data": {
+ "chunkCount": "Anzahl der Teile",
+ "embedding": {
+ "default": "Noch nicht vektorisiert",
+ "error": "Fehler",
+ "pending": "Warten auf Start",
+ "processing": "Wird bearbeitet",
+ "success": "Abgeschlossen"
+ },
+ "embeddingStatus": "Vektorisierung"
+ }
+ },
+ "empty": "Keine hochgeladenen Dateien/Ordner vorhanden",
+ "header": {
+ "actions": {
+ "newFolder": "Neuen Ordner erstellen",
+ "uploadFile": "Datei hochladen",
+ "uploadFolder": "Ordner hochladen"
+ },
+ "uploadButton": "Hochladen"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Die Wissensdatenbank wird gelöscht, die darin enthaltenen Dateien werden nicht gelöscht, sondern in den gesamten Dateien verschoben. Nach dem Löschen der Wissensdatenbank kann sie nicht wiederhergestellt werden, bitte vorsichtig vorgehen.",
+ "empty": "Klicken Sie auf <1>+1>, um eine Wissensdatenbank zu erstellen"
+ },
+ "new": "Neue Wissensdatenbank",
+ "title": "Wissensdatenbank"
+ },
+ "networkError": "Fehler beim Abrufen der Wissensdatenbank. Bitte überprüfen Sie Ihre Netzwerkverbindung und versuchen Sie es erneut.",
+ "notSupportGuide": {
+ "desc": "Die aktuelle Bereitstellung ist im Client-Datenbankmodus und unterstützt keine Dateiverwaltungsfunktionen. Bitte wechseln Sie zu <1>Server-Datenbank-Bereitstellungsmodus1> oder verwenden Sie direkt <3>LobeChat Cloud3>",
+ "features": {
+ "allKind": {
+ "desc": "Unterstützt gängige Dateitypen, einschließlich Word, PPT, Excel, PDF, TXT und andere gängige Dokumentformate sowie JS, Python und andere gängige Code-Dateien",
+ "title": "Verschiedene Dateitypen analysieren"
+ },
+ "embeddings": {
+ "desc": "Verwendet leistungsstarke Vektormodelle zur Vektorisierung von Textteilen, um eine semantische Suche nach Dateiinhalten zu ermöglichen",
+ "title": "Vektor-Semantisierung"
+ },
+ "repos": {
+ "desc": "Unterstützt die Erstellung von Wissensdatenbanken und ermöglicht das Hinzufügen verschiedener Dateitypen, um Ihr Fachwissen aufzubauen",
+ "title": "Wissensdatenbank"
+ }
+ },
+ "title": "Der aktuelle Bereitstellungsmodus unterstützt keine Dateiverwaltung"
+ },
+ "preview": {
+ "downloadFile": "Datei herunterladen",
+ "unsupportedFileAndContact": "Dieses Dateiformat wird derzeit nicht für die Online-Vorschau unterstützt. Wenn Sie eine Vorschau wünschen, können Sie uns gerne <1>Feedback geben1>."
+ },
+ "searchFilePlaceholder": "Datei suchen",
+ "tab": {
+ "all": "Alle Dateien",
+ "audios": "Audio",
+ "documents": "Dokumente",
+ "images": "Bilder",
+ "videos": "Videos",
+ "websites": "Webseiten"
+ },
+ "title": "Dateien",
+ "uploadDock": {
+ "body": {
+ "collapse": "Einklappen",
+ "item": {
+ "done": "Hochgeladen",
+ "error": "Hochladen fehlgeschlagen, bitte erneut versuchen",
+ "pending": "Bereit zum Hochladen...",
+ "processing": "Datei wird bearbeitet...",
+ "restTime": "Verbleibende Zeit {{time}}"
+ }
+ },
+ "totalCount": "Insgesamt {{count}} Elemente",
+ "uploadStatus": {
+ "error": "Fehler beim Hochladen",
+ "pending": "Warten auf Upload",
+ "processing": "Wird hochgeladen",
+ "success": "Hochladen abgeschlossen",
+ "uploading": "Wird hochgeladen"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/knowledgeBase.json b/DigitalHumanWeb/locales/de-DE/knowledgeBase.json
new file mode 100644
index 0000000..a4fa795
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "Datei erfolgreich hinzugefügt, <1>jetzt ansehen1>",
+ "confirm": "Hinzufügen",
+ "id": {
+ "placeholder": "Bitte wählen Sie die zu hinzuzufügende Wissensdatenbank",
+ "required": "Bitte wählen Sie eine Wissensdatenbank",
+ "title": "Ziel-Wissensdatenbank"
+ },
+ "title": "Zur Wissensdatenbank hinzufügen",
+ "totalFiles": "{{count}} Datei(en) ausgewählt"
+ },
+ "createNew": {
+ "confirm": "Neu erstellen",
+ "description": {
+ "placeholder": "Beschreibung der Wissensdatenbank (optional)"
+ },
+ "formTitle": "Grundinformationen",
+ "name": {
+ "placeholder": "Name der Wissensdatenbank",
+ "required": "Bitte geben Sie den Namen der Wissensdatenbank ein"
+ },
+ "title": "Wissensdatenbank neu erstellen"
+ },
+ "tab": {
+ "evals": "Bewertungen",
+ "files": "Dokumente",
+ "settings": "Einstellungen",
+ "testing": "Rückruf-Test"
+ },
+ "title": "Wissensdatenbank"
+}
diff --git a/DigitalHumanWeb/locales/de-DE/market.json b/DigitalHumanWeb/locales/de-DE/market.json
new file mode 100644
index 0000000..dce529f
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Helfer hinzufügen",
+ "addAgentAndConverse": "Assistent hinzufügen und Konversation starten",
+ "addAgentSuccess": "Erfolgreich hinzugefügt",
+ "guide": {
+ "func1": {
+ "desc1": "Gehen Sie im Chatfenster über das Einstellungssymbol oben rechts zur Seite, auf der Sie die Einstellungen für Ihren Helfer einreichen können.",
+ "desc2": "Klicken Sie auf die Schaltfläche 'Zum Helfer-Marktplatz einreichen' oben rechts.",
+ "tag": "Methode 1",
+ "title": "Über LobeChat einreichen"
+ },
+ "func2": {
+ "button": "Zum Github-Helfer-Repository gehen",
+ "desc": "Wenn Sie einen Helfer zum Index hinzufügen möchten, erstellen Sie einen Eintrag in den Plugins-Verzeichnissen agent-template.json oder agent-template-full.json, verfassen Sie eine kurze Beschreibung und markieren Sie diese entsprechend. Anschließend erstellen Sie eine Pull-Anfrage.",
+ "tag": "Methode 2",
+ "title": "Über Github einreichen"
+ }
+ },
+ "search": {
+ "placeholder": "Helfername, Beschreibung oder Stichwort suchen..."
+ },
+ "sidebar": {
+ "comment": "Kommentare",
+ "prompt": "Hinweis",
+ "title": "Helfer-Details"
+ },
+ "submitAgent": "Helfer einreichen",
+ "title": {
+ "allAgents": "Alle Helfer",
+ "recentSubmits": "Kürzlich hinzugefügt"
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/metadata.json b/DigitalHumanWeb/locales/de-DE/metadata.json
new file mode 100644
index 0000000..d5e0175
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} bietet dir das beste Erlebnis mit ChatGPT, Claude, Gemini und OLLaMA WebUI",
+ "title": "{{appName}}: Persönliches KI-Effizienzwerkzeug, gib dir selbst ein schlaueres Gehirn"
+ },
+ "discover": {
+ "assistants": {
+ "description": "Inhaltserstellung, Textverfassung, Fragen und Antworten, Bildgenerierung, Videoerstellung, Sprachsynthese, intelligente Agenten, automatisierte Workflows, passe deinen eigenen AI / GPTs / OLLaMA intelligenten Assistenten an",
+ "title": "KI-Assistenten"
+ },
+ "description": "Inhaltserstellung, Textverfassung, Fragen und Antworten, Bildgenerierung, Videoerstellung, Sprachsynthese, intelligente Agenten, automatisierte Workflows, benutzerdefinierte AI-Anwendungen, passe deine eigene AI-Anwendungsplattform an",
+ "models": {
+ "description": "Entdecke gängige AI-Modelle wie OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "KI-Modelle"
+ },
+ "plugins": {
+ "description": "Entdecken Sie die Möglichkeiten zur Erstellung von Diagrammen, wissenschaftlichen Inhalten, Bildgenerierung, Videoerstellung, Sprachsynthese und automatisierten Workflows, um Ihrem Assistenten eine Vielzahl von Plugin-Funktionen zu integrieren.",
+ "title": "KI-Plugins"
+ },
+ "providers": {
+ "description": "Entdecke führende Modellanbieter wie OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "Anbieter von KI-Modellen"
+ },
+ "search": "Suche",
+ "title": "Entdecken"
+ },
+ "plugins": {
+ "description": "Suche, Diagrammerstellung, akademisch, Bilderzeugung, Videoerzeugung, Spracherzeugung, automatisierte Workflows, passe die ToolCall-Plugin-Funktionen von ChatGPT / Claude an",
+ "title": "Plugin-Markt"
+ },
+ "welcome": {
+ "description": "{{appName}} bietet dir das beste Erlebnis mit ChatGPT, Claude, Gemini und OLLaMA WebUI",
+ "title": "Willkommen bei {{appName}}: Persönliches KI-Effizienzwerkzeug, gib dir selbst ein schlaueres Gehirn"
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/migration.json b/DigitalHumanWeb/locales/de-DE/migration.json
new file mode 100644
index 0000000..3982d1c
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Lokale Daten löschen",
+ "downloadBackup": "Datenbackup herunterladen",
+ "reUpgrade": "Erneut aktualisieren",
+ "start": "Starten",
+ "upgrade": "Upgrade durchführen"
+ },
+ "clear": {
+ "confirm": "Lokale Daten werden gelöscht (Globale Einstellungen bleiben unberührt). Bitte bestätigen Sie, dass Sie ein Datenbackup heruntergeladen haben."
+ },
+ "description": "In der neuen Version hat die Datenspeicherung von {{appName}} einen riesigen Sprung gemacht. Daher müssen wir die alten Daten aktualisieren, um dir ein besseres Nutzungserlebnis zu bieten.",
+ "features": {
+ "capability": {
+ "desc": "Basierend auf der IndexedDB-Technologie, die genug Platz für alle deine Lebensnachrichten bietet.",
+ "title": "Großer Speicher"
+ },
+ "performance": {
+ "desc": "Millionen von Nachrichten werden automatisch indiziert, Abfragen reagieren in Millisekunden.",
+ "title": "Hohe Leistung"
+ },
+ "use": {
+ "desc": "Unterstützt die Suche nach Titel, Beschreibung, Tags, Nachrichteninhalt und sogar übersetzten Texten, wodurch die Effizienz der täglichen Suche erheblich gesteigert wird.",
+ "title": "Benutzerfreundlicher"
+ }
+ },
+ "title": "Datenentwicklung von {{appName}}",
+ "upgrade": {
+ "error": {
+ "subTitle": "Es tut uns leid, während des Datenbank-Upgrades ist ein Fehler aufgetreten. Bitte versuche die folgenden Lösungen: A. Leere die lokalen Daten und importiere die Sicherungsdaten erneut; B. Klicke auf die Schaltfläche „Erneut upgraden“.
Wenn der Fehler weiterhin besteht, bitte <1>ein Problem melden1>, wir werden dir umgehend helfen.",
+ "title": "Datenbank-Upgrade fehlgeschlagen"
+ },
+ "success": {
+ "subTitle": "Die Datenbank von {{appName}} wurde auf die neueste Version aktualisiert, beginne sofort mit dem Erlebnis.",
+ "title": "Datenbank-Upgrade erfolgreich"
+ }
+ },
+ "upgradeTip": "Das Upgrade dauert etwa 10 bis 20 Sekunden. Bitte schließe {{appName}} während des Upgrade-Vorgangs nicht."
+ },
+ "migrateError": {
+ "missVersion": "Die importierten Daten enthalten keine Versionsnummer. Bitte überprüfen Sie die Datei und versuchen Sie es erneut.",
+ "noMigration": "Es wurde kein Migrationsplan für die aktuelle Version gefunden. Bitte überprüfen Sie die Versionsnummer und versuchen Sie es erneut. Wenn das Problem weiterhin besteht, reichen Sie bitte eine Problemmeldung ein."
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/modelProvider.json b/DigitalHumanWeb/locales/de-DE/modelProvider.json
new file mode 100644
index 0000000..99df752
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Die API-Version von Azure im Format JJJJ-MM-TT, siehe [neueste Version](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Liste abrufen",
+ "title": "Azure-API-Version"
+ },
+ "empty": "Geben Sie eine Modell-ID ein, um das erste Modell hinzuzufügen",
+ "endpoint": {
+ "desc": "Diesen Wert finden Sie im Abschnitt 'Schlüssel und Endpunkte', wenn Sie in Azure Portal Ihre Ressource überprüfen",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Azure-API-Adresse"
+ },
+ "modelListPlaceholder": "Wählen Sie ein bereitgestelltes OpenAI-Modell aus oder fügen Sie eines hinzu",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Diesen Wert finden Sie im Abschnitt 'Schlüssel und Endpunkte', wenn Sie in Azure Portal Ihre Ressource überprüfen. Sie können KEY1 oder KEY2 verwenden",
+ "placeholder": "Azure API Key",
+ "title": "API Key"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Geben Sie Ihre AWS Access Key Id ein",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "Testen Sie, ob AccessKeyId / SecretAccessKey korrekt eingegeben wurden"
+ },
+ "region": {
+ "desc": "Geben Sie Ihre AWS Region ein",
+ "placeholder": "AWS Region",
+ "title": "AWS Region"
+ },
+ "secretAccessKey": {
+ "desc": "Geben Sie Ihren AWS Secret Access Key ein",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "Wenn Sie AWS SSO/STS verwenden, geben Sie Ihr AWS Session Token ein",
+ "placeholder": "AWS Session Token",
+ "title": "AWS Session Token (optional)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Benutzerdefinierter Regionsservice",
+ "customSessionToken": "Benutzerdefiniertes Sitzungstoken",
+ "description": "Geben Sie Ihren AWS AccessKeyId / SecretAccessKey ein, um das Gespräch zu beginnen. Die App speichert Ihre Authentifizierungsinformationen nicht.",
+ "title": "Verwenden Sie benutzerdefinierte Bedrock-Authentifizierungsinformationen"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Geben Sie Ihr GitHub-PAT ein und klicken Sie [hier](https://github.com/settings/tokens), um eines zu erstellen.",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Testen Sie, ob die Proxy-Adresse korrekt eingetragen wurde",
+ "title": "Konnektivitätsprüfung"
+ },
+ "customModelName": {
+ "desc": "Fügen Sie benutzerdefinierte Modelle hinzu, trennen Sie mehrere Modelle mit Kommas (,)",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Benutzerdefinierte Modellnamen"
+ },
+ "download": {
+ "desc": "Ollama is downloading the model. Please try not to close this page. The download will resume from where it left off if interrupted.",
+ "remainingTime": "Remaining Time",
+ "speed": "Download Speed",
+ "title": "Downloading model {{model}}"
+ },
+ "endpoint": {
+ "desc": "Geben Sie die Proxy-Adresse der Ollama-Schnittstelle ein, leer lassen, wenn lokal nicht spezifiziert",
+ "title": "Schnittstellen-Proxy-Adresse"
+ },
+ "setup": {
+ "cors": {
+ "description": "Aufgrund von Browser-Sicherheitsbeschränkungen müssen Sie die CORS-Einstellungen für Ollama konfigurieren, um es ordnungsgemäß zu verwenden.",
+ "linux": {
+ "env": "Fügen Sie unter [Service] `Environment` hinzu und setzen Sie die Umgebungsvariable OLLAMA_ORIGINS:",
+ "reboot": "Systemd neu laden und Ollama neu starten",
+ "systemd": "Rufen Sie systemd auf, um den Ollama-Dienst zu bearbeiten:"
+ },
+ "macos": "Öffnen Sie das Terminal und fügen Sie den folgenden Befehl ein, um fortzufahren.",
+ "reboot": "Starten Sie den Ollama-Dienst nach Abschluss der Ausführung neu.",
+ "title": "Konfigurieren Sie Ollama für den Zugriff über CORS",
+ "windows": "Klicken Sie auf Windows auf 'Systemsteuerung', um die Systemumgebungsvariablen zu bearbeiten. Erstellen Sie eine Umgebungsvariable namens 'OLLAMA_ORIGINS' für Ihr Benutzerkonto mit dem Wert '*', und klicken Sie auf 'OK/Anwenden', um zu speichern."
+ },
+ "install": {
+ "description": "Stelle sicher, dass du Ollama aktiviert hast. Wenn du Ollama noch nicht heruntergeladen hast, besuche die offizielle Website, um es <1>herunterzuladen1>.",
+ "docker": "Wenn Sie Docker bevorzugen, bietet Ollama auch offizielle Docker-Images an. Sie können sie mit dem folgenden Befehl abrufen:",
+ "linux": {
+ "command": "Installieren Sie mit dem folgenden Befehl:",
+ "manual": "Alternativ können Sie die <1>Linux-Installationsanleitung1> für die manuelle Installation verwenden."
+ },
+ "title": "Installieren und starten Sie die lokale Ollama-Anwendung",
+ "windowsTab": "Windows (Vorschau)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Cancel Download",
+ "confirm": "Download",
+ "description": "Enter your Ollama model tag to continue the session",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Starting download...",
+ "title": "Download specified Ollama model"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Alles und Nichts"
+ },
+ "zhipu": {
+ "title": "Zhipu"
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/models.json b/DigitalHumanWeb/locales/de-DE/models.json
new file mode 100644
index 0000000..bfa79a8
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B bietet mit umfangreichen Trainingsbeispielen überlegene Leistungen in der Branchenanwendung."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B unterstützt 16K Tokens und bietet effiziente, flüssige Sprachgenerierungsfähigkeiten."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro ist ein wichtiger Bestandteil der 360 AI-Modellreihe und erfüllt mit seiner effizienten Textverarbeitungsfähigkeit vielfältige Anwendungen der natürlichen Sprache, unterstützt das Verständnis langer Texte und Mehrfachdialoge."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo bietet leistungsstarke Berechnungs- und Dialogfähigkeiten, mit hervorragendem semantischen Verständnis und Generierungseffizienz, und ist die ideale intelligente Assistentenlösung für Unternehmen und Entwickler."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K betont semantische Sicherheit und verantwortungsbewusste Ausrichtung, speziell für Anwendungen mit hohen Anforderungen an die Inhaltssicherheit konzipiert, um die Genauigkeit und Robustheit der Benutzererfahrung zu gewährleisten."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro ist ein fortschrittliches Modell zur Verarbeitung natürlicher Sprache, das von der 360 Company entwickelt wurde und über außergewöhnliche Textgenerierungs- und Verständnisfähigkeiten verfügt, insbesondere im Bereich der Generierung und Kreativität, und in der Lage ist, komplexe Sprachumwandlungs- und Rollendarstellungsaufgaben zu bewältigen."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra ist die leistungsstärkste Version der Spark-Großmodellreihe, die die Online-Suchverbindung aktualisiert und die Fähigkeit zur Textverständnis und -zusammenfassung verbessert. Es ist eine umfassende Lösung zur Steigerung der Büroproduktivität und zur genauen Reaktion auf Anforderungen und ein führendes intelligentes Produkt in der Branche."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Verwendet Suchverbesserungstechnologie, um eine umfassende Verknüpfung zwischen großen Modellen und Fachwissen sowie Wissen aus dem gesamten Internet zu ermöglichen. Unterstützt das Hochladen von Dokumenten wie PDF, Word und die Eingabe von URLs, um Informationen zeitnah und umfassend zu erhalten, mit genauen und professionellen Ergebnissen."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Für häufige Unternehmensszenarien optimiert, mit erheblichen Leistungssteigerungen und einem hohen Preis-Leistungs-Verhältnis. Im Vergleich zum Baichuan2-Modell wurde die Inhaltserstellung um 20 %, die Wissensabfrage um 17 % und die Rollenspiel-Fähigkeit um 40 % verbessert. Die Gesamtleistung übertrifft die von GPT-3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "Verfügt über ein 128K Ultra-Langkontextfenster, optimiert für häufige Unternehmensszenarien, mit erheblichen Leistungssteigerungen und einem hohen Preis-Leistungs-Verhältnis. Im Vergleich zum Baichuan2-Modell wurde die Inhaltserstellung um 20 %, die Wissensabfrage um 17 % und die Rollenspiel-Fähigkeit um 40 % verbessert. Die Gesamtleistung übertrifft die von GPT-3.5."
+ },
+ "Baichuan4": {
+ "description": "Das Modell hat die höchste Fähigkeit im Inland und übertrifft ausländische Mainstream-Modelle in Aufgaben wie Wissensdatenbanken, langen Texten und kreativer Generierung. Es verfügt auch über branchenführende multimodale Fähigkeiten und zeigt in mehreren autoritativen Bewertungsbenchmarks hervorragende Leistungen."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) ist ein innovatives Modell, das sich für Anwendungen in mehreren Bereichen und komplexe Aufgaben eignet."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K ist mit einer hohen Kontextverarbeitungsfähigkeit ausgestattet, die ein besseres Verständnis des Kontexts und eine stärkere logische Schlussfolgerung ermöglicht. Es unterstützt Texteingaben von bis zu 32K Tokens und eignet sich für Szenarien wie das Lesen langer Dokumente und private Wissensabfragen."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO ist eine hochflexible Multi-Modell-Kombination, die darauf abzielt, außergewöhnliche kreative Erlebnisse zu bieten."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) ist ein hochpräzises Anweisungsmodell, das für komplexe Berechnungen geeignet ist."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) bietet optimierte Sprachausgaben und vielfältige Anwendungsmöglichkeiten."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Aktualisierung des Phi-3-mini-Modells."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "Das gleiche Phi-3-medium-Modell, jedoch mit einer größeren Kontextgröße für RAG oder Few-Shot-Prompting."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "Ein Modell mit 14 Milliarden Parametern, das eine bessere Qualität als Phi-3-mini bietet und sich auf qualitativ hochwertige, reasoning-dense Daten konzentriert."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "Das gleiche Phi-3-mini-Modell, jedoch mit einer größeren Kontextgröße für RAG oder Few-Shot-Prompting."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "Das kleinste Mitglied der Phi-3-Familie. Optimiert für Qualität und geringe Latenz."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "Das gleiche Phi-3-small-Modell, jedoch mit einer größeren Kontextgröße für RAG oder Few-Shot-Prompting."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "Ein Modell mit 7 Milliarden Parametern, das eine bessere Qualität als Phi-3-mini bietet und sich auf qualitativ hochwertige, reasoning-dense Daten konzentriert."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K ist mit einer extrem großen Kontextverarbeitungsfähigkeit ausgestattet, die bis zu 128K Kontextinformationen verarbeiten kann, besonders geeignet für lange Texte, die eine umfassende Analyse und langfristige logische Verknüpfung erfordern, und bietet in komplexen Textkommunikationen flüssige und konsistente Logik sowie vielfältige Zitationsunterstützung."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Als Testversion von Qwen2 bietet Qwen1.5 präzisere Dialogfunktionen durch den Einsatz großer Datenmengen."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) bietet schnelle Antworten und natürliche Dialogfähigkeiten, die sich für mehrsprachige Umgebungen eignen."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 ist ein fortschrittliches allgemeines Sprachmodell, das eine Vielzahl von Anweisungsarten unterstützt."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 ist eine brandneue Serie von großen Sprachmodellen, die darauf abzielt, die Verarbeitung von Anweisungsaufgaben zu optimieren."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 ist eine brandneue Serie von großen Sprachmodellen, die darauf abzielt, die Verarbeitung von Anweisungsaufgaben zu optimieren."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 ist eine brandneue Serie von großen Sprachmodellen mit verbesserter Verständnis- und Generierungsfähigkeit."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 ist eine brandneue Serie von großen Sprachmodellen, die darauf abzielt, die Verarbeitung von Anweisungsaufgaben zu optimieren."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder konzentriert sich auf die Programmierung."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math konzentriert sich auf die Problemlösung im Bereich Mathematik und bietet professionelle Lösungen für schwierige Aufgaben."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B ist die Open-Source-Version, die ein optimiertes Dialogerlebnis für Konversationsanwendungen bietet."
+ },
+ "abab5.5-chat": {
+ "description": "Für produktivitätsorientierte Szenarien konzipiert, unterstützt es die Verarbeitung komplexer Aufgaben und die effiziente Textgenerierung, geeignet für professionelle Anwendungen."
+ },
+ "abab5.5s-chat": {
+ "description": "Speziell für chinesische Charakterdialoge konzipiert, bietet es hochwertige chinesische Dialoggenerierung und ist für verschiedene Anwendungsszenarien geeignet."
+ },
+ "abab6.5g-chat": {
+ "description": "Speziell für mehrsprachige Charakterdialoge konzipiert, unterstützt die hochwertige Dialoggenerierung in Englisch und anderen Sprachen."
+ },
+ "abab6.5s-chat": {
+ "description": "Geeignet für eine Vielzahl von Aufgaben der natürlichen Sprachverarbeitung, einschließlich Textgenerierung und Dialogsystemen."
+ },
+ "abab6.5t-chat": {
+ "description": "Für chinesische Charakterdialoge optimiert, bietet es flüssige und den chinesischen Ausdrucksgewohnheiten entsprechende Dialoggenerierung."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Das Open-Source-Funktionsaufrufmodell von Fireworks bietet hervorragende Anweisungsdurchführungsfähigkeiten und anpassbare Funktionen."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Das neueste Firefunction-v2 von Fireworks ist ein leistungsstarkes Funktionsaufrufmodell, das auf Llama-3 basiert und durch zahlreiche Optimierungen besonders für Funktionsaufrufe, Dialoge und Anweisungsverfolgung geeignet ist."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b ist ein visuelles Sprachmodell, das sowohl Bild- als auch Texteingaben verarbeiten kann und für multimodale Aufgaben geeignet ist, nachdem es mit hochwertigen Daten trainiert wurde."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Das Gemma 2 9B Instruct-Modell basiert auf früheren Google-Technologien und eignet sich für eine Vielzahl von Textgenerierungsaufgaben wie Fragen beantworten, Zusammenfassen und Schlussfolgern."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Das Llama 3 70B Instruct-Modell ist speziell für mehrsprachige Dialoge und natürliche Sprachverständnis optimiert und übertrifft die meisten Wettbewerbsmodelle."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Das Llama 3 70B Instruct-Modell (HF-Version) entspricht den offiziellen Ergebnissen und eignet sich für hochwertige Anweisungsverfolgungsaufgaben."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Das Llama 3 8B Instruct-Modell ist für Dialoge und mehrsprachige Aufgaben optimiert und bietet hervorragende und effiziente Leistungen."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Das Llama 3 8B Instruct-Modell (HF-Version) stimmt mit den offiziellen Ergebnissen überein und bietet hohe Konsistenz und plattformübergreifende Kompatibilität."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Das Llama 3.1 405B Instruct-Modell verfügt über eine extrem große Anzahl von Parametern und eignet sich für komplexe Aufgaben und Anweisungsverfolgung in hochbelasteten Szenarien."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Das Llama 3.1 70B Instruct-Modell bietet hervorragende natürliche Sprachverständnis- und Generierungsfähigkeiten und ist die ideale Wahl für Dialog- und Analyseaufgaben."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Das Llama 3.1 8B Instruct-Modell ist speziell für mehrsprachige Dialoge optimiert und kann die meisten Open-Source- und Closed-Source-Modelle in gängigen Branchenbenchmarks übertreffen."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Das Mixtral MoE 8x22B Instruct-Modell unterstützt durch seine große Anzahl an Parametern und Multi-Expert-Architektur die effiziente Verarbeitung komplexer Aufgaben."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Das Mixtral MoE 8x7B Instruct-Modell bietet durch seine Multi-Expert-Architektur effiziente Anweisungsverfolgung und -ausführung."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Das Mixtral MoE 8x7B Instruct-Modell (HF-Version) bietet die gleiche Leistung wie die offizielle Implementierung und eignet sich für verschiedene effiziente Anwendungsszenarien."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "Das MythoMax L2 13B-Modell kombiniert neuartige Kombinations-Technologien und ist besonders gut in Erzählungen und Rollenspielen."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Das Phi 3 Vision Instruct-Modell ist ein leichtgewichtiges multimodales Modell, das komplexe visuelle und textuelle Informationen verarbeiten kann und über starke Schlussfolgerungsfähigkeiten verfügt."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "Das StarCoder 15.5B-Modell unterstützt fortgeschrittene Programmieraufgaben und hat verbesserte mehrsprachige Fähigkeiten, die sich für komplexe Codegenerierung und -verständnis eignen."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "Das StarCoder 7B-Modell wurde für über 80 Programmiersprachen trainiert und bietet hervorragende Programmierausfüllfähigkeiten und Kontextverständnis."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Das Yi-Large-Modell bietet hervorragende mehrsprachige Verarbeitungsfähigkeiten und kann für verschiedene Sprachgenerierungs- und Verständnisaufgaben eingesetzt werden."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "Ein mehrsprachiges Modell mit 398 Milliarden Parametern (94 Milliarden aktiv), das ein 256K langes Kontextfenster, Funktionsaufrufe, strukturierte Ausgaben und fundierte Generierung bietet."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "Ein mehrsprachiges Modell mit 52 Milliarden Parametern (12 Milliarden aktiv), das ein 256K langes Kontextfenster, Funktionsaufrufe, strukturierte Ausgaben und fundierte Generierung bietet."
+ },
+ "ai21-jamba-instruct": {
+ "description": "Ein produktionsreifes Mamba-basiertes LLM-Modell, das eine erstklassige Leistung, Qualität und Kosteneffizienz erreicht."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet hebt den Branchenstandard an, übertrifft die Konkurrenzmodelle und Claude 3 Opus und zeigt in umfassenden Bewertungen hervorragende Leistungen, während es die Geschwindigkeit und Kosten unserer mittleren Modelle beibehält."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku ist das schnellste und kompakteste Modell von Anthropic und bietet nahezu sofortige Reaktionsgeschwindigkeiten. Es kann schnell einfache Anfragen und Anforderungen beantworten. Kunden werden in der Lage sein, nahtlose AI-Erlebnisse zu schaffen, die menschliche Interaktionen nachahmen. Claude 3 Haiku kann Bilder verarbeiten und Textausgaben zurückgeben, mit einem Kontextfenster von 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus ist das leistungsstärkste AI-Modell von Anthropic mit fortschrittlicher Leistung bei hochkomplexen Aufgaben. Es kann offene Eingaben und unbekannte Szenarien verarbeiten und zeigt hervorragende Flüssigkeit und menschenähnliches Verständnis. Claude 3 Opus demonstriert die Grenzen der Möglichkeiten generativer AI. Claude 3 Opus kann Bilder verarbeiten und Textausgaben zurückgeben, mit einem Kontextfenster von 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Anthropic's Claude 3 Sonnet erreicht ein ideales Gleichgewicht zwischen Intelligenz und Geschwindigkeit – besonders geeignet für Unternehmensarbeitslasten. Es bietet maximalen Nutzen zu einem Preis, der unter dem der Konkurrenz liegt, und wurde als zuverlässiges, langlebiges Hauptmodell für skalierbare AI-Implementierungen konzipiert. Claude 3 Sonnet kann Bilder verarbeiten und Textausgaben zurückgeben, mit einem Kontextfenster von 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Ein schnelles, kostengünstiges und dennoch sehr leistungsfähiges Modell, das eine Reihe von Aufgaben bewältigen kann, darunter alltägliche Gespräche, Textanalysen, Zusammenfassungen und Dokumentenfragen."
+ },
+ "anthropic.claude-v2": {
+ "description": "Anthropic zeigt in einer Vielzahl von Aufgaben, von komplexen Dialogen und kreativer Inhaltserstellung bis hin zu detaillierten Anweisungen, ein hohes Maß an Fähigkeiten."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "Die aktualisierte Version von Claude 2 bietet ein doppelt so großes Kontextfenster sowie Verbesserungen in der Zuverlässigkeit, der Halluzinationsrate und der evidenzbasierten Genauigkeit in langen Dokumenten und RAG-Kontexten."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku ist das schnellste und kompakteste Modell von Anthropic, das darauf ausgelegt ist, nahezu sofortige Antworten zu liefern. Es bietet schnelle und präzise zielgerichtete Leistungen."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus ist das leistungsstärkste Modell von Anthropic zur Bearbeitung hochkomplexer Aufgaben. Es zeichnet sich durch hervorragende Leistung, Intelligenz, Flüssigkeit und Verständnis aus."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet bietet Fähigkeiten, die über Opus hinausgehen, und eine schnellere Geschwindigkeit als Sonnet, während es den gleichen Preis wie Sonnet beibehält. Sonnet ist besonders gut in Programmierung, Datenwissenschaft, visueller Verarbeitung und Agentenaufgaben."
+ },
+ "aya": {
+ "description": "Aya 23 ist ein mehrsprachiges Modell von Cohere, das 23 Sprachen unterstützt und die Anwendung in einer Vielzahl von Sprachen erleichtert."
+ },
+ "aya:35b": {
+ "description": "Aya 23 ist ein mehrsprachiges Modell von Cohere, das 23 Sprachen unterstützt und die Anwendung in einer Vielzahl von Sprachen erleichtert."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3 ist für Rollenspiele und emotionale Begleitung konzipiert und unterstützt extrem lange Mehrfachgedächtnisse und personalisierte Dialoge, mit breiter Anwendung."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o ist ein dynamisches Modell, das in Echtzeit aktualisiert wird, um die neueste Version zu gewährleisten. Es kombiniert starke Sprachverständnis- und Generierungsfähigkeiten und eignet sich für großangelegte Anwendungsszenarien, einschließlich Kundenservice, Bildung und technische Unterstützung."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 bietet Unternehmen Fortschritte in kritischen Fähigkeiten, einschließlich branchenführenden 200K Token Kontext, erheblich reduzierter Häufigkeit von Modellillusionen, Systemaufforderungen und einer neuen Testfunktion: Werkzeugaufrufe."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 bietet Unternehmen Fortschritte in kritischen Fähigkeiten, einschließlich branchenführenden 200K Token Kontext, erheblich reduzierter Häufigkeit von Modellillusionen, Systemaufforderungen und einer neuen Testfunktion: Werkzeugaufrufe."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet bietet Fähigkeiten, die über Opus hinausgehen, und ist schneller als Sonnet, während es den gleichen Preis wie Sonnet beibehält. Sonnet ist besonders gut in Programmierung, Datenwissenschaft, visueller Verarbeitung und Agenturaufgaben."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku ist das schnellste und kompakteste Modell von Anthropic, das darauf abzielt, nahezu sofortige Antworten zu liefern. Es bietet schnelle und präzise zielgerichtete Leistungen."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus ist das leistungsstärkste Modell von Anthropic für die Verarbeitung hochkomplexer Aufgaben. Es bietet herausragende Leistungen in Bezug auf Leistung, Intelligenz, Flüssigkeit und Verständnis."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet bietet eine ideale Balance zwischen Intelligenz und Geschwindigkeit für Unternehmensarbeitslasten. Es bietet maximalen Nutzen zu einem niedrigeren Preis, ist zuverlässig und für großflächige Bereitstellungen geeignet."
+ },
+ "claude-instant-1.2": {
+ "description": "Das Modell von Anthropic wird für latenzarme, hochdurchsatzfähige Textgenerierung verwendet und unterstützt die Generierung von Hunderten von Seiten Text."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 ist ein leistungsstarker AI-Programmierassistent, der intelligente Fragen und Codevervollständigung in verschiedenen Programmiersprachen unterstützt und die Entwicklungseffizienz steigert."
+ },
+ "codegemma": {
+ "description": "CodeGemma ist ein leichtgewichtiges Sprachmodell, das speziell für verschiedene Programmieraufgaben entwickelt wurde und schnelle Iterationen und Integrationen unterstützt."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma ist ein leichtgewichtiges Sprachmodell, das speziell für verschiedene Programmieraufgaben entwickelt wurde und schnelle Iterationen und Integrationen unterstützt."
+ },
+ "codellama": {
+ "description": "Code Llama ist ein LLM, das sich auf die Codegenerierung und -diskussion konzentriert und eine breite Unterstützung für Programmiersprachen bietet, die sich für Entwicklerumgebungen eignet."
+ },
+ "codellama:13b": {
+ "description": "Code Llama ist ein LLM, das sich auf die Codegenerierung und -diskussion konzentriert und eine breite Unterstützung für Programmiersprachen bietet, die sich für Entwicklerumgebungen eignet."
+ },
+ "codellama:34b": {
+ "description": "Code Llama ist ein LLM, das sich auf die Codegenerierung und -diskussion konzentriert und eine breite Unterstützung für Programmiersprachen bietet, die sich für Entwicklerumgebungen eignet."
+ },
+ "codellama:70b": {
+ "description": "Code Llama ist ein LLM, das sich auf die Codegenerierung und -diskussion konzentriert und eine breite Unterstützung für Programmiersprachen bietet, die sich für Entwicklerumgebungen eignet."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 ist ein großes Sprachmodell, das auf einer umfangreichen Code-Datenbasis trainiert wurde und speziell für die Lösung komplexer Programmieraufgaben entwickelt wurde."
+ },
+ "codestral": {
+ "description": "Codestral ist das erste Code-Modell von Mistral AI und bietet hervorragende Unterstützung für Aufgaben der Codegenerierung."
+ },
+ "codestral-latest": {
+ "description": "Codestral ist ein hochmodernes Generierungsmodell, das sich auf die Codegenerierung konzentriert und für Aufgaben wie das Ausfüllen von Zwischenräumen und die Codevervollständigung optimiert wurde."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B ist ein Modell, das für die Befolgung von Anweisungen, Dialoge und Programmierung entwickelt wurde."
+ },
+ "cohere-command-r": {
+ "description": "Command R ist ein skalierbares generatives Modell, das auf RAG und Tool-Nutzung abzielt, um KI in Produktionsgröße für Unternehmen zu ermöglichen."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ ist ein hochmodernes, RAG-optimiertes Modell, das für unternehmensgerechte Arbeitslasten konzipiert ist."
+ },
+ "command-r": {
+ "description": "Command R ist ein LLM, das für Dialoge und Aufgaben mit langen Kontexten optimiert ist und sich besonders gut für dynamische Interaktionen und Wissensmanagement eignet."
+ },
+ "command-r-plus": {
+ "description": "Command R+ ist ein leistungsstarkes großes Sprachmodell, das speziell für reale Unternehmensszenarien und komplexe Anwendungen entwickelt wurde."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct bietet zuverlässige Anweisungsverarbeitungsfähigkeiten und unterstützt Anwendungen in verschiedenen Branchen."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 vereint die hervorragenden Merkmale früherer Versionen und verbessert die allgemeinen und kodierenden Fähigkeiten."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B ist ein fortschrittliches Modell, das für komplexe Dialoge trainiert wurde."
+ },
+ "deepseek-chat": {
+ "description": "Ein neues Open-Source-Modell, das allgemeine und Codefähigkeiten kombiniert. Es bewahrt nicht nur die allgemeinen Dialogfähigkeiten des ursprünglichen Chat-Modells und die leistungsstarken Codeverarbeitungsfähigkeiten des Coder-Modells, sondern stimmt auch besser mit menschlichen Präferenzen überein. Darüber hinaus hat DeepSeek-V2.5 in mehreren Bereichen wie Schreibaufgaben und Befolgung von Anweisungen erhebliche Verbesserungen erzielt."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 ist ein Open-Source-Mischexperten-Code-Modell, das in Codeaufgaben hervorragende Leistungen erbringt und mit GPT4-Turbo vergleichbar ist."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 ist ein Open-Source-Mischexperten-Code-Modell, das in Codeaufgaben hervorragende Leistungen erbringt und mit GPT4-Turbo vergleichbar ist."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 ist ein effizientes Mixture-of-Experts-Sprachmodell, das für wirtschaftliche Verarbeitungsanforderungen geeignet ist."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B ist das Design-Code-Modell von DeepSeek und bietet starke Fähigkeiten zur Codegenerierung."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Ein neues Open-Source-Modell, das allgemeine und Codefähigkeiten vereint. Es behält nicht nur die allgemeinen Dialogfähigkeiten des ursprünglichen Chat-Modells und die leistungsstarken Codeverarbeitungsfähigkeiten des Coder-Modells bei, sondern stimmt auch besser mit menschlichen Vorlieben überein. Darüber hinaus hat DeepSeek-V2.5 in vielen Bereichen wie Schreibaufgaben und Befehlsbefolgung erhebliche Verbesserungen erzielt."
+ },
+ "emohaa": {
+ "description": "Emohaa ist ein psychologisches Modell mit professionellen Beratungsfähigkeiten, das den Nutzern hilft, emotionale Probleme zu verstehen."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Tuning) bietet stabile und anpassbare Leistung und ist die ideale Wahl für Lösungen komplexer Aufgaben."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Tuning) bietet hervorragende multimodale Unterstützung und konzentriert sich auf die effektive Lösung komplexer Aufgaben."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro ist Googles leistungsstarkes KI-Modell, das für die Skalierung einer Vielzahl von Aufgaben konzipiert ist."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 ist ein effizientes multimodales Modell, das eine breite Anwendbarkeit unterstützt."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002 ist ein effizientes multimodales Modell, das eine breite Palette von Anwendungen unterstützt."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 ist für die Verarbeitung großangelegter Aufgabenszenarien konzipiert und bietet unvergleichliche Verarbeitungsgeschwindigkeit."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924 ist das neueste experimentelle Modell, das in Text- und multimodalen Anwendungsfällen erhebliche Leistungsverbesserungen aufweist."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 bietet optimierte multimodale Verarbeitungsfähigkeiten, die für verschiedene komplexe Aufgabenszenarien geeignet sind."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash ist Googles neuestes multimodales KI-Modell, das über schnelle Verarbeitungsfähigkeiten verfügt und Text-, Bild- und Videoeingaben unterstützt, um eine effiziente Skalierung für verschiedene Aufgaben zu ermöglichen."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 ist eine skalierbare multimodale KI-Lösung, die eine breite Palette komplexer Aufgaben unterstützt."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002 ist das neueste produktionsbereite Modell, das eine höhere Ausgabequalität bietet, insbesondere bei mathematischen, langen Kontexten und visuellen Aufgaben erhebliche Verbesserungen aufweist."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 bietet hervorragende multimodale Verarbeitungsfähigkeiten und bringt mehr Flexibilität in die Anwendungsentwicklung."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 kombiniert die neuesten Optimierungstechniken und bietet eine effizientere multimodale Datenverarbeitung."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro unterstützt bis zu 2 Millionen Tokens und ist die ideale Wahl für mittelgroße multimodale Modelle, die umfassende Unterstützung für komplexe Aufgaben bieten."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B eignet sich für die Verarbeitung von mittelgroßen Aufgaben und bietet ein gutes Kosten-Nutzen-Verhältnis."
+ },
+ "gemma2": {
+ "description": "Gemma 2 ist ein effizientes Modell von Google, das eine Vielzahl von Anwendungsszenarien von kleinen Anwendungen bis hin zu komplexen Datenverarbeitungen abdeckt."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B ist ein Modell, das für spezifische Aufgaben und die Integration von Werkzeugen optimiert wurde."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 ist ein effizientes Modell von Google, das eine Vielzahl von Anwendungsszenarien von kleinen Anwendungen bis hin zu komplexen Datenverarbeitungen abdeckt."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 ist ein effizientes Modell von Google, das eine Vielzahl von Anwendungsszenarien von kleinen Anwendungen bis hin zu komplexen Datenverarbeitungen abdeckt."
+ },
+ "general": {
+ "description": "Spark Lite ist ein leichtgewichtiges großes Sprachmodell mit extrem niedriger Latenz und hoher Verarbeitungsfähigkeit, das vollständig kostenlos und offen ist und eine Echtzeitsuchfunktion unterstützt. Seine schnelle Reaktionsfähigkeit macht es in der Inferenzanwendung und Modellanpassung auf Geräten mit geringer Rechenleistung besonders effektiv und bietet den Nutzern ein hervorragendes Kosten-Nutzen-Verhältnis und intelligente Erfahrungen, insbesondere in den Bereichen Wissensabfrage, Inhaltserstellung und Suchszenarien."
+ },
+ "generalv3": {
+ "description": "Spark Pro ist ein hochleistungsfähiges großes Sprachmodell, das für professionelle Bereiche optimiert ist und sich auf Mathematik, Programmierung, Medizin, Bildung und andere Bereiche konzentriert, und unterstützt die Online-Suche sowie integrierte Plugins für Wetter, Datum usw. Das optimierte Modell zeigt hervorragende Leistungen und hohe Effizienz in komplexen Wissensabfragen, Sprachverständnis und hochrangiger Textgenerierung und ist die ideale Wahl für professionelle Anwendungsszenarien."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max ist die umfassendste Version, die Online-Suche und zahlreiche integrierte Plugins unterstützt. Ihre umfassend optimierten Kernfähigkeiten sowie die Systemrolleneinstellungen und Funktionsaufrufmöglichkeiten ermöglichen eine außergewöhnliche Leistung in verschiedenen komplexen Anwendungsszenarien."
+ },
+ "glm-4": {
+ "description": "GLM-4 ist die alte Flaggschiffversion, die im Januar 2024 veröffentlicht wurde und mittlerweile durch das leistungsstärkere GLM-4-0520 ersetzt wurde."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 ist die neueste Modellversion, die für hochkomplexe und vielfältige Aufgaben konzipiert wurde und hervorragende Leistungen zeigt."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air ist eine kosteneffiziente Version, die in der Leistung nahe am GLM-4 liegt und schnelle Geschwindigkeiten zu einem erschwinglichen Preis bietet."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX bietet eine effiziente Version von GLM-4-Air mit einer Inferenzgeschwindigkeit von bis zu 2,6-fach."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools ist ein multifunktionales Agentenmodell, das optimiert wurde, um komplexe Anweisungsplanung und Werkzeugaufrufe zu unterstützen, wie z. B. Web-Browsing, Code-Interpretation und Textgenerierung, geeignet für die Ausführung mehrerer Aufgaben."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash ist die ideale Wahl für die Verarbeitung einfacher Aufgaben, mit der schnellsten Geschwindigkeit und dem besten Preis."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long unterstützt extrem lange Texteingaben und eignet sich für Gedächtnisaufgaben und die Verarbeitung großer Dokumente."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus ist das hochintelligente Flaggschiffmodell mit starken Fähigkeiten zur Verarbeitung langer Texte und komplexer Aufgaben, mit umfassenden Leistungsverbesserungen."
+ },
+ "glm-4v": {
+ "description": "GLM-4V bietet starke Fähigkeiten zur Bildverständnis und -schlussfolgerung und unterstützt eine Vielzahl visueller Aufgaben."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus hat die Fähigkeit, Videoinhalte und mehrere Bilder zu verstehen und eignet sich für multimodale Aufgaben."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 bietet optimierte multimodale Verarbeitungsfähigkeiten und ist für eine Vielzahl komplexer Aufgaben geeignet."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 kombiniert die neuesten Optimierungstechnologien und bietet effizientere multimodale Datenverarbeitungsfähigkeiten."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 setzt das Designkonzept von Leichtbau und Effizienz fort."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 ist eine leichtgewichtige Open-Source-Textmodellreihe von Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 ist eine leichtgewichtige Open-Source-Textmodellreihe von Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) bietet grundlegende Anweisungsverarbeitungsfähigkeiten und eignet sich für leichte Anwendungen."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo eignet sich für eine Vielzahl von Textgenerierungs- und Verständnisaufgaben. Derzeit verweist es auf gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo eignet sich für eine Vielzahl von Textgenerierungs- und Verständnisaufgaben. Derzeit verweist es auf gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo eignet sich für eine Vielzahl von Textgenerierungs- und Verständnisaufgaben. Derzeit verweist es auf gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo eignet sich für eine Vielzahl von Textgenerierungs- und Verständnisaufgaben. Derzeit verweist es auf gpt-3.5-turbo-0125."
+ },
+ "gpt-4": {
+ "description": "GPT-4 bietet ein größeres Kontextfenster, das in der Lage ist, längere Texteingaben zu verarbeiten, und eignet sich für Szenarien, die eine umfassende Informationsintegration und Datenanalyse erfordern."
+ },
+ "gpt-4-0125-preview": {
+ "description": "Das neueste GPT-4 Turbo-Modell verfügt über visuelle Funktionen. Jetzt können visuelle Anfragen im JSON-Format und durch Funktionsaufrufe gestellt werden. GPT-4 Turbo ist eine verbesserte Version, die kosteneffiziente Unterstützung für multimodale Aufgaben bietet. Es findet ein Gleichgewicht zwischen Genauigkeit und Effizienz und eignet sich für Anwendungen, die Echtzeitanpassungen erfordern."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4 bietet ein größeres Kontextfenster, das in der Lage ist, längere Texteingaben zu verarbeiten, und eignet sich für Szenarien, die eine umfassende Informationsintegration und Datenanalyse erfordern."
+ },
+ "gpt-4-1106-preview": {
+ "description": "Das neueste GPT-4 Turbo-Modell verfügt über visuelle Funktionen. Jetzt können visuelle Anfragen im JSON-Format und durch Funktionsaufrufe gestellt werden. GPT-4 Turbo ist eine verbesserte Version, die kosteneffiziente Unterstützung für multimodale Aufgaben bietet. Es findet ein Gleichgewicht zwischen Genauigkeit und Effizienz und eignet sich für Anwendungen, die Echtzeitanpassungen erfordern."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "Das neueste GPT-4 Turbo-Modell verfügt über visuelle Funktionen. Jetzt können visuelle Anfragen im JSON-Format und durch Funktionsaufrufe gestellt werden. GPT-4 Turbo ist eine verbesserte Version, die kosteneffiziente Unterstützung für multimodale Aufgaben bietet. Es findet ein Gleichgewicht zwischen Genauigkeit und Effizienz und eignet sich für Anwendungen, die Echtzeitanpassungen erfordern."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4 bietet ein größeres Kontextfenster, das in der Lage ist, längere Texteingaben zu verarbeiten, und eignet sich für Szenarien, die eine umfassende Informationsintegration und Datenanalyse erfordern."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4 bietet ein größeres Kontextfenster, das in der Lage ist, längere Texteingaben zu verarbeiten, und eignet sich für Szenarien, die eine umfassende Informationsintegration und Datenanalyse erfordern."
+ },
+ "gpt-4-turbo": {
+ "description": "Das neueste GPT-4 Turbo-Modell verfügt über visuelle Funktionen. Jetzt können visuelle Anfragen im JSON-Format und durch Funktionsaufrufe gestellt werden. GPT-4 Turbo ist eine verbesserte Version, die kosteneffiziente Unterstützung für multimodale Aufgaben bietet. Es findet ein Gleichgewicht zwischen Genauigkeit und Effizienz und eignet sich für Anwendungen, die Echtzeitanpassungen erfordern."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "Das neueste GPT-4 Turbo-Modell verfügt über visuelle Funktionen. Jetzt können visuelle Anfragen im JSON-Format und durch Funktionsaufrufe gestellt werden. GPT-4 Turbo ist eine verbesserte Version, die kosteneffiziente Unterstützung für multimodale Aufgaben bietet. Es findet ein Gleichgewicht zwischen Genauigkeit und Effizienz und eignet sich für Anwendungen, die Echtzeitanpassungen erfordern."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "Das neueste GPT-4 Turbo-Modell verfügt über visuelle Funktionen. Jetzt können visuelle Anfragen im JSON-Format und durch Funktionsaufrufe gestellt werden. GPT-4 Turbo ist eine verbesserte Version, die kosteneffiziente Unterstützung für multimodale Aufgaben bietet. Es findet ein Gleichgewicht zwischen Genauigkeit und Effizienz und eignet sich für Anwendungen, die Echtzeitanpassungen erfordern."
+ },
+ "gpt-4-vision-preview": {
+ "description": "Das neueste GPT-4 Turbo-Modell verfügt über visuelle Funktionen. Jetzt können visuelle Anfragen im JSON-Format und durch Funktionsaufrufe gestellt werden. GPT-4 Turbo ist eine verbesserte Version, die kosteneffiziente Unterstützung für multimodale Aufgaben bietet. Es findet ein Gleichgewicht zwischen Genauigkeit und Effizienz und eignet sich für Anwendungen, die Echtzeitanpassungen erfordern."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o ist ein dynamisches Modell, das in Echtzeit aktualisiert wird, um die neueste Version zu gewährleisten. Es kombiniert starke Sprachverständnis- und Generierungsfähigkeiten und eignet sich für großangelegte Anwendungsszenarien, einschließlich Kundenservice, Bildung und technische Unterstützung."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o ist ein dynamisches Modell, das in Echtzeit aktualisiert wird, um die neueste Version zu gewährleisten. Es kombiniert starke Sprachverständnis- und Generierungsfähigkeiten und eignet sich für großangelegte Anwendungsszenarien, einschließlich Kundenservice, Bildung und technische Unterstützung."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o ist ein dynamisches Modell, das in Echtzeit aktualisiert wird, um die neueste Version zu gewährleisten. Es kombiniert starke Sprachverständnis- und Generierungsfähigkeiten und eignet sich für großangelegte Anwendungsszenarien, einschließlich Kundenservice, Bildung und technische Unterstützung."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini ist das neueste Modell von OpenAI, das nach GPT-4 Omni veröffentlicht wurde und sowohl Text- als auch Bildinput unterstützt. Als ihr fortschrittlichstes kleines Modell ist es viel günstiger als andere neueste Modelle und kostet über 60 % weniger als GPT-3.5 Turbo. Es behält die fortschrittliche Intelligenz bei und bietet gleichzeitig ein hervorragendes Preis-Leistungs-Verhältnis. GPT-4o mini erzielte 82 % im MMLU-Test und rangiert derzeit in den Chat-Präferenzen über GPT-4."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B ist ein Sprachmodell, das Kreativität und Intelligenz kombiniert und mehrere führende Modelle integriert."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "Das innovative Open-Source-Modell InternLM2.5 hat durch eine große Anzahl von Parametern die Dialogintelligenz erhöht."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 bietet intelligente Dialoglösungen in mehreren Szenarien."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Das Llama 3.1 70B Instruct-Modell hat 70B Parameter und bietet herausragende Leistungen bei der Generierung großer Texte und Anweisungsaufgaben."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B bietet leistungsstarke KI-Schlussfolgerungsfähigkeiten, die für komplexe Anwendungen geeignet sind und eine hohe Rechenverarbeitung bei gleichzeitiger Effizienz und Genauigkeit unterstützen."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B ist ein leistungsstarkes Modell, das schnelle Textgenerierungsfähigkeiten bietet und sich hervorragend für Anwendungen eignet, die große Effizienz und Kosteneffektivität erfordern."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Das Llama 3.1 8B Instruct-Modell hat 8B Parameter und unterstützt die effiziente Ausführung von bildbasierten Anweisungsaufgaben und bietet hochwertige Textgenerierungsfähigkeiten."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Das Llama 3.1 Sonar Huge Online-Modell hat 405B Parameter und unterstützt eine Kontextlänge von etwa 127.000 Tokens; es wurde für komplexe Online-Chat-Anwendungen entwickelt."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Das Llama 3.1 Sonar Large Chat-Modell hat 70B Parameter und unterstützt eine Kontextlänge von etwa 127.000 Tokens; es eignet sich für komplexe Offline-Chat-Aufgaben."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Das Llama 3.1 Sonar Large Online-Modell hat 70B Parameter und unterstützt eine Kontextlänge von etwa 127.000 Tokens; es eignet sich für hochvolumige und vielfältige Chat-Aufgaben."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Das Llama 3.1 Sonar Small Chat-Modell hat 8B Parameter und wurde speziell für Offline-Chat entwickelt; es unterstützt eine Kontextlänge von etwa 127.000 Tokens."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Das Llama 3.1 Sonar Small Online-Modell hat 8B Parameter und unterstützt eine Kontextlänge von etwa 127.000 Tokens; es wurde speziell für Online-Chat entwickelt und kann verschiedene Textinteraktionen effizient verarbeiten."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B bietet unvergleichliche Fähigkeiten zur Verarbeitung von Komplexität und ist maßgeschneidert für Projekte mit hohen Anforderungen."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B bietet hervorragende Schlussfolgerungsfähigkeiten und eignet sich für eine Vielzahl von Anwendungsanforderungen."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use bietet leistungsstarke Werkzeugaufruf-Fähigkeiten und unterstützt die effiziente Verarbeitung komplexer Aufgaben."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use ist ein Modell, das für die effiziente Nutzung von Werkzeugen optimiert ist und schnelle parallele Berechnungen unterstützt."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 ist ein führendes Modell von Meta, das bis zu 405B Parameter unterstützt und in den Bereichen komplexe Dialoge, mehrsprachige Übersetzungen und Datenanalysen eingesetzt werden kann."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 ist ein führendes Modell von Meta, das bis zu 405B Parameter unterstützt und in den Bereichen komplexe Dialoge, mehrsprachige Übersetzungen und Datenanalysen eingesetzt werden kann."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 ist ein führendes Modell von Meta, das bis zu 405B Parameter unterstützt und in den Bereichen komplexe Dialoge, mehrsprachige Übersetzungen und Datenanalysen eingesetzt werden kann."
+ },
+ "llava": {
+ "description": "LLaVA ist ein multimodales Modell, das visuelle Encoder und Vicuna kombiniert und für starkes visuelles und sprachliches Verständnis sorgt."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B bietet integrierte visuelle Verarbeitungsfähigkeiten, um komplexe Ausgaben aus visuellen Informationen zu generieren."
+ },
+ "llava:13b": {
+ "description": "LLaVA ist ein multimodales Modell, das visuelle Encoder und Vicuna kombiniert und für starkes visuelles und sprachliches Verständnis sorgt."
+ },
+ "llava:34b": {
+ "description": "LLaVA ist ein multimodales Modell, das visuelle Encoder und Vicuna kombiniert und für starkes visuelles und sprachliches Verständnis sorgt."
+ },
+ "mathstral": {
+ "description": "MathΣtral ist für wissenschaftliche Forschung und mathematische Schlussfolgerungen konzipiert und bietet effektive Rechenfähigkeiten und Ergebnisinterpretationen."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Ein leistungsstarkes Modell mit 70 Milliarden Parametern, das in den Bereichen Schlussfolgerungen, Programmierung und breiten Sprachanwendungen herausragt."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Ein vielseitiges Modell mit 8 Milliarden Parametern, das für Dialog- und Textgenerierungsaufgaben optimiert ist."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Die Llama 3.1-Modelle, die auf Anweisungen optimiert sind, sind für mehrsprachige Dialoganwendungen optimiert und übertreffen viele der verfügbaren Open-Source- und geschlossenen Chat-Modelle in gängigen Branchenbenchmarks."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Die Llama 3.1-Modelle, die auf Anweisungen optimiert sind, sind für mehrsprachige Dialoganwendungen optimiert und übertreffen viele der verfügbaren Open-Source- und geschlossenen Chat-Modelle in gängigen Branchenbenchmarks."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Die Llama 3.1-Modelle, die auf Anweisungen optimiert sind, sind für mehrsprachige Dialoganwendungen optimiert und übertreffen viele der verfügbaren Open-Source- und geschlossenen Chat-Modelle in gängigen Branchenbenchmarks."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) bietet hervorragende Sprachverarbeitungsfähigkeiten und ein ausgezeichnetes Interaktionserlebnis."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) ist ein leistungsstarkes Chat-Modell, das komplexe Dialoganforderungen unterstützt."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) bietet mehrsprachige Unterstützung und deckt ein breites Spektrum an Fachwissen ab."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite ist für Umgebungen geeignet, die hohe Leistung und niedrige Latenz erfordern."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo bietet hervorragende Sprachverständnis- und Generierungsfähigkeiten und eignet sich für die anspruchsvollsten Rechenaufgaben."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite ist für ressourcenbeschränkte Umgebungen geeignet und bietet eine hervorragende Balance zwischen Leistung und Effizienz."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo ist ein leistungsstarkes großes Sprachmodell, das eine breite Palette von Anwendungsszenarien unterstützt."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B ist ein leistungsstarkes Modell für Vortraining und Anweisungsanpassung."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "Das 405B Llama 3.1 Turbo-Modell bietet eine enorme Kapazität zur Unterstützung von Kontexten für die Verarbeitung großer Datenmengen und zeigt herausragende Leistungen in groß angelegten KI-Anwendungen."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B bietet effiziente Dialogunterstützung in mehreren Sprachen."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Das Llama 3.1 70B-Modell wurde feinabgestimmt und auf FP8 quantisiert, um effizientere Rechenleistung bei hoher Genauigkeit zu bieten; es eignet sich für hochbelastete Anwendungen und gewährleistet in komplexen Szenarien hervorragende Leistungen."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 bietet Unterstützung für mehrere Sprachen und ist eines der führenden Generierungsmodelle der Branche."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Das Llama 3.1 8B-Modell verwendet FP8-Quantisierung und unterstützt bis zu 131.072 Kontext-Tokens. Es ist eines der besten Open-Source-Modelle, eignet sich für komplexe Aufgaben und übertrifft in vielen Branchenbenchmarks zahlreiche andere Modelle."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct ist optimiert für qualitativ hochwertige Dialogszenarien und zeigt hervorragende Leistungen in verschiedenen menschlichen Bewertungen."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct optimiert qualitativ hochwertige Dialogszenarien und bietet bessere Leistungen als viele geschlossene Modelle."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct ist die neueste Version von Meta, optimiert zur Generierung qualitativ hochwertiger Dialoge und übertrifft viele führende geschlossene Modelle."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct ist speziell für qualitativ hochwertige Dialoge konzipiert und zeigt herausragende Leistungen in menschlichen Bewertungen, besonders geeignet für hochinteraktive Szenarien."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct ist die neueste Version von Meta, optimiert für qualitativ hochwertige Dialogszenarien und übertrifft viele führende geschlossene Modelle."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 bietet Unterstützung für mehrere Sprachen und gehört zu den führenden generativen Modellen der Branche."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct ist das größte und leistungsstärkste Modell innerhalb des Llama 3.1 Instruct Modells. Es handelt sich um ein hochentwickeltes Modell für dialogbasierte Schlussfolgerungen und die Generierung synthetischer Daten, das auch als Grundlage für die professionelle kontinuierliche Vorab- und Feinabstimmung in bestimmten Bereichen verwendet werden kann. Die mehrsprachigen großen Sprachmodelle (LLMs) von Llama 3.1 sind eine Gruppe von vortrainierten, anweisungsoptimierten Generierungsmodellen, die in den Größen 8B, 70B und 405B (Text-Eingabe/Ausgabe) verfügbar sind. Die anweisungsoptimierten Textmodelle (8B, 70B, 405B) sind speziell für mehrsprachige Dialoganwendungen optimiert und haben in gängigen Branchenbenchmarks viele verfügbare Open-Source-Chat-Modelle übertroffen. Llama 3.1 ist für kommerzielle und Forschungszwecke in mehreren Sprachen konzipiert. Die anweisungsoptimierten Textmodelle eignen sich für assistentengleiche Chats, während die vortrainierten Modelle für verschiedene Aufgaben der natürlichen Sprachgenerierung angepasst werden können. Das Llama 3.1 Modell unterstützt auch die Nutzung seiner Ausgaben zur Verbesserung anderer Modelle, einschließlich der Generierung synthetischer Daten und der Verfeinerung. Llama 3.1 ist ein autoregressives Sprachmodell, das auf einer optimierten Transformer-Architektur basiert. Die angepasste Version verwendet überwachte Feinabstimmung (SFT) und verstärkendes Lernen mit menschlichem Feedback (RLHF), um den menschlichen Präferenzen für Hilfsbereitschaft und Sicherheit zu entsprechen."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "Die aktualisierte Version von Meta Llama 3.1 70B Instruct umfasst eine erweiterte Kontextlänge von 128K, Mehrsprachigkeit und verbesserte Schlussfolgerungsfähigkeiten. Die von Llama 3.1 bereitgestellten mehrsprachigen großen Sprachmodelle (LLMs) sind eine Gruppe von vortrainierten, anweisungsoptimierten Generierungsmodellen, einschließlich Größen von 8B, 70B und 405B (Textinput/-output). Die anweisungsoptimierten Textmodelle (8B, 70B, 405B) sind für mehrsprachige Dialoganwendungen optimiert und übertreffen viele verfügbare Open-Source-Chat-Modelle in gängigen Branchenbenchmarks. Llama 3.1 ist für kommerzielle und Forschungszwecke in mehreren Sprachen konzipiert. Die anweisungsoptimierten Textmodelle eignen sich für assistentengleiche Chats, während die vortrainierten Modelle für eine Vielzahl von Aufgaben der natürlichen Sprachgenerierung angepasst werden können. Llama 3.1-Modelle unterstützen auch die Nutzung ihrer Ausgaben zur Verbesserung anderer Modelle, einschließlich der Generierung synthetischer Daten und der Verfeinerung. Llama 3.1 ist ein autoregressives Sprachmodell, das mit einer optimierten Transformer-Architektur entwickelt wurde. Die angepassten Versionen verwenden überwachte Feinabstimmung (SFT) und verstärkendes Lernen mit menschlichem Feedback (RLHF), um den menschlichen Präferenzen für Hilfsbereitschaft und Sicherheit zu entsprechen."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "Die aktualisierte Version von Meta Llama 3.1 8B Instruct umfasst eine erweiterte Kontextlänge von 128K, Mehrsprachigkeit und verbesserte Schlussfolgerungsfähigkeiten. Die von Llama 3.1 bereitgestellten mehrsprachigen großen Sprachmodelle (LLMs) sind eine Gruppe von vortrainierten, anweisungsoptimierten Generierungsmodellen, einschließlich Größen von 8B, 70B und 405B (Textinput/-output). Die anweisungsoptimierten Textmodelle (8B, 70B, 405B) sind für mehrsprachige Dialoganwendungen optimiert und übertreffen viele verfügbare Open-Source-Chat-Modelle in gängigen Branchenbenchmarks. Llama 3.1 ist für kommerzielle und Forschungszwecke in mehreren Sprachen konzipiert. Die anweisungsoptimierten Textmodelle eignen sich für assistentengleiche Chats, während die vortrainierten Modelle für eine Vielzahl von Aufgaben der natürlichen Sprachgenerierung angepasst werden können. Llama 3.1-Modelle unterstützen auch die Nutzung ihrer Ausgaben zur Verbesserung anderer Modelle, einschließlich der Generierung synthetischer Daten und der Verfeinerung. Llama 3.1 ist ein autoregressives Sprachmodell, das mit einer optimierten Transformer-Architektur entwickelt wurde. Die angepassten Versionen verwenden überwachte Feinabstimmung (SFT) und verstärkendes Lernen mit menschlichem Feedback (RLHF), um den menschlichen Präferenzen für Hilfsbereitschaft und Sicherheit zu entsprechen."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 ist ein offenes großes Sprachmodell (LLM), das sich an Entwickler, Forscher und Unternehmen richtet und ihnen hilft, ihre Ideen für generative KI zu entwickeln, zu experimentieren und verantwortungsbewusst zu skalieren. Als Teil eines globalen Innovationssystems ist es besonders geeignet für die Erstellung von Inhalten, Dialog-KI, Sprachverständnis, Forschung und Unternehmensanwendungen."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 ist ein offenes großes Sprachmodell (LLM), das sich an Entwickler, Forscher und Unternehmen richtet und ihnen hilft, ihre Ideen für generative KI zu entwickeln, zu experimentieren und verantwortungsbewusst zu skalieren. Als Teil eines globalen Innovationssystems ist es besonders geeignet für Umgebungen mit begrenzter Rechenleistung und Ressourcen, für Edge-Geräte und schnellere Trainingszeiten."
+ },
+ "microsoft/wizardlm-2-7b": {
+ "description": "WizardLM 2 7B ist das neueste schnelle und leichte Modell von Microsoft AI, dessen Leistung fast zehnmal so hoch ist wie die bestehender führender Open-Source-Modelle."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B ist das fortschrittlichste Wizard-Modell von Microsoft AI und zeigt äußerst wettbewerbsfähige Leistungen."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V ist das neue multimodale Großmodell von OpenBMB, das über hervorragende OCR-Erkennungs- und multimodale Verständnisfähigkeiten verfügt und eine Vielzahl von Anwendungsszenarien unterstützt."
+ },
+ "mistral": {
+ "description": "Mistral ist ein 7B-Modell von Mistral AI, das sich für vielfältige Anforderungen an die Sprachverarbeitung eignet."
+ },
+ "mistral-large": {
+ "description": "Mistral Large ist das Flaggschiff-Modell von Mistral, das die Fähigkeiten zur Codegenerierung, Mathematik und Schlussfolgerungen kombiniert und ein Kontextfenster von 128k unterstützt."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) ist ein fortschrittliches großes Sprachmodell (LLM) mit modernsten Fähigkeiten in den Bereichen Schlussfolgerungen, Wissen und Programmierung."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large ist das Flaggschiff-Modell, das sich gut für mehrsprachige Aufgaben, komplexe Schlussfolgerungen und Codegenerierung eignet und die ideale Wahl für hochentwickelte Anwendungen ist."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo wurde in Zusammenarbeit mit Mistral AI und NVIDIA entwickelt und ist ein leistungsstarkes 12B-Modell."
+ },
+ "mistral-small": {
+ "description": "Mistral Small kann für jede sprachbasierte Aufgabe verwendet werden, die hohe Effizienz und geringe Latenz erfordert."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small ist eine kosteneffiziente, schnelle und zuverlässige Option für Anwendungsfälle wie Übersetzung, Zusammenfassung und Sentimentanalyse."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct ist bekannt für seine hohe Leistung und eignet sich für eine Vielzahl von Sprachaufgaben."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B ist ein nach Bedarf feinabgestimmtes Modell, das optimierte Antworten auf Aufgaben bietet."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 bietet effiziente Rechenleistung und natürliche Sprachverständnisfähigkeiten und eignet sich für eine Vielzahl von Anwendungen."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) ist ein super großes Sprachmodell, das extrem hohe Verarbeitungsanforderungen unterstützt."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B ist ein vortrainiertes spärliches Mixture-of-Experts-Modell, das für allgemeine Textaufgaben verwendet wird."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct ist ein hochleistungsfähiges Branchenstandardmodell mit Geschwindigkeitsoptimierung und Unterstützung für lange Kontexte."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo ist ein 12B-Parameter-Modell mit Unterstützung für mehrere Sprachen und hoher Programmierleistung."
+ },
+ "mixtral": {
+ "description": "Mixtral ist das Expertenmodell von Mistral AI, das über Open-Source-Gewichte verfügt und Unterstützung bei der Codegenerierung und beim Sprachverständnis bietet."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B bietet hochgradig fehlertolerante parallele Berechnungsfähigkeiten und eignet sich für komplexe Aufgaben."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral ist das Expertenmodell von Mistral AI, das über Open-Source-Gewichte verfügt und Unterstützung bei der Codegenerierung und beim Sprachverständnis bietet."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K ist ein Modell mit überragenden Fähigkeiten zur Verarbeitung von langen Kontexten, das für die Generierung von sehr langen Texten geeignet ist und die Anforderungen komplexer Generierungsaufgaben erfüllt. Es kann Inhalte mit bis zu 128.000 Tokens verarbeiten und eignet sich hervorragend für Anwendungen in der Forschung, Wissenschaft und der Erstellung großer Dokumente."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K bietet die Fähigkeit zur Verarbeitung von mittellangen Kontexten und kann 32.768 Tokens verarbeiten, was es besonders geeignet für die Generierung verschiedener langer Dokumente und komplexer Dialoge macht, die in den Bereichen Inhaltserstellung, Berichtsgenerierung und Dialogsysteme eingesetzt werden."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K ist für die Generierung von Kurztextaufgaben konzipiert und bietet eine effiziente Verarbeitungsleistung, die 8.192 Tokens verarbeiten kann. Es eignet sich hervorragend für kurze Dialoge, Notizen und schnelle Inhaltserstellung."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B ist die aktualisierte Version von Nous Hermes 2 und enthält die neuesten intern entwickelten Datensätze."
+ },
+ "o1-mini": {
+ "description": "o1-mini ist ein schnelles und kosteneffizientes Inferenzmodell, das für Programmier-, Mathematik- und Wissenschaftsanwendungen entwickelt wurde. Das Modell hat einen Kontext von 128K und einen Wissensstand bis Oktober 2023."
+ },
+ "o1-preview": {
+ "description": "o1 ist OpenAIs neues Inferenzmodell, das für komplexe Aufgaben geeignet ist, die umfangreiches Allgemeinwissen erfordern. Das Modell hat einen Kontext von 128K und einen Wissensstand bis Oktober 2023."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba ist ein auf die Codegenerierung spezialisiertes Mamba 2-Sprachmodell, das starke Unterstützung für fortschrittliche Code- und Schlussfolgerungsaufgaben bietet."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B ist ein kompaktes, aber leistungsstarkes Modell, das sich gut für Batch-Verarbeitung und einfache Aufgaben wie Klassifizierung und Textgenerierung eignet und über gute Schlussfolgerungsfähigkeiten verfügt."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo ist ein 12B-Modell, das in Zusammenarbeit mit Nvidia entwickelt wurde und hervorragende Schlussfolgerungs- und Codierungsfähigkeiten bietet, die leicht zu integrieren und zu ersetzen sind."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B ist ein größeres Expertenmodell, das sich auf komplexe Aufgaben konzentriert und hervorragende Schlussfolgerungsfähigkeiten sowie eine höhere Durchsatzrate bietet."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B ist ein spärliches Expertenmodell, das mehrere Parameter nutzt, um die Schlussfolgerungsgeschwindigkeit zu erhöhen und sich für die Verarbeitung mehrsprachiger und Codegenerierungsaufgaben eignet."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o ist ein dynamisches Modell, das in Echtzeit aktualisiert wird, um die neueste Version zu gewährleisten. Es kombiniert starke Sprachverständnis- und Generierungsfähigkeiten und eignet sich für großangelegte Anwendungsszenarien, einschließlich Kundenservice, Bildung und technische Unterstützung."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini ist das neueste Modell von OpenAI, das nach GPT-4 Omni veröffentlicht wurde und Text- und Bild-Eingaben unterstützt. Als ihr fortschrittlichstes kleines Modell ist es viel günstiger als andere neueste Modelle und über 60 % günstiger als GPT-3.5 Turbo. Es behält die fortschrittlichste Intelligenz bei und bietet gleichzeitig ein hervorragendes Preis-Leistungs-Verhältnis. GPT-4o mini erzielte 82 % im MMLU-Test und rangiert derzeit in den Chat-Präferenzen über GPT-4."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini ist ein schnelles und kosteneffizientes Inferenzmodell, das für Programmier-, Mathematik- und Wissenschaftsanwendungen entwickelt wurde. Das Modell hat einen Kontext von 128K und einen Wissensstand bis Oktober 2023."
+ },
+ "openai/o1-preview": {
+ "description": "o1 ist OpenAIs neues Inferenzmodell, das für komplexe Aufgaben geeignet ist, die umfangreiches Allgemeinwissen erfordern. Das Modell hat einen Kontext von 128K und einen Wissensstand bis Oktober 2023."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B ist eine Open-Source-Sprachmodellbibliothek, die mit der Strategie „C-RLFT (Conditioned Reinforcement Learning Fine-Tuning)“ optimiert wurde."
+ },
+ "openrouter/auto": {
+ "description": "Je nach Kontextlänge, Thema und Komplexität wird Ihre Anfrage an Llama 3 70B Instruct, Claude 3.5 Sonnet (selbstregulierend) oder GPT-4o gesendet."
+ },
+ "phi3": {
+ "description": "Phi-3 ist ein leichtgewichtiges offenes Modell von Microsoft, das für effiziente Integration und großangelegte Wissensschlüsse geeignet ist."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 ist ein leichtgewichtiges offenes Modell von Microsoft, das für effiziente Integration und großangelegte Wissensschlüsse geeignet ist."
+ },
+ "pixtral-12b-2409": {
+ "description": "Das Pixtral-Modell zeigt starke Fähigkeiten in Aufgaben wie Diagramm- und Bildverständnis, Dokumentenfragen, multimodale Schlussfolgerungen und Befolgung von Anweisungen. Es kann Bilder in natürlicher Auflösung und Seitenverhältnis aufnehmen und in einem langen Kontextfenster von bis zu 128K Tokens beliebig viele Bilder verarbeiten."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "Das Tongyi Qianwen Code-Modell."
+ },
+ "qwen-long": {
+ "description": "Qwen ist ein groß angelegtes Sprachmodell, das lange Textkontexte unterstützt und Dialogfunktionen für verschiedene Szenarien wie lange Dokumente und mehrere Dokumente bietet."
+ },
+ "qwen-math-plus-latest": {
+ "description": "Das Tongyi Qianwen Mathematikmodell ist speziell für die Lösung von mathematischen Problemen konzipiert."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "Das Tongyi Qianwen Mathematikmodell ist speziell für die Lösung von mathematischen Problemen konzipiert."
+ },
+ "qwen-max-latest": {
+ "description": "Tongyi Qianwen ist ein groß angelegtes Sprachmodell auf Hundert-Milliarden-Niveau, das Eingaben in verschiedenen Sprachen wie Chinesisch und Englisch unterstützt und das API-Modell hinter der aktuellen Version 2.5 von Tongyi Qianwen darstellt."
+ },
+ "qwen-plus-latest": {
+ "description": "Der Tongyi Qianwen ist die erweiterte Version eines groß angelegten Sprachmodells, das Eingaben in verschiedenen Sprachen wie Chinesisch und Englisch unterstützt."
+ },
+ "qwen-turbo-latest": {
+ "description": "Der Tongyi Qianwen ist ein groß angelegtes Sprachmodell, das Eingaben in verschiedenen Sprachen wie Chinesisch und Englisch unterstützt."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Qwen VL unterstützt flexible Interaktionsmethoden, einschließlich Mehrbild-, Mehrfachfragen und kreativen Fähigkeiten."
+ },
+ "qwen-vl-max": {
+ "description": "Qwen ist ein groß angelegtes visuelles Sprachmodell. Im Vergleich zur verbesserten Version hat es die visuelle Schlussfolgerungsfähigkeit und die Befolgung von Anweisungen weiter verbessert und bietet ein höheres Maß an visueller Wahrnehmung und Kognition."
+ },
+ "qwen-vl-plus": {
+ "description": "Qwen ist eine verbesserte Version des groß angelegten visuellen Sprachmodells. Es verbessert erheblich die Fähigkeit zur Detailerkennung und Texterkennung und unterstützt Bilder mit über einer Million Pixeln und beliebigen Seitenverhältnissen."
+ },
+ "qwen-vl-v1": {
+ "description": "Basierend auf dem Qwen-7B-Sprachmodell wurde ein Bildmodell hinzugefügt, das für Bildeingaben mit einer Auflösung von 448 vortrainiert wurde."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 ist eine neue Serie großer Sprachmodelle mit stärkeren Verständnis- und Generierungsfähigkeiten."
+ },
+ "qwen2": {
+ "description": "Qwen2 ist das neue große Sprachmodell von Alibaba, das mit hervorragender Leistung eine Vielzahl von Anwendungsanforderungen unterstützt."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "Das 14B-Modell von Tongyi Qianwen 2.5 ist öffentlich zugänglich."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "Das 32B-Modell von Tongyi Qianwen 2.5 ist öffentlich zugänglich."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "Das 72B-Modell von Tongyi Qianwen 2.5 ist öffentlich zugänglich."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "Das 7B-Modell von Tongyi Qianwen 2.5 ist öffentlich zugänglich."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "Die Open-Source-Version des Tongyi Qianwen Code-Modells."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "Die Open-Source-Version des Tongyi Qianwen Code-Modells."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Das Qwen-Math-Modell verfügt über starke Fähigkeiten zur Lösung mathematischer Probleme."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Das Qwen-Math-Modell verfügt über starke Fähigkeiten zur Lösung mathematischer Probleme."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Das Qwen-Math-Modell verfügt über starke Fähigkeiten zur Lösung mathematischer Probleme."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 ist das neue große Sprachmodell von Alibaba, das mit hervorragender Leistung eine Vielzahl von Anwendungsanforderungen unterstützt."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 ist das neue große Sprachmodell von Alibaba, das mit hervorragender Leistung eine Vielzahl von Anwendungsanforderungen unterstützt."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 ist das neue große Sprachmodell von Alibaba, das mit hervorragender Leistung eine Vielzahl von Anwendungsanforderungen unterstützt."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini ist ein kompaktes LLM, das besser als GPT-3.5 abschneidet und über starke Mehrsprachigkeitsfähigkeiten verfügt, unterstützt Englisch und Koreanisch und bietet eine effiziente, kompakte Lösung."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) erweitert die Fähigkeiten von Solar Mini und konzentriert sich auf Japanisch, während es gleichzeitig in der Nutzung von Englisch und Koreanisch effizient und leistungsstark bleibt."
+ },
+ "solar-pro": {
+ "description": "Solar Pro ist ein hochintelligentes LLM, das von Upstage entwickelt wurde und sich auf die Befolgung von Anweisungen mit einer einzigen GPU konzentriert, mit einem IFEval-Score von über 80. Derzeit unterstützt es Englisch, die offizielle Version ist für November 2024 geplant und wird die Sprachunterstützung und Kontextlänge erweitern."
+ },
+ "step-1-128k": {
+ "description": "Bietet ein ausgewogenes Verhältnis zwischen Leistung und Kosten, geeignet für allgemeine Szenarien."
+ },
+ "step-1-256k": {
+ "description": "Verfügt über die Fähigkeit zur Verarbeitung ultra-langer Kontexte, besonders geeignet für die Analyse langer Dokumente."
+ },
+ "step-1-32k": {
+ "description": "Unterstützt mittellange Dialoge und eignet sich für verschiedene Anwendungsszenarien."
+ },
+ "step-1-8k": {
+ "description": "Kleinmodell, geeignet für leichte Aufgaben."
+ },
+ "step-1-flash": {
+ "description": "Hochgeschwindigkeitsmodell, geeignet für Echtzeitdialoge."
+ },
+ "step-1v-32k": {
+ "description": "Unterstützt visuelle Eingaben und verbessert die multimodale Interaktionserfahrung."
+ },
+ "step-1v-8k": {
+ "description": "Kleinvisualmodell, geeignet für grundlegende Text- und Bildaufgaben."
+ },
+ "step-2-16k": {
+ "description": "Unterstützt groß angelegte Kontextinteraktionen und eignet sich für komplexe Dialogszenarien."
+ },
+ "taichu_llm": {
+ "description": "Das Zīdōng Taichu Sprachmodell verfügt über außergewöhnliche Sprachverständnisfähigkeiten sowie Fähigkeiten in Textgenerierung, Wissensabfrage, Programmierung, mathematischen Berechnungen, logischem Denken, Sentimentanalyse und Textzusammenfassung. Es kombiniert innovativ große Datenvortrainings mit reichhaltigem Wissen aus mehreren Quellen, verfeinert kontinuierlich die Algorithmen und absorbiert ständig neues Wissen aus umfangreichen Textdaten in Bezug auf Vokabular, Struktur, Grammatik und Semantik, um die Leistung des Modells kontinuierlich zu verbessern. Es bietet den Nutzern bequemere Informationen und Dienstleistungen sowie ein intelligenteres Erlebnis."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V vereint Fähigkeiten wie Bildverständnis, Wissensübertragung und logische Attribution und zeigt herausragende Leistungen im Bereich der Bild-Text-Fragen."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) bietet durch effiziente Strategien und Modellarchitekturen verbesserte Rechenfähigkeiten."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) eignet sich für präzise Anweisungsaufgaben und bietet hervorragende Sprachverarbeitungsfähigkeiten."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 ist ein Sprachmodell von Microsoft AI, das in komplexen Dialogen, mehrsprachigen Anwendungen, Schlussfolgerungen und intelligenten Assistenten besonders gut abschneidet."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 ist ein Sprachmodell von Microsoft AI, das in komplexen Dialogen, mehrsprachigen Anwendungen, Schlussfolgerungen und intelligenten Assistenten besonders gut abschneidet."
+ },
+ "yi-large": {
+ "description": "Das brandneue Modell mit einer Billion Parametern bietet außergewöhnliche Frage- und Textgenerierungsfähigkeiten."
+ },
+ "yi-large-fc": {
+ "description": "Basierend auf dem yi-large-Modell unterstützt und verstärkt es die Fähigkeit zu Werkzeugaufrufen und eignet sich für verschiedene Geschäftsszenarien, die den Aufbau von Agenten oder Workflows erfordern."
+ },
+ "yi-large-preview": {
+ "description": "Frühe Version, empfohlen wird die Verwendung von yi-large (neue Version)."
+ },
+ "yi-large-rag": {
+ "description": "Ein fortgeschrittener Dienst, der auf dem leistungsstarken yi-large-Modell basiert und präzise Antworten durch die Kombination von Abruf- und Generierungstechnologien bietet, sowie Echtzeit-Informationsdienste aus dem gesamten Web."
+ },
+ "yi-large-turbo": {
+ "description": "Hervorragendes Preis-Leistungs-Verhältnis und außergewöhnliche Leistung. Hochpräzise Feinabstimmung basierend auf Leistung, Schlussfolgerungsgeschwindigkeit und Kosten."
+ },
+ "yi-medium": {
+ "description": "Mittelgroßes Modell mit verbesserten Feinabstimmungen, ausgewogene Fähigkeiten und gutes Preis-Leistungs-Verhältnis. Tiefgehende Optimierung der Anweisungsbefolgung."
+ },
+ "yi-medium-200k": {
+ "description": "200K ultra-lange Kontextfenster bieten tiefes Verständnis und Generierungsfähigkeiten für lange Texte."
+ },
+ "yi-spark": {
+ "description": "Klein und kompakt, ein leichtgewichtiges und schnelles Modell. Bietet verbesserte mathematische Berechnungs- und Programmierfähigkeiten."
+ },
+ "yi-vision": {
+ "description": "Modell für komplexe visuelle Aufgaben, das hohe Leistungsfähigkeit bei der Bildverarbeitung und -analyse bietet."
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/plugin.json b/DigitalHumanWeb/locales/de-DE/plugin.json
new file mode 100644
index 0000000..f1c41d5
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Argumente",
+ "function_call": "Funktionsaufruf",
+ "off": "Debugging deaktivieren",
+ "on": "Plugin-Aufrufinformationen anzeigen",
+ "payload": "Plugin-Payload",
+ "response": "Antwort",
+ "tool_call": "Tool Call Request"
+ },
+ "detailModal": {
+ "info": {
+ "description": "API-Beschreibung",
+ "name": "API-Name"
+ },
+ "tabs": {
+ "info": "Plugin-Fähigkeiten",
+ "manifest": "Installationsdatei",
+ "settings": "Einstellungen"
+ },
+ "title": "Plugin-Details"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Möchten Sie das lokale Plugin wirklich löschen? Es kann nach dem Löschen nicht wiederhergestellt werden.",
+ "customParams": {
+ "useProxy": {
+ "label": "Durch Proxy installieren (Bei Problemen mit Cross-Origin-Zugriffsfehlern können Sie versuchen, diese Option zu aktivieren und das Plugin erneut zu installieren)"
+ }
+ },
+ "deleteSuccess": "Plugin erfolgreich gelöscht",
+ "manifest": {
+ "identifier": {
+ "desc": "Eindeutige Kennung des Plugins",
+ "label": "Kennung"
+ },
+ "mode": {
+ "local": "Visuelle Konfiguration",
+ "local-tooltip": "Visuelle Konfiguration vorübergehend nicht unterstützt",
+ "url": "Online-Link"
+ },
+ "name": {
+ "desc": "Plugin-Titel",
+ "label": "Titel",
+ "placeholder": "Suchmaschine"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "Autor des Plugins",
+ "label": "Autor"
+ },
+ "avatar": {
+ "desc": "Symbol des Plugins, kann Emoji oder URL verwenden",
+ "label": "Symbol"
+ },
+ "description": {
+ "desc": "Plugin-Beschreibung",
+ "label": "Beschreibung",
+ "placeholder": "Informationen von Suchmaschinen abrufen"
+ },
+ "formFieldRequired": "Dieses Feld ist erforderlich",
+ "homepage": {
+ "desc": "Startseite des Plugins",
+ "label": "Startseite"
+ },
+ "identifier": {
+ "desc": "Eindeutige Kennung des Plugins, wird automatisch aus dem Manifest erkannt",
+ "errorDuplicate": "Kennung ist bereits für ein anderes Plugin vergeben. Bitte ändern Sie die Kennung",
+ "label": "Kennung",
+ "pattenErrorMessage": "Es können nur Buchstaben, Zahlen, - und _ eingegeben werden"
+ },
+ "manifest": {
+ "desc": "{{appName}} wird das Plugin über diesen Link installieren.",
+ "label": "Plugin-Beschreibungsdatei (Manifest) URL",
+ "preview": "Vorschau des Manifests",
+ "refresh": "Aktualisieren"
+ },
+ "title": {
+ "desc": "Plugin-Titel",
+ "label": "Titel",
+ "placeholder": "Suchmaschine"
+ }
+ },
+ "metaConfig": "Konfiguration der Plugin-Metadaten",
+ "modalDesc": "Nach dem Hinzufügen eines benutzerdefinierten Plugins kann es zur Validierung der Plugin-Entwicklung verwendet oder direkt in Unterhaltungen verwendet werden. Weitere Informationen zur Plugin-Entwicklung finden Sie in den <1>Entwicklerdokumenten↗>.",
+ "openai": {
+ "importUrl": "Von URL-Link importieren",
+ "schema": "Schema"
+ },
+ "preview": {
+ "card": "Vorschau der Plugin-Anzeige",
+ "desc": "Vorschau der Plugin-Beschreibung",
+ "title": "Vorschau des Plugin-Namens"
+ },
+ "save": "Plugin installieren",
+ "saveSuccess": "Plugin-Einstellungen erfolgreich gespeichert",
+ "tabs": {
+ "manifest": "Funktionsbeschreibungsliste (Manifest)",
+ "meta": "Plugin-Metadaten"
+ },
+ "title": {
+ "create": "Benutzerdefiniertes Plugin hinzufügen",
+ "edit": "Benutzerdefiniertes Plugin bearbeiten"
+ },
+ "type": {
+ "lobe": "LobeChat-Plugin",
+ "openai": "OpenAI-Plugin"
+ },
+ "update": "Aktualisieren",
+ "updateSuccess": "Plugin-Einstellungen erfolgreich aktualisiert"
+ },
+ "error": {
+ "fetchError": "Fehler beim Abrufen des Manifest-Links. Stellen Sie sicher, dass der Link gültig ist und dass die Cross-Origin-Anfrage erlaubt ist.",
+ "installError": "Fehler bei der Installation des Plugins {{name}}.",
+ "manifestInvalid": "Das Manifest entspricht nicht den Standards. Validierungsergebnis: \n\n {{error}}",
+ "noManifest": "Manifest nicht vorhanden",
+ "openAPIInvalid": "Fehler beim Parsen von OpenAPI. Fehler: \n\n {{error}}",
+ "reinstallError": "Fehler beim Aktualisieren des Plugins {{name}}.",
+ "urlError": "Der Link hat keine JSON-Format-Inhalte zurückgegeben. Stellen Sie sicher, dass der Link gültig ist."
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Veraltet",
+ "local.config": "Konfiguration",
+ "local.title": "Benutzerdefiniert"
+ }
+ },
+ "loading": {
+ "content": "Plugin wird aufgerufen...",
+ "plugin": "Plugin wird ausgeführt..."
+ },
+ "pluginList": "Plugin-Liste",
+ "setting": "Plugin-Einstellung",
+ "settings": {
+ "indexUrl": {
+ "title": "Marktindex",
+ "tooltip": "Online-Bearbeitung wird derzeit nicht unterstützt. Bitte über Umgebungsvariablen bei der Bereitstellung festlegen."
+ },
+ "modalDesc": "Nachdem Sie die Adresse des Plugin-Marktes konfiguriert haben, können Sie den benutzerdefinierten Plugin-Markt verwenden.",
+ "title": "Plugin-Markteinstellungen"
+ },
+ "showInPortal": "Bitte überprüfen Sie die Details im Portal",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Das Plugin wird deinstalliert und alle Konfigurationen werden gelöscht. Bitte bestätigen Sie Ihre Aktion.",
+ "detail": "Details",
+ "install": "Installieren",
+ "manifest": "Installationsdatei bearbeiten",
+ "settings": "Einstellungen",
+ "uninstall": "Deinstallieren"
+ },
+ "communityPlugin": "Community",
+ "customPlugin": "Benutzerdefiniert",
+ "empty": "Keine installierten Plugins vorhanden",
+ "installAllPlugins": "Alle installieren",
+ "networkError": "Fehler beim Abrufen des Plugin-Shops. Bitte überprüfen Sie die Netzwerkverbindung und versuchen Sie es erneut.",
+ "placeholder": "Suche nach Plugin-Namen, Beschreibung oder Stichwort...",
+ "releasedAt": "Veröffentlicht am {{createdAt}}",
+ "tabs": {
+ "all": "Alle",
+ "installed": "Installiert"
+ },
+ "title": "Plugin-Shop"
+ },
+ "unknownPlugin": "Unbekanntes Plugin"
+}
diff --git a/DigitalHumanWeb/locales/de-DE/portal.json b/DigitalHumanWeb/locales/de-DE/portal.json
new file mode 100644
index 0000000..a34ae2d
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Artefakte",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Chunk",
+ "file": "Datei"
+ }
+ },
+ "Plugins": "Plugins",
+ "actions": {
+ "genAiMessage": "Assistenten-Nachricht erstellen",
+ "summary": "Zusammenfassung",
+ "summaryTooltip": "Zusammenfassung des aktuellen Inhalts"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Code",
+ "preview": "Vorschau"
+ },
+ "svg": {
+ "copyAsImage": "Als Bild kopieren",
+ "copyFail": "Kopieren fehlgeschlagen, Fehlerursache: {{error}}",
+ "copySuccess": "Bild erfolgreich kopiert",
+ "download": {
+ "png": "Als PNG herunterladen",
+ "svg": "Als SVG herunterladen"
+ }
+ }
+ },
+ "emptyArtifactList": "Die Liste der Artefakte ist derzeit leer. Bitte verwenden Sie Plugins in der Sitzung und überprüfen Sie sie erneut.",
+ "emptyKnowledgeList": "Die aktuelle Wissensliste ist leer. Bitte aktivieren Sie die Wissensdatenbank nach Bedarf in der Sitzung, um sie anzuzeigen.",
+ "files": "Dateien",
+ "messageDetail": "Nachrichtendetails",
+ "title": "Erweiterungsfenster"
+}
diff --git a/DigitalHumanWeb/locales/de-DE/providers.json b/DigitalHumanWeb/locales/de-DE/providers.json
new file mode 100644
index 0000000..d1d14a6
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI ist die von der 360 Company eingeführte Plattform für KI-Modelle und -Dienste, die eine Vielzahl fortschrittlicher Modelle zur Verarbeitung natürlicher Sprache anbietet, darunter 360GPT2 Pro, 360GPT Pro, 360GPT Turbo und 360GPT Turbo Responsibility 8K. Diese Modelle kombinieren große Parameter mit multimodalen Fähigkeiten und finden breite Anwendung in den Bereichen Textgenerierung, semantisches Verständnis, Dialogsysteme und Codegenerierung. Durch flexible Preisstrategien erfüllt 360 AI die vielfältigen Bedürfnisse der Nutzer, unterstützt Entwickler bei der Integration und fördert die Innovation und Entwicklung intelligenter Anwendungen."
+ },
+ "anthropic": {
+ "description": "Anthropic ist ein Unternehmen, das sich auf Forschung und Entwicklung im Bereich der künstlichen Intelligenz spezialisiert hat und eine Reihe fortschrittlicher Sprachmodelle anbietet, darunter Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus und Claude 3 Haiku. Diese Modelle erreichen ein ideales Gleichgewicht zwischen Intelligenz, Geschwindigkeit und Kosten und sind für eine Vielzahl von Anwendungsszenarien geeignet, von unternehmensweiten Arbeitslasten bis hin zu schnellen Reaktionen. Claude 3.5 Sonnet, als neuestes Modell, hat in mehreren Bewertungen hervorragend abgeschnitten und bietet gleichzeitig ein hohes Preis-Leistungs-Verhältnis."
+ },
+ "azure": {
+ "description": "Azure bietet eine Vielzahl fortschrittlicher KI-Modelle, darunter GPT-3.5 und die neueste GPT-4-Serie, die verschiedene Datentypen und komplexe Aufgaben unterstützen und sich auf sichere, zuverlässige und nachhaltige KI-Lösungen konzentrieren."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligent ist ein Unternehmen, das sich auf die Forschung und Entwicklung großer KI-Modelle spezialisiert hat. Ihre Modelle zeigen hervorragende Leistungen in chinesischen Aufgaben wie Wissensdatenbanken, Verarbeitung langer Texte und kreative Generierung und übertreffen die gängigen Modelle im Ausland. Baichuan Intelligent verfügt auch über branchenführende multimodale Fähigkeiten und hat in mehreren renommierten Bewertungen hervorragend abgeschnitten. Ihre Modelle umfassen Baichuan 4, Baichuan 3 Turbo und Baichuan 3 Turbo 128k, die jeweils für unterschiedliche Anwendungsszenarien optimiert sind und kosteneffiziente Lösungen bieten."
+ },
+ "bedrock": {
+ "description": "Bedrock ist ein Service von Amazon AWS, der sich darauf konzentriert, Unternehmen fortschrittliche KI-Sprach- und visuelle Modelle bereitzustellen. Die Modellfamilie umfasst die Claude-Serie von Anthropic, die Llama 3.1-Serie von Meta und mehr, und bietet eine Vielzahl von Optionen von leichtgewichtig bis hochleistungsfähig, die Textgenerierung, Dialoge, Bildverarbeitung und andere Aufgaben unterstützen und für Unternehmensanwendungen unterschiedlicher Größen und Anforderungen geeignet sind."
+ },
+ "deepseek": {
+ "description": "DeepSeek ist ein Unternehmen, das sich auf die Forschung und Anwendung von KI-Technologien spezialisiert hat. Ihr neuestes Modell, DeepSeek-V2.5, kombiniert allgemeine Dialog- und Codeverarbeitungsfähigkeiten und hat signifikante Fortschritte in den Bereichen menschliche Präferenzanpassung, Schreibaufgaben und Befehlsbefolgung erzielt."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI ist ein führender Anbieter von fortschrittlichen Sprachmodellen, der sich auf Funktionsaufrufe und multimodale Verarbeitung spezialisiert hat. Ihr neuestes Modell, Firefunction V2, basiert auf Llama-3 und ist für Funktionsaufrufe, Dialoge und Befehlsbefolgung optimiert. Das visuelle Sprachmodell FireLLaVA-13B unterstützt gemischte Eingaben von Bildern und Text. Weitere bemerkenswerte Modelle sind die Llama-Serie und die Mixtral-Serie, die effiziente mehrsprachige Befehlsbefolgung und Generierungsunterstützung bieten."
+ },
+ "github": {
+ "description": "Mit GitHub-Modellen können Entwickler zu KI-Ingenieuren werden und mit den führenden KI-Modellen der Branche arbeiten."
+ },
+ "google": {
+ "description": "Die Gemini-Serie von Google ist ihr fortschrittlichstes, universelles KI-Modell, das von Google DeepMind entwickelt wurde und speziell für multimodale Anwendungen konzipiert ist. Es unterstützt nahtlose Verständnis- und Verarbeitungsprozesse für Text, Code, Bilder, Audio und Video. Es ist für eine Vielzahl von Umgebungen geeignet, von Rechenzentren bis hin zu mobilen Geräten, und verbessert erheblich die Effizienz und Anwendbarkeit von KI-Modellen."
+ },
+ "groq": {
+ "description": "Der LPU-Inferenz-Engine von Groq hat in den neuesten unabhängigen Benchmark-Tests für große Sprachmodelle (LLM) hervorragende Leistungen gezeigt und definiert mit seiner erstaunlichen Geschwindigkeit und Effizienz die Standards für KI-Lösungen neu. Groq ist ein Beispiel für sofortige Inferenzgeschwindigkeit und zeigt in cloudbasierten Bereitstellungen eine gute Leistung."
+ },
+ "minimax": {
+ "description": "MiniMax ist ein im Jahr 2021 gegründetes Unternehmen für allgemeine künstliche Intelligenz, das sich der gemeinsamen Schaffung von Intelligenz mit den Nutzern widmet. MiniMax hat verschiedene multimodale allgemeine große Modelle entwickelt, darunter ein Textmodell mit Billionen von Parametern, ein Sprachmodell und ein Bildmodell. Außerdem wurden Anwendungen wie Conch AI eingeführt."
+ },
+ "mistral": {
+ "description": "Mistral bietet fortschrittliche allgemeine, spezialisierte und forschungsorientierte Modelle an, die in Bereichen wie komplexe Schlussfolgerungen, mehrsprachige Aufgaben und Code-Generierung weit verbreitet sind. Durch Funktionsaufrufschnittstellen können Benutzer benutzerdefinierte Funktionen integrieren und spezifische Anwendungen realisieren."
+ },
+ "moonshot": {
+ "description": "Moonshot ist eine Open-Source-Plattform, die von Beijing Dark Side Technology Co., Ltd. eingeführt wurde und eine Vielzahl von Modellen zur Verarbeitung natürlicher Sprache anbietet, die in vielen Bereichen Anwendung finden, darunter, aber nicht beschränkt auf, Inhaltserstellung, akademische Forschung, intelligente Empfehlungen und medizinische Diagnosen, und unterstützt die Verarbeitung langer Texte und komplexer Generierungsaufgaben."
+ },
+ "novita": {
+ "description": "Novita AI ist eine Plattform, die eine Vielzahl von großen Sprachmodellen und API-Diensten für die KI-Bilderzeugung anbietet, die flexibel, zuverlässig und kosteneffektiv ist. Sie unterstützt die neuesten Open-Source-Modelle wie Llama3 und Mistral und bietet umfassende, benutzerfreundliche und automatisch skalierbare API-Lösungen für die Entwicklung generativer KI-Anwendungen, die für das schnelle Wachstum von KI-Startups geeignet sind."
+ },
+ "ollama": {
+ "description": "Die von Ollama angebotenen Modelle decken ein breites Spektrum ab, darunter Code-Generierung, mathematische Berechnungen, mehrsprachige Verarbeitung und dialogbasierte Interaktionen, und unterstützen die vielfältigen Anforderungen an unternehmensgerechte und lokal angepasste Bereitstellungen."
+ },
+ "openai": {
+ "description": "OpenAI ist eine weltweit führende Forschungsinstitution im Bereich der künstlichen Intelligenz, deren entwickelte Modelle wie die GPT-Serie die Grenzen der Verarbeitung natürlicher Sprache vorantreiben. OpenAI setzt sich dafür ein, durch innovative und effiziente KI-Lösungen verschiedene Branchen zu transformieren. Ihre Produkte zeichnen sich durch herausragende Leistung und Wirtschaftlichkeit aus und finden breite Anwendung in Forschung, Wirtschaft und innovativen Anwendungen."
+ },
+ "openrouter": {
+ "description": "OpenRouter ist eine Serviceplattform, die eine Vielzahl von Schnittstellen für fortschrittliche große Modelle anbietet und OpenAI, Anthropic, LLaMA und mehr unterstützt, um vielfältige Entwicklungs- und Anwendungsbedürfnisse zu erfüllen. Benutzer können je nach ihren Anforderungen flexibel das optimale Modell und den Preis auswählen, um das KI-Erlebnis zu verbessern."
+ },
+ "perplexity": {
+ "description": "Perplexity ist ein führender Anbieter von Dialoggenerierungsmodellen und bietet eine Vielzahl fortschrittlicher Llama 3.1-Modelle an, die sowohl für Online- als auch Offline-Anwendungen geeignet sind und sich besonders für komplexe Aufgaben der Verarbeitung natürlicher Sprache eignen."
+ },
+ "qwen": {
+ "description": "Tongyi Qianwen ist ein von Alibaba Cloud selbst entwickeltes, groß angelegtes Sprachmodell mit starken Fähigkeiten zur Verarbeitung und Generierung natürlicher Sprache. Es kann eine Vielzahl von Fragen beantworten, Texte erstellen, Meinungen äußern und Code schreiben und spielt in mehreren Bereichen eine Rolle."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow hat sich zum Ziel gesetzt, AGI zu beschleunigen, um der Menschheit zu dienen, und die Effizienz großangelegter KI durch eine benutzerfreundliche und kostengünstige GenAI-Stack zu steigern."
+ },
+ "spark": {
+ "description": "Der Xinghuo-Großmodell von iFLYTEK bietet leistungsstarke KI-Fähigkeiten in mehreren Bereichen und Sprachen und nutzt fortschrittliche Technologien zur Verarbeitung natürlicher Sprache, um innovative Anwendungen für intelligente Hardware, intelligente Medizin, intelligente Finanzen und andere vertikale Szenarien zu entwickeln."
+ },
+ "stepfun": {
+ "description": "Das StepFun-Großmodell verfügt über branchenführende multimodale und komplexe Schlussfolgerungsfähigkeiten und unterstützt das Verständnis von sehr langen Texten sowie leistungsstarke, autonome Suchmaschinenfunktionen."
+ },
+ "taichu": {
+ "description": "Das Institut für Automatisierung der Chinesischen Akademie der Wissenschaften und das Wuhan Institute of Artificial Intelligence haben ein neues Generation multimodales großes Modell eingeführt, das umfassende Frage-Antwort-Aufgaben unterstützt, darunter mehrstufige Fragen, Textgenerierung, Bildgenerierung, 3D-Verständnis und Signalverarbeitung, mit stärkeren kognitiven, verstehenden und kreativen Fähigkeiten, die ein neues interaktives Erlebnis bieten."
+ },
+ "togetherai": {
+ "description": "Together AI strebt an, durch innovative KI-Modelle führende Leistungen zu erzielen und bietet umfangreiche Anpassungsmöglichkeiten, einschließlich schneller Skalierungsunterstützung und intuitiver Bereitstellungsprozesse, um den unterschiedlichen Anforderungen von Unternehmen gerecht zu werden."
+ },
+ "upstage": {
+ "description": "Upstage konzentriert sich auf die Entwicklung von KI-Modellen für verschiedene geschäftliche Anforderungen, einschließlich Solar LLM und Dokumenten-KI, mit dem Ziel, künstliche allgemeine Intelligenz (AGI) zu erreichen. Es ermöglicht die Erstellung einfacher Dialogagenten über die Chat-API und unterstützt Funktionsaufrufe, Übersetzungen, Einbettungen und spezifische Anwendungsbereiche."
+ },
+ "zeroone": {
+ "description": "01.AI konzentriert sich auf die künstliche Intelligenz-Technologie der AI 2.0-Ära und fördert aktiv die Innovation und Anwendung von 'Mensch + künstliche Intelligenz', indem sie leistungsstarke Modelle und fortschrittliche KI-Technologien einsetzt, um die Produktivität der Menschen zu steigern und technologische Befähigung zu erreichen."
+ },
+ "zhipu": {
+ "description": "Zhipu AI bietet eine offene Plattform für multimodale und Sprachmodelle, die eine breite Palette von KI-Anwendungsszenarien unterstützt, darunter Textverarbeitung, Bildverständnis und Programmierhilfe."
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/ragEval.json b/DigitalHumanWeb/locales/de-DE/ragEval.json
new file mode 100644
index 0000000..a4ac459
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Neu erstellen",
+ "description": {
+ "placeholder": "Beschreibung des Datensatzes (optional)"
+ },
+ "name": {
+ "placeholder": "Name des Datensatzes",
+ "required": "Bitte geben Sie den Namen des Datensatzes ein"
+ },
+ "title": "Datensatz hinzufügen"
+ },
+ "dataset": {
+ "addNewButton": "Datensatz erstellen",
+ "emptyGuide": "Der aktuelle Datensatz ist leer, bitte erstellen Sie einen Datensatz.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Daten importieren"
+ },
+ "columns": {
+ "actions": "Aktionen",
+ "ideal": {
+ "title": "Erwartete Antwort"
+ },
+ "question": {
+ "title": "Frage"
+ },
+ "referenceFiles": {
+ "title": "Referenzdateien"
+ }
+ },
+ "notSelected": "Bitte wählen Sie einen Datensatz auf der linken Seite aus",
+ "title": "Details zum Datensatz"
+ },
+ "title": "Datensatz"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Neu erstellen",
+ "datasetId": {
+ "placeholder": "Bitte wählen Sie Ihren Bewertungsdatensatz aus",
+ "required": "Bitte wählen Sie einen Bewertungsdatensatz aus"
+ },
+ "description": {
+ "placeholder": "Beschreibung der Bewertungsaufgabe (optional)"
+ },
+ "name": {
+ "placeholder": "Name der Bewertungsaufgabe",
+ "required": "Bitte geben Sie den Namen der Bewertungsaufgabe ein"
+ },
+ "title": "Bewertungsaufgabe hinzufügen"
+ },
+ "addNewButton": "Bewertung erstellen",
+ "emptyGuide": "Aktuell sind keine Bewertungsaufgaben vorhanden, beginnen Sie mit der Erstellung einer Bewertung.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Status überprüfen",
+ "confirmDelete": "Möchten Sie diese Bewertungsaufgabe wirklich löschen?",
+ "confirmRun": "Möchten Sie die Ausführung starten? Nach dem Start wird die Bewertungsaufgabe im Hintergrund asynchron ausgeführt, das Schließen der Seite hat keinen Einfluss auf die Ausführung der asynchronen Aufgabe.",
+ "downloadRecords": "Bewertung herunterladen",
+ "retry": "Erneut versuchen",
+ "run": "Ausführen",
+ "title": "Aktionen"
+ },
+ "datasetId": {
+ "title": "Datensatz"
+ },
+ "name": {
+ "title": "Name der Bewertungsaufgabe"
+ },
+ "records": {
+ "title": "Anzahl der Bewertungsaufzeichnungen"
+ },
+ "referenceFiles": {
+ "title": "Referenzdateien"
+ },
+ "status": {
+ "error": "Fehler bei der Ausführung",
+ "pending": "Warten auf Ausführung",
+ "processing": "Wird ausgeführt",
+ "success": "Erfolgreich ausgeführt",
+ "title": "Status"
+ }
+ },
+ "title": "Liste der Bewertungsaufgaben"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/setting.json b/DigitalHumanWeb/locales/de-DE/setting.json
new file mode 100644
index 0000000..4cf3937
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "Über"
+ },
+ "agentTab": {
+ "chat": "Chat-Präferenz",
+ "meta": "Assistenteninformation",
+ "modal": "Modell-Einstellungen",
+ "plugin": "Plugin-Einstellungen",
+ "prompt": "Rollenkonfiguration",
+ "tts": "Sprachdienst"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "Indem du die Übertragung von Telemetriedaten auswählst, kannst du uns helfen, das Gesamterlebnis der Nutzer von {{appName}} zu verbessern.",
+ "title": "Anonyme Nutzungsdaten senden"
+ },
+ "title": "Analytics"
+ },
+ "danger": {
+ "clear": {
+ "action": "Alle löschen",
+ "confirm": "Alle Chat-Daten löschen?",
+ "desc": "Alle Sitzungsdaten werden gelöscht, einschließlich Assistenten, Dateien, Nachrichten, Plugins usw.",
+ "success": "Alle Sitzungsnachrichten wurden gelöscht",
+ "title": "Alle Sitzungsnachrichten löschen"
+ },
+ "reset": {
+ "action": "Zurücksetzen",
+ "confirm": "Alle Einstellungen zurücksetzen?",
+ "currentVersion": "Aktuelle Version",
+ "desc": "Alle Einstellungen auf Standardwerte zurücksetzen",
+ "success": "Alle Einstellungen wurden zurückgesetzt",
+ "title": "Alle Einstellungen zurücksetzen"
+ }
+ },
+ "header": {
+ "desc": "Präferenzen und Modellkonfigurationen.",
+ "global": "Global Einstellungen",
+ "session": "Sitzungseinstellungen",
+ "sessionDesc": "Rollenkonfiguration und Sitzungspräferenzen.",
+ "sessionWithName": "Sitzungseinstellungen · {{name}}",
+ "title": "Einstellungen"
+ },
+ "llm": {
+ "aesGcm": "Ihr Schlüssel und Ihre Proxy-Adresse werden mit dem <1>AES-GCM1> Verschlüsselungsalgorithmus verschlüsselt.",
+ "apiKey": {
+ "desc": "Bitte geben Sie Ihren {{name}} API-Schlüssel ein",
+ "placeholder": "{{name}} API-Schlüssel",
+ "title": "API-Schlüssel"
+ },
+ "checker": {
+ "button": "Überprüfen",
+ "desc": "Überprüfen Sie, ob der API-Schlüssel und die Proxy-Adresse korrekt eingegeben wurden",
+ "pass": "Überprüfung bestanden",
+ "title": "Konnektivitätsprüfung"
+ },
+ "customModelCards": {
+ "addNew": "Erstellen und Hinzufügen von {{id}} Modell",
+ "config": "Modell konfigurieren",
+ "confirmDelete": "Das benutzerdefinierte Modell wird gelöscht und kann nicht wiederhergestellt werden. Bitte seien Sie vorsichtig.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Das Feld, das tatsächlich in Azure OpenAI angefordert wird",
+ "placeholder": "Geben Sie den Bereitstellungsnamen des Modells in Azure ein",
+ "title": "Modellbereitstellungsname"
+ },
+ "displayName": {
+ "placeholder": "Geben Sie den Anzeigenamen des Modells ein, z. B. ChatGPT, GPT-4 usw.",
+ "title": "Modellanzeigename"
+ },
+ "files": {
+ "extra": "Die aktuelle Datei-Upload-Implementierung ist lediglich eine Hack-Lösung und nur für eigene Versuche gedacht. Bitte warte auf die vollständige Implementierung der Datei-Upload-Funktionalität.",
+ "title": "Datei-Upload unterstützen"
+ },
+ "functionCall": {
+ "extra": "Diese Konfiguration aktiviert nur die Funktion zum Aufrufen von Funktionen innerhalb der Anwendung. Ob die Funktionalität zum Aufrufen von Funktionen unterstützt wird, hängt vollständig vom Modell selbst ab. Bitte teste die Verwendbarkeit der Funktionalität des Modells selbst.",
+ "title": "Funktionsaufrufe unterstützen"
+ },
+ "id": {
+ "extra": "Wird als Modell-Tag angezeigt",
+ "placeholder": "Geben Sie die Modell-ID ein, z. B. gpt-4-turbo-preview oder claude-2.1",
+ "title": "Modell-ID"
+ },
+ "modalTitle": "Benutzerdefinierte Modellkonfiguration",
+ "tokens": {
+ "title": "Maximale Token-Anzahl",
+ "unlimited": "unbegrenzt"
+ },
+ "vision": {
+ "extra": "Diese Konfiguration aktiviert nur die Bild-Upload-Einstellungen innerhalb der Anwendung. Ob die Erkennung unterstützt wird, hängt vollständig vom Modell selbst ab. Bitte teste die Verwendbarkeit der visuellen Erkennungsfähigkeit des Modells selbst.",
+ "title": "Visuelle Erkennung unterstützen"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "Client Fetch-Modus initiiert direkte Sitzungsanfragen vom Browser aus und verbessert die Reaktionsgeschwindigkeit.",
+ "title": "Client Fetch-Modus verwenden"
+ },
+ "fetcher": {
+ "fetch": "Modelle abrufen",
+ "fetching": "Modelle werden abgerufen...",
+ "latestTime": "Letzte Aktualisierung: {{time}}",
+ "noLatestTime": "Liste noch nicht abgerufen"
+ },
+ "helpDoc": "Konfigurationsanleitung",
+ "modelList": {
+ "desc": "Wählen Sie die Modelle aus, die in der Sitzung angezeigt werden sollen. Die ausgewählten Modelle werden in der Modellliste angezeigt.",
+ "placeholder": "Wählen Sie ein Modell aus der Liste aus",
+ "title": "Modellliste",
+ "total": "Insgesamt {{count}} Modelle verfügbar"
+ },
+ "proxyUrl": {
+ "desc": "Außer der Standardadresse muss http(s):// enthalten sein",
+ "title": "API-Proxy-Adresse"
+ },
+ "waitingForMore": "Weitere Modelle werden <1>geplant1>, bitte freuen Sie sich auf weitere Updates"
+ },
+ "plugin": {
+ "addTooltip": "Benutzerdefiniertes Plugin",
+ "clearDeprecated": "Entfernen Sie ungültige Plugins",
+ "empty": "Keine installierten Plugins vorhanden. Besuchen Sie den <1>Plugin-Store1>, um mehr zu entdecken.",
+ "installStatus": {
+ "deprecated": "Deinstalliert"
+ },
+ "settings": {
+ "hint": "Bitte füllen Sie die folgende Konfiguration gemäß der Beschreibung aus",
+ "title": "{{id}} Plugin-Konfiguration",
+ "tooltip": "Plugin-Konfiguration"
+ },
+ "store": "Plugin-Store"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Profilbild"
+ },
+ "backgroundColor": {
+ "title": "Hintergrundfarbe"
+ },
+ "description": {
+ "placeholder": "Bitte geben Sie eine Assistentenbeschreibung ein",
+ "title": "Assistentenbeschreibung"
+ },
+ "name": {
+ "placeholder": "Bitte geben Sie den Assistentennamen ein",
+ "title": "Name"
+ },
+ "prompt": {
+ "placeholder": "Bitte geben Sie das Rollen-Prompt-Wort ein",
+ "title": "Rollen-Einstellung"
+ },
+ "tag": {
+ "placeholder": "Bitte geben Sie ein Tag ein",
+ "title": "Tag"
+ },
+ "title": "Assistenteninformationen"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Automatische Erstellung eines Themas, wenn die Anzahl der Nachrichten diesen Wert überschreitet",
+ "title": "Nachrichtenschwelle für automatische Themen-Erstellung"
+ },
+ "chatStyleType": {
+ "title": "Chatfenster-Stil",
+ "type": {
+ "chat": "Dialogmodus",
+ "docs": "Dokumentenmodus"
+ }
+ },
+ "compressThreshold": {
+ "desc": "Komprimierung der Historie, wenn die Anzahl der unkomprimierten Nachrichten diesen Wert überschreitet",
+ "title": "Komprimierungsschwelle für Historienlänge"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Automatische Erstellung eines Themas während des Gesprächs, nur in temporären Themen aktiv",
+ "title": "Automatische Themen-Erstellung aktivieren"
+ },
+ "enableCompressThreshold": {
+ "title": "Aktivieren der Komprimierungsschwelle für Historienlänge"
+ },
+ "enableHistoryCount": {
+ "alias": "Unbegrenzt",
+ "limited": "Enthält nur {{number}} Gesprächsnachrichten",
+ "setlimited": "Setzen Sie die begrenzte Anzahl von Nachrichten",
+ "title": "Historiennachrichten begrenzen",
+ "unlimited": "Unbegrenzte Historiennachrichten"
+ },
+ "historyCount": {
+ "desc": "Anzahl der Nachrichten pro Anfrage (einschließlich der neuesten Fragen und Antworten. Jede Frage und Antwort zählt als 1)",
+ "title": "Anzahl der mitgelieferten Nachrichten"
+ },
+ "inputTemplate": {
+ "desc": "Die neueste Benutzernachricht wird in dieses Template eingefügt",
+ "placeholder": "Vorlagen-{{text}} werden durch Echtzeit-Eingabeinformationen ersetzt",
+ "title": "Benutzereingabe-Vorverarbeitung"
+ },
+ "title": "Chat-Einstellungen"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Maximale Token pro Antwort aktivieren"
+ },
+ "frequencyPenalty": {
+ "desc": "Je höher der Wert, desto wahrscheinlicher ist es, dass sich wiederholende Wörter reduziert werden",
+ "title": "Frequenzstrafe"
+ },
+ "maxTokens": {
+ "desc": "Maximale Anzahl von Tokens, die pro Interaktion verwendet werden",
+ "title": "Maximale Token pro Antwort"
+ },
+ "model": {
+ "desc": "{{provider}} Modell",
+ "title": "Modell"
+ },
+ "presencePenalty": {
+ "desc": "Je höher der Wert, desto wahrscheinlicher ist es, dass sich das Gespräch auf neue Themen ausweitet",
+ "title": "Themenfrische"
+ },
+ "temperature": {
+ "desc": "Je höher der Wert, desto zufälliger die Antwort",
+ "title": "Zufälligkeit",
+ "titleWithValue": "Zufälligkeit {{value}}"
+ },
+ "title": "Modelleinstellungen",
+ "topP": {
+ "desc": "Ähnlich wie Zufälligkeit, aber nicht zusammen mit Zufälligkeit ändern",
+ "title": "Top-P-Sampling"
+ }
+ },
+ "settingPlugin": {
+ "title": "Plugin-Liste"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "Der Administrator hat den verschlüsselten Zugriff aktiviert",
+ "placeholder": "Bitte geben Sie das Zugangspasswort ein",
+ "title": "Zugangspasswort"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Angemeldet",
+ "title": "Kontoinformationen"
+ },
+ "signin": {
+ "action": "Anmelden",
+ "desc": "Mit SSO anmelden, um die Anwendung freizuschalten",
+ "title": "Konto anmelden"
+ },
+ "signout": {
+ "action": "Abmelden",
+ "confirm": "Abmelden bestätigen?",
+ "success": "Abmeldung erfolgreich"
+ }
+ },
+ "title": "Systemeinstellungen"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "OpenAI Spracherkennungsmodell",
+ "title": "OpenAI",
+ "ttsModel": "OpenAI Sprachsynthesemodell"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Wenn deaktiviert, werden nur Stimmen in der aktuellen Sprache angezeigt",
+ "title": "Alle Sprachstimmen anzeigen"
+ },
+ "stt": "Spracherkennungseinstellungen",
+ "sttAutoStop": {
+ "desc": "Wenn deaktiviert, wird die Spracherkennung nicht automatisch beendet und muss manuell gestoppt werden",
+ "title": "Automatisches Beenden der Spracherkennung"
+ },
+ "sttLocale": {
+ "desc": "Die Spracheingabe für die Spracherkennung, diese Option kann die Genauigkeit der Spracherkennung verbessern",
+ "title": "Spracherkennungssprache"
+ },
+ "sttService": {
+ "desc": "Browser ist ein nativer Spracherkennungsdienst des Browsers",
+ "title": "Spracherkennungsdienst"
+ },
+ "title": "Sprachdienste",
+ "tts": "Sprachsynthese-Einstellungen",
+ "ttsService": {
+ "desc": "Wenn der OpenAI-Text-to-Speech-Dienst verwendet wird, stellen Sie sicher, dass der OpenAI-Modellservice aktiviert ist",
+ "title": "Sprachsynthese-Dienst"
+ },
+ "voice": {
+ "desc": "Wählen Sie eine Stimme für den aktuellen Assistenten aus. Unterschiedliche TTS-Dienste unterstützen unterschiedliche Stimmen",
+ "preview": "Stimme anhören",
+ "title": "Sprachsynthese-Stimme"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Profilbild"
+ },
+ "fontSize": {
+ "desc": "Schriftgröße für Chat-Inhalte",
+ "marks": {
+ "normal": "Normal"
+ },
+ "title": "Schriftgröße"
+ },
+ "lang": {
+ "autoMode": "Systemsprache folgen",
+ "title": "Sprache"
+ },
+ "neutralColor": {
+ "desc": "Benutzerdefinierte Graustufen für verschiedene Farbneigungen",
+ "title": "Neutralfarbe"
+ },
+ "primaryColor": {
+ "desc": "Benutzerdefinierte Hauptfarbe des Themas",
+ "title": "Hauptfarbe"
+ },
+ "themeMode": {
+ "auto": "Automatisch",
+ "dark": "Dunkel",
+ "light": "Hell",
+ "title": "Thema"
+ },
+ "title": "Thema einstellen"
+ },
+ "submitAgentModal": {
+ "button": "Assistent einreichen",
+ "identifier": "Assistenten-Kennung",
+ "metaMiss": "Bitte vervollständigen Sie die Assistenteninformationen, einschließlich Name, Beschreibung und Tags, bevor Sie sie einreichen.",
+ "placeholder": "Geben Sie die Kennung des Assistenten ein, die eindeutig sein muss, z. B. Web-Entwicklung",
+ "tooltips": "Auf dem Assistentenmarkt teilen"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Fügen Sie einen Namen hinzu, um das Gerät zu identifizieren",
+ "placeholder": "Geben Sie den Gerätenamen ein",
+ "title": "Gerätename"
+ },
+ "title": "Geräteinformationen",
+ "unknownBrowser": "Unbekannter Browser",
+ "unknownOS": "Unbekanntes Betriebssystem"
+ },
+ "warning": {
+ "tip": "Nach einer längeren Phase des Community-Tests kann die WebRTC-Synchronisierung möglicherweise nicht stabil genug sein, um allgemeine Synchronisierungsanforderungen zu erfüllen. Bitte <1>richten Sie einen Signalisierungsserver ein1> und verwenden Sie ihn dann."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC verwendet diesen Namen, um einen Synchronisierungskanal zu erstellen. Stellen Sie sicher, dass der Kanalname eindeutig ist",
+ "placeholder": "Geben Sie den Synchronisierungskanalnamen ein",
+ "shuffle": "Zufällige Generierung",
+ "title": "Synchronisierungskanalname"
+ },
+ "channelPassword": {
+ "desc": "Fügen Sie ein Passwort hinzu, um die Vertraulichkeit des Kanals zu gewährleisten. Nur wenn das Passwort korrekt ist, kann das Gerät dem Kanal beitreten",
+ "placeholder": "Geben Sie das Synchronisierungskennwort ein",
+ "title": "Synchronisierungskennwort"
+ },
+ "desc": "Echtzeit, Punkt-zu-Punkt-Datenkommunikation, bei der die Geräte gleichzeitig online sein müssen, um synchronisiert zu werden",
+ "enabled": {
+ "invalid": "Bitte geben Sie zuerst den Signalisierungsserver und den Synchronisierungskanal an, bevor Sie dies aktivieren.",
+ "title": "Synchronisierung aktivieren"
+ },
+ "signaling": {
+ "desc": "WebRTC wird diese Adresse für die Synchronisierung verwenden",
+ "placeholder": "Bitte geben Sie die Adresse des Signalisierungsservers ein",
+ "title": "Signalisierungsserver"
+ },
+ "title": "WebRTC-Synchronisierung"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Assistentenmetadaten-Generierungsmodell",
+ "modelDesc": "Das Modell, das zur Generierung von Assistentennamen, -beschreibungen, -avatars und -tags verwendet wird",
+ "title": "Automatische Generierung von Assistenteninformationen"
+ },
+ "queryRewrite": {
+ "label": "Fragenumformulierung Modell",
+ "modelDesc": "Modell zur Optimierung der Benutzeranfragen",
+ "title": "Wissensdatenbank"
+ },
+ "title": "Systemassistent",
+ "topic": {
+ "label": "Themenbenennungsmodell",
+ "modelDesc": "Das Modell, das für die automatische Umbenennung von Themen verwendet wird",
+ "title": "Automatische Themenbenennung"
+ },
+ "translation": {
+ "label": "Übersetzungsmodell",
+ "modelDesc": "Das für die Übersetzung verwendete Modell",
+ "title": "Einstellungen für Übersetzungsassistent"
+ }
+ },
+ "tab": {
+ "about": "Über",
+ "agent": "Standard-Assistent",
+ "common": "Allgemeine Einstellungen",
+ "experiment": "Experiment",
+ "llm": "Sprachmodell",
+ "sync": "Cloud-Synchronisierung",
+ "system-agent": "Systemassistent",
+ "tts": "Sprachdienste"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Integriert"
+ },
+ "disabled": "Das aktuelle Modell unterstützt keine Funktionsaufrufe und kann keine Plugins verwenden",
+ "plugins": {
+ "enabled": "Aktiviert: {{num}}",
+ "groupName": "Plugins",
+ "noEnabled": "Keine Plugins aktiviert",
+ "store": "Plugin-Store"
+ },
+ "title": "Erweiterungswerkzeuge"
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/tool.json b/DigitalHumanWeb/locales/de-DE/tool.json
new file mode 100644
index 0000000..02a9c5e
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Automatisch generieren",
+ "downloading": "Die von DallE3 generierten Bildlinks sind nur 1 Stunde lang gültig. Das Bild wird lokal zwischengespeichert...",
+ "generate": "Generieren",
+ "generating": "Generiert",
+ "images": "Bilder:",
+ "prompt": "Hinweiswort"
+ }
+}
diff --git a/DigitalHumanWeb/locales/de-DE/welcome.json b/DigitalHumanWeb/locales/de-DE/welcome.json
new file mode 100644
index 0000000..b4f105e
--- /dev/null
+++ b/DigitalHumanWeb/locales/de-DE/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Konfiguration importieren",
+ "market": "Markt durchstöbern",
+ "start": "Jetzt starten"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Ersetzen",
+ "title": "Neue Assistentenempfehlung:"
+ },
+ "defaultMessage": "Ich bin Ihr persönlicher intelligenter Assistent {{appName}}. Wie kann ich Ihnen jetzt helfen?\nWenn Sie einen professionelleren oder maßgeschneiderten Assistenten benötigen, klicken Sie auf `+`, um einen benutzerdefinierten Assistenten zu erstellen.",
+ "defaultMessageWithoutCreate": "Ich bin Ihr persönlicher intelligenter Assistent {{appName}}. Wie kann ich Ihnen jetzt helfen?",
+ "qa": {
+ "q01": "Was ist LobeHub?",
+ "q02": "Was ist {{appName}}?",
+ "q03": "Hat {{appName}} Community-Support?",
+ "q04": "Welche Funktionen unterstützt {{appName}}?",
+ "q05": "Wie wird {{appName}} bereitgestellt und verwendet?",
+ "q06": "Wie ist die Preisgestaltung von {{appName}}?",
+ "q07": "Ist {{appName}} kostenlos?",
+ "q08": "Gibt es eine Cloud-Service-Version?",
+ "q09": "Unterstützt es lokale Sprachmodelle?",
+ "q10": "Unterstützt es Bildverarbeitung und -erzeugung?",
+ "q11": "Unterstützt es Sprachsynthese und Spracherkennung?",
+ "q12": "Unterstützt es ein Plug-in-System?",
+ "q13": "Gibt es einen eigenen Marktplatz für GPTs?",
+ "q14": "Unterstützt es mehrere AI-Dienstanbieter?",
+ "q15": "Was soll ich tun, wenn ich Probleme bei der Nutzung habe?"
+ },
+ "questions": {
+ "moreBtn": "Mehr erfahren",
+ "title": "Häufig gestellte Fragen:"
+ },
+ "welcome": {
+ "afternoon": "Guten Nachmittag",
+ "morning": "Guten Morgen",
+ "night": "Guten Abend",
+ "noon": "Guten Mittag"
+ }
+ },
+ "header": "Willkommen",
+ "pickAgent": "Oder wählen Sie eine Vorlage aus den folgenden Assistenten",
+ "skip": "Erstellung überspringen",
+ "slogan": {
+ "desc1": "Starten Sie das Gehirncluster und entfachen Sie den Funken des Denkens. Ihr intelligenter Assistent ist immer da.",
+ "desc2": "Erstellen Sie Ihren ersten Assistenten und lassen Sie uns beginnen.",
+ "title": "Geben Sie sich ein schlaueres Gehirn"
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/auth.json b/DigitalHumanWeb/locales/en-US/auth.json
new file mode 100644
index 0000000..d363724
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Login",
+ "loginOrSignup": "Log in / Sign up",
+ "profile": "Profile",
+ "security": "Security",
+ "signout": "Sign out",
+ "signup": "Sign up"
+}
diff --git a/DigitalHumanWeb/locales/en-US/chat.json b/DigitalHumanWeb/locales/en-US/chat.json
new file mode 100644
index 0000000..2d1dd43
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Model"
+ },
+ "agentDefaultMessage": "Hello, I am **{{name}}**. You can start a conversation with me right away, or you can go to [Assistant Settings]({{url}}) to complete my information.",
+ "agentDefaultMessageWithSystemRole": "Hello, I'm **{{name}}**, {{systemRole}}. Let's start chatting!",
+ "agentDefaultMessageWithoutEdit": "Hello, I'm **{{name}}**, let's start chatting!",
+ "agents": "Assistants",
+ "artifact": {
+ "generating": "Generating",
+ "thinking": "Thinking",
+ "thought": "Thought Process",
+ "unknownTitle": "Untitled Work"
+ },
+ "backToBottom": "Back to bottom",
+ "chatList": {
+ "longMessageDetail": "View Details"
+ },
+ "clearCurrentMessages": "Clear current session messages",
+ "confirmClearCurrentMessages": "You are about to clear the current session messages. Once cleared, they cannot be retrieved. Please confirm your action.",
+ "confirmRemoveSessionItemAlert": "You are about to delete this assistant. Once deleted, it cannot be retrieved. Please confirm your action.",
+ "confirmRemoveSessionSuccess": "Assistant removed successfully",
+ "defaultAgent": "Default Assistant",
+ "defaultList": "Default List",
+ "defaultSession": "Default Assistant",
+ "duplicateSession": {
+ "loading": "Copying...",
+ "success": "Copy successful",
+ "title": "{{title}} Copy"
+ },
+ "duplicateTitle": "{{title}} Copy",
+ "emptyAgent": "No assistant available",
+ "historyRange": "History Range",
+ "inbox": {
+ "desc": "Activate the brain cluster and spark creative thinking. Your virtual assistant is here to communicate with you about everything.",
+ "title": "Just Chat"
+ },
+ "input": {
+ "addAi": "Add an AI message",
+ "addUser": "Add a user message",
+ "more": "more",
+ "send": "Send",
+ "sendWithCmdEnter": "Press {{meta}} + Enter to send",
+ "sendWithEnter": "Press Enter to send",
+ "stop": "Stop",
+ "warp": "New Line"
+ },
+ "knowledgeBase": {
+ "all": "All Content",
+ "allFiles": "All Files",
+ "allKnowledgeBases": "All Knowledge Bases",
+ "disabled": "The current deployment mode does not support knowledge base conversations. To use this feature, please switch to server-side database deployment or use the {{cloud}} service.",
+ "library": {
+ "action": {
+ "add": "Add",
+ "detail": "Details",
+ "remove": "Remove"
+ },
+ "title": "Files/Knowledge Base"
+ },
+ "relativeFilesOrKnowledgeBases": "Related Files/Knowledge Bases",
+ "title": "Knowledge Base",
+ "uploadGuide": "Uploaded files can be viewed in the 'Knowledge Base'.",
+ "viewMore": "View More"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Delete and Regenerate",
+ "regenerate": "Regenerate"
+ },
+ "newAgent": "New Assistant",
+ "pin": "Pin",
+ "pinOff": "Unpin",
+ "rag": {
+ "referenceChunks": "Reference Source",
+ "userQuery": {
+ "actions": {
+ "delete": "Delete Query Rewrite",
+ "regenerate": "Regenerate Query"
+ }
+ }
+ },
+ "regenerate": "Regenerate",
+ "roleAndArchive": "Role and Archive",
+ "searchAgentPlaceholder": "Search assistants...",
+ "sendPlaceholder": "Type your message here...",
+ "sessionGroup": {
+ "config": "Group Management",
+ "confirmRemoveGroupAlert": "This group is about to be deleted. After deletion, the assistants in this group will be moved to the default list. Please confirm your operation.",
+ "createAgentSuccess": "Assistant created successfully",
+ "createGroup": "Add New Group",
+ "createSuccess": "Created successfully",
+ "creatingAgent": "Creating assistant...",
+ "inputPlaceholder": "Please enter group name...",
+ "moveGroup": "Move to Group",
+ "newGroup": "New Group",
+ "rename": "Rename Group",
+ "renameSuccess": "Renamed successfully",
+ "sortSuccess": "Reorder successful",
+ "sorting": "Group sorting updating...",
+ "tooLong": "Group name length should be between 1-20"
+ },
+ "shareModal": {
+ "download": "Download Screenshot",
+ "imageType": "Image Format",
+ "screenshot": "Screenshot",
+ "settings": "Export Settings",
+ "shareToShareGPT": "Generate ShareGPT Sharing Link",
+ "withBackground": "Include Background Image",
+ "withFooter": "Include Footer",
+ "withPluginInfo": "Include Plugin Information",
+ "withSystemRole": "Include Assistant Role Setting"
+ },
+ "stt": {
+ "action": "Voice Input",
+ "loading": "Recognizing...",
+ "prettifying": "Polishing..."
+ },
+ "temp": "Temporary",
+ "tokenDetails": {
+ "chats": "Chat Messages",
+ "rest": "Remaining",
+ "systemRole": "Role Settings",
+ "title": "Context Details",
+ "tools": "Plugin Settings",
+ "total": "Total Available",
+ "used": "Total Used"
+ },
+ "tokenTag": {
+ "overload": "Exceeded Limit",
+ "remained": "Remaining",
+ "used": "Used"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Auto Rename",
+ "duplicate": "Create Copy",
+ "export": "Export Topic"
+ },
+ "checkOpenNewTopic": "Enable new topic?",
+ "checkSaveCurrentMessages": "Do you want to save the current conversation as a topic?",
+ "confirmRemoveAll": "You are about to delete all topics. Once deleted, they cannot be recovered. Please proceed with caution.",
+ "confirmRemoveTopic": "You are about to delete this topic. Once deleted, it cannot be recovered. Please proceed with caution.",
+ "confirmRemoveUnstarred": "You are about to delete unstarred topics. Once deleted, they cannot be recovered. Please proceed with caution.",
+ "defaultTitle": "Default Topic",
+ "duplicateLoading": "Topic duplicating...",
+ "duplicateSuccess": "Topic duplicated successfully",
+ "guide": {
+ "desc": "Click the button on the left to save the current session as a historical topic and start a new session.",
+ "title": "Topic List"
+ },
+ "openNewTopic": "Open New Topic",
+ "removeAll": "Remove All Topics",
+ "removeUnstarred": "Remove Unstarred Topics",
+ "saveCurrentMessages": "Save current session as topic",
+ "searchPlaceholder": "Search topics...",
+ "title": "Topic List"
+ },
+ "translate": {
+ "action": "Translate",
+ "clear": "Clear Translation"
+ },
+ "tts": {
+ "action": "Text-to-Speech",
+ "clear": "Clear Speech"
+ },
+ "updateAgent": "Update Assistant Information",
+ "upload": {
+ "action": {
+ "fileUpload": "Upload File",
+ "folderUpload": "Upload Folder",
+ "imageDisabled": "The current model does not support visual recognition. Please switch models to use this feature.",
+ "imageUpload": "Upload Image",
+ "tooltip": "Upload"
+ },
+ "clientMode": {
+ "actionFiletip": "Upload File",
+ "actionTooltip": "Upload",
+ "disabled": "The current model does not support visual recognition and file analysis. Please switch models to use this feature."
+ },
+ "preview": {
+ "prepareTasks": "Preparing chunks...",
+ "status": {
+ "pending": "Preparing to upload...",
+ "processing": "Processing file..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/clerk.json b/DigitalHumanWeb/locales/en-US/clerk.json
new file mode 100644
index 0000000..cfeeccf
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Back",
+ "badge__default": "Default",
+ "badge__otherImpersonatorDevice": "Other impersonator device",
+ "badge__primary": "Primary",
+ "badge__requiresAction": "Requires action",
+ "badge__thisDevice": "This device",
+ "badge__unverified": "Unverified",
+ "badge__userDevice": "User device",
+ "badge__you": "You",
+ "createOrganization": {
+ "formButtonSubmit": "Create organization",
+ "invitePage": {
+ "formButtonReset": "Skip"
+ },
+ "title": "Create organization"
+ },
+ "dates": {
+ "lastDay": "Yesterday at {{ date | timeString('en-US') }}",
+ "next6Days": "{{ date | weekday('en-US','long') }} at {{ date | timeString('en-US') }}",
+ "nextDay": "Tomorrow at {{ date | timeString('en-US') }}",
+ "numeric": "{{ date | numeric('en-US') }}",
+ "previous6Days": "Last {{ date | weekday('en-US','long') }} at {{ date | timeString('en-US') }}",
+ "sameDay": "Today at {{ date | timeString('en-US') }}"
+ },
+ "dividerText": "or",
+ "footerActionLink__useAnotherMethod": "Use another method",
+ "footerPageLink__help": "Help",
+ "footerPageLink__privacy": "Privacy",
+ "footerPageLink__terms": "Terms",
+ "formButtonPrimary": "Continue",
+ "formButtonPrimary__verify": "Verify",
+ "formFieldAction__forgotPassword": "Forgot password?",
+ "formFieldError__matchingPasswords": "Passwords match.",
+ "formFieldError__notMatchingPasswords": "Passwords don't match.",
+ "formFieldError__verificationLinkExpired": "The verification link expired. Please request a new link.",
+ "formFieldHintText__optional": "Optional",
+ "formFieldHintText__slug": "A slug is a human-readable ID that must be unique. It’s often used in URLs.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Delete account",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "example@email.com, example2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "my-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Enable automatic invitations for this domain",
+ "formFieldLabel__backupCode": "Backup code",
+ "formFieldLabel__confirmDeletion": "Confirmation",
+ "formFieldLabel__confirmPassword": "Confirm password",
+ "formFieldLabel__currentPassword": "Current password",
+ "formFieldLabel__emailAddress": "Email address",
+ "formFieldLabel__emailAddress_username": "Email address or username",
+ "formFieldLabel__emailAddresses": "Email addresses",
+ "formFieldLabel__firstName": "First name",
+ "formFieldLabel__lastName": "Last name",
+ "formFieldLabel__newPassword": "New password",
+ "formFieldLabel__organizationDomain": "Domain",
+ "formFieldLabel__organizationDomainDeletePending": "Delete pending invitations and suggestions",
+ "formFieldLabel__organizationDomainEmailAddress": "Verification email address",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Enter an email address under this domain to receive a code and verify this domain.",
+ "formFieldLabel__organizationName": "Name",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Name of passkey",
+ "formFieldLabel__password": "Password",
+ "formFieldLabel__phoneNumber": "Phone number",
+ "formFieldLabel__role": "Role",
+ "formFieldLabel__signOutOfOtherSessions": "Sign out of all other devices",
+ "formFieldLabel__username": "Username",
+ "impersonationFab": {
+ "action__signOut": "Sign out",
+ "title": "Signed in as {{identifier}}"
+ },
+ "locale": "en-US",
+ "maintenanceMode": "We are currently undergoing maintenance, but don't worry, it shouldn't take more than a few minutes.",
+ "membershipRole__admin": "Admin",
+ "membershipRole__basicMember": "Member",
+ "membershipRole__guestMember": "Guest",
+ "organizationList": {
+ "action__createOrganization": "Create organization",
+ "action__invitationAccept": "Join",
+ "action__suggestionsAccept": "Request to join",
+ "createOrganization": "Create Organization",
+ "invitationAcceptedLabel": "Joined",
+ "subtitle": "to continue to {{applicationName}}",
+ "suggestionsAcceptedLabel": "Pending approval",
+ "title": "Choose an account",
+ "titleWithoutPersonal": "Choose an organization"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Automatic invitations",
+ "badge__automaticSuggestion": "Automatic suggestions",
+ "badge__manualInvitation": "No automatic enrollment",
+ "badge__unverified": "Unverified",
+ "createDomainPage": {
+ "subtitle": "Add the domain to verify. Users with email addresses at this domain can join the organization automatically or request to join.",
+ "title": "Add domain"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "The invitations could not be sent. There are already pending invitations for the following email addresses: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Send invitations",
+ "selectDropdown__role": "Select role",
+ "subtitle": "Enter or paste one or more email addresses, separated by spaces or commas.",
+ "successMessage": "Invitations successfully sent",
+ "title": "Invite new members"
+ },
+ "membersPage": {
+ "action__invite": "Invite",
+ "activeMembersTab": {
+ "menuAction__remove": "Remove member",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Joined",
+ "tableHeader__role": "Role",
+ "tableHeader__user": "User"
+ },
+ "detailsTitle__emptyRow": "No members to display",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Invite users by connecting an email domain with your organization. Anyone who signs up with a matching email domain will be able to join the organization anytime.",
+ "headerTitle": "Automatic invitations",
+ "primaryButton": "Manage verified domains"
+ },
+ "table__emptyRow": "No invitations to display"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Revoke invitation",
+ "tableHeader__invited": "Invited"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Users who sign up with a matching email domain, will be able to see a suggestion to request to join your organization.",
+ "headerTitle": "Automatic suggestions",
+ "primaryButton": "Manage verified domains"
+ },
+ "menuAction__approve": "Approve",
+ "menuAction__reject": "Reject",
+ "tableHeader__requested": "Requested access",
+ "table__emptyRow": "No requests to display"
+ },
+ "start": {
+ "headerTitle__invitations": "Invitations",
+ "headerTitle__members": "Members",
+ "headerTitle__requests": "Requests"
+ }
+ },
+ "navbar": {
+ "description": "Manage your organization.",
+ "general": "General",
+ "members": "Members",
+ "title": "Organization"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Type \"{{organizationName}}\" below to continue.",
+ "messageLine1": "Are you sure you want to delete this organization?",
+ "messageLine2": "This action is permanent and irreversible.",
+ "successMessage": "You have deleted the organization.",
+ "title": "Delete organization"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Type \"{{organizationName}}\" below to continue.",
+ "messageLine1": "Are you sure you want to leave this organization? You will lose access to this organization and its applications.",
+ "messageLine2": "This action is permanent and irreversible.",
+ "successMessage": "You have left the organization.",
+ "title": "Leave organization"
+ },
+ "title": "Danger"
+ },
+ "domainSection": {
+ "menuAction__manage": "Manage",
+ "menuAction__remove": "Delete",
+ "menuAction__verify": "Verify",
+ "primaryButton": "Add domain",
+ "subtitle": "Allow users to join the organization automatically or request to join based on a verified email domain.",
+ "title": "Verified domains"
+ },
+ "successMessage": "The organization has been updated.",
+ "title": "Update profile"
+ },
+ "removeDomainPage": {
+ "messageLine1": "The email domain {{domain}} will be removed.",
+ "messageLine2": "Users won’t be able to join the organization automatically after this.",
+ "successMessage": "{{domain}} has been removed.",
+ "title": "Remove domain"
+ },
+ "start": {
+ "headerTitle__general": "General",
+ "headerTitle__members": "Members",
+ "profileSection": {
+ "primaryButton": "Update profile",
+ "title": "Organization Profile",
+ "uploadAction__title": "Logo"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "Removing this domain will affect invited users.",
+ "removeDomainActionLabel__remove": "Remove domain",
+ "removeDomainSubtitle": "Remove this domain from your verified domains",
+ "removeDomainTitle": "Remove domain"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Users are automatically invited to join the organization when they sign-up and can join anytime.",
+ "automaticInvitationOption__label": "Automatic invitations",
+ "automaticSuggestionOption__description": "Users receive a suggestion to request to join, but must be approved by an admin before they are able to join the organization.",
+ "automaticSuggestionOption__label": "Automatic suggestions",
+ "calloutInfoLabel": "Changing the enrollment mode will only affect new users.",
+ "calloutInvitationCountLabel": "Pending invitations sent to users: {{count}}",
+ "calloutSuggestionCountLabel": "Pending suggestions sent to users: {{count}}",
+ "manualInvitationOption__description": "Users can only be invited manually to the organization.",
+ "manualInvitationOption__label": "No automatic enrollment",
+ "subtitle": "Choose how users from this domain can join the organization."
+ },
+ "start": {
+ "headerTitle__danger": "Danger",
+ "headerTitle__enrollment": "Enrollment options"
+ },
+ "subtitle": "The domain {{domain}} is now verified. Continue by selecting enrollment mode.",
+ "title": "Update {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Enter the verification code sent to your email address",
+ "formTitle": "Verification code",
+ "resendButton": "Didn't receive a code? Resend",
+ "subtitle": "The domain {{domainName}} needs to be verified via email.",
+ "subtitleVerificationCodeScreen": "A verification code was sent to {{emailAddress}}. Enter the code to continue.",
+ "title": "Verify domain"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Create organization",
+ "action__invitationAccept": "Join",
+ "action__manageOrganization": "Manage",
+ "action__suggestionsAccept": "Request to join",
+ "notSelected": "No organization selected",
+ "personalWorkspace": "Personal account",
+ "suggestionsAcceptedLabel": "Pending approval"
+ },
+ "paginationButton__next": "Next",
+ "paginationButton__previous": "Previous",
+ "paginationRowText__displaying": "Displaying",
+ "paginationRowText__of": "of",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Add account",
+ "action__signOutAll": "Sign out of all accounts",
+ "subtitle": "Select the account with which you wish to continue.",
+ "title": "Choose an account"
+ },
+ "alternativeMethods": {
+ "actionLink": "Get help",
+ "actionText": "Don’t have any of these?",
+ "blockButton__backupCode": "Use a backup code",
+ "blockButton__emailCode": "Email code to {{identifier}}",
+ "blockButton__emailLink": "Email link to {{identifier}}",
+ "blockButton__passkey": "Sign in with your passkey",
+ "blockButton__password": "Sign in with your password",
+ "blockButton__phoneCode": "Send SMS code to {{identifier}}",
+ "blockButton__totp": "Use your authenticator app",
+ "getHelp": {
+ "blockButton__emailSupport": "Email support",
+ "content": "If you’re experiencing difficulty signing into your account, email us and we will work with you to restore access as soon as possible.",
+ "title": "Get help"
+ },
+ "subtitle": "Facing issues? You can use any of these methods to sign in.",
+ "title": "Use another method"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Your backup code is the one you got when setting up two-step authentication.",
+ "title": "Enter a backup code"
+ },
+ "emailCode": {
+ "formTitle": "Verification code",
+ "resendButton": "Didn't receive a code? Resend",
+ "subtitle": "to continue to {{applicationName}}",
+ "title": "Check your email"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Return to the original tab to continue.",
+ "title": "This verification link has expired"
+ },
+ "failed": {
+ "subtitle": "Return to the original tab to continue.",
+ "title": "This verification link is invalid"
+ },
+ "formSubtitle": "Use the verification link sent to your email",
+ "formTitle": "Verification link",
+ "loading": {
+ "subtitle": "You will be redirected soon",
+ "title": "Signing in..."
+ },
+ "resendButton": "Didn't receive a link? Resend",
+ "subtitle": "to continue to {{applicationName}}",
+ "title": "Check your email",
+ "unusedTab": {
+ "title": "You may close this tab"
+ },
+ "verified": {
+ "subtitle": "You will be redirected soon",
+ "title": "Successfully signed in"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Return to original tab to continue",
+ "subtitleNewTab": "Return to the newly opened tab to continue",
+ "titleNewTab": "Signed in on other tab"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Reset password code",
+ "resendButton": "Didn't receive a code? Resend",
+ "subtitle": "to reset your password",
+ "subtitle_email": "First, enter the code sent to your email address",
+ "subtitle_phone": "First, enter the code sent to your phone",
+ "title": "Reset password"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Reset your password",
+ "label__alternativeMethods": "Or, sign in with another method",
+ "title": "Forgot Password?"
+ },
+ "noAvailableMethods": {
+ "message": "Cannot proceed with sign in. There's no available authentication factor.",
+ "subtitle": "An error occurred",
+ "title": "Cannot sign in"
+ },
+ "passkey": {
+ "subtitle": "Using your passkey confirms it's you. Your device may ask for your fingerprint, face or screen lock.",
+ "title": "Use your passkey"
+ },
+ "password": {
+ "actionLink": "Use another method",
+ "subtitle": "Enter the password associated with your account",
+ "title": "Enter your password"
+ },
+ "passwordPwned": {
+ "title": "Password compromised"
+ },
+ "phoneCode": {
+ "formTitle": "Verification code",
+ "resendButton": "Didn't receive a code? Resend",
+ "subtitle": "to continue to {{applicationName}}",
+ "title": "Check your phone"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Verification code",
+ "resendButton": "Didn't receive a code? Resend",
+ "subtitle": "To continue, please enter the verification code sent to your phone",
+ "title": "Check your phone"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Reset Password",
+ "requiredMessage": "For security reasons, it is required to reset your password.",
+ "successMessage": "Your password was successfully changed. Signing you in, please wait a moment.",
+ "title": "Set new password"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "We need to verify your identity before resetting your password."
+ },
+ "start": {
+ "actionLink": "Sign up",
+ "actionLink__use_email": "Use email",
+ "actionLink__use_email_username": "Use email or username",
+ "actionLink__use_passkey": "Use passkey instead",
+ "actionLink__use_phone": "Use phone",
+ "actionLink__use_username": "Use username",
+ "actionText": "Don’t have an account?",
+ "subtitle": "Welcome back! Please sign in to continue",
+ "title": "Sign in to {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "Verification code",
+ "subtitle": "To continue, please enter the verification code generated by your authenticator app",
+ "title": "Two-step verification"
+ }
+ },
+ "signInEnterPasswordTitle": "Enter your password",
+ "signUp": {
+ "continue": {
+ "actionLink": "Sign in",
+ "actionText": "Already have an account?",
+ "subtitle": "Please fill in the remaining details to continue.",
+ "title": "Fill in missing fields"
+ },
+ "emailCode": {
+ "formSubtitle": "Enter the verification code sent to your email address",
+ "formTitle": "Verification code",
+ "resendButton": "Didn't receive a code? Resend",
+ "subtitle": "Enter the verification code sent to your email",
+ "title": "Verify your email"
+ },
+ "emailLink": {
+ "formSubtitle": "Use the verification link sent to your email address",
+ "formTitle": "Verification link",
+ "loading": {
+ "title": "Signing up..."
+ },
+ "resendButton": "Didn't receive a link? Resend",
+ "subtitle": "to continue to {{applicationName}}",
+ "title": "Verify your email",
+ "verified": {
+ "title": "Successfully signed up"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Return to the newly opened tab to continue",
+ "subtitleNewTab": "Return to previous tab to continue",
+ "title": "Successfully verified email"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Enter the verification code sent to your phone number",
+ "formTitle": "Verification code",
+ "resendButton": "Didn't receive a code? Resend",
+ "subtitle": "Enter the verification code sent to your phone",
+ "title": "Verify your phone"
+ },
+ "start": {
+ "actionLink": "Sign in",
+ "actionText": "Already have an account?",
+ "subtitle": "Welcome! Please fill in the details to get started.",
+ "title": "Create your account"
+ }
+ },
+ "socialButtonsBlockButton": "Continue with {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "Sign up unsuccessful due to failed security validations. Please refresh the page to try again or reach out to support for more assistance.",
+ "captcha_unavailable": "Sign up unsuccessful due to failed bot validation. Please refresh the page to try again or reach out to support for more assistance.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "This email address is taken. Please try another.",
+ "form_identifier_exists__phone_number": "This phone number is taken. Please try another.",
+ "form_identifier_exists__username": "This username is taken. Please try another.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "Email address must be a valid email address.",
+ "form_param_format_invalid__phone_number": "Phone number must be in a valid international format.",
+ "form_param_max_length_exceeded__first_name": "First name should not exceed 256 characters.",
+ "form_param_max_length_exceeded__last_name": "Last name should not exceed 256 characters.",
+ "form_param_max_length_exceeded__name": "Name should not exceed 256 characters.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Your password is not strong enough.",
+ "form_password_pwned": "This password has been found as part of a breach and cannot be used. Please try another password instead.",
+ "form_password_pwned__sign_in": "This password has been found as part of a breach and cannot be used. Please reset your password.",
+ "form_password_size_in_bytes_exceeded": "Your password has exceeded the maximum number of bytes allowed. Please shorten it or remove some special characters.",
+ "form_password_validation_failed": "Incorrect Password",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "You cannot delete your last identification.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "A passkey is already registered with this device.",
+ "passkey_not_supported": "Passkeys are not supported on this device.",
+ "passkey_pa_not_supported": "Registration requires a platform authenticator but the device does not support it.",
+ "passkey_registration_cancelled": "Passkey registration was cancelled or timed out.",
+ "passkey_retrieval_cancelled": "Passkey verification was cancelled or timed out.",
+ "passwordComplexity": {
+ "maximumLength": "less than {{length}} characters",
+ "minimumLength": "{{length}} or more characters",
+ "requireLowercase": "a lowercase letter",
+ "requireNumbers": "a number",
+ "requireSpecialCharacter": "a special character",
+ "requireUppercase": "an uppercase letter",
+ "sentencePrefix": "Your password must contain"
+ },
+ "phone_number_exists": "This phone number is taken. Please try another.",
+ "zxcvbn": {
+ "couldBeStronger": "Your password works, but could be stronger. Try adding more characters.",
+ "goodPassword": "Your password meets all the necessary requirements.",
+ "notEnough": "Your password is not strong enough.",
+ "suggestions": {
+ "allUppercase": "Capitalize some, but not all letters.",
+ "anotherWord": "Add more words that are less common.",
+ "associatedYears": "Avoid years that are associated with you.",
+ "capitalization": "Capitalize more than the first letter.",
+ "dates": "Avoid dates and years that are associated with you.",
+ "l33t": "Avoid predictable letter substitutions like '@' for 'a'.",
+ "longerKeyboardPattern": "Use longer keyboard patterns and change typing direction multiple times.",
+ "noNeed": "You can create strong passwords without using symbols, numbers, or uppercase letters.",
+ "pwned": "If you use this password elsewhere, you should change it.",
+ "recentYears": "Avoid recent years.",
+ "repeated": "Avoid repeated words and characters.",
+ "reverseWords": "Avoid reversed spellings of common words.",
+ "sequences": "Avoid common character sequences.",
+ "useWords": "Use multiple words, but avoid common phrases."
+ },
+ "warnings": {
+ "common": "This is a commonly used password.",
+ "commonNames": "Common names and surnames are easy to guess.",
+ "dates": "Dates are easy to guess.",
+ "extendedRepeat": "Repeated character patterns like \"abcabcabc\" are easy to guess.",
+ "keyPattern": "Short keyboard patterns are easy to guess.",
+ "namesByThemselves": "Single names or surnames are easy to guess.",
+ "pwned": "Your password was exposed by a data breach on the Internet.",
+ "recentYears": "Recent years are easy to guess.",
+ "sequences": "Common character sequences like \"abc\" are easy to guess.",
+ "similarToCommon": "This is similar to a commonly used password.",
+ "simpleRepeat": "Repeated characters like \"aaa\" are easy to guess.",
+ "straightRow": "Straight rows of keys on your keyboard are easy to guess.",
+ "topHundred": "This is a frequently used password.",
+ "topTen": "This is a heavily used password.",
+ "userInputs": "There should not be any personal or page-related data.",
+ "wordByItself": "Single words are easy to guess."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Add account",
+ "action__manageAccount": "Manage account",
+ "action__signOut": "Sign out",
+ "action__signOutAll": "Sign out of all accounts"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Copied!",
+ "actionLabel__copy": "Copy all",
+ "actionLabel__download": "Download .txt",
+ "actionLabel__print": "Print",
+ "infoText1": "Backup codes will be enabled for this account.",
+ "infoText2": "Keep the backup codes secret and store them securely. You may regenerate backup codes if you suspect they have been compromised.",
+ "subtitle__codelist": "Store them securely and keep them secret.",
+ "successMessage": "Backup codes are now enabled. You can use one of these to sign in to your account, if you lose access to your authentication device. Each code can only be used once.",
+ "successSubtitle": "You can use one of these to sign in to your account, if you lose access to your authentication device.",
+ "title": "Add backup code verification",
+ "title__codelist": "Backup codes"
+ },
+ "connectedAccountPage": {
+ "formHint": "Select a provider to connect your account.",
+ "formHint__noAccounts": "There are no available external account providers.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} will be removed from this account.",
+ "messageLine2": "You will no longer be able to use this connected account and any dependent features will no longer work.",
+ "successMessage": "{{connectedAccount}} has been removed from your account.",
+ "title": "Remove connected account"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "The provider has been added to your account",
+ "title": "Add connected account"
+ },
+ "deletePage": {
+ "actionDescription": "Type \"Delete account\" below to continue.",
+ "confirm": "Delete account",
+ "messageLine1": "Are you sure you want to delete your account?",
+ "messageLine2": "This action is permanent and irreversible.",
+ "title": "Delete account"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "An email containing a verification code will be sent to this email address.",
+ "formSubtitle": "Enter the verification code sent to {{identifier}}",
+ "formTitle": "Verification code",
+ "resendButton": "Didn't receive a code? Resend",
+ "successMessage": "The email {{identifier}} has been added to your account."
+ },
+ "emailLink": {
+ "formHint": "An email containing a verification link will be sent to this email address.",
+ "formSubtitle": "Click on the verification link in the email sent to {{identifier}}",
+ "formTitle": "Verification link",
+ "resendButton": "Didn't receive a link? Resend",
+ "successMessage": "The email {{identifier}} has been added to your account."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} will be removed from this account.",
+ "messageLine2": "You will no longer be able to sign in using this email address.",
+ "successMessage": "{{emailAddress}} has been removed from your account.",
+ "title": "Remove email address"
+ },
+ "title": "Add email address",
+ "verifyTitle": "Verify email address"
+ },
+ "formButtonPrimary__add": "Add",
+ "formButtonPrimary__continue": "Continue",
+ "formButtonPrimary__finish": "Finish",
+ "formButtonPrimary__remove": "Remove",
+ "formButtonPrimary__save": "Save",
+ "formButtonReset": "Cancel",
+ "mfaPage": {
+ "formHint": "Select a method to add.",
+ "title": "Add two-step verification"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Use existing number",
+ "primaryButton__addPhoneNumber": "Add phone number",
+ "removeResource": {
+ "messageLine1": "{{identifier}} will no longer receive verification codes when signing in.",
+ "messageLine2": "Your account may not be as secure. Are you sure you want to continue?",
+ "successMessage": "SMS code two-step verification has been removed for {{mfaPhoneCode}}",
+ "title": "Remove two-step verification"
+ },
+ "subtitle__availablePhoneNumbers": "Select an existing phone number to register for SMS code two-step verification or add a new one.",
+ "subtitle__unavailablePhoneNumbers": "There are no available phone numbers to register for SMS code two-step verification. Please add a new one.",
+ "successMessage1": "When signing in, you will need to enter a verification code sent to this phone number as an additional step.",
+ "successMessage2": "Save these backup codes and store them somewhere safe. If you lose access to your authentication device, you can use backup codes to sign in.",
+ "successTitle": "SMS code verification enabled",
+ "title": "Add SMS code verification"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "Scan QR code instead",
+ "buttonUnableToScan__nonPrimary": "Can’t scan QR code?",
+ "infoText__ableToScan": "Set up a new sign-in method in your authenticator app and scan the following QR code to link it to your account.",
+ "infoText__unableToScan": "Set up a new sign-in method in your authenticator and enter the Key provided below.",
+ "inputLabel__unableToScan1": "Make sure Time-based or One-time passwords are enabled, then finish linking your account.",
+ "inputLabel__unableToScan2": "Alternatively, if your authenticator supports TOTP URIs, you can also copy the full URI."
+ },
+ "removeResource": {
+ "messageLine1": "Verification codes from this authenticator will no longer be required when signing in.",
+ "messageLine2": "Your account may not be as secure. Are you sure you want to continue?",
+ "successMessage": "Two-step verification via authenticator application has been removed.",
+ "title": "Remove two-step verification"
+ },
+ "successMessage": "Two-step verification is now enabled. When signing in, you will need to enter a verification code from this authenticator as an additional step.",
+ "title": "Add authenticator application",
+ "verifySubtitle": "Enter the verification code generated by your authenticator",
+ "verifyTitle": "Verification code"
+ },
+ "mobileButton__menu": "Menu",
+ "navbar": {
+ "account": "Profile",
+ "description": "Manage your account info.",
+ "security": "Security",
+ "title": "Account"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} will be removed from this account.",
+ "title": "Remove passkey"
+ },
+ "subtitle__rename": "You can change the passkey name to make it easier to find.",
+ "title__rename": "Rename Passkey"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "It is recommended to sign out of all other devices which may have used your old password.",
+ "readonly": "Your password cannot currently be edited because you can only sign in via the enterprise connection.",
+ "successMessage__set": "Your password has been set.",
+ "successMessage__signOutOfOtherSessions": "All other devices have been signed out.",
+ "successMessage__update": "Your password has been updated.",
+ "title__set": "Set password",
+ "title__update": "Update password"
+ },
+ "phoneNumberPage": {
+ "infoText": "A text message containing a verification code will be sent to this phone number. Message and data rates may apply.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} will be removed from this account.",
+ "messageLine2": "You will no longer be able to sign in using this phone number.",
+ "successMessage": "{{phoneNumber}} has been removed from your account.",
+ "title": "Remove phone number"
+ },
+ "successMessage": "{{identifier}} has been added to your account.",
+ "title": "Add phone number",
+ "verifySubtitle": "Enter the verification code sent to {{identifier}}",
+ "verifyTitle": "Verify phone number"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Recommended size 1:1, up to 10MB.",
+ "imageFormDestructiveActionSubtitle": "Remove",
+ "imageFormSubtitle": "Upload",
+ "imageFormTitle": "Profile image",
+ "readonly": "Your profile information has been provided by the enterprise connection and cannot be edited.",
+ "successMessage": "Your profile has been updated.",
+ "title": "Update profile"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Sign out of device",
+ "title": "Active devices"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Try again",
+ "actionLabel__reauthorize": "Authorize now",
+ "destructiveActionTitle": "Remove",
+ "primaryButton": "Connect account",
+ "subtitle__reauthorize": "The required scopes have been updated, and you may be experiencing limited functionality. Please re-authorize this application to avoid any issues.",
+ "title": "Connected accounts"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Delete account",
+ "title": "Delete account"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "Remove email",
+ "detailsAction__nonPrimary": "Set as primary",
+ "detailsAction__primary": "Complete verification",
+ "detailsAction__unverified": "Verify",
+ "primaryButton": "Add email address",
+ "title": "Email addresses"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Enterprise accounts"
+ },
+ "headerTitle__account": "Profile details",
+ "headerTitle__security": "Security",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Regenerate",
+ "headerTitle": "Backup codes",
+ "subtitle__regenerate": "Get a fresh set of secure backup codes. Prior backup codes will be deleted and cannot be used.",
+ "title__regenerate": "Regenerate backup codes"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Set as default",
+ "destructiveActionLabel": "Remove"
+ },
+ "primaryButton": "Add two-step verification",
+ "title": "Two-step verification",
+ "totp": {
+ "destructiveActionTitle": "Remove",
+ "headerTitle": "Authenticator application"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Remove",
+ "menuAction__rename": "Rename",
+ "title": "Passkeys"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Set password",
+ "primaryButton__updatePassword": "Update password",
+ "title": "Password"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Remove phone number",
+ "detailsAction__nonPrimary": "Set as primary",
+ "detailsAction__primary": "Complete verification",
+ "detailsAction__unverified": "Verify phone number",
+ "primaryButton": "Add phone number",
+ "title": "Phone numbers"
+ },
+ "profileSection": {
+ "primaryButton": "Update profile",
+ "title": "Profile"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Set username",
+ "primaryButton__updateUsername": "Update username",
+ "title": "Username"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Remove wallet",
+ "primaryButton": "Web3 wallets",
+ "title": "Web3 wallets"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Your username has been updated.",
+ "title__set": "Set username",
+ "title__update": "Update username"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} will be removed from this account.",
+ "messageLine2": "You will no longer be able to sign in using this web3 wallet.",
+ "successMessage": "{{web3Wallet}} has been removed from your account.",
+ "title": "Remove web3 wallet"
+ },
+ "subtitle__availableWallets": "Select a web3 wallet to connect to your account.",
+ "subtitle__unavailableWallets": "There are no available web3 wallets.",
+ "successMessage": "The wallet has been added to your account.",
+ "title": "Add web3 wallet"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/common.json b/DigitalHumanWeb/locales/en-US/common.json
new file mode 100644
index 0000000..38194dc
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "About",
+ "advanceSettings": "Advanced Settings",
+ "alert": {
+ "cloud": {
+ "action": "Free Trial",
+ "desc": "We offer {{credit}} free computing credits to all registered users. No complex configuration is required, it works out of the box, and it supports unlimited conversation history and global cloud synchronization.",
+ "descOnMobile": "We provide {{credit}} free computing credits for all registered users, ready to use without complex configuration.",
+ "title": "Launch {{name}}"
+ }
+ },
+ "appInitializing": "Application is starting...",
+ "autoGenerate": "Auto Generate",
+ "autoGenerateTooltip": "Auto-generate assistant description based on prompts",
+ "autoGenerateTooltipDisabled": "Please enter a tooltip before using the autocomplete feature",
+ "back": "Back",
+ "batchDelete": "Batch Delete",
+ "blog": "Product Blog",
+ "cancel": "Cancel",
+ "changelog": "Changelog",
+ "close": "Close",
+ "contact": "Contact Us",
+ "copy": "Copy",
+ "copyFail": "Copy failed",
+ "copySuccess": "Copied successfully",
+ "dataStatistics": {
+ "messages": "Messages",
+ "sessions": "Assistants",
+ "today": "New Today",
+ "topics": "Topics"
+ },
+ "defaultAgent": "Default Assistant",
+ "defaultSession": "Default Assistant",
+ "delete": "Delete",
+ "document": "User Manual",
+ "download": "Download",
+ "duplicate": "Create Duplicate",
+ "edit": "Edit",
+ "export": "Export Configuration",
+ "exportType": {
+ "agent": "Export Assistant Settings",
+ "agentWithMessage": "Export Assistant and Messages",
+ "all": "Export Global Settings and All Assistant Data",
+ "allAgent": "Export All Assistant Settings",
+ "allAgentWithMessage": "Export All Assistants and Messages",
+ "globalSetting": "Export Global Settings"
+ },
+ "feedback": "Feedback",
+ "follow": "Follow us on {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Share Your Valuable Feedback",
+ "star": "Star on GitHub"
+ },
+ "and": "and",
+ "feedback": {
+ "action": "Share Feedback",
+ "desc": "Every idea and suggestion from you is precious to us. We can't wait to hear your thoughts! Feel free to contact us to provide feedback on product features and user experience, helping us make LobeChat even better.",
+ "title": "Share Your Valuable Feedback on GitHub"
+ },
+ "later": "Later",
+ "star": {
+ "action": "Star It",
+ "desc": "If you love our product and want to support us, could you go to GitHub and give us a star? This small gesture means a lot to us and motivates us to continue providing you with great experiences.",
+ "title": "Star Us on GitHub"
+ },
+ "title": "Like Our Product?"
+ },
+ "fullscreen": "Full Screen Mode",
+ "historyRange": "History Range",
+ "import": "Import Configuration",
+ "importModal": {
+ "error": {
+ "desc": "Sorry, an error occurred during the data import process. Please try importing again, or <1>submit a request</1>, and we will help you troubleshoot the issue as soon as possible.",
+ "title": "Data Import Failed"
+ },
+ "finish": {
+ "onlySettings": "System settings imported successfully",
+ "start": "Start using",
+ "subTitle": "Data imported successfully in {{duration}} seconds. Import details are as follows:",
+ "title": "Data import completed"
+ },
+ "loading": "Importing data, please wait...",
+ "preparing": "Preparing the data import module...",
+ "result": {
+ "added": "Imported successfully",
+ "errors": "Import errors",
+ "messages": "Messages",
+ "sessionGroups": "Groups",
+ "sessions": "Assistants",
+ "skips": "Duplicates skipped",
+ "topics": "Topics",
+ "type": "Data Type"
+ },
+ "title": "Import Data",
+ "uploading": {
+ "desc": "The current file is large and is being uploaded...",
+ "restTime": "Time remaining",
+ "speed": "Upload speed"
+ }
+ },
+ "information": "Community and News",
+ "installPWA": "Install browser app",
+ "lang": {
+ "ar": "Arabic",
+ "bg-BG": "Bulgarian",
+ "bn": "Bengali",
+ "cs-CZ": "Czech",
+ "da-DK": "Danish",
+ "de-DE": "German",
+ "el-GR": "Greek",
+ "en": "English",
+ "en-US": "English",
+ "es-ES": "Spanish",
+ "fi-FI": "Finnish",
+ "fr-FR": "French",
+ "hi-IN": "Hindi",
+ "hu-HU": "Hungarian",
+ "id-ID": "Indonesian",
+ "it-IT": "Italian",
+ "ja-JP": "Japanese",
+ "ko-KR": "Korean",
+ "nl-NL": "Dutch",
+ "no-NO": "Norwegian",
+ "pl-PL": "Polish",
+ "pt-BR": "Portuguese",
+ "pt-PT": "Portuguese",
+ "ro-RO": "Romanian",
+ "ru-RU": "Russian",
+ "sk-SK": "Slovak",
+ "sr-RS": "Serbian",
+ "sv-SE": "Swedish",
+ "th-TH": "Thai",
+ "tr-TR": "Turkish",
+ "uk-UA": "Ukrainian",
+ "vi-VN": "Vietnamese",
+ "zh": "Simplified Chinese",
+ "zh-CN": "Simplified Chinese",
+ "zh-TW": "Traditional Chinese"
+ },
+ "layoutInitializing": "Initializing layout...",
+ "legal": "Legal Disclaimer",
+ "loading": "Loading...",
+ "mail": {
+ "business": "Business Cooperation",
+ "support": "Email Support"
+ },
+ "oauth": "SSO Login",
+ "officialSite": "Official Website",
+ "ok": "OK",
+ "password": "Password",
+ "pin": "Pin",
+ "pinOff": "Unpin",
+ "privacy": "Privacy Policy",
+ "regenerate": "Regenerate",
+ "rename": "Rename",
+ "reset": "Reset",
+ "retry": "Retry",
+ "send": "Send",
+ "setting": "Settings",
+ "share": "Share",
+ "stop": "Stop",
+ "sync": {
+ "actions": {
+ "settings": "Sync Settings",
+ "sync": "Sync Now"
+ },
+ "awareness": {
+ "current": "Current Device"
+ },
+ "channel": "Channel",
+ "disabled": {
+ "actions": {
+ "enable": "Enable Cloud Sync",
+ "settings": "Sync Settings"
+ },
+ "desc": "Current session data is only stored in this browser. If you need to sync data across multiple devices, please configure and enable cloud sync.",
+ "title": "Data Sync Disabled"
+ },
+ "enabled": {
+ "title": "Data Sync Enabled"
+ },
+ "status": {
+ "connecting": "Connecting",
+ "disabled": "Sync Disabled",
+ "ready": "Connected",
+ "synced": "Synced",
+ "syncing": "Syncing",
+ "unconnected": "Connection Failed"
+ },
+ "title": "Sync Status",
+ "unconnected": {
+ "tip": "Signaling server connection failed, and the peer-to-peer communication channel cannot be established. Please check your network and try again."
+ }
+ },
+ "tab": {
+ "chat": "Chat",
+ "discover": "Discover",
+ "files": "Files",
+ "me": "Me",
+ "setting": "Settings"
+ },
+ "telemetry": {
+ "allow": "Allow",
+ "deny": "Deny",
+ "desc": "We would like to anonymously collect usage information to help us improve LobeChat and provide you with a better product experience. You can disable this at any time in Settings - About.",
+ "learnMore": "Learn More",
+ "title": "Help LobeChat be better"
+ },
+ "temp": "Temporary",
+ "terms": "Terms of Service",
+ "updateAgent": "Update Assistant Information",
+ "upgradeVersion": {
+ "action": "Upgrade",
+ "hasNew": "New update available",
+ "newVersion": "New version available: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Anonymous User",
+ "billing": "Billing Management",
+ "cloud": "Launch {{name}}",
+ "data": "Data Storage",
+ "defaultNickname": "Community User",
+ "discord": "Community Support",
+ "docs": "Documentation",
+ "email": "Email Support",
+ "feedback": "Feedback and Suggestions",
+ "help": "Help Center",
+ "moveGuide": "The settings button has been moved here",
+ "plans": "Subscription Plans",
+ "preview": "Preview",
+ "profile": "Account Management",
+ "setting": "Settings",
+ "usages": "Usage Statistics"
+ },
+ "version": "Version"
+}
diff --git a/DigitalHumanWeb/locales/en-US/components.json b/DigitalHumanWeb/locales/en-US/components.json
new file mode 100644
index 0000000..e257f55
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Drag and drop files here to upload multiple images.",
+ "dragFileDesc": "Drag and drop images and files here to upload multiple images and files.",
+ "dragFileTitle": "Upload Files",
+ "dragTitle": "Upload Images"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Add to Knowledge Base",
+ "addToOtherKnowledgeBase": "Add to Other Knowledge Base",
+ "batchChunking": "Batch Chunking",
+ "chunking": "Chunking",
+ "chunkingTooltip": "Split the file into multiple text chunks and embed them for semantic search and file conversations.",
+ "confirmDelete": "You are about to delete this file. Once deleted, it cannot be recovered. Please confirm your action.",
+ "confirmDeleteMultiFiles": "You are about to delete the selected {{count}} files. Once deleted, they cannot be recovered. Please confirm your action.",
+ "confirmRemoveFromKnowledgeBase": "You are about to remove the selected {{count}} files from the knowledge base. They will still be viewable in all files. Please confirm your action.",
+ "copyUrl": "Copy Link",
+ "copyUrlSuccess": "File URL copied successfully.",
+ "createChunkingTask": "Preparing...",
+ "deleteSuccess": "File deleted successfully.",
+ "downloading": "Downloading file...",
+ "removeFromKnowledgeBase": "Remove from Knowledge Base",
+ "removeFromKnowledgeBaseSuccess": "File removed successfully."
+ },
+ "bottom": "You've reached the end.",
+ "config": {
+ "showFilesInKnowledgeBase": "Show content in Knowledge Base"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Upload File",
+ "folder": "Upload Folder",
+ "knowledgeBase": "Create New Knowledge Base"
+ },
+ "or": "or",
+ "title": "Drag files or folders here"
+ },
+ "title": {
+ "createdAt": "Created At",
+ "size": "Size",
+ "title": "File"
+ },
+ "total": {
+ "fileCount": "Total {{count}} items",
+ "selectedCount": "Selected {{count}} items"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Text chunks have not been fully embedded, which makes the semantic search feature unavailable. To improve search quality, please embed the text chunks.",
+ "error": "Embedding failed",
+ "errorResult": "Embedding failed. Please check and try again. Error details:",
+ "processing": "Text chunks are being embedded, please be patient.",
+ "success": "All current text chunks have been embedded"
+ },
+ "embeddings": "Embedding",
+ "status": {
+ "error": "Chunking failed",
+ "errorResult": "Chunking failed. Please check and try again. Error details:",
+ "processing": "Chunking",
+ "processingTip": "The server is splitting text chunks; closing the page will not affect the chunking progress."
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Back"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Custom models support both function calling and visual recognition by default. Please verify that these capabilities are actually available for your model.",
+ "file": "This model supports file upload for reading and recognition.",
+ "functionCall": "This model supports function call.",
+ "tokens": "This model supports up to {{tokens}} tokens in a single session.",
+ "vision": "This model supports visual recognition."
+ },
+ "removed": "The model is not in the list. It will be automatically removed if deselected."
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "No models enabled. Please go to settings to enable one.",
+ "provider": "Provider"
+ }
+}
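Many of the strings above, such as `"fileCount": "Total {{count}} items"`, rely on i18next-style `{{placeholder}}` interpolation. As a rough sketch of what happens at render time (the app itself uses i18next; the `t` helper below is a deliberately simplified stand-in, not the real API):

```javascript
// Minimal stand-in for i18next-style interpolation (illustrative only).
const messages = {
  'FileManager.total.fileCount': 'Total {{count}} items',
  'ModelSelect.featureTag.tokens':
    'This model supports up to {{tokens}} tokens in a single session.',
};

// Replace each {{name}} placeholder with the matching value from `vars`;
// unknown placeholders are left untouched.
function t(key, vars = {}) {
  const template = messages[key];
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? String(vars[name]) : match,
  );
}

console.log(t('FileManager.total.fileCount', { count: 42 }));
// → "Total 42 items"
```

In the real locale files, keys like `{{count}}` and `{{name}}` are filled in by i18next with per-language plural handling on top of this basic substitution.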
diff --git a/DigitalHumanWeb/locales/en-US/discover.json b/DigitalHumanWeb/locales/en-US/discover.json
new file mode 100644
index 0000000..84e71d0
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Add Assistant",
+ "addAgentAndConverse": "Add Assistant and Converse",
+ "addAgentSuccess": "Added Successfully",
+ "conversation": {
+ "l1": "Hello, I am **{{name}}**, you can ask me any questions, and I will do my best to answer you ~",
+ "l2": "Here are my capabilities: ",
+ "l3": "Let's start the conversation!"
+ },
+ "description": "Assistant Introduction",
+ "detail": "Details",
+ "list": "Assistant List",
+ "more": "More",
+ "plugins": "Integrated Plugins",
+ "recentSubmits": "Recent Updates",
+ "suggestions": "Related Recommendations",
+ "systemRole": "Assistant Settings",
+ "try": "Try It Out"
+ },
+ "back": "Back to Discovery",
+ "category": {
+ "assistant": {
+ "academic": "Academic",
+ "all": "All",
+ "career": "Career",
+ "copywriting": "Copywriting",
+ "design": "Design",
+ "education": "Education",
+ "emotions": "Emotions",
+ "entertainment": "Entertainment",
+ "games": "Games",
+ "general": "General",
+ "life": "Life",
+ "marketing": "Marketing",
+ "office": "Office",
+ "programming": "Programming",
+ "translation": "Translation"
+ },
+ "plugin": {
+ "all": "All",
+ "gaming-entertainment": "Gaming & Entertainment",
+ "life-style": "Lifestyle",
+ "media-generate": "Media Generation",
+ "science-education": "Science & Education",
+ "social": "Social Media",
+ "stocks-finance": "Stocks & Finance",
+ "tools": "Utility Tools",
+ "web-search": "Web Search"
+ }
+ },
+ "cleanFilter": "Clear Filter",
+ "create": "Create",
+ "createGuide": {
+ "func1": {
+ "desc1": "Open the settings page of the assistant you want to submit via the settings icon in the upper right corner of the conversation window;",
+ "desc2": "Click the 'Submit to Assistant Marketplace' button in the upper right corner.",
+ "tag": "Method One",
+ "title": "Submit via LobeChat"
+ },
+ "func2": {
+ "button": "Go to GitHub Assistant Repository",
+ "desc": "If you want to add the assistant to the index, create an entry using agent-template.json or agent-template-full.json in the plugins directory, write a brief description, tag appropriately, and then create a pull request.",
+ "tag": "Method Two",
+ "title": "Submit via GitHub"
+ }
+ },
+ "dislike": "Dislike",
+ "filter": "Filter",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "All Authors",
+ "followed": "Followed Authors",
+ "title": "Author Range"
+ },
+ "contentLength": "Minimum Context Length",
+ "maxToken": {
+ "title": "Set Maximum Length (Token)",
+ "unlimited": "Unlimited"
+ },
+ "other": {
+ "functionCall": "Supports Function Calls",
+ "title": "Other",
+ "vision": "Supports Visual Recognition",
+ "withKnowledge": "Includes Knowledge Base",
+ "withTool": "Includes Plugins"
+ },
+ "pricing": "Model Pricing",
+ "timePeriod": {
+ "all": "All Time",
+ "day": "Last 24 Hours",
+ "month": "Last 30 Days",
+ "title": "Time Range",
+ "week": "Last 7 Days",
+ "year": "Last Year"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Featured Assistants",
+ "featuredModels": "Featured Models",
+ "featuredProviders": "Featured Model Providers",
+ "featuredTools": "Featured Plugins",
+ "more": "Discover More"
+ },
+ "like": "Like",
+ "models": {
+ "chat": "Start Conversation",
+ "contentLength": "Maximum Context Length",
+ "free": "Free",
+ "guide": "Configuration Guide",
+ "list": "Model List",
+ "more": "More",
+ "parameterList": {
+ "defaultValue": "Default Value",
+ "docs": "View Documentation",
+ "frequency_penalty": {
+ "desc": "This setting controls the reuse of vocabulary based on how often it appears in the input: words that occur more frequently are penalized proportionally more, so the penalty scales with the number of occurrences. Negative values encourage vocabulary reuse.",
+ "title": "Frequency Penalty"
+ },
+ "max_tokens": {
+ "desc": "This setting defines the maximum length that the model can generate in a single response. Setting a higher value allows the model to produce longer replies, while a lower value restricts the length of the response, making it more concise. Adjusting this value appropriately based on different application scenarios can help achieve the desired response length and level of detail.",
+ "title": "Single Response Limit"
+ },
+ "presence_penalty": {
+ "desc": "This setting adjusts how often the model reuses specific vocabulary that has already appeared in the input. Higher values make such repetition less likely, while negative values do the opposite. The penalty is applied once per word and does not scale with the number of occurrences. Negative values encourage vocabulary reuse.",
+ "title": "Topic Freshness"
+ },
+ "range": "Range",
+ "temperature": {
+ "desc": "This setting affects the diversity of the model's responses. Lower values lead to more predictable and typical responses, while higher values encourage more diverse and less common responses. When set to 0, the model always gives the same response to a given input.",
+ "title": "Randomness"
+ },
+ "title": "Model Parameters",
+ "top_p": {
+ "desc": "This setting limits the model's selection to a certain proportion of the most likely vocabulary: only selecting those top words whose cumulative probability reaches P. Lower values make the model's responses more predictable, while the default setting allows the model to choose from the entire range of vocabulary.",
+ "title": "Nucleus Sampling"
+ },
+ "type": "Type"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat supports using a custom API key for this provider.",
+ "input": "Input Price",
+ "inputTooltip": "Cost per million tokens",
+ "latency": "Latency",
+ "latencyTooltip": "Average response time for the provider to send the first token",
+ "maxOutput": "Maximum Output Length",
+ "maxOutputTooltip": "Maximum number of tokens this endpoint can generate",
+ "officialTooltip": "LobeHub Official Service",
+ "output": "Output Price",
+ "outputTooltip": "Cost per million tokens",
+ "streamCancellationTooltip": "This provider supports stream cancellation.",
+ "throughput": "Throughput",
+ "throughputTooltip": "Average number of tokens transmitted per second for stream requests"
+ },
+ "suggestions": "Related Models",
+ "supportedProviders": "Providers Supporting This Model"
+ },
+ "plugins": {
+ "community": "Community Plugins",
+ "install": "Install Plugin",
+ "installed": "Installed",
+ "list": "Plugin List",
+ "meta": {
+ "description": "Description",
+ "parameter": "Parameter",
+ "title": "Tool Parameters",
+ "type": "Type"
+ },
+ "more": "More",
+ "official": "Official Plugins",
+ "recentSubmits": "Recently Updated",
+ "suggestions": "Related Recommendations"
+ },
+ "providers": {
+ "config": "Configure Provider",
+ "list": "Model Provider List",
+ "modelCount": "{{count}} models",
+ "modelSite": "Model Documentation",
+ "more": "More",
+ "officialSite": "Official Website",
+ "showAllModels": "Show All Models",
+ "suggestions": "Related Providers",
+ "supportedModels": "Supported Models"
+ },
+ "search": {
+ "placeholder": "Search by name, description, or keywords...",
+ "result": "{{count}} results about {{keyword}}",
+ "searching": "Searching..."
+ },
+ "sort": {
+ "mostLiked": "Most Liked",
+ "mostUsed": "Most Used",
+ "newest": "Newest First",
+ "oldest": "Oldest First",
+ "recommended": "Recommended"
+ },
+ "tab": {
+ "assistants": "Assistants",
+ "home": "Home",
+ "models": "Models",
+ "plugins": "Plugins",
+ "providers": "Model Providers"
+ }
+}
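The `parameterList` entries above describe standard OpenAI-style sampling parameters. As a hedged sketch of how they map onto a chat-completion request body (the ranges follow the OpenAI API convention: temperature 0 to 2, top_p 0 to 1, penalties -2 to 2; other providers may differ, and this helper is illustrative rather than part of any SDK):

```javascript
// Illustrative: clamp OpenAI-style sampling parameters to their conventional
// ranges before building a request body. Field names follow the OpenAI
// chat-completions convention; this is not tied to a specific SDK.
const clamp = (value, min, max) => Math.min(max, Math.max(min, value));

function buildSamplingParams({
  temperature = 1,
  top_p = 1,
  frequency_penalty = 0,
  presence_penalty = 0,
  max_tokens,
} = {}) {
  return {
    temperature: clamp(temperature, 0, 2),              // "Randomness"
    top_p: clamp(top_p, 0, 1),                          // "Nucleus Sampling"
    frequency_penalty: clamp(frequency_penalty, -2, 2), // scales with occurrences
    presence_penalty: clamp(presence_penalty, -2, 2),   // flat per-word penalty
    ...(max_tokens !== undefined && { max_tokens }),    // "Single Response Limit"
  };
}

console.log(buildSamplingParams({ temperature: 2.5, top_p: 0.9 }));
// temperature is clamped to 2; top_p stays at 0.9
```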
diff --git a/DigitalHumanWeb/locales/en-US/error.json b/DigitalHumanWeb/locales/en-US/error.json
new file mode 100644
index 0000000..02c3ff3
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Continue Session",
+ "desc": "{{greeting}}, it's great to continue serving you. Let's pick up where we left off.",
+ "title": "Welcome back, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Back to Home",
+ "desc": "Try again later, or head back to the known world.",
+ "retry": "Reload",
+ "title": "Oops, something went wrong..."
+ },
+ "fetchError": "Request Failed",
+ "fetchErrorDetail": "Error Details",
+ "notFound": {
+ "backHome": "Back to Home",
+ "check": "Please check if your URL is correct.",
+ "desc": "We couldn't find the page you were looking for.",
+ "title": "Entered Unknown Territory?"
+ },
+ "pluginSettings": {
+ "desc": "Complete the following configuration to start using this plugin",
+ "title": "{{name}} Plugin Settings"
+ },
+ "response": {
+ "400": "Sorry, the server does not understand your request. Please make sure your request parameters are correct.",
+ "401": "Sorry, the server has rejected your request, possibly due to insufficient permissions or invalid authentication.",
+ "403": "Sorry, the server has rejected your request. You do not have permission to access this content.",
+ "404": "Sorry, the server cannot find the page or resource you requested. Please make sure your URL is correct.",
+ "405": "Sorry, the server does not support the request method you are using. Please make sure your request method is correct.",
+ "406": "Sorry, the server cannot complete the request based on the characteristics of the content you requested.",
+ "407": "Sorry, you need to authenticate with the proxy before continuing with this request.",
+ "408": "Sorry, the server timed out while waiting for the request. Please check your network connection and try again.",
+ "409": "Sorry, the request cannot be processed due to a conflict, possibly because the resource state is incompatible with the request.",
+ "410": "Sorry, the resource you requested has been permanently removed and cannot be found.",
+ "411": "Sorry, the server cannot process the request without a valid content length.",
+ "412": "Sorry, your request does not meet the server's preconditions and cannot be completed.",
+ "413": "Sorry, your request data is too large for the server to process.",
+ "414": "Sorry, the URI of your request is too long for the server to process.",
+ "415": "Sorry, the server cannot process the media format attached to the request.",
+ "416": "Sorry, the server cannot satisfy the range specified in your request.",
+ "417": "Sorry, the server cannot meet the expectations stated in your request.",
+ "422": "Sorry, your request is correctly formatted but cannot be processed due to semantic errors.",
+ "423": "Sorry, the resource you requested is locked.",
+ "424": "Sorry, the current request cannot be completed due to a previous request failure.",
+ "426": "Sorry, the server requires your client to upgrade to a higher protocol version.",
+ "428": "Sorry, the server requires preconditions; please include the correct conditional headers in your request.",
+ "429": "Sorry, your request is too frequent and the server is a bit tired. Please try again later.",
+ "431": "Sorry, the header fields of your request are too large for the server to process.",
+ "451": "Sorry, the server refuses to provide this resource for legal reasons.",
+ "500": "Sorry, the server seems to be experiencing some difficulties and is temporarily unable to complete your request. Please try again later.",
+ "502": "Sorry, the server seems to be lost and is temporarily unable to provide service. Please try again later.",
+ "503": "Sorry, the server is currently unable to process your request, possibly due to overload or maintenance. Please try again later.",
+ "504": "Sorry, the server did not receive a response from the upstream server. Please try again later.",
+ "AgentRuntimeError": "Lobe language model runtime execution error. Please troubleshoot or retry based on the following information.",
+ "FreePlanLimit": "You are currently a free user and cannot use this feature. Please upgrade to a paid plan to continue using it.",
+ "InvalidAccessCode": "The access code is invalid or empty. Please enter the correct access code, or add a custom API Key.",
+ "InvalidBedrockCredentials": "Bedrock authentication failed. Please check the AccessKeyId/SecretAccessKey and retry.",
+ "InvalidClerkUser": "Sorry, you are not currently logged in. Please log in or register an account to continue.",
+ "InvalidGithubToken": "The GitHub Personal Access Token is incorrect or empty. Please check your GitHub Personal Access Token and try again.",
+ "InvalidOllamaArgs": "Invalid Ollama configuration. Please check the Ollama configuration and try again.",
+ "InvalidProviderAPIKey": "The {{provider}} API Key is incorrect or empty. Please check your {{provider}} API Key and try again.",
+ "LocationNotSupportError": "We're sorry, your current location does not support this model service. This may be due to regional restrictions or the service not being available. Please confirm if the current location supports using this service, or try using a different location.",
+ "NoOpenAIAPIKey": "The OpenAI API Key is empty. Please add a custom OpenAI API Key.",
+ "OllamaBizError": "Error requesting the Ollama service. Please troubleshoot or retry based on the information below.",
+ "OllamaServiceUnavailable": "Ollama service is unavailable. Please check if Ollama is running properly or if the cross-origin configuration of Ollama is set correctly.",
+ "OpenAIBizError": "Error requesting the OpenAI service. Please troubleshoot or retry based on the information below.",
+ "PluginApiNotFound": "Sorry, the API does not exist in the plugin's manifest. Please check whether your request method matches the plugin manifest API.",
+ "PluginApiParamsError": "Sorry, input parameter validation for the plugin request failed. Please check whether the input parameters match the API description.",
+ "PluginFailToTransformArguments": "Sorry, the plugin failed to parse the arguments. Please try regenerating the assistant message, or switch to a more powerful AI model with tool calling capability and try again.",
+ "PluginGatewayError": "Sorry, there was an error with the plugin gateway. Please check if the plugin gateway configuration is correct.",
+ "PluginManifestInvalid": "Sorry, the plugin's manifest failed validation. Please check whether the manifest format is correct.",
+ "PluginManifestNotFound": "Sorry, the server could not find the plugin's manifest file (manifest.json). Please check whether the plugin manifest file address is correct.",
+ "PluginMarketIndexInvalid": "Sorry, the plugin index failed validation. Please check whether the index file format is correct.",
+ "PluginMarketIndexNotFound": "Sorry, the server could not find the plugin index. Please check whether the index address is correct.",
+ "PluginMetaInvalid": "Sorry, the plugin's metadata failed validation. Please check whether the plugin metadata format is correct.",
+ "PluginMetaNotFound": "Sorry, the plugin was not found in the index. Please check the plugin's configuration information in the index.",
+ "PluginOpenApiInitError": "Sorry, the OpenAPI client failed to initialize. Please check if the OpenAPI configuration information is correct.",
+ "PluginServerError": "The plugin server request returned an error. Please check your plugin manifest file, plugin configuration, or server implementation based on the error information below.",
+ "PluginSettingsInvalid": "This plugin must be correctly configured before it can be used. Please check whether your configuration is correct.",
+ "ProviderBizError": "Error requesting the {{provider}} service. Please troubleshoot or retry based on the information below.",
+ "StreamChunkError": "Error parsing the message chunk of the streaming request. Please check if the current API interface complies with the standard specifications, or contact your API provider for assistance.",
+ "SubscriptionPlanLimit": "Your subscription limit has been reached, and you cannot use this feature. Please upgrade to a higher plan or purchase a resource pack to continue using it.",
+ "UnknownChatFetchError": "Sorry, an unknown request error occurred. Please check the information below or try again."
+ },
+ "stt": {
+ "responseError": "Service request failed. Please check the configuration or try again."
+ },
+ "tts": {
+ "responseError": "Service request failed. Please check the configuration or try again."
+ },
+ "unlock": {
+ "addProxyUrl": "Add OpenAI proxy URL (optional)",
+ "apiKey": {
+ "description": "Enter your {{name}} API Key to start the session",
+ "title": "Use custom {{name}} API Key"
+ },
+ "closeMessage": "Close message",
+ "confirm": "Confirm and Retry",
+ "oauth": {
+ "description": "The administrator has enabled unified login authentication. Click the button below to log in and unlock the application.",
+ "success": "Login successful",
+ "title": "Log in to your account",
+ "welcome": "Welcome!"
+ },
+ "password": {
+ "description": "The administrator has enabled application encryption. Enter the application password to unlock the application; you only need to enter it once.",
+ "placeholder": "Please enter password",
+ "title": "Enter Password to Unlock Application"
+ },
+ "tabs": {
+ "apiKey": "Custom API Key",
+ "password": "Password"
+ }
+ },
+ "upload": {
+ "desc": "Details: {{detail}}",
+ "fileOnlySupportInServerMode": "The current deployment mode does not support uploading non-image files. To upload files in {{ext}} format, please switch to server database deployment or use the {{cloud}} service.",
+ "networkError": "Please check your network connection and ensure that the file storage service's cross-origin configuration is correct.",
+ "title": "File upload failed. Please check your network connection or try again later",
+ "unknownError": "Error reason: {{reason}}",
+ "uploadFailed": "File upload failed."
+ }
+}
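The `response` map above keys error messages by HTTP status code or by a named error type. A minimal sketch of how a client might resolve an arbitrary failure to one of these strings, falling back to `UnknownChatFetchError` when nothing matches (the `response` object below is a small excerpt of the file, and the lookup helper is an illustrative assumption, not the app's actual code):

```javascript
// Illustrative lookup: resolve an HTTP status or error type to a locale
// string, falling back to the generic unknown-error message.
const response = {
  '404': 'Sorry, the server cannot find the page or resource you requested. Please make sure your URL is correct.',
  '429': 'Sorry, your request is too frequent and the server is a bit tired. Please try again later.',
  UnknownChatFetchError:
    'Sorry, an unknown request error occurred. Please check the information below or try again.',
};

function messageForError(codeOrType) {
  // Numeric status codes and named error types share one namespace here.
  return response[String(codeOrType)] ?? response.UnknownChatFetchError;
}

console.log(messageForError(429));          // rate-limit message
console.log(messageForError('ECONNRESET')); // falls back to the unknown-error string
```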
diff --git a/DigitalHumanWeb/locales/en-US/file.json b/DigitalHumanWeb/locales/en-US/file.json
new file mode 100644
index 0000000..dc33e42
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Manage files and knowledge base",
+ "detail": {
+ "basic": {
+ "createdAt": "Creation Time",
+ "filename": "File Name",
+ "size": "File Size",
+ "title": "Basic Information",
+ "type": "Format",
+ "updatedAt": "Update Time"
+ },
+ "data": {
+ "chunkCount": "Chunks",
+ "embedding": {
+ "default": "Not embedded",
+ "error": "Failed",
+ "pending": "Pending start",
+ "processing": "In progress",
+ "success": "Completed"
+ },
+ "embeddingStatus": "Embedding"
+ }
+ },
+ "empty": "No files or folders have been uploaded yet.",
+ "header": {
+ "actions": {
+ "newFolder": "New Folder",
+ "uploadFile": "Upload File",
+ "uploadFolder": "Upload Folder"
+ },
+ "uploadButton": "Upload"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "You are about to delete this knowledge base. The files within will not be deleted and will be moved to All Files. Once deleted, the knowledge base cannot be recovered. Please proceed with caution.",
+ "empty": "Click <1>+</1> to add a knowledge base"
+ },
+ "new": "New Knowledge Base",
+ "title": "Knowledge Base"
+ },
+ "networkError": "Failed to retrieve the knowledge base. Please check your network connection and try again.",
+ "notSupportGuide": {
+ "desc": "The current deployment instance is in client database mode, and file management features are not available. Please switch to <1>server database deployment mode</1>, or use <3>LobeChat Cloud</3> directly.",
+ "features": {
+ "allKind": {
+ "desc": "Supports mainstream file types, including common document formats like Word, PPT, Excel, PDF, TXT, as well as popular code files like JS and Python.",
+ "title": "Multiple File Type Parsing"
+ },
+ "embeddings": {
+ "desc": "Utilizes high-performance vector models to vectorize text chunks, enabling semantic search of file content.",
+ "title": "Vector Semantics"
+ },
+ "repos": {
+ "desc": "Supports the creation of knowledge bases and allows the addition of different types of files, building your domain knowledge.",
+ "title": "Knowledge Base"
+ }
+ },
+ "title": "The current deployment mode does not support file management"
+ },
+ "preview": {
+ "downloadFile": "Download File",
+ "unsupportedFileAndContact": "This file format is not currently supported for online preview. If you have a request for previewing, feel free to <1>contact us</1>."
+ },
+ "searchFilePlaceholder": "Search Files",
+ "tab": {
+ "all": "All Files",
+ "audios": "Audio",
+ "documents": "Documents",
+ "images": "Images",
+ "videos": "Videos",
+ "websites": "Websites"
+ },
+ "title": "Files",
+ "uploadDock": {
+ "body": {
+ "collapse": "Collapse",
+ "item": {
+ "done": "Uploaded",
+ "error": "Upload failed. Please try again.",
+ "pending": "Preparing to upload...",
+ "processing": "Processing file...",
+ "restTime": "Remaining {{time}}"
+ }
+ },
+ "totalCount": "Total {{count}} items",
+ "uploadStatus": {
+ "error": "Upload error",
+ "pending": "Waiting to upload",
+ "processing": "Uploading",
+ "success": "Upload completed",
+ "uploading": "Uploading"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/knowledgeBase.json b/DigitalHumanWeb/locales/en-US/knowledgeBase.json
new file mode 100644
index 0000000..bed43be
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "File added successfully, <1>view now</1>",
+ "confirm": "Add",
+ "id": {
+ "placeholder": "Please select a knowledge base to add",
+ "required": "Please select a knowledge base",
+ "title": "Target Knowledge Base"
+ },
+ "title": "Add to Knowledge Base",
+ "totalFiles": "{{count}} files selected"
+ },
+ "createNew": {
+ "confirm": "Create New",
+ "description": {
+ "placeholder": "Knowledge base description (optional)"
+ },
+ "formTitle": "Basic Information",
+ "name": {
+ "placeholder": "Knowledge base name",
+ "required": "Please enter the knowledge base name"
+ },
+ "title": "Create Knowledge Base"
+ },
+ "tab": {
+ "evals": "Evaluations",
+ "files": "Documents",
+ "settings": "Settings",
+ "testing": "Recall Testing"
+ },
+ "title": "Knowledge Base"
+}
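Strings such as `"addSuccess": "File added successfully, <1>view now</1>"` use react-i18next's numbered-tag convention, where `<1>…</1>` marks the span rendered by the matching child of a `Trans` component. A framework-free sketch of resolving such tags (the real app uses `Trans`; the renderer mapping below is hypothetical, for illustration only):

```javascript
// Illustrative: resolve react-i18next style numbered tags (<1>…</1>) by
// handing each tagged span to a renderer keyed by its index. In the real
// app the <Trans> component does this; the link renderer is hypothetical.
function renderTags(template, renderers) {
  return template.replace(/<(\d+)>(.*?)<\/\1>/g, (match, index, inner) => {
    const render = renderers[index];
    return render ? render(inner) : inner;
  });
}

const text = renderTags('File added successfully, <1>view now</1>', {
  1: (inner) => `[${inner}](/files)`, // hypothetical link renderer
});

console.log(text); // → "File added successfully, [view now](/files)"
```

This is also why the closing half of each tag must keep its slash: `<1>view now</1>`, never `<1>view now1>`, or the span cannot be matched.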
diff --git a/DigitalHumanWeb/locales/en-US/market.json b/DigitalHumanWeb/locales/en-US/market.json
new file mode 100644
index 0000000..5b49ec3
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Add Assistant",
+ "addAgentAndConverse": "Add Assistant and Converse",
+ "addAgentSuccess": "Successfully Added",
+ "guide": {
+ "func1": {
+ "desc1": "Open the settings page of the assistant you want to submit by clicking the settings icon in the upper right corner of the chat window.",
+ "desc2": "Click on the 'Submit to Assistant Market' button in the upper right corner.",
+ "tag": "Method 1",
+ "title": "Submit via LobeChat"
+ },
+ "func2": {
+ "button": "Go to GitHub Assistant Repository",
+ "desc": "If you want to add the assistant to the index, create an entry in the plugins directory using agent-template.json or agent-template-full.json, write a brief description and appropriate tags, and then create a pull request.",
+ "tag": "Method 2",
+ "title": "Submit via GitHub"
+ }
+ },
+ "search": {
+ "placeholder": "Search assistant name, description or keywords..."
+ },
+ "sidebar": {
+ "comment": "Comments",
+ "prompt": "Prompts",
+ "title": "Assistant Details"
+ },
+ "submitAgent": "Submit Assistant",
+ "title": {
+ "allAgents": "All Assistants",
+ "recentSubmits": "Recent Submissions"
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/metadata.json b/DigitalHumanWeb/locales/en-US/metadata.json
new file mode 100644
index 0000000..ce08892
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} brings you the best UI experience for ChatGPT, Claude, Gemini, and Ollama.",
+ "title": "{{appName}}: Your personal AI productivity tool for a smarter brain."
+ },
+ "discover": {
+ "assistants": {
+ "description": "Content creation, copywriting, Q&A, image generation, video generation, voice generation, intelligent agents, automated workflows, customize your own AI / GPTs / Ollama intelligent assistant",
+ "title": "AI Assistants"
+ },
+ "description": "Content creation, copywriting, Q&A, image generation, video generation, voice generation, intelligent agents, automated workflows, custom AI applications, customize your own AI application workspace",
+ "models": {
+ "description": "Explore mainstream AI models OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "AI Models"
+ },
+ "plugins": {
+ "description": "Explore chart generation, academic tools, image generation, video generation, voice generation, and automated workflows to integrate rich plugin capabilities into your assistant.",
+ "title": "AI Plugins"
+ },
+ "providers": {
+ "description": "Explore leading model providers OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "AI Model Providers"
+ },
+ "search": "Search",
+ "title": "Discover"
+ },
+ "plugins": {
+ "description": "Search, chart generation, academic tools, image generation, video generation, voice generation, automated workflows—customize Tools Calling plugin capabilities for ChatGPT / Claude.",
+ "title": "Plugin Marketplace"
+ },
+ "welcome": {
+ "description": "{{appName}} brings you the best UI experience for ChatGPT, Claude, Gemini, and Ollama.",
+ "title": "Welcome to {{appName}}: Your personal AI productivity tool for a smarter brain."
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/migration.json b/DigitalHumanWeb/locales/en-US/migration.json
new file mode 100644
index 0000000..9b25e8c
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Clear Local Data",
+ "downloadBackup": "Download Data Backup",
+ "reUpgrade": "Re-upgrade",
+ "start": "Get Started",
+ "upgrade": "Upgrade"
+ },
+ "clear": {
+ "confirm": "You are about to clear local data (global settings will not be affected). Please confirm that you have downloaded a data backup."
+ },
+ "description": "The new version of {{appName}} features a major upgrade to data storage. To give you a better experience, we need to migrate your old data.",
+ "features": {
+ "capability": {
+ "desc": "Based on IndexedDB technology, capable of storing a lifetime's worth of chat messages.",
+ "title": "Large Capacity"
+ },
+ "performance": {
+ "desc": "Automatically indexes millions of messages, with retrieval queries responding in milliseconds.",
+ "title": "High Performance"
+ },
+ "use": {
+ "desc": "Supports searching by title, description, tags, message content, and even translated text, greatly enhancing daily search efficiency.",
+ "title": "More User-Friendly"
+ }
+ },
+ "title": "{{appName}} Data Evolution",
+ "upgrade": {
+ "error": {
+ "subTitle": "We are sorry, but an error occurred during the database upgrade. Please try the following solutions: A. Clear local data and re-import the backup data; B. Click the 'Re-upgrade' button. If the issue persists, please <1>submit an issue report</1>, and we will assist you as soon as possible.",
+ "title": "Database Upgrade Failed"
+ },
+ "success": {
+ "subTitle": "{{appName}}'s database has been successfully upgraded to the latest version. Start experiencing it now!",
+ "title": "Database Upgrade Successful"
+ }
+ },
+ "upgradeTip": "The upgrade will take approximately 10 to 20 seconds. Please do not close {{appName}} during the upgrade process."
+ },
+ "migrateError": {
+ "missVersion": "Imported data is missing a version number. Please check the file and try again.",
+ "noMigration": "No migration solution found for the current version. Please check the version number and try again. If the issue persists, please submit a feedback request."
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/modelProvider.json b/DigitalHumanWeb/locales/en-US/modelProvider.json
new file mode 100644
index 0000000..e48d8bf
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Azure API version, follow the format YYYY-MM-DD, check the [latest version](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Fetch List",
+ "title": "Azure API Version"
+ },
+ "empty": "Please enter a model ID to add the first model",
+ "endpoint": {
+ "desc": "When checking resources from the Azure portal, you can find this value in the 'Keys and Endpoints' section",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Azure API Address"
+ },
+ "modelListPlaceholder": "Select or add the OpenAI model you deployed",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "When checking resources from the Azure portal, you can find this value in the 'Keys and Endpoints' section. You can use KEY1 or KEY2",
+ "placeholder": "Azure API Key",
+ "title": "API Key"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Enter AWS Access Key Id",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "Test if AccessKeyId / SecretAccessKey are filled in correctly"
+ },
+ "region": {
+ "desc": "Enter AWS Region",
+ "placeholder": "AWS Region",
+ "title": "AWS Region"
+ },
+ "secretAccessKey": {
+ "desc": "Enter AWS Secret Access Key",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "If you are using AWS SSO/STS, please enter your AWS Session Token",
+ "placeholder": "AWS Session Token",
+ "title": "AWS Session Token (optional)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Custom Service Region",
+ "customSessionToken": "Custom Session Token",
+ "description": "Enter your AWS AccessKeyId / SecretAccessKey to start the session. The app will not store your authentication configuration",
+ "title": "Use Custom Bedrock Authentication Information"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Enter your GitHub PAT. Click [here](https://github.com/settings/tokens) to create one.",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Test if the proxy address is correctly filled in",
+ "title": "Connectivity Check"
+ },
+ "customModelName": {
+ "desc": "Add custom models, separate multiple models with commas",
+ "placeholder": "vicuna, llava, codellama, llama2:13b-text",
+ "title": "Custom model name"
+ },
+ "download": {
+ "desc": "Ollama is downloading the model. Please try not to close this page. The download will resume from where it left off if interrupted.",
+ "remainingTime": "Remaining Time",
+ "speed": "Download Speed",
+ "title": "Downloading model {{model}}"
+ },
+ "endpoint": {
+ "desc": "Enter the Ollama interface proxy address, leave blank if not specified locally",
+ "title": "Interface proxy address"
+ },
+ "setup": {
+ "cors": {
+ "description": "Due to browser security restrictions, you need to configure cross-origin settings for Ollama to function properly.",
+ "linux": {
+ "env": "Add `Environment` under [Service] section, and set the OLLAMA_ORIGINS environment variable:",
+ "reboot": "Reload systemd and restart Ollama.",
+ "systemd": "Invoke systemd to edit the ollama service:"
+ },
+ "macos": "Open the 'Terminal' application, paste the following command, and press Enter to run it.",
+ "reboot": "Please restart the Ollama service after completion.",
+ "title": "Configure Ollama for Cross-Origin Access",
+ "windows": "On Windows, go to 'Control Panel' and edit system environment variables. Create a new environment variable named 'OLLAMA_ORIGINS' for your user account, set the value to '*', and click 'OK/Apply' to save."
+ },
+ "install": {
+ "description": "Please make sure you have enabled Ollama. If you haven't downloaded Ollama yet, please visit the official website <1>to download</1>.",
+ "docker": "If you prefer using Docker, Ollama also provides an official Docker image. You can pull it using the following command:",
+ "linux": {
+ "command": "Install using the following command:",
+ "manual": "Alternatively, you can refer to the <1>Linux Manual Installation Guide</1> for manual installation."
+ },
+ "title": "Install and Start Ollama Locally",
+ "windowsTab": "Windows (Preview)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Cancel Download",
+ "confirm": "Download",
+ "description": "Enter your Ollama model tag to continue the session",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Starting download...",
+ "title": "Download specified Ollama model"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Zero One Everything"
+ },
+ "zhipu": {
+ "title": "Zhipu"
+ }
+}
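Several strings in this file use `{{placeholder}}` interpolation (e.g. `"Downloading model {{model}}"` and `"{{completed}} / {{total}}"`). A minimal sketch of how such templates are filled at runtime; the `interpolate` helper below is illustrative only, not the project's real i18n layer:

```typescript
// Fill "{{key}}" placeholders in a locale string, e.g.
// "Downloading model {{model}}". A sketch, not the app's actual i18n code.
function interpolate(template: string, vars: Record<string, string | number>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) =>
    key in vars ? String(vars[key]) : match, // leave unknown keys untouched
  );
}

// Usage with strings from the locale file above:
console.log(interpolate('Downloading model {{model}}', { model: 'llama2:13b-text' }));
// -> Downloading model llama2:13b-text
console.log(interpolate('{{completed}} / {{total}}', { completed: 3, total: 10 }));
// -> 3 / 10
```

In practice an i18n library performs this substitution after resolving the key in the active locale's namespace.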
diff --git a/DigitalHumanWeb/locales/en-US/models.json b/DigitalHumanWeb/locales/en-US/models.json
new file mode 100644
index 0000000..46161be
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B delivers superior performance in industry applications with a wealth of training samples."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B supports 16K tokens, providing efficient and smooth language generation capabilities."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro, as an important member of the 360 AI model series, meets diverse natural language application scenarios with efficient text processing capabilities, supporting long text understanding and multi-turn dialogue."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo offers powerful computation and dialogue capabilities, with excellent semantic understanding and generation efficiency, making it an ideal intelligent assistant solution for enterprises and developers."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K emphasizes semantic safety and responsibility, designed specifically for applications with high content safety requirements, ensuring accuracy and robustness in user experience."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro is an advanced natural language processing model launched by 360, featuring exceptional text generation and understanding capabilities, particularly excelling in generation and creative tasks, capable of handling complex language transformations and role-playing tasks."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra is the most powerful version in the Spark large model series, enhancing text content understanding and summarization capabilities while upgrading online search links. It is a comprehensive solution for improving office productivity and accurately responding to demands, leading the industry as an intelligent product."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Utilizes search enhancement technology to achieve comprehensive links between large models and domain knowledge, as well as knowledge from the entire web. Supports uploads of various documents such as PDF and Word, and URL input, providing timely and comprehensive information retrieval with accurate and professional output."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Optimized for high-frequency enterprise scenarios, significantly improving performance and cost-effectiveness. Compared to the Baichuan2 model, content creation improves by 20%, knowledge Q&A by 17%, and role-playing ability by 40%. Overall performance is superior to GPT-3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "Features a 128K ultra-long context window, optimized for high-frequency enterprise scenarios, significantly improving performance and cost-effectiveness. Compared to the Baichuan2 model, content creation improves by 20%, knowledge Q&A by 17%, and role-playing ability by 40%. Overall performance is superior to GPT-3.5."
+ },
+ "Baichuan4": {
+ "description": "The model ranks first in China, surpassing mainstream foreign models in Chinese tasks such as knowledge encyclopedias, long texts, and creative generation. It also boasts industry-leading multimodal capabilities, excelling in multiple authoritative evaluation benchmarks."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) is an innovative model suitable for multi-domain applications and complex tasks."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K is equipped with enhanced context processing capabilities, stronger context understanding, and logical reasoning abilities, supporting text input of up to 32K tokens, suitable for scenarios such as long document reading and private knowledge Q&A."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO is a highly flexible multi-model fusion designed to provide an exceptional creative experience."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) is a high-precision instruction model suitable for complex computations."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) provides optimized language output and diverse application possibilities."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "An update of the Phi-3-mini model."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "The same Phi-3-medium model, but with a larger context size for RAG or few-shot prompting."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "A 14B parameter model that provides better quality than Phi-3-mini, focusing on high-quality, reasoning-dense data."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "The same Phi-3-mini model, but with a larger context size for RAG or few-shot prompting."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "The smallest member of the Phi-3 family, optimized for both quality and low latency."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "The same Phi-3-small model, but with a larger context size for RAG or few-shot prompting."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "A 7B parameter model that provides better quality than Phi-3-mini, focusing on high-quality, reasoning-dense data."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K is configured with ultra-large context processing capabilities, able to handle up to 128K of contextual information, particularly suitable for long texts requiring comprehensive analysis and long-term logical connections, providing smooth and consistent logic and diverse citation support in complex text communication."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "As a beta version of Qwen2, Qwen1.5 utilizes large-scale data to achieve more precise conversational capabilities."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) provides quick responses and natural conversational abilities, suitable for multilingual environments."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 is an advanced general-purpose language model that supports various types of instructions."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 is a brand new series of large language models designed to optimize the handling of instruction-based tasks."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 is a brand new series of large language models designed to optimize the handling of instruction-based tasks."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 is a brand new series of large language models with enhanced understanding and generation capabilities."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 is a brand new series of large language models designed to optimize the handling of instruction-based tasks."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder focuses on code writing."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math focuses on problem-solving in the field of mathematics, providing expert solutions for challenging problems."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B is an open-source version that provides an optimized conversational experience for chat applications."
+ },
+ "abab5.5-chat": {
+ "description": "Targeted at productivity scenarios, supporting complex task processing and efficient text generation, suitable for professional applications."
+ },
+ "abab5.5s-chat": {
+ "description": "Designed for Chinese persona dialogue scenarios, providing high-quality Chinese dialogue generation capabilities, suitable for various application contexts."
+ },
+ "abab6.5g-chat": {
+ "description": "Designed for multilingual persona dialogue, supporting high-quality dialogue generation in English and other languages."
+ },
+ "abab6.5s-chat": {
+ "description": "Suitable for a wide range of natural language processing tasks, including text generation and dialogue systems."
+ },
+ "abab6.5t-chat": {
+ "description": "Optimized for Chinese persona dialogue scenarios, providing smooth dialogue generation that aligns with Chinese expression habits."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Fireworks open-source function-calling model provides excellent instruction execution capabilities and customizable features."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Fireworks' latest Firefunction-v2 is a high-performance function-calling model developed based on Llama-3, optimized for function calls, dialogues, and instruction following."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b is a visual language model that can accept both image and text inputs, trained on high-quality data, suitable for multimodal tasks."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Gemma 2 9B instruction model, based on previous Google technology, suitable for answering questions, summarizing, and reasoning across various text generation tasks."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Llama 3 70B instruction model, optimized for multilingual dialogues and natural language understanding, outperforming most competitive models."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Llama 3 70B instruction model (HF version), aligned with official implementation results, suitable for high-quality instruction following tasks."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Llama 3 8B instruction model, optimized for dialogues and multilingual tasks, delivering outstanding and efficient performance."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Llama 3 8B instruction model (HF version), consistent with official implementation results, featuring high consistency and cross-platform compatibility."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Llama 3.1 405B instruction model, equipped with massive parameters, suitable for complex tasks and instruction following in high-load scenarios."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Llama 3.1 70B instruction model provides exceptional natural language understanding and generation capabilities, making it an ideal choice for dialogue and analysis tasks."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Llama 3.1 8B instruction model, optimized for multilingual dialogues, capable of surpassing most open-source and closed-source models on common industry benchmarks."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Mixtral MoE 8x22B instruction model, featuring large-scale parameters and a multi-expert architecture, fully supporting efficient processing of complex tasks."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Mixtral MoE 8x7B instruction model, with a multi-expert architecture providing efficient instruction following and execution."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Mixtral MoE 8x7B instruction model (HF version), performance consistent with official implementation, suitable for various efficient task scenarios."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "MythoMax L2 13B model, combining novel merging techniques, excels in narrative and role-playing."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Phi 3 Vision instruction model, a lightweight multimodal model capable of handling complex visual and textual information, with strong reasoning abilities."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "StarCoder 15.5B model supports advanced programming tasks, enhanced multilingual capabilities, suitable for complex code generation and understanding."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "StarCoder 7B model, trained on over 80 programming languages, boasts excellent code completion capabilities and contextual understanding."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Yi-Large model, featuring exceptional multilingual processing capabilities, suitable for various language generation and understanding tasks."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "A 398B parameter (94B active) multilingual model, offering a 256K long context window, function calling, structured output, and grounded generation."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "A 52B parameter (12B active) multilingual model, offering a 256K long context window, function calling, structured output, and grounded generation."
+ },
+ "ai21-jamba-instruct": {
+ "description": "A production-grade Mamba-based LLM model designed to achieve best-in-class performance, quality, and cost efficiency."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet raises the industry standard, outperforming competitor models and Claude 3 Opus, excelling in a wide range of evaluations while maintaining the speed and cost of our mid-tier models."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku is Anthropic's fastest and most compact model, providing near-instantaneous response times. It can quickly answer simple queries and requests. Customers will be able to build seamless AI experiences that mimic human interaction. Claude 3 Haiku can process images and return text output, with a context window of 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus is Anthropic's most powerful AI model, featuring state-of-the-art performance on highly complex tasks. It can handle open-ended prompts and unseen scenarios, demonstrating exceptional fluency and human-like understanding. Claude 3 Opus showcases the forefront of generative AI possibilities. Claude 3 Opus can process images and return text output, with a context window of 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Anthropic's Claude 3 Sonnet strikes an ideal balance between intelligence and speed—especially suited for enterprise workloads. It offers maximum utility at a price lower than competitors and is designed to be a reliable, durable workhorse for scalable AI deployments. Claude 3 Sonnet can process images and return text output, with a context window of 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "A fast, economical, yet still highly capable model that can handle a range of tasks, including everyday conversations, text analysis, summarization, and document Q&A."
+ },
+ "anthropic.claude-v2": {
+ "description": "Anthropic's model demonstrates high capability across a wide range of tasks, from complex conversations and creative content generation to detailed instruction following."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "An updated version of Claude 2, featuring double the context window and improvements in reliability, hallucination rates, and evidence-based accuracy in long documents and RAG contexts."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku is Anthropic's fastest and most compact model, designed for near-instantaneous responses. It features quick and accurate directional performance."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus is Anthropic's most powerful model for handling highly complex tasks. It excels in performance, intelligence, fluency, and comprehension."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet offers capabilities that surpass Opus and faster speeds than Sonnet, while maintaining the same pricing as Sonnet. Sonnet excels particularly in programming, data science, visual processing, and agent tasks."
+ },
+ "aya": {
+ "description": "Aya 23 is a multilingual model launched by Cohere, supporting 23 languages, facilitating diverse language applications."
+ },
+ "aya:35b": {
+ "description": "Aya 23 is a multilingual model launched by Cohere, supporting 23 languages, facilitating diverse language applications."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3 is designed for role-playing and emotional companionship, supporting ultra-long multi-turn memory and personalized dialogue, with wide applications."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o is a dynamic model that updates in real-time to stay current with the latest version. It combines powerful language understanding and generation capabilities, making it suitable for large-scale applications, including customer service, education, and technical support."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 provides advancements in key capabilities for enterprises, including an industry-leading 200K token context window, significantly reduced rates of model hallucination, system prompts, and a new beta feature: tool invocation."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 provides advancements in key capabilities for enterprises, including an industry-leading 200K token context window, significantly reduced rates of model hallucination, system prompts, and a new beta feature: tool invocation."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet offers capabilities that surpass Opus and faster speeds than Sonnet, while maintaining the same price as Sonnet. Sonnet excels particularly in programming, data science, visual processing, and agent tasks."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku is Anthropic's fastest and most compact model, designed for near-instantaneous responses. It features rapid and accurate directional performance."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus is Anthropic's most powerful model for handling highly complex tasks. It excels in performance, intelligence, fluency, and comprehension."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet provides an ideal balance of intelligence and speed for enterprise workloads. It offers maximum utility at a lower price, reliable and suitable for large-scale deployment."
+ },
+ "claude-instant-1.2": {
+ "description": "Anthropic's model for low-latency, high-throughput text generation, capable of generating hundreds of pages of text."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 is a powerful AI programming assistant that supports intelligent Q&A and code completion in various programming languages, enhancing development efficiency."
+ },
+ "codegemma": {
+ "description": "CodeGemma is a lightweight language model dedicated to various programming tasks, supporting rapid iteration and integration."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma is a lightweight language model dedicated to various programming tasks, supporting rapid iteration and integration."
+ },
+ "codellama": {
+ "description": "Code Llama is an LLM focused on code generation and discussion, combining extensive programming language support, suitable for developer environments."
+ },
+ "codellama:13b": {
+ "description": "Code Llama is an LLM focused on code generation and discussion, combining extensive programming language support, suitable for developer environments."
+ },
+ "codellama:34b": {
+ "description": "Code Llama is an LLM focused on code generation and discussion, combining extensive programming language support, suitable for developer environments."
+ },
+ "codellama:70b": {
+ "description": "Code Llama is an LLM focused on code generation and discussion, combining extensive programming language support, suitable for developer environments."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 is a large language model trained on extensive code data, specifically designed to solve complex programming tasks."
+ },
+ "codestral": {
+ "description": "Codestral is Mistral AI's first code model, providing excellent support for code generation tasks."
+ },
+ "codestral-latest": {
+ "description": "Codestral is a cutting-edge generative model focused on code generation, optimized for intermediate filling and code completion tasks."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B is a model designed for instruction following, dialogue, and programming."
+ },
+ "cohere-command-r": {
+ "description": "Command R is a scalable generative model targeting RAG and Tool Use to enable production-scale AI for enterprises."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ is a state-of-the-art RAG-optimized model designed to tackle enterprise-grade workloads."
+ },
+ "command-r": {
+ "description": "Command R is an LLM optimized for dialogue and long context tasks, particularly suitable for dynamic interactions and knowledge management."
+ },
+ "command-r-plus": {
+ "description": "Command R+ is a high-performance large language model designed for real enterprise scenarios and complex applications."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct provides highly reliable instruction processing capabilities, supporting applications across multiple industries."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 combines the excellent features of previous versions, enhancing general and coding capabilities."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B is an advanced model trained for highly complex conversations."
+ },
+ "deepseek-chat": {
+ "description": "A new open-source model that integrates general and coding capabilities, retaining the general conversational abilities of the original Chat model and the powerful code handling capabilities of the Coder model, while better aligning with human preferences. Additionally, DeepSeek-V2.5 has achieved significant improvements in writing tasks, instruction following, and more."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 is an open-source Mixture-of-Experts code model that performs excellently in coding tasks, comparable to GPT-4 Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 is an open-source Mixture-of-Experts code model that performs excellently in coding tasks, comparable to GPT-4 Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 is an efficient Mixture-of-Experts language model, suitable for cost-effective processing needs."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B is DeepSeek's dedicated code model, providing powerful code generation capabilities."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "A new open-source model that integrates general and coding capabilities, retaining the general conversational abilities of the original Chat model and the powerful code handling capabilities of the Coder model, while better aligning with human preferences. Additionally, DeepSeek-V2.5 has achieved significant improvements in writing tasks, instruction following, and more."
+ },
+ "emohaa": {
+ "description": "Emohaa is a psychological model with professional counseling capabilities, helping users understand emotional issues."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Tuning) offers stable and tunable performance, making it an ideal choice for complex task solutions."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Tuning) provides excellent multimodal support, focusing on effective solutions for complex tasks."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro is Google's high-performance AI model, designed for extensive task scaling."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 is an efficient multimodal model that supports extensive application scaling."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002 is an efficient multimodal model that supports a wide range of applications."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 is designed for handling large-scale task scenarios, providing unparalleled processing speed."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924 is the latest experimental model, showcasing significant performance improvements in both text and multimodal use cases."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 offers optimized multimodal processing capabilities, suitable for a variety of complex task scenarios."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash is Google's latest multimodal AI model, featuring fast processing capabilities and supporting text, image, and video inputs, making it suitable for efficient scaling across various tasks."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 is a scalable multimodal AI solution that supports a wide range of complex tasks."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002 is the latest production-ready model, delivering higher quality outputs, with notable enhancements in mathematics, long-context, and visual tasks."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 offers excellent multimodal processing capabilities, providing greater flexibility for application development."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 combines the latest optimization technologies to deliver more efficient multimodal data processing capabilities."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro supports up to 2 million tokens, making it an ideal choice for medium-sized multimodal models, providing multifaceted support for complex tasks."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B is suitable for medium to small-scale task processing, offering cost-effectiveness."
+ },
+ "gemma2": {
+ "description": "Gemma 2 is an efficient model launched by Google, covering a variety of application scenarios from small applications to complex data processing."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B is a model optimized for specific tasks and tool integration."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 is an efficient model launched by Google, covering a variety of application scenarios from small applications to complex data processing."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 is an efficient model launched by Google, covering a variety of application scenarios from small applications to complex data processing."
+ },
+ "general": {
+ "description": "Spark Lite is a lightweight large language model with extremely low latency and efficient processing capabilities, completely free and open, supporting real-time online search functionality. Its fast response characteristics make it excel in inference applications and model fine-tuning on low-power devices, providing users with excellent cost-effectiveness and intelligent experiences, particularly in knowledge Q&A, content generation, and search scenarios."
+ },
+ "generalv3": {
+ "description": "Spark Pro is a high-performance large language model optimized for professional fields, focusing on mathematics, programming, healthcare, education, and more, supporting online search and built-in plugins for weather, dates, etc. Its optimized model demonstrates excellent performance and efficiency in complex knowledge Q&A, language understanding, and high-level text creation, making it an ideal choice for professional application scenarios."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max is the most comprehensive version, supporting online search and numerous built-in plugins. Its fully optimized core capabilities, along with system role settings and function calling features, enable it to perform exceptionally well in various complex application scenarios."
+ },
+ "glm-4": {
+ "description": "GLM-4 is the old flagship version released in January 2024, currently replaced by the more powerful GLM-4-0520."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 is the latest model version designed for highly complex and diverse tasks, demonstrating outstanding performance."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air is a cost-effective version with performance close to GLM-4, offering fast speed at an affordable price."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX provides an efficient version of GLM-4-Air, with inference speeds up to 2.6 times faster."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools is a multifunctional intelligent agent model optimized to support complex instruction planning and tool invocation, such as web browsing, code interpretation, and text generation, suitable for multitasking."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash is the ideal choice for handling simple tasks, being the fastest and most cost-effective."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long supports ultra-long text inputs, suitable for memory-based tasks and large-scale document processing."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus, as a high-intelligence flagship, possesses strong capabilities for processing long texts and complex tasks, with overall performance improvements."
+ },
+ "glm-4v": {
+ "description": "GLM-4V provides strong image understanding and reasoning capabilities, supporting various visual tasks."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus has the ability to understand video content and multiple images, suitable for multimodal tasks."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 provides optimized multimodal processing capabilities, suitable for various complex task scenarios."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 combines the latest optimization technologies to deliver more efficient multimodal data processing capabilities."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 continues the design philosophy of being lightweight and efficient."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 is Google's lightweight open-source text model series."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 is Google's lightweight open-source text model series."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) provides basic instruction processing capabilities, suitable for lightweight applications."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo is suitable for various text generation and understanding tasks. Currently points to gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo is suitable for various text generation and understanding tasks. Currently points to gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo is suitable for various text generation and understanding tasks. Currently points to gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo is suitable for various text generation and understanding tasks. Currently points to gpt-3.5-turbo-0125."
+ },
+ "gpt-4": {
+ "description": "GPT-4 offers a larger context window, capable of handling longer text inputs, making it suitable for scenarios that require extensive information integration and data analysis."
+ },
+ "gpt-4-0125-preview": {
+ "description": "The latest GPT-4 Turbo model features visual capabilities. Now, visual requests can be made using JSON format and function calls. GPT-4 Turbo is an enhanced version that provides cost-effective support for multimodal tasks. It strikes a balance between accuracy and efficiency, making it suitable for applications requiring real-time interaction."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4 offers a larger context window, capable of handling longer text inputs, making it suitable for scenarios that require extensive information integration and data analysis."
+ },
+ "gpt-4-1106-preview": {
+ "description": "The latest GPT-4 Turbo model features visual capabilities. Now, visual requests can be made using JSON format and function calls. GPT-4 Turbo is an enhanced version that provides cost-effective support for multimodal tasks. It strikes a balance between accuracy and efficiency, making it suitable for applications requiring real-time interaction."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "The latest GPT-4 Turbo model features visual capabilities. Now, visual requests can be made using JSON format and function calls. GPT-4 Turbo is an enhanced version that provides cost-effective support for multimodal tasks. It strikes a balance between accuracy and efficiency, making it suitable for applications requiring real-time interaction."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4 offers a larger context window, capable of handling longer text inputs, making it suitable for scenarios that require extensive information integration and data analysis."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4 offers a larger context window, capable of handling longer text inputs, making it suitable for scenarios that require extensive information integration and data analysis."
+ },
+ "gpt-4-turbo": {
+ "description": "The latest GPT-4 Turbo model features visual capabilities. Now, visual requests can be made using JSON format and function calls. GPT-4 Turbo is an enhanced version that provides cost-effective support for multimodal tasks. It strikes a balance between accuracy and efficiency, making it suitable for applications requiring real-time interaction."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "The latest GPT-4 Turbo model features visual capabilities. Now, visual requests can be made using JSON format and function calls. GPT-4 Turbo is an enhanced version that provides cost-effective support for multimodal tasks. It strikes a balance between accuracy and efficiency, making it suitable for applications requiring real-time interaction."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "The latest GPT-4 Turbo model features visual capabilities. Now, visual requests can be made using JSON format and function calls. GPT-4 Turbo is an enhanced version that provides cost-effective support for multimodal tasks. It strikes a balance between accuracy and efficiency, making it suitable for applications requiring real-time interaction."
+ },
+ "gpt-4-vision-preview": {
+ "description": "The latest GPT-4 Turbo model features visual capabilities. Now, visual requests can be made using JSON format and function calls. GPT-4 Turbo is an enhanced version that provides cost-effective support for multimodal tasks. It strikes a balance between accuracy and efficiency, making it suitable for applications requiring real-time interaction."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o is a dynamic model that updates in real-time to stay current with the latest version. It combines powerful language understanding and generation capabilities, making it suitable for large-scale applications, including customer service, education, and technical support."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o is a dynamic model that updates in real-time to stay current with the latest version. It combines powerful language understanding and generation capabilities, making it suitable for large-scale applications, including customer service, education, and technical support."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o is a dynamic model that updates in real-time to stay current with the latest version. It combines powerful language understanding and generation capabilities, making it suitable for large-scale applications, including customer service, education, and technical support."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini is the latest model released by OpenAI after GPT-4 Omni, supporting both image and text input while outputting text. As their most advanced small model, it is significantly cheaper than other recent cutting-edge models, costing over 60% less than GPT-3.5 Turbo. It maintains state-of-the-art intelligence while offering remarkable cost-effectiveness. GPT-4o mini scored 82% on the MMLU test and currently ranks higher than GPT-4 in chat preferences."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B is a language model that combines creativity and intelligence by merging multiple top models."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "The innovative open-source model InternLM2.5 enhances dialogue intelligence through a large number of parameters."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 offers intelligent dialogue solutions across multiple scenarios."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct model, featuring 70B parameters, delivers outstanding performance in large text generation and instruction tasks."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B provides enhanced AI reasoning capabilities, suitable for complex applications, supporting extensive computational processing while ensuring efficiency and accuracy."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B is a high-performance model that offers rapid text generation capabilities, making it ideal for applications requiring large-scale efficiency and cost-effectiveness."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct model, featuring 8B parameters, supports efficient execution of visual instruction tasks, providing high-quality text generation capabilities."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Llama 3.1 Sonar Huge Online model, featuring 405B parameters, supports a context length of approximately 127,000 tokens, designed for complex online chat applications."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Llama 3.1 Sonar Large Chat model, featuring 70B parameters, supports a context length of approximately 127,000 tokens, suitable for complex offline chat tasks."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Llama 3.1 Sonar Large Online model, featuring 70B parameters, supports a context length of approximately 127,000 tokens, suitable for high-capacity and diverse chat tasks."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Llama 3.1 Sonar Small Chat model, featuring 8B parameters, designed for offline chat, supports a context length of approximately 127,000 tokens."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Llama 3.1 Sonar Small Online model, featuring 8B parameters, supports a context length of approximately 127,000 tokens, designed for online chat, efficiently handling various text interactions."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B provides unparalleled complexity handling capabilities, tailored for high-demand projects."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B delivers high-quality reasoning performance, suitable for diverse application needs."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use offers powerful tool invocation capabilities, supporting efficient processing of complex tasks."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use is a model optimized for efficient tool usage, supporting fast parallel computation."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 is a leading model launched by Meta, supporting up to 405B parameters, applicable in complex dialogues, multilingual translation, and data analysis."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 is a leading model launched by Meta, supporting up to 405B parameters, applicable in complex dialogues, multilingual translation, and data analysis."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 is a leading model launched by Meta, supporting up to 405B parameters, applicable in complex dialogues, multilingual translation, and data analysis."
+ },
+ "llava": {
+ "description": "LLaVA is a multimodal model that combines a visual encoder with Vicuna for powerful visual and language understanding."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B offers integrated visual processing capabilities, generating complex outputs from visual information inputs."
+ },
+ "llava:13b": {
+ "description": "LLaVA is a multimodal model that combines a visual encoder with Vicuna for powerful visual and language understanding."
+ },
+ "llava:34b": {
+ "description": "LLaVA is a multimodal model that combines a visual encoder with Vicuna for powerful visual and language understanding."
+ },
+ "mathstral": {
+ "description": "MathΣtral is designed for scientific research and mathematical reasoning, providing effective computational capabilities and result interpretation."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "A powerful 70-billion parameter model excelling in reasoning, coding, and broad language applications."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "A versatile 8-billion parameter model optimized for dialogue and text generation tasks."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "The Llama 3.1 instruction-tuned text-only models are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "The Llama 3.1 instruction-tuned text-only models are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "The Llama 3.1 instruction-tuned text-only models are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) offers excellent language processing capabilities and outstanding interactive experiences."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) is a powerful chat model that supports complex conversational needs."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) provides multilingual support, covering a rich array of domain knowledge."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite is suitable for environments requiring high performance and low latency."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo offers exceptional language understanding and generation capabilities, suitable for the most demanding computational tasks."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite is designed for resource-constrained environments, providing excellent balanced performance."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo is a high-performance large language model, supporting a wide range of application scenarios."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B is a powerful model for pre-training and instruction tuning."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "The 405B Llama 3.1 Turbo model provides massive context support for big data processing, excelling in large-scale AI applications."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B offers efficient conversational support in multiple languages."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Llama 3.1 70B model is finely tuned for high-load applications, quantized to FP8 for enhanced computational efficiency and accuracy, ensuring outstanding performance in complex scenarios."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 provides multilingual support and is one of the industry's leading generative models."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Llama 3.1 8B model utilizes FP8 quantization, supporting up to 131,072 context tokens, making it a standout in open-source models, excelling in complex tasks and outperforming many industry benchmarks."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct is optimized for high-quality conversational scenarios, demonstrating excellent performance in various human evaluations."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct is optimized for high-quality conversational scenarios, performing better than many closed-source models."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct is the latest version from Meta, optimized for generating high-quality dialogues, surpassing many leading closed-source models."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct is designed for high-quality conversations, excelling in human evaluations, particularly in highly interactive scenarios."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct is the latest version released by Meta, optimized for high-quality conversational scenarios, outperforming many leading closed-source models."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 offers multilingual support and is one of the industry's leading generative models."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct is the largest and most powerful model in the Llama 3.1 Instruct series. It is a highly advanced conversational reasoning and synthetic data generation model, which can also serve as a foundation for specialized continuous pre-training or fine-tuning in specific domains. The multilingual large language models (LLMs) provided by Llama 3.1 are a set of pre-trained, instruction-tuned generative models, including sizes of 8B, 70B, and 405B (text input/output). The instruction-tuned text models (8B, 70B, 405B) are optimized for multilingual conversational use cases and have outperformed many available open-source chat models in common industry benchmarks. Llama 3.1 is designed for commercial and research purposes across multiple languages. The instruction-tuned text models are suitable for assistant-like chat, while the pre-trained models can adapt to various natural language generation tasks. The Llama 3.1 models also support improving other models using their outputs, including synthetic data generation and refinement. Llama 3.1 is an autoregressive language model built using an optimized transformer architecture. The tuned versions utilize supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "The updated version of Meta Llama 3.1 70B Instruct includes an extended 128K context length, multilingual capabilities, and improved reasoning abilities. The multilingual large language models (LLMs) provided by Llama 3.1 are a set of pre-trained, instruction-tuned generative models, including sizes of 8B, 70B, and 405B (text input/output). The instruction-tuned text models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and have surpassed many available open-source chat models in common industry benchmarks. Llama 3.1 is designed for commercial and research purposes in multiple languages. The instruction-tuned text models are suitable for assistant-like chat, while the pre-trained models can adapt to various natural language generation tasks. The Llama 3.1 model also supports using its outputs to improve other models, including synthetic data generation and refinement. Llama 3.1 is an autoregressive language model using optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "The updated version of Meta Llama 3.1 8B Instruct includes an extended 128K context length, multilingual capabilities, and improved reasoning abilities. The multilingual large language models (LLMs) provided by Llama 3.1 are a set of pre-trained, instruction-tuned generative models, including sizes of 8B, 70B, and 405B (text input/output). The instruction-tuned text models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and have surpassed many available open-source chat models in common industry benchmarks. Llama 3.1 is designed for commercial and research purposes in multiple languages. The instruction-tuned text models are suitable for assistant-like chat, while the pre-trained models can adapt to various natural language generation tasks. The Llama 3.1 model also supports using its outputs to improve other models, including synthetic data generation and refinement. Llama 3.1 is an autoregressive language model using optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 is an open large language model (LLM) aimed at developers, researchers, and enterprises, designed to help them build, experiment, and responsibly scale their generative AI ideas. As part of a foundational system for global community innovation, it is particularly suitable for content creation, conversational AI, language understanding, R&D, and enterprise applications."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 is an open large language model (LLM) aimed at developers, researchers, and enterprises, designed to help them build, experiment, and responsibly scale their generative AI ideas. As part of a foundational system for global community innovation, it is particularly suitable for those with limited computational power and resources, edge devices, and faster training times."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B is Microsoft's latest lightweight AI model, performing nearly ten times better than existing leading open-source models."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B is Microsoft's state-of-the-art Wizard model, demonstrating extremely competitive performance."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V is a next-generation multimodal large model launched by OpenBMB, boasting exceptional OCR recognition and multimodal understanding capabilities, supporting a wide range of application scenarios."
+ },
+ "mistral": {
+ "description": "Mistral is a 7B model released by Mistral AI, suitable for diverse language processing needs."
+ },
+ "mistral-large": {
+ "description": "Mixtral Large is Mistral's flagship model, combining capabilities in code generation, mathematics, and reasoning, supporting a 128k context window."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) is an advanced Large Language Model (LLM) with state-of-the-art reasoning, knowledge, and coding capabilities."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large is the flagship model, excelling in multilingual tasks, complex reasoning, and code generation, making it an ideal choice for high-end applications."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo, developed in collaboration with Mistral AI and NVIDIA, is a high-performance 12B model."
+ },
+ "mistral-small": {
+ "description": "Mistral Small can be used for any language-based task that requires high efficiency and low latency."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small is a cost-effective, fast, and reliable option suitable for use cases such as translation, summarization, and sentiment analysis."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct is known for its high performance, suitable for various language tasks."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B is a model fine-tuned on demand, providing optimized answers for tasks."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 offers efficient computational power and natural language understanding, suitable for a wide range of applications."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) is a super large language model that supports extremely high processing demands."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B is a pre-trained sparse mixture of experts model for general text tasks."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct is a high-performance industry-standard model optimized for speed and long context support."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo is a multilingual model with 7.3 billion parameters, designed for high-performance programming."
+ },
+ "mixtral": {
+ "description": "Mixtral is an expert model from Mistral AI, featuring open-source weights and providing support in code generation and language understanding."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B provides high fault-tolerant parallel computing capabilities, suitable for complex tasks."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral is an expert model from Mistral AI, featuring open-source weights and providing support in code generation and language understanding."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K is a model with ultra-long context processing capabilities, suitable for generating extremely long texts, meeting the demands of complex generation tasks, capable of handling up to 128,000 tokens, making it ideal for research, academia, and large document generation."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K offers medium-length context processing capabilities, able to handle 32,768 tokens, particularly suitable for generating various long documents and complex dialogues, applicable in content creation, report generation, and dialogue systems."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K is designed for generating short text tasks, featuring efficient processing performance, capable of handling 8,192 tokens, making it ideal for brief dialogues, note-taking, and rapid content generation."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B is an upgraded version of Nous Hermes 2, featuring the latest internally developed datasets."
+ },
+ "o1-mini": {
+ "description": "o1-mini is a fast and cost-effective reasoning model designed for programming, mathematics, and scientific applications. This model features a 128K context and has a knowledge cutoff date of October 2023."
+ },
+ "o1-preview": {
+ "description": "o1 is OpenAI's new reasoning model, suitable for complex tasks that require extensive general knowledge. This model features a 128K context and has a knowledge cutoff date of October 2023."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba is a language model focused on code generation, providing strong support for advanced coding and reasoning tasks."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B is a compact yet high-performance model, excelling in batch processing and simple tasks such as classification and text generation, with good reasoning capabilities."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo is a 12B model developed in collaboration with Nvidia, offering outstanding reasoning and coding performance, easy to integrate and replace."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B is a larger expert model focused on complex tasks, providing excellent reasoning capabilities and higher throughput."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B is a sparse expert model that leverages multiple parameters to enhance reasoning speed, suitable for handling multilingual and code generation tasks."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o is a dynamic model that updates in real-time to maintain the latest version. It combines powerful language understanding and generation capabilities, making it suitable for large-scale application scenarios, including customer service, education, and technical support."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini is the latest model released by OpenAI following GPT-4 Omni, supporting both text and image input while outputting text. As their most advanced small model, it is significantly cheaper than other recent cutting-edge models and over 60% cheaper than GPT-3.5 Turbo. It maintains state-of-the-art intelligence while offering remarkable cost-effectiveness. GPT-4o mini scored 82% on the MMLU test and currently ranks higher than GPT-4 in chat preferences."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini is a fast and cost-effective reasoning model designed for programming, mathematics, and scientific applications. This model features a 128K context and has a knowledge cutoff date of October 2023."
+ },
+ "openai/o1-preview": {
+ "description": "o1 is OpenAI's new reasoning model, suitable for complex tasks that require extensive general knowledge. This model features a 128K context and has a knowledge cutoff date of October 2023."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B is an open-source language model library fine-tuned using the 'C-RLFT (Conditional Reinforcement Learning Fine-Tuning)' strategy."
+ },
+ "openrouter/auto": {
+ "description": "Based on context length, topic, and complexity, your request will be sent to Llama 3 70B Instruct, Claude 3.5 Sonnet (self-regulating), or GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 is a lightweight open model launched by Microsoft, suitable for efficient integration and large-scale knowledge reasoning."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 is a lightweight open model launched by Microsoft, suitable for efficient integration and large-scale knowledge reasoning."
+ },
+ "pixtral-12b-2409": {
+ "description": "The Pixtral model demonstrates strong capabilities in tasks such as chart and image understanding, document question answering, multimodal reasoning, and instruction following. It can ingest images at natural resolutions and aspect ratios and handle an arbitrary number of images within a long context window of up to 128K tokens."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "The Tongyi Qianwen Coder model."
+ },
+ "qwen-long": {
+ "description": "Qwen is a large-scale language model that supports long text contexts and dialogue capabilities based on long documents and multiple documents."
+ },
+ "qwen-math-plus-latest": {
+ "description": "The Tongyi Qianwen Math model is specifically designed for solving mathematical problems."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "The Tongyi Qianwen Math model is specifically designed for solving mathematical problems."
+ },
+ "qwen-max-latest": {
+ "description": "Tongyi Qianwen Max is a large-scale language model with hundreds of billions of parameters, supporting input in various languages, including Chinese and English. It is the API model behind the current Tongyi Qianwen 2.5 product version."
+ },
+ "qwen-plus-latest": {
+ "description": "Tongyi Qianwen Plus is an enhanced version of the large-scale language model, supporting input in various languages, including Chinese and English."
+ },
+ "qwen-turbo-latest": {
+ "description": "Tongyi Qianwen is a large-scale language model that supports input in various languages, including Chinese and English."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Qwen VL supports flexible interaction methods, including multi-image, multi-turn Q&A, and creative capabilities."
+ },
+ "qwen-vl-max": {
+ "description": "Qwen VL Max is a large-scale visual language model. Compared to the enhanced (Plus) version, it further improves visual reasoning and instruction-following capabilities, providing a higher level of visual perception and cognition."
+ },
+ "qwen-vl-plus": {
+ "description": "Qwen VL Plus is an enhanced version of the large-scale visual language model. It significantly improves detail recognition and text recognition capabilities, supporting images with resolutions over one million pixels and arbitrary aspect ratios."
+ },
+ "qwen-vl-v1": {
+ "description": "Initialized with the Qwen-7B language model, this pre-trained model adds an image model with an input resolution of 448."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 is a brand new series of large language models with enhanced understanding and generation capabilities."
+ },
+ "qwen2": {
+ "description": "Qwen2 is Alibaba's next-generation large-scale language model, supporting diverse application needs with excellent performance."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "The open-source 14B model of Tongyi Qianwen 2.5."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "The open-source 32B model of Tongyi Qianwen 2.5."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "The open-source 72B model of Tongyi Qianwen 2.5."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "The open-source 7B model of Tongyi Qianwen 2.5."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "The open-source version of the Tongyi Qianwen Coder model."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "The open-source version of the Tongyi Qianwen Coder model."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "The Qwen-Math model possesses strong capabilities for solving mathematical problems."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "The Qwen-Math model possesses strong capabilities for solving mathematical problems."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "The Qwen-Math model possesses strong capabilities for solving mathematical problems."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 is Alibaba's next-generation large-scale language model, supporting diverse application needs with excellent performance."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 is Alibaba's next-generation large-scale language model, supporting diverse application needs with excellent performance."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 is Alibaba's next-generation large-scale language model, supporting diverse application needs with excellent performance."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini is a compact LLM that outperforms GPT-3.5, featuring strong multilingual capabilities, supporting English and Korean, and providing an efficient and compact solution."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) extends the capabilities of Solar Mini, focusing on Japanese while maintaining efficiency and excellent performance in English and Korean usage."
+ },
+ "solar-pro": {
+ "description": "Solar Pro is a highly intelligent LLM launched by Upstage, focusing on single-GPU instruction-following capabilities, with an IFEval score above 80. Currently supports English, with a formal version planned for release in November 2024, which will expand language support and context length."
+ },
+ "step-1-128k": {
+ "description": "Balances performance and cost, suitable for general scenarios."
+ },
+ "step-1-256k": {
+ "description": "Equipped with ultra-long context processing capabilities, especially suitable for long document analysis."
+ },
+ "step-1-32k": {
+ "description": "Supports medium-length dialogues, applicable to various application scenarios."
+ },
+ "step-1-8k": {
+ "description": "Small model, suitable for lightweight tasks."
+ },
+ "step-1-flash": {
+ "description": "High-speed model, suitable for real-time dialogues."
+ },
+ "step-1v-32k": {
+ "description": "Supports visual input, enhancing multimodal interaction experiences."
+ },
+ "step-1v-8k": {
+ "description": "A small visual model suitable for basic text and image tasks."
+ },
+ "step-2-16k": {
+ "description": "Supports large-scale context interactions, suitable for complex dialogue scenarios."
+ },
+ "taichu_llm": {
+ "description": "The ZD Taichu language model possesses strong language understanding capabilities and excels in text creation, knowledge Q&A, code programming, mathematical calculations, logical reasoning, sentiment analysis, and text summarization. It innovatively combines large-scale pre-training with rich knowledge from multiple sources, continuously refining algorithmic techniques and absorbing new knowledge in vocabulary, structure, grammar, and semantics from vast text data, resulting in an evolving model performance. It provides users with more convenient information and services, as well as a more intelligent experience."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V integrates capabilities such as image understanding, knowledge transfer, and logical reasoning, excelling in the field of image-text question answering."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) provides enhanced computational capabilities through efficient strategies and model architecture."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) is suitable for refined instruction tasks, offering excellent language processing capabilities."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 is a language model provided by Microsoft AI, excelling in complex dialogues, multilingual capabilities, reasoning, and intelligent assistant applications."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 is a language model provided by Microsoft AI, excelling in complex dialogues, multilingual capabilities, reasoning, and intelligent assistant applications."
+ },
+ "yi-large": {
+ "description": "A new hundred-billion-parameter model, providing exceptionally strong question-answering and text generation capabilities."
+ },
+ "yi-large-fc": {
+ "description": "Built on the yi-large model, with added and enhanced tool invocation capabilities, suitable for business scenarios that require building agents or workflows."
+ },
+ "yi-large-preview": {
+ "description": "An early preview version; the newer yi-large is recommended instead."
+ },
+ "yi-large-rag": {
+ "description": "An advanced service based on the powerful yi-large model, combining retrieval and generation techniques to provide precise answers and real-time information retrieval."
+ },
+ "yi-large-turbo": {
+ "description": "Exceptional performance at a highly competitive cost, with high-precision tuning balanced across performance, inference speed, and cost."
+ },
+ "yi-medium": {
+ "description": "An upgraded and fine-tuned medium-sized model with balanced capabilities, a high cost-performance ratio, and deeply optimized instruction-following."
+ },
+ "yi-medium-200k": {
+ "description": "200K ultra-long context window, providing deep understanding and generation capabilities for long texts."
+ },
+ "yi-spark": {
+ "description": "Small yet powerful, lightweight and fast model. Provides enhanced mathematical computation and coding capabilities."
+ },
+ "yi-vision": {
+ "description": "Model for complex visual tasks, providing high-performance image understanding and analysis capabilities."
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/plugin.json b/DigitalHumanWeb/locales/en-US/plugin.json
new file mode 100644
index 0000000..081cb29
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Arguments",
+ "function_call": "Function Call",
+ "off": "Turn off debug",
+ "on": "View plugin invocation information",
+ "payload": "Plugin Payload",
+ "response": "Response",
+ "tool_call": "Tool Call Request"
+ },
+ "detailModal": {
+ "info": {
+ "description": "API Description",
+ "name": "API Name"
+ },
+ "tabs": {
+ "info": "Plugin Capabilities",
+ "manifest": "Installation File",
+ "settings": "Settings"
+ },
+ "title": "Plugin Details"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Are you sure you want to delete this local plugin? Once deleted, it cannot be recovered.",
+ "customParams": {
+ "useProxy": {
+ "label": "Install via proxy (if encountering cross-origin access errors, try enabling this option and reinstalling)"
+ }
+ },
+ "deleteSuccess": "Plugin deleted successfully",
+ "manifest": {
+ "identifier": {
+ "desc": "The unique identifier of the plugin",
+ "label": "Identifier"
+ },
+ "mode": {
+ "local": "Visual Configuration",
+ "local-tooltip": "Visual configuration is not supported at the moment",
+ "url": "Online Link"
+ },
+ "name": {
+ "desc": "The title of the plugin",
+ "label": "Title",
+ "placeholder": "Search Engine"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "The author of the plugin",
+ "label": "Author"
+ },
+ "avatar": {
+ "desc": "The icon of the plugin, can be an Emoji or a URL",
+ "label": "Icon"
+ },
+ "description": {
+ "desc": "The description of the plugin",
+ "label": "Description",
+ "placeholder": "Get information from search engines"
+ },
+ "formFieldRequired": "This field is required",
+ "homepage": {
+ "desc": "The homepage of the plugin",
+ "label": "Homepage"
+ },
+ "identifier": {
+ "desc": "The unique identifier of the plugin, only supports alphanumeric characters, hyphen -, and underscore _",
+ "errorDuplicate": "The identifier is already used by another plugin, please modify the identifier",
+ "label": "Identifier",
+ "pattenErrorMessage": "Only alphanumeric characters, hyphen -, and underscore _ are allowed"
+ },
+ "manifest": {
+ "desc": "{{appName}} will install the plugin through this link.",
+ "label": "Plugin Description (Manifest) URL",
+ "preview": "Preview Manifest",
+ "refresh": "Refresh"
+ },
+ "title": {
+ "desc": "The title of the plugin",
+ "label": "Title",
+ "placeholder": "Search Engine"
+ }
+ },
+ "metaConfig": "Plugin metadata configuration",
+ "modalDesc": "After adding a custom plugin, it can be used for plugin development verification or directly in the session. Please refer to the <1>development documentation↗</1> for plugin development.",
+ "openai": {
+ "importUrl": "Import from URL link",
+ "schema": "Schema"
+ },
+ "preview": {
+ "card": "Preview of plugin display",
+ "desc": "Preview of plugin description",
+ "title": "Plugin Name Preview"
+ },
+ "save": "Install Plugin",
+ "saveSuccess": "Plugin settings saved successfully",
+ "tabs": {
+ "manifest": "Function Description Manifest",
+ "meta": "Plugin Metadata"
+ },
+ "title": {
+ "create": "Add Custom Plugin",
+ "edit": "Edit Custom Plugin"
+ },
+ "type": {
+ "lobe": "LobeChat Plugin",
+ "openai": "OpenAI Plugin"
+ },
+ "update": "Update",
+ "updateSuccess": "Plugin settings updated successfully"
+ },
+ "error": {
+ "fetchError": "Failed to fetch the manifest link. Please ensure the link is valid and allows cross-origin access.",
+ "installError": "Plugin {{name}} installation failed",
+ "manifestInvalid": "The manifest does not conform to the specification. Validation result: \n\n {{error}}",
+ "noManifest": "Manifest file does not exist",
+ "openAPIInvalid": "OpenAPI parsing failed. Error: \n\n {{error}}",
+ "reinstallError": "Failed to refresh plugin {{name}}",
+ "urlError": "The link did not return content in JSON format. Please ensure it is a valid link."
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Deleted",
+ "local.config": "Configuration",
+ "local.title": "Local"
+ }
+ },
+ "loading": {
+ "content": "Calling plugin...",
+ "plugin": "Plugin is running..."
+ },
+ "pluginList": "Plugin List",
+ "setting": "Plugin Settings",
+ "settings": {
+ "indexUrl": {
+ "title": "Marketplace Index",
+ "tooltip": "Editing is not supported at the moment"
+ },
+ "modalDesc": "After configuring the address of the plugin marketplace, you can use a custom plugin marketplace",
+ "title": "Configure Plugin Marketplace"
+ },
+ "showInPortal": "Please check the details in the Portal view",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Uninstalling this plugin will clear its configuration. Please confirm your operation.",
+ "detail": "Details",
+ "install": "Install",
+ "manifest": "Edit Installation File",
+ "settings": "Settings",
+ "uninstall": "Uninstall"
+ },
+ "communityPlugin": "Third-party",
+ "customPlugin": "Custom Plugin",
+ "empty": "No installed plugins yet",
+ "installAllPlugins": "Install All",
+ "networkError": "Failed to fetch plugin store. Please check your network connection and try again",
+ "placeholder": "Search for plugin name, description, or keyword...",
+ "releasedAt": "Released at {{createdAt}}",
+ "tabs": {
+ "all": "All",
+ "installed": "Installed"
+ },
+ "title": "Plugin Store"
+ },
+ "unknownPlugin": "Unknown plugin"
+}
diff --git a/DigitalHumanWeb/locales/en-US/portal.json b/DigitalHumanWeb/locales/en-US/portal.json
new file mode 100644
index 0000000..a02b51c
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Artifacts",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Chunk",
+ "file": "File"
+ }
+ },
+ "Plugins": "Plugins",
+ "actions": {
+ "genAiMessage": "Generate Assistant Message",
+ "summary": "Summary",
+ "summaryTooltip": "Summarize current content"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Code",
+ "preview": "Preview"
+ },
+ "svg": {
+ "copyAsImage": "Copy as Image",
+ "copyFail": "Copy failed, reason: {{error}}",
+ "copySuccess": "Image copied successfully",
+ "download": {
+ "png": "Download as PNG",
+ "svg": "Download as SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "The current Artifacts list is empty. Use plugins in the session as needed, then return here to view the results.",
+ "emptyKnowledgeList": "The current knowledge list is empty. Enable a knowledge base as needed during the conversation, then return here to view it.",
+ "files": "Files",
+ "messageDetail": "Message Details",
+ "title": "Portal View"
+}
diff --git a/DigitalHumanWeb/locales/en-US/providers.json b/DigitalHumanWeb/locales/en-US/providers.json
new file mode 100644
index 0000000..e1976ae
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI is an AI model and service platform launched by 360 Company, offering various advanced natural language processing models, including 360GPT2 Pro, 360GPT Pro, 360GPT Turbo, and 360GPT Turbo Responsibility 8K. These models combine large-scale parameters and multimodal capabilities, widely applied in text generation, semantic understanding, dialogue systems, and code generation. With flexible pricing strategies, 360 AI meets diverse user needs, supports developer integration, and promotes the innovation and development of intelligent applications."
+ },
+ "anthropic": {
+ "description": "Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio."
+ },
+ "azure": {
+ "description": "Azure offers a variety of advanced AI models, including GPT-3.5 and the latest GPT-4 series, supporting various data types and complex tasks, dedicated to secure, reliable, and sustainable AI solutions."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligence is a company focused on the research and development of large AI models, with its models excelling in domestic knowledge encyclopedias, long text processing, and generative creation tasks in Chinese, surpassing mainstream foreign models. Baichuan Intelligence also possesses industry-leading multimodal capabilities, performing excellently in multiple authoritative evaluations. Its models include Baichuan 4, Baichuan 3 Turbo, and Baichuan 3 Turbo 128k, each optimized for different application scenarios, providing cost-effective solutions."
+ },
+ "bedrock": {
+ "description": "Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs."
+ },
+ "deepseek": {
+ "description": "DeepSeek is a company focused on AI technology research and application, with its latest model DeepSeek-V2.5 integrating general dialogue and code processing capabilities, achieving significant improvements in human preference alignment, writing tasks, and instruction following."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI is a leading provider of advanced language model services, focusing on function calling and multimodal processing. Its latest model, Firefunction V2, is based on Llama-3, optimized for function calling, conversation, and instruction following. The visual language model FireLLaVA-13B supports mixed image and text input. Other notable models include the Llama series and Mixtral series, providing efficient multilingual instruction following and generation support."
+ },
+ "github": {
+ "description": "With GitHub Models, developers can become AI engineers and leverage the industry's leading AI models."
+ },
+ "google": {
+ "description": "Google's Gemini series represents its most advanced, versatile AI models, developed by Google DeepMind, designed for multimodal capabilities, supporting seamless understanding and processing of text, code, images, audio, and video. Suitable for various environments from data centers to mobile devices, it significantly enhances the efficiency and applicability of AI models."
+ },
+ "groq": {
+ "description": "Groq's LPU inference engine has excelled in the latest independent large language model (LLM) benchmarks, redefining the standards for AI solutions with its remarkable speed and efficiency. Groq represents instant inference speed, demonstrating strong performance in cloud-based deployments."
+ },
+ "minimax": {
+ "description": "MiniMax is a general artificial intelligence technology company established in 2021, dedicated to co-creating intelligence with users. MiniMax has independently developed general large models of different modalities, including trillion-parameter MoE text models, voice models, and image models, and has launched applications such as Conch AI."
+ },
+ "mistral": {
+ "description": "Mistral provides advanced general, specialized, and research models widely used in complex reasoning, multilingual tasks, and code generation. Through function calling interfaces, users can integrate custom functionality for specific applications."
+ },
+ "moonshot": {
+ "description": "Moonshot is an open-source platform launched by Beijing Dark Side Technology Co., Ltd., providing various natural language processing models with a wide range of applications, including but not limited to content creation, academic research, intelligent recommendations, and medical diagnosis, supporting long text processing and complex generation tasks."
+ },
+ "novita": {
+ "description": "Novita AI is a platform providing a variety of large language models and AI image generation API services, flexible, reliable, and cost-effective. It supports the latest open-source models like Llama3 and Mistral, offering a comprehensive, user-friendly, and auto-scaling API solution for generative AI application development, suitable for the rapid growth of AI startups."
+ },
+ "ollama": {
+ "description": "Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs."
+ },
+ "openai": {
+ "description": "OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Their products demonstrate significant performance and cost-effectiveness, widely used in research, business, and innovative applications."
+ },
+ "openrouter": {
+ "description": "OpenRouter is a service platform providing access to various cutting-edge large model interfaces, supporting OpenAI, Anthropic, LLaMA, and more, suitable for diverse development and application needs. Users can flexibly choose the optimal model and pricing based on their requirements, enhancing the AI experience."
+ },
+ "perplexity": {
+ "description": "Perplexity is a leading provider of conversational generation models, offering various advanced Llama 3.1 models that support both online and offline applications, particularly suited for complex natural language processing tasks."
+ },
+ "qwen": {
+ "description": "Tongyi Qianwen is a large-scale language model independently developed by Alibaba Cloud, featuring strong natural language understanding and generation capabilities. It can answer various questions, create written content, express opinions, and write code, playing a role in multiple fields."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow is dedicated to accelerating AGI for the benefit of humanity, enhancing large-scale AI efficiency through an easy-to-use and cost-effective GenAI stack."
+ },
+ "spark": {
+ "description": "iFlytek's Spark model provides powerful AI capabilities across multiple domains and languages, utilizing advanced natural language processing technology to build innovative applications suitable for smart hardware, smart healthcare, smart finance, and other vertical scenarios."
+ },
+ "stepfun": {
+ "description": "StepFun's large model possesses industry-leading multimodal and complex reasoning capabilities, supporting ultra-long text understanding and powerful autonomous scheduling search engine functions."
+ },
+ "taichu": {
+ "description": "The Institute of Automation, Chinese Academy of Sciences, and Wuhan Artificial Intelligence Research Institute have launched a new generation of multimodal large models, supporting comprehensive question-answering tasks such as multi-turn Q&A, text creation, image generation, 3D understanding, and signal analysis, with stronger cognitive, understanding, and creative abilities, providing a new interactive experience."
+ },
+ "togetherai": {
+ "description": "Together AI is dedicated to achieving leading performance through innovative AI models, offering extensive customization capabilities, including rapid scaling support and intuitive deployment processes to meet various enterprise needs."
+ },
+ "upstage": {
+ "description": "Upstage focuses on developing AI models for various business needs, including Solar LLM and Document AI, aiming to achieve artificial general intelligence (AGI) for work. It enables the creation of simple conversational agents through its Chat API and supports function calling, translation, embedding, and domain-specific applications."
+ },
+ "zeroone": {
+ "description": "01.AI focuses on AI 2.0 era technologies, vigorously promoting the innovation and application of 'human + artificial intelligence', using powerful models and advanced AI technologies to enhance human productivity and achieve technological empowerment."
+ },
+ "zhipu": {
+ "description": "Zhipu AI offers an open platform for multimodal and language models, supporting a wide range of AI application scenarios, including text processing, image understanding, and programming assistance."
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/ragEval.json b/DigitalHumanWeb/locales/en-US/ragEval.json
new file mode 100644
index 0000000..667522f
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Create",
+ "description": {
+ "placeholder": "Dataset description (optional)"
+ },
+ "name": {
+ "placeholder": "Dataset name",
+ "required": "Please enter the dataset name"
+ },
+ "title": "Add Dataset"
+ },
+ "dataset": {
+ "addNewButton": "Create Dataset",
+ "emptyGuide": "There are currently no datasets. Please create a dataset.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Import Data"
+ },
+ "columns": {
+ "actions": "Actions",
+ "ideal": {
+ "title": "Expected Answer"
+ },
+ "question": {
+ "title": "Question"
+ },
+ "referenceFiles": {
+ "title": "Reference Files"
+ }
+ },
+ "notSelected": "Please select a dataset on the left",
+ "title": "Dataset Details"
+ },
+ "title": "Dataset"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Create",
+ "datasetId": {
+ "placeholder": "Please select your evaluation dataset",
+ "required": "Please select an evaluation dataset"
+ },
+ "description": {
+ "placeholder": "Evaluation task description (optional)"
+ },
+ "name": {
+ "placeholder": "Evaluation task name",
+ "required": "Please enter the evaluation task name"
+ },
+ "title": "Add Evaluation Task"
+ },
+ "addNewButton": "Create Evaluation",
+ "emptyGuide": "There are currently no evaluation tasks. Start creating an evaluation.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Check Status",
+ "confirmDelete": "Are you sure you want to delete this evaluation?",
+ "confirmRun": "Are you sure you want to start the run? The evaluation task will be executed asynchronously in the background; closing the page will not affect its execution.",
+ "downloadRecords": "Download Evaluation",
+ "retry": "Retry",
+ "run": "Run",
+ "title": "Actions"
+ },
+ "datasetId": {
+ "title": "Dataset"
+ },
+ "name": {
+ "title": "Evaluation Task Name"
+ },
+ "records": {
+ "title": "Number of Evaluation Records"
+ },
+ "referenceFiles": {
+ "title": "Reference Files"
+ },
+ "status": {
+ "error": "Execution Error",
+ "pending": "Pending",
+ "processing": "In Progress",
+ "success": "Execution Successful",
+ "title": "Status"
+ }
+ },
+ "title": "Evaluation Task List"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/setting.json b/DigitalHumanWeb/locales/en-US/setting.json
new file mode 100644
index 0000000..2395ed1
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "About"
+ },
+ "agentTab": {
+ "chat": "Chat Preferences",
+ "meta": "Assistant Info",
+ "modal": "Model Settings",
+ "plugin": "Plugin Settings",
+ "prompt": "Role Configuration",
+ "tts": "Voice Service"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "By opting to send telemetry data, you can help us improve the overall user experience of {{appName}}.",
+ "title": "Send Anonymous Usage Data"
+ },
+ "title": "Analytics"
+ },
+ "danger": {
+ "clear": {
+ "action": "Clear Now",
+ "confirm": "Confirm to clear all chat data?",
+ "desc": "This will clear all session data, including assistant, files, messages, plugins, etc.",
+ "success": "All session messages have been cleared",
+ "title": "Clear All Session Messages"
+ },
+ "reset": {
+ "action": "Reset Now",
+ "confirm": "Confirm to reset all settings?",
+ "currentVersion": "Current Version",
+ "desc": "Reset all settings to default values",
+ "success": "All settings have been reset",
+ "title": "Reset All Settings"
+ }
+ },
+ "header": {
+ "desc": "Preferences and model settings.",
+ "global": "Global Settings",
+ "session": "Session Settings",
+ "sessionDesc": "Role settings and session preferences.",
+ "sessionWithName": "Session Settings · {{name}}",
+ "title": "Settings"
+ },
+ "llm": {
+ "aesGcm": "Your keys and proxy address will be encrypted using the <1>AES-GCM</1> encryption algorithm",
+ "apiKey": {
+ "desc": "Please enter your {{name}} API Key",
+ "placeholder": "{{name}} API Key",
+ "title": "API Key"
+ },
+ "checker": {
+ "button": "Check",
+ "desc": "Test if the API Key and proxy address are filled in correctly",
+ "pass": "Check Passed",
+ "title": "Connectivity Check"
+ },
+ "customModelCards": {
+ "addNew": "Create and add {{id}} model",
+ "config": "Model Configuration",
+ "confirmDelete": "You are about to delete this custom model. Once deleted, it cannot be recovered. Please proceed with caution.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "The field actually requested in Azure OpenAI",
+ "placeholder": "Enter the model deployment name in Azure",
+ "title": "Model Deployment Name"
+ },
+ "displayName": {
+ "placeholder": "Enter the display name of the model, such as ChatGPT, GPT-4, etc.",
+ "title": "Model Display Name"
+ },
+ "files": {
+ "extra": "The current file upload implementation is only a stopgap workaround, intended for personal experimentation; please wait for complete file upload capabilities in future updates.",
+ "title": "Support File Upload"
+ },
+ "functionCall": {
+ "extra": "This configuration will only enable function calling capabilities within the application. Whether function calling is supported depends entirely on the model itself; please test the model's function calling capabilities on your own.",
+ "title": "Support Function Call"
+ },
+ "id": {
+ "extra": "Will be displayed as the model label",
+ "placeholder": "Enter the model ID, such as gpt-4-turbo-preview or claude-2.1",
+ "title": "Model ID"
+ },
+ "modalTitle": "Custom Model Configuration",
+ "tokens": {
+ "title": "Maximum Token Count",
+ "unlimited": "unlimited"
+ },
+ "vision": {
+ "extra": "This configuration will only enable image upload capabilities within the application. Whether recognition is supported depends entirely on the model itself; please test the model's visual recognition capabilities on your own.",
+ "title": "Support Visual Recognition"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "Client-side fetching mode initiates session requests directly from the browser, improving response speed.",
+ "title": "Use Client-Side Fetching Mode"
+ },
+ "fetcher": {
+ "fetch": "Get Model List",
+ "fetching": "Fetching Model List...",
+ "latestTime": "Last Updated: {{time}}",
+ "noLatestTime": "No list available yet"
+ },
+ "helpDoc": "Configuration Guide",
+ "modelList": {
+ "desc": "Select the models to display in the session. The selected models will be displayed in the model list.",
+ "placeholder": "Please select a model from the list",
+ "title": "Model List",
+ "total": "{{count}} models available in total"
+ },
+ "proxyUrl": {
+ "desc": "Aside from the default address, the URL must include http(s)://",
+ "title": "API Proxy Address"
+ },
+ "waitingForMore": "More models are <1>planned to be added</1>, stay tuned"
+ },
+ "plugin": {
+ "addTooltip": "Custom Plugin",
+ "clearDeprecated": "Remove Deprecated Plugins",
+ "empty": "No installed plugins yet, feel free to explore the <1>Plugin Store</1>",
+ "installStatus": {
+ "deprecated": "Uninstalled"
+ },
+ "settings": {
+ "hint": "Please fill in the following configurations based on the description",
+ "title": "{{id}} Plugin Configuration",
+ "tooltip": "Plugin Configuration"
+ },
+ "store": "Plugin Store"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "backgroundColor": {
+ "title": "Background Color"
+ },
+ "description": {
+ "placeholder": "Enter assistant description",
+ "title": "Assistant Description"
+ },
+ "name": {
+ "placeholder": "Enter assistant name",
+ "title": "Name"
+ },
+ "prompt": {
+ "placeholder": "Enter role prompt word",
+ "title": "Role Setting"
+ },
+ "tag": {
+ "placeholder": "Enter tag",
+ "title": "Tag"
+ },
+ "title": "Assistant Information"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Automatically create a topic when the current message count exceeds this value",
+ "title": "Message Threshold"
+ },
+ "chatStyleType": {
+ "title": "Chat Window Style",
+ "type": {
+ "chat": "Conversation Mode",
+ "docs": "Document Mode"
+ }
+ },
+ "compressThreshold": {
+ "desc": "When the uncompressed history messages exceed this value, compression will be applied",
+ "title": "History Message Length Compression Threshold"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Whether to automatically create a topic during the conversation, only effective in temporary topics",
+ "title": "Auto Create Topic"
+ },
+ "enableCompressThreshold": {
+ "title": "Enable History Message Length Compression Threshold"
+ },
+ "enableHistoryCount": {
+ "alias": "Unlimited",
+ "limited": "Include only {{number}} conversation messages",
+ "setlimited": "Set limited history messages",
+ "title": "Limit History Message Count",
+ "unlimited": "Unlimited history message count"
+ },
+ "historyCount": {
+ "desc": "Number of historical messages carried with each request",
+ "title": "Attached History Message Count"
+ },
+ "inputTemplate": {
+ "desc": "The user's latest message will be filled into this template",
+ "placeholder": "Preprocessing template {{text}} will be replaced with real-time input information",
+ "title": "User Input Preprocessing"
+ },
+ "title": "Chat Settings"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Enable Max Tokens Limit"
+ },
+ "frequencyPenalty": {
+ "desc": "The higher the value, the more likely it is to reduce repeated words",
+ "title": "Frequency Penalty"
+ },
+ "maxTokens": {
+ "desc": "The maximum number of tokens used for each interaction",
+ "title": "Max Tokens Limit"
+ },
+ "model": {
+ "desc": "{{provider}} model",
+ "title": "Model"
+ },
+ "presencePenalty": {
+ "desc": "The higher the value, the more likely it is to expand to new topics",
+ "title": "Topic Freshness"
+ },
+ "temperature": {
+ "desc": "The higher the value, the more random the response",
+ "title": "Randomness",
+ "titleWithValue": "Randomness {{value}}"
+ },
+ "title": "Model Settings",
+ "topP": {
+ "desc": "Similar to randomness, but do not change together with randomness",
+ "title": "Top P Sampling"
+ }
+ },
+ "settingPlugin": {
+ "title": "Plugin List"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "Encryption access is enabled by the administrator",
+ "placeholder": "Enter access password",
+ "title": "Access Password"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Logged in",
+ "title": "Account Information"
+ },
+ "signin": {
+ "action": "Sign In",
+ "desc": "Sign in using SSO to unlock the app",
+ "title": "Sign In to Your Account"
+ },
+ "signout": {
+ "action": "Sign Out",
+ "confirm": "Confirm sign out?",
+ "success": "Sign out successful"
+ }
+ },
+ "title": "System Settings"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "OpenAI Speech-to-Text Model",
+ "title": "OpenAI",
+ "ttsModel": "OpenAI Text-to-Speech Model"
+ },
+ "showAllLocaleVoice": {
+ "desc": "If closed, only voices in the current language will be displayed",
+ "title": "Show All Locale Voices"
+ },
+ "stt": "Speech Recognition Settings",
+ "sttAutoStop": {
+ "desc": "When closed, speech recognition will not end automatically and requires manual click to stop",
+ "title": "Auto Stop Speech Recognition"
+ },
+ "sttLocale": {
+ "desc": "The language of the speech input, this option can improve the accuracy of speech recognition",
+ "title": "Speech Recognition Language"
+ },
+ "sttService": {
+ "desc": "Where 'browser' is the native speech recognition service of the browser",
+ "title": "Speech Recognition Service"
+ },
+ "title": "Speech Service",
+ "tts": "Text-to-Speech Settings",
+ "ttsService": {
+ "desc": "If using OpenAI text-to-speech service, make sure the OpenAI model service is enabled",
+ "title": "Text-to-Speech Service"
+ },
+ "voice": {
+ "desc": "Select a voice for the current assistant, different TTS services support different voices",
+ "preview": "Voice Preview",
+ "title": "Text-to-Speech Voice"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "fontSize": {
+ "desc": "Font size for chat content",
+ "marks": {
+ "normal": "Normal"
+ },
+ "title": "Font Size"
+ },
+ "lang": {
+ "autoMode": "Follow System",
+ "title": "Language"
+ },
+ "neutralColor": {
+ "desc": "Custom neutral color for different color tendencies",
+ "title": "Neutral Color"
+ },
+ "primaryColor": {
+ "desc": "Custom primary theme color",
+ "title": "Primary Color"
+ },
+ "themeMode": {
+ "auto": "Auto",
+ "dark": "Dark",
+ "light": "Light",
+ "title": "Theme"
+ },
+ "title": "Theme Settings"
+ },
+ "submitAgentModal": {
+ "button": "Submit Assistant",
+ "identifier": "Assistant Identifier",
+ "metaMiss": "Please complete the assistant information before submitting. It should include name, description, and tags",
+ "placeholder": "Enter a unique identifier for the assistant, e.g. web-development",
+ "tooltips": "Share to the assistant marketplace"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Add a name for easy identification",
+ "placeholder": "Enter device name",
+ "title": "Device Name"
+ },
+ "title": "Device Information",
+ "unknownBrowser": "Unknown Browser",
+ "unknownOS": "Unknown OS"
+ },
+ "warning": {
+ "tip": "After a long period of community testing, WebRTC synchronization may not reliably meet general data synchronization needs. Please <1>deploy a signaling server1> before use."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC will use this name to create a sync channel. Ensure the channel name is unique.",
+ "placeholder": "Enter sync channel name",
+ "shuffle": "Generate Randomly",
+ "title": "Sync Channel Name"
+ },
+ "channelPassword": {
+ "desc": "Add a password to ensure channel privacy. Only devices with the correct password can join the channel.",
+ "placeholder": "Enter sync channel password",
+ "title": "Sync Channel Password"
+ },
+ "desc": "Real-time, peer-to-peer data communication requires all devices to be online for synchronization.",
+ "enabled": {
+ "invalid": "Please fill in the signaling server and synchronization channel name before enabling.",
+ "title": "Enable Sync"
+ },
+ "signaling": {
+ "desc": "WebRTC will use this address for synchronization",
+ "placeholder": "Enter signaling server address",
+ "title": "Signaling Server"
+ },
+ "title": "WebRTC Sync"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Assistant Metadata Generation Model",
+ "modelDesc": "Model designated for generating assistant name, description, avatar, and tags",
+ "title": "Automatically Generate Assistant Information"
+ },
+ "queryRewrite": {
+ "label": "Question Rewriting Model",
+ "modelDesc": "Specify the model used to optimize user inquiries",
+ "title": "Knowledge Base"
+ },
+ "title": "System Assistants",
+ "topic": {
+ "label": "Topic Naming Model",
+ "modelDesc": "Model designated for automatic topic renaming",
+ "title": "Automatic Topic Naming"
+ },
+ "translation": {
+ "label": "Translation Assistant",
+ "modelDesc": "Specific model for translate message",
+ "title": "Translation Settings"
+ }
+ },
+ "tab": {
+ "about": "About",
+ "agent": "Default Assistant",
+ "common": "Common Settings",
+ "experiment": "Experiment",
+ "llm": "Language Model",
+ "sync": "Cloud Sync",
+ "system-agent": "System Assistant",
+ "tts": "Text-to-Speech"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Built-ins"
+ },
+ "disabled": "The current model does not support function calls and cannot use the plugin",
+ "plugins": {
+ "enabled": "Enabled: {{num}}",
+ "groupName": "Plugins",
+ "noEnabled": "No plugins enabled",
+ "store": "Plugin Store"
+ },
+ "title": "Extension Tools"
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/tool.json b/DigitalHumanWeb/locales/en-US/tool.json
new file mode 100644
index 0000000..75d75ce
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Auto Generate",
+ "downloading": "The image links generated by DALL·E3 are only valid for 1 hour, caching the images locally...",
+ "generate": "Generate",
+ "generating": "Generating...",
+ "images": "Images:",
+ "prompt": "Prompt"
+ }
+}
diff --git a/DigitalHumanWeb/locales/en-US/welcome.json b/DigitalHumanWeb/locales/en-US/welcome.json
new file mode 100644
index 0000000..1b4bc23
--- /dev/null
+++ b/DigitalHumanWeb/locales/en-US/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Import Configuration",
+ "market": "Visit Market",
+ "start": "Start Now"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Replace Batch",
+ "title": "New Assistant Recommendations:"
+ },
+ "defaultMessage": "I am your personal intelligent assistant {{appName}}. How can I assist you today?\nIf you need a more professional or customized assistant, you can click `+` to create a custom assistant.",
+ "defaultMessageWithoutCreate": "I am your personal intelligent assistant {{appName}}. How can I assist you today?",
+ "qa": {
+ "q01": "What is LobeHub?",
+ "q02": "What is {{appName}}?",
+ "q03": "Does {{appName}} have community support?",
+ "q04": "What features does {{appName}} support?",
+ "q05": "How do I deploy and use {{appName}}?",
+ "q06": "What is the pricing for {{appName}}?",
+ "q07": "Is {{appName}} free?",
+ "q08": "Is there a cloud service version available?",
+ "q09": "Does it support local language models?",
+ "q10": "Does it support image recognition and generation?",
+ "q11": "Does it support speech synthesis and speech recognition?",
+ "q12": "Does it support a plugin system?",
+ "q13": "Is there a marketplace to acquire GPTs?",
+ "q14": "Does it support multiple AI service providers?",
+ "q15": "What should I do if I encounter issues while using it?"
+ },
+ "questions": {
+ "moreBtn": "Learn More",
+ "title": "Frequently Asked Questions:"
+ },
+ "welcome": {
+ "afternoon": "Good Afternoon",
+ "morning": "Good Morning",
+ "night": "Good Evening",
+ "noon": "Good Noon"
+ }
+ },
+ "header": "Welcome",
+ "pickAgent": "Or choose from the following assistant templates",
+ "skip": "Skip Creation",
+ "slogan": {
+ "desc1": "Pioneering the new age of thinking and creating. Built for you, the Super Individual.",
+ "desc2": "Create your first assistant and let's get started~",
+ "title": "Unlock the superpower of your brain"
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/auth.json b/DigitalHumanWeb/locales/es-ES/auth.json
new file mode 100644
index 0000000..f62b326
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Iniciar sesión",
+ "loginOrSignup": "Iniciar sesión / Registrarse",
+ "profile": "Perfil",
+ "security": "Seguridad",
+ "signout": "Cerrar sesión",
+ "signup": "Registrarse"
+}
diff --git a/DigitalHumanWeb/locales/es-ES/chat.json b/DigitalHumanWeb/locales/es-ES/chat.json
new file mode 100644
index 0000000..a1cc602
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Cambiar modelo"
+ },
+ "agentDefaultMessage": "Hola, soy **{{name}}**. Puedes comenzar a hablar conmigo de inmediato o ir a [Configuración del asistente]({{url}}) para completar mi información.",
+ "agentDefaultMessageWithSystemRole": "Hola, soy **{{name}}**, {{systemRole}}, ¡comencemos a chatear!",
+ "agentDefaultMessageWithoutEdit": "¡Hola, soy **{{name}}**! Comencemos nuestra conversación.",
+ "agents": "Asistente",
+ "artifact": {
+ "generating": "Generando",
+ "thinking": "Pensando",
+ "thought": "Proceso de pensamiento",
+ "unknownTitle": "Obra sin título"
+ },
+ "backToBottom": "Volver al fondo",
+ "chatList": {
+ "longMessageDetail": "Ver detalles"
+ },
+ "clearCurrentMessages": "Borrar mensajes actuales",
+ "confirmClearCurrentMessages": "Estás a punto de borrar los mensajes de esta sesión. Una vez borrados, no se podrán recuperar. Por favor, confirma tu acción.",
+ "confirmRemoveSessionItemAlert": "Estás a punto de eliminar este asistente. Una vez eliminado, no se podrá recuperar. Por favor, confirma tu acción.",
+ "confirmRemoveSessionSuccess": "Asistente eliminado con éxito",
+ "defaultAgent": "Asistente predeterminado",
+ "defaultList": "Lista predeterminada",
+ "defaultSession": "Asistente predeterminado",
+ "duplicateSession": {
+ "loading": "Cargando duplicado...",
+ "success": "Duplicado exitoso",
+ "title": "{{title}} Copia"
+ },
+ "duplicateTitle": "{{title}} Copia",
+ "emptyAgent": "No hay asistente disponible",
+ "historyRange": "Rango de historial",
+ "inbox": {
+ "desc": "Despierta la mente con el poder del cerebro colectivo. Tu asistente inteligente está aquí para conversar contigo sobre cualquier cosa.",
+ "title": "Charla casual"
+ },
+ "input": {
+ "addAi": "Agregar un mensaje de IA",
+ "addUser": "Agregar un mensaje de usuario",
+ "more": "más",
+ "send": "Enviar",
+ "sendWithCmdEnter": "Enviar con {{meta}} + Enter",
+ "sendWithEnter": "Enviar con Enter",
+ "stop": "Detener",
+ "warp": "Salto de línea"
+ },
+ "knowledgeBase": {
+ "all": "Todo el contenido",
+ "allFiles": "Todos los archivos",
+ "allKnowledgeBases": "Todas las bases de conocimiento",
+ "disabled": "El modo de implementación actual no admite conversaciones con la base de conocimientos. Si desea utilizar esta función, cambie a la implementación de base de datos en el servidor o utilice el servicio {{cloud}}.",
+ "library": {
+ "action": {
+ "add": "Agregar",
+ "detail": "Detalles",
+ "remove": "Eliminar"
+ },
+ "title": "Archivos/Bases de conocimiento"
+ },
+ "relativeFilesOrKnowledgeBases": "Archivos/Bases de conocimiento relacionados",
+ "title": "Base de conocimiento",
+ "uploadGuide": "Los archivos subidos se pueden ver en la 'Base de conocimiento'.",
+ "viewMore": "Ver más"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Eliminar y Regenerar",
+ "regenerate": "Regenerar"
+ },
+ "newAgent": "Nuevo asistente",
+ "pin": "Fijar",
+ "pinOff": "Desfijar",
+ "rag": {
+ "referenceChunks": "Fragmentos de referencia",
+ "userQuery": {
+ "actions": {
+ "delete": "Eliminar reescritura de consulta",
+ "regenerate": "Regenerar consulta"
+ }
+ }
+ },
+ "regenerate": "Regenerar",
+ "roleAndArchive": "Rol y archivo",
+ "searchAgentPlaceholder": "Asistente de búsqueda...",
+ "sendPlaceholder": "Escribe tu mensaje...",
+ "sessionGroup": {
+ "config": "Gestión de grupos",
+ "confirmRemoveGroupAlert": "Estás a punto de eliminar este grupo. Una vez eliminado, los asistentes de este grupo se moverán a la lista predeterminada. Por favor, confirma tu acción.",
+ "createAgentSuccess": "Asistente creado con éxito",
+ "createGroup": "Crear nuevo grupo",
+ "createSuccess": "Grupo creado con éxito",
+ "creatingAgent": "Creando asistente...",
+ "inputPlaceholder": "Introduce el nombre del grupo...",
+ "moveGroup": "Mover al grupo",
+ "newGroup": "Nuevo grupo",
+ "rename": "Renombrar grupo",
+ "renameSuccess": "Grupo renombrado con éxito",
+ "sortSuccess": "Reordenación exitosa",
+ "sorting": "Actualizando orden de grupos...",
+ "tooLong": "El nombre del grupo debe tener entre 1 y 20 caracteres"
+ },
+ "shareModal": {
+ "download": "Descargar captura de pantalla",
+ "imageType": "Tipo de imagen",
+ "screenshot": "Captura de pantalla",
+ "settings": "Configuración de exportación",
+ "shareToShareGPT": "Generar enlace de compartición ShareGPT",
+ "withBackground": "Incluir imagen de fondo",
+ "withFooter": "Incluir pie de página",
+ "withPluginInfo": "Incluir información del plugin",
+ "withSystemRole": "Incluir configuración de rol del asistente"
+ },
+ "stt": {
+ "action": "Entrada de voz",
+ "loading": "Reconociendo...",
+ "prettifying": "Embelleciendo..."
+ },
+ "temp": "Temporal",
+ "tokenDetails": {
+ "chats": "Mensajes de chat",
+ "rest": "Restante",
+ "systemRole": "Rol del sistema",
+ "title": "Detalles del token",
+ "tools": "Herramientas",
+ "total": "Total",
+ "used": "Utilizado"
+ },
+ "tokenTag": {
+ "overload": "Excedido",
+ "remained": "Restante",
+ "used": "Usado"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Renombrar automáticamente",
+ "duplicate": "Crear copia",
+ "export": "Exportar tema"
+ },
+ "checkOpenNewTopic": "¿Abrir un nuevo tema?",
+ "checkSaveCurrentMessages": "¿Desea guardar la conversación actual como tema?",
+ "confirmRemoveAll": "Estás a punto de eliminar todos los temas. Una vez eliminados, no se podrán recuperar. Por favor, procede con precaución.",
+ "confirmRemoveTopic": "Estás a punto de eliminar este tema. Una vez eliminado, no se podrá recuperar. Por favor, procede con precaución.",
+ "confirmRemoveUnstarred": "Estás a punto de eliminar los temas no marcados como favoritos. Una vez eliminados, no se podrán recuperar. Por favor, procede con precaución.",
+ "defaultTitle": "Tema predeterminado",
+ "duplicateLoading": "Duplicando tema...",
+ "duplicateSuccess": "Tema duplicado exitosamente",
+ "guide": {
+ "desc": "Haz clic en el botón izquierdo para guardar la conversación actual como un tema histórico y comenzar una nueva sesión",
+ "title": "Lista de temas"
+ },
+ "openNewTopic": "Abrir nuevo tema",
+ "removeAll": "Eliminar todos los temas",
+ "removeUnstarred": "Eliminar temas no marcados como favoritos",
+ "saveCurrentMessages": "Guardar la conversación actual como tema",
+ "searchPlaceholder": "Buscar temas...",
+ "title": "Lista de temas"
+ },
+ "translate": {
+ "action": "Traducir",
+ "clear": "Borrar traducción"
+ },
+ "tts": {
+ "action": "Lectura de voz",
+ "clear": "Borrar voz"
+ },
+ "updateAgent": "Actualizar información del asistente",
+ "upload": {
+ "action": {
+ "fileUpload": "Subir archivo",
+ "folderUpload": "Subir carpeta",
+ "imageDisabled": "El modelo actual no soporta reconocimiento visual, por favor cambie de modelo para usar esta función",
+ "imageUpload": "Subir imagen",
+ "tooltip": "Subir"
+ },
+ "clientMode": {
+ "actionFiletip": "Subir archivo",
+ "actionTooltip": "Subir",
+ "disabled": "El modelo actual no soporta reconocimiento visual ni análisis de archivos, por favor cambie de modelo para usar esta función"
+ },
+ "preview": {
+ "prepareTasks": "Preparando fragmentos...",
+ "status": {
+ "pending": "Preparando para subir...",
+ "processing": "Procesando archivo..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/clerk.json b/DigitalHumanWeb/locales/es-ES/clerk.json
new file mode 100644
index 0000000..109560b
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Atrás",
+ "badge__default": "Predeterminado",
+ "badge__otherImpersonatorDevice": "Otro dispositivo de suplantación",
+ "badge__primary": "Principal",
+ "badge__requiresAction": "Requiere acción",
+ "badge__thisDevice": "Este dispositivo",
+ "badge__unverified": "No verificado",
+ "badge__userDevice": "Dispositivo del usuario",
+ "badge__you": "Tú",
+ "createOrganization": {
+ "formButtonSubmit": "Crear organización",
+ "invitePage": {
+ "formButtonReset": "Omitir"
+ },
+ "title": "Crear organización"
+ },
+ "dates": {
+ "lastDay": "Ayer a las {{ date | timeString('es-ES') }}",
+ "next6Days": "{{ date | weekday('es-ES','long') }} a las {{ date | timeString('es-ES') }}",
+ "nextDay": "Mañana a las {{ date | timeString('es-ES') }}",
+ "numeric": "{{ date | numeric('es-ES') }}",
+ "previous6Days": "Último {{ date | weekday('es-ES','long') }} a las {{ date | timeString('es-ES') }}",
+ "sameDay": "Hoy a las {{ date | timeString('es-ES') }}"
+ },
+ "dividerText": "o",
+ "footerActionLink__useAnotherMethod": "Usar otro método",
+ "footerPageLink__help": "Ayuda",
+ "footerPageLink__privacy": "Privacidad",
+ "footerPageLink__terms": "Términos",
+ "formButtonPrimary": "Continuar",
+ "formButtonPrimary__verify": "Verificar",
+ "formFieldAction__forgotPassword": "¿Olvidaste tu contraseña?",
+ "formFieldError__matchingPasswords": "Las contraseñas coinciden.",
+ "formFieldError__notMatchingPasswords": "Las contraseñas no coinciden.",
+ "formFieldError__verificationLinkExpired": "El enlace de verificación ha caducado. Por favor, solicita uno nuevo.",
+ "formFieldHintText__optional": "Opcional",
+ "formFieldHintText__slug": "Un slug es un identificador legible por humanos que debe ser único. A menudo se utiliza en las URL.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Eliminar cuenta",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "ejemplo@email.com, ejemplo2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "mi-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Habilitar invitaciones automáticas para este dominio",
+ "formFieldLabel__backupCode": "Código de respaldo",
+ "formFieldLabel__confirmDeletion": "Confirmación",
+ "formFieldLabel__confirmPassword": "Confirmar contraseña",
+ "formFieldLabel__currentPassword": "Contraseña actual",
+ "formFieldLabel__emailAddress": "Dirección de correo electrónico",
+ "formFieldLabel__emailAddress_username": "Dirección de correo electrónico o nombre de usuario",
+ "formFieldLabel__emailAddresses": "Direcciones de correo electrónico",
+ "formFieldLabel__firstName": "Nombre",
+ "formFieldLabel__lastName": "Apellido",
+ "formFieldLabel__newPassword": "Nueva contraseña",
+ "formFieldLabel__organizationDomain": "Dominio",
+ "formFieldLabel__organizationDomainDeletePending": "Eliminar invitaciones y sugerencias pendientes",
+ "formFieldLabel__organizationDomainEmailAddress": "Dirección de correo electrónico de verificación",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Ingresa una dirección de correo electrónico bajo este dominio para recibir un código y verificar este dominio.",
+ "formFieldLabel__organizationName": "Nombre",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Nombre de la clave de paso",
+ "formFieldLabel__password": "Contraseña",
+ "formFieldLabel__phoneNumber": "Número de teléfono",
+ "formFieldLabel__role": "Rol",
+ "formFieldLabel__signOutOfOtherSessions": "Cerrar sesión en todos los demás dispositivos",
+ "formFieldLabel__username": "Nombre de usuario",
+ "impersonationFab": {
+ "action__signOut": "Cerrar sesión",
+ "title": "Sesión iniciada como {{identifier}}"
+ },
+ "locale": "es-ES",
+ "maintenanceMode": "Actualmente estamos en mantenimiento, pero no te preocupes, no debería llevar más de unos minutos.",
+ "membershipRole__admin": "Admin",
+ "membershipRole__basicMember": "Miembro",
+ "membershipRole__guestMember": "Invitado",
+ "organizationList": {
+ "action__createOrganization": "Crear organización",
+ "action__invitationAccept": "Unirse",
+ "action__suggestionsAccept": "Solicitar unirse",
+ "createOrganization": "Crear organización",
+ "invitationAcceptedLabel": "Unido",
+ "subtitle": "para continuar con {{applicationName}}",
+ "suggestionsAcceptedLabel": "Aprobación pendiente",
+ "title": "Elige una cuenta",
+ "titleWithoutPersonal": "Elige una organización"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Invitaciones automáticas",
+ "badge__automaticSuggestion": "Sugerencias automáticas",
+ "badge__manualInvitation": "Sin inscripción automática",
+ "badge__unverified": "No verificado",
+ "createDomainPage": {
+ "subtitle": "Añade el dominio para verificar. Los usuarios con direcciones de correo electrónico en este dominio pueden unirse automáticamente a la organización o solicitar unirse.",
+ "title": "Añadir dominio"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "No se pudieron enviar las invitaciones. Ya hay invitaciones pendientes para las siguientes direcciones de correo electrónico: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Enviar invitaciones",
+ "selectDropdown__role": "Seleccionar rol",
+ "subtitle": "Introduce o pega una o más direcciones de correo electrónico, separadas por espacios o comas.",
+ "successMessage": "Invitaciones enviadas con éxito",
+ "title": "Invitar nuevos miembros"
+ },
+ "membersPage": {
+ "action__invite": "Invitar",
+ "activeMembersTab": {
+ "menuAction__remove": "Eliminar miembro",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Unido",
+ "tableHeader__role": "Rol",
+ "tableHeader__user": "Usuario"
+ },
+ "detailsTitle__emptyRow": "No hay miembros para mostrar",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Invita a usuarios conectando un dominio de correo electrónico con tu organización. Cualquiera que se registre con un dominio de correo electrónico coincidente podrá unirse a la organización en cualquier momento.",
+ "headerTitle": "Invitaciones automáticas",
+ "primaryButton": "Gestionar dominios verificados"
+ },
+ "table__emptyRow": "No hay invitaciones para mostrar"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Revocar invitación",
+ "tableHeader__invited": "Invitado"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Los usuarios que se registren con un dominio de correo electrónico coincidente podrán ver una sugerencia para solicitar unirse a tu organización.",
+ "headerTitle": "Sugerencias automáticas",
+ "primaryButton": "Gestionar dominios verificados"
+ },
+ "menuAction__approve": "Aprobar",
+ "menuAction__reject": "Rechazar",
+ "tableHeader__requested": "Solicitud de acceso",
+ "table__emptyRow": "No hay solicitudes para mostrar"
+ },
+ "start": {
+ "headerTitle__invitations": "Invitaciones",
+ "headerTitle__members": "Miembros",
+ "headerTitle__requests": "Solicitudes"
+ }
+ },
+ "navbar": {
+ "description": "Administra tu organización.",
+ "general": "General",
+ "members": "Miembros",
+ "title": "Organización"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Escribe \"{{organizationName}}\" a continuación para continuar.",
+ "messageLine1": "¿Estás seguro de que quieres eliminar esta organización?",
+ "messageLine2": "Esta acción es permanente e irreversible.",
+ "successMessage": "Has eliminado la organización.",
+ "title": "Eliminar organización"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Escribe \"{{organizationName}}\" a continuación para continuar.",
+ "messageLine1": "¿Estás seguro de que quieres abandonar esta organización? Perderás acceso a esta organización y sus aplicaciones.",
+ "messageLine2": "Esta acción es permanente e irreversible.",
+ "successMessage": "Has abandonado la organización.",
+ "title": "Abandonar organización"
+ },
+ "title": "Peligro"
+ },
+ "domainSection": {
+ "menuAction__manage": "Gestionar",
+ "menuAction__remove": "Eliminar",
+ "menuAction__verify": "Verificar",
+ "primaryButton": "Añadir dominio",
+ "subtitle": "Permite a los usuarios unirse a la organización automáticamente o solicitar unirse en función de un dominio de correo electrónico verificado.",
+ "title": "Dominios verificados"
+ },
+ "successMessage": "La organización ha sido actualizada.",
+ "title": "Actualizar perfil"
+ },
+ "removeDomainPage": {
+ "messageLine1": "El dominio de correo electrónico {{domain}} se eliminará.",
+ "messageLine2": "Los usuarios no podrán unirse a la organización automáticamente después de esto.",
+ "successMessage": "{{domain}} ha sido eliminado.",
+ "title": "Eliminar dominio"
+ },
+ "start": {
+ "headerTitle__general": "General",
+ "headerTitle__members": "Miembros",
+ "profileSection": {
+ "primaryButton": "Actualizar perfil",
+ "title": "Perfil de la organización",
+ "uploadAction__title": "Logo"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "Eliminar este dominio afectará a los usuarios invitados.",
+ "removeDomainActionLabel__remove": "Eliminar dominio",
+ "removeDomainSubtitle": "Elimina este dominio de tus dominios verificados",
+ "removeDomainTitle": "Eliminar dominio"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Los usuarios son invitados automáticamente a unirse a la organización cuando se registran y pueden unirse en cualquier momento.",
+ "automaticInvitationOption__label": "Invitaciones automáticas",
+ "automaticSuggestionOption__description": "Los usuarios reciben una sugerencia para solicitar unirse, pero deben ser aprobados por un administrador antes de poder unirse a la organización.",
+ "automaticSuggestionOption__label": "Sugerencias automáticas",
+ "calloutInfoLabel": "Cambiar el modo de inscripción solo afectará a los nuevos usuarios.",
+ "calloutInvitationCountLabel": "Invitaciones pendientes enviadas a usuarios: {{count}}",
+ "calloutSuggestionCountLabel": "Sugerencias pendientes enviadas a usuarios: {{count}}",
+ "manualInvitationOption__description": "Los usuarios solo pueden ser invitados manualmente a la organización.",
+ "manualInvitationOption__label": "Sin inscripción automática",
+ "subtitle": "Elige cómo los usuarios de este dominio pueden unirse a la organización."
+ },
+ "start": {
+ "headerTitle__danger": "Peligro",
+ "headerTitle__enrollment": "Opciones de inscripción"
+ },
+ "subtitle": "El dominio {{domain}} está ahora verificado. Continúa seleccionando el modo de inscripción.",
+ "title": "Actualizar {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Introduce el código de verificación enviado a tu dirección de correo electrónico",
+ "formTitle": "Código de verificación",
+ "resendButton": "¿No has recibido un código? Reenviar",
+ "subtitle": "El dominio {{domainName}} necesita ser verificado por correo electrónico.",
+ "subtitleVerificationCodeScreen": "Se ha enviado un código de verificación a {{emailAddress}}. Introduce el código para continuar.",
+ "title": "Verificar dominio"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Crear organización",
+ "action__invitationAccept": "Unirse",
+ "action__manageOrganization": "Gestionar",
+ "action__suggestionsAccept": "Solicitar unirse",
+ "notSelected": "Ninguna organización seleccionada",
+ "personalWorkspace": "Cuenta personal",
+ "suggestionsAcceptedLabel": "Aprobación pendiente"
+ },
+ "paginationButton__next": "Siguiente",
+ "paginationButton__previous": "Anterior",
+ "paginationRowText__displaying": "Mostrando",
+ "paginationRowText__of": "de",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Añadir cuenta",
+ "action__signOutAll": "Cerrar sesión en todas las cuentas",
+ "subtitle": "Selecciona la cuenta con la que deseas continuar.",
+ "title": "Elige una cuenta"
+ },
+ "alternativeMethods": {
+ "actionLink": "Obtener ayuda",
+ "actionText": "¿No tienes ninguna de estas?",
+ "blockButton__backupCode": "Usar un código de respaldo",
+ "blockButton__emailCode": "Enviar código por correo a {{identifier}}",
+ "blockButton__emailLink": "Enviar enlace por correo a {{identifier}}",
+ "blockButton__passkey": "Iniciar sesión con tu clave de acceso",
+ "blockButton__password": "Iniciar sesión con tu contraseña",
+ "blockButton__phoneCode": "Enviar código SMS a {{identifier}}",
+ "blockButton__totp": "Usar tu aplicación de autenticación",
+ "getHelp": {
+ "blockButton__emailSupport": "Soporte por correo",
+ "content": "Si tienes problemas para iniciar sesión en tu cuenta, envíanos un correo y trabajaremos contigo para restaurar el acceso lo antes posible.",
+ "title": "Obtener ayuda"
+ },
+ "subtitle": "¿Problemas? Puedes utilizar cualquiera de estos métodos para iniciar sesión.",
+ "title": "Usar otro método"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Tu código de respaldo es aquel que obtuviste al configurar la autenticación de dos pasos.",
+ "title": "Introduce un código de respaldo"
+ },
+ "emailCode": {
+ "formTitle": "Código de verificación",
+ "resendButton": "¿No recibiste un código? Reenviar",
+ "subtitle": "para continuar en {{applicationName}}",
+ "title": "Revisa tu correo electrónico"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Vuelve a la pestaña original para continuar.",
+ "title": "Este enlace de verificación ha caducado"
+ },
+ "failed": {
+ "subtitle": "Vuelve a la pestaña original para continuar.",
+ "title": "Este enlace de verificación no es válido"
+ },
+ "formSubtitle": "Utiliza el enlace de verificación enviado a tu correo electrónico",
+ "formTitle": "Enlace de verificación",
+ "loading": {
+ "subtitle": "Serás redirigido pronto",
+ "title": "Iniciando sesión..."
+ },
+ "resendButton": "¿No recibiste un enlace? Reenviar",
+ "subtitle": "para continuar en {{applicationName}}",
+ "title": "Revisa tu correo electrónico",
+ "unusedTab": {
+ "title": "Puedes cerrar esta pestaña"
+ },
+ "verified": {
+ "subtitle": "Serás redirigido pronto",
+ "title": "Inicio de sesión correcto"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Vuelve a la pestaña original para continuar",
+ "subtitleNewTab": "Vuelve a la pestaña recién abierta para continuar",
+ "titleNewTab": "Iniciaste sesión en otra pestaña"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Código de restablecimiento de contraseña",
+ "resendButton": "¿No recibiste un código? Reenviar",
+ "subtitle": "para restablecer tu contraseña",
+ "subtitle_email": "Primero, ingresa el código enviado a tu dirección de correo electrónico",
+ "subtitle_phone": "Primero, ingresa el código enviado a tu teléfono",
+ "title": "Restablecer contraseña"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Restablecer tu contraseña",
+ "label__alternativeMethods": "O inicia sesión con otro método",
+ "title": "¿Olvidaste tu contraseña?"
+ },
+ "noAvailableMethods": {
+ "message": "No se puede continuar con la sesión. No hay factor de autenticación disponible.",
+ "subtitle": "Se produjo un error",
+ "title": "No se puede iniciar sesión"
+ },
+ "passkey": {
+ "subtitle": "Usar tu clave de acceso confirma que eres tú. Tu dispositivo puede solicitar tu huella dactilar, rostro o bloqueo de pantalla.",
+ "title": "Usar tu clave de acceso"
+ },
+ "password": {
+ "actionLink": "Usar otro método",
+ "subtitle": "Ingresa la contraseña asociada a tu cuenta",
+ "title": "Ingresa tu contraseña"
+ },
+ "passwordPwned": {
+ "title": "Contraseña comprometida"
+ },
+ "phoneCode": {
+ "formTitle": "Código de verificación",
+ "resendButton": "¿No recibiste un código? Reenviar",
+ "subtitle": "para continuar en {{applicationName}}",
+ "title": "Revisa tu teléfono"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Código de verificación",
+ "resendButton": "¿No recibiste un código? Reenviar",
+ "subtitle": "Para continuar, por favor ingresa el código de verificación enviado a tu teléfono",
+ "title": "Revisa tu teléfono"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Restablecer contraseña",
+ "requiredMessage": "Por razones de seguridad, es necesario restablecer tu contraseña.",
+ "successMessage": "Tu contraseña se cambió correctamente. Iniciando sesión, por favor espera un momento.",
+ "title": "Establecer nueva contraseña"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "Necesitamos verificar tu identidad antes de restablecer tu contraseña."
+ },
+ "start": {
+ "actionLink": "Registrarse",
+ "actionLink__use_email": "Usar correo electrónico",
+ "actionLink__use_email_username": "Usar correo electrónico o nombre de usuario",
+ "actionLink__use_passkey": "Usar clave de acceso en su lugar",
+ "actionLink__use_phone": "Usar teléfono",
+ "actionLink__use_username": "Usar nombre de usuario",
+ "actionText": "¿No tienes una cuenta?",
+ "subtitle": "¡Bienvenido de nuevo! Por favor inicia sesión para continuar",
+ "title": "Inicia sesión en {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "Código de verificación",
+ "subtitle": "Para continuar, por favor introduce el código de verificación generado por tu aplicación de autenticación",
+ "title": "Verificación de dos pasos"
+ }
+ },
+ "signInEnterPasswordTitle": "Ingresa tu contraseña",
+ "signUp": {
+ "continue": {
+ "actionLink": "Iniciar sesión",
+ "actionText": "¿Ya tienes una cuenta?",
+ "subtitle": "Por favor completa los detalles restantes para continuar",
+ "title": "Completa los campos faltantes"
+ },
+ "emailCode": {
+ "formSubtitle": "Ingresa el código de verificación enviado a tu dirección de correo electrónico",
+ "formTitle": "Código de verificación",
+ "resendButton": "¿No recibiste un código? Reenviar",
+ "subtitle": "Ingresa el código de verificación enviado a tu correo electrónico",
+ "title": "Verifica tu correo electrónico"
+ },
+ "emailLink": {
+ "formSubtitle": "Utiliza el enlace de verificación enviado a tu dirección de correo electrónico",
+ "formTitle": "Enlace de verificación",
+ "loading": {
+ "title": "Registrándote..."
+ },
+ "resendButton": "¿No recibiste un enlace? Reenviar",
+ "subtitle": "para continuar en {{applicationName}}",
+ "title": "Verifica tu correo electrónico",
+ "verified": {
+ "title": "Registro exitoso"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Vuelve a la pestaña recién abierta para continuar",
+ "subtitleNewTab": "Vuelve a la pestaña anterior para continuar",
+ "title": "Correo electrónico verificado con éxito"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Ingresa el código de verificación enviado a tu número de teléfono",
+ "formTitle": "Código de verificación",
+ "resendButton": "¿No recibiste un código? Reenviar",
+ "subtitle": "Ingresa el código de verificación enviado a tu teléfono",
+ "title": "Verifica tu teléfono"
+ },
+ "start": {
+ "actionLink": "Iniciar sesión",
+ "actionText": "¿Ya tienes una cuenta?",
+ "subtitle": "¡Bienvenido! Por favor completa los detalles para empezar",
+ "title": "Crea tu cuenta"
+ }
+ },
+ "socialButtonsBlockButton": "Continuar con {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "Registro no exitoso debido a validaciones de seguridad fallidas. Por favor, actualiza la página e inténtalo de nuevo o contacta al soporte para más ayuda.",
+ "captcha_unavailable": "Registro no exitoso debido a validación de bot fallida. Por favor, actualiza la página e inténtalo de nuevo o contacta al soporte para más ayuda.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Esta dirección de correo electrónico ya está en uso. Por favor, prueba con otra.",
+ "form_identifier_exists__phone_number": "Este número de teléfono ya está en uso. Por favor, prueba con otro.",
+ "form_identifier_exists__username": "Este nombre de usuario ya está en uso. Por favor, prueba con otro.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "La dirección de correo electrónico debe ser válida.",
+ "form_param_format_invalid__phone_number": "El número de teléfono debe tener un formato internacional válido.",
+ "form_param_max_length_exceeded__first_name": "El nombre no debe exceder los 256 caracteres.",
+ "form_param_max_length_exceeded__last_name": "El apellido no debe exceder los 256 caracteres.",
+ "form_param_max_length_exceeded__name": "El nombre no debe exceder los 256 caracteres.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Tu contraseña no es lo suficientemente segura.",
+ "form_password_pwned": "Esta contraseña ha sido encontrada en una filtración y no puede ser utilizada, por favor prueba con otra contraseña.",
+ "form_password_pwned__sign_in": "Esta contraseña ha sido encontrada en una filtración y no puede ser utilizada, por favor restablece tu contraseña.",
+ "form_password_size_in_bytes_exceeded": "Tu contraseña ha excedido el número máximo de bytes permitido, por favor acórtala o elimina algunos caracteres especiales.",
+ "form_password_validation_failed": "Contraseña incorrecta",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "No puedes eliminar tu última identificación.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Ya hay una clave de acceso registrada en este dispositivo.",
+ "passkey_not_supported": "Las claves de acceso no son compatibles con este dispositivo.",
+ "passkey_pa_not_supported": "El registro requiere un autenticador de plataforma pero el dispositivo no lo soporta.",
+ "passkey_registration_cancelled": "El registro de clave de acceso fue cancelado o expiró.",
+ "passkey_retrieval_cancelled": "La verificación de clave de acceso fue cancelada o expiró.",
+ "passwordComplexity": {
+ "maximumLength": "menos de {{length}} caracteres",
+ "minimumLength": "{{length}} o más caracteres",
+ "requireLowercase": "una letra minúscula",
+ "requireNumbers": "un número",
+ "requireSpecialCharacter": "un carácter especial",
+ "requireUppercase": "una letra mayúscula",
+ "sentencePrefix": "Tu contraseña debe contener"
+ },
+ "phone_number_exists": "Este número de teléfono ya está en uso. Por favor, prueba con otro.",
+ "zxcvbn": {
+ "couldBeStronger": "Tu contraseña funciona, pero podría ser más segura. Intenta agregar más caracteres.",
+ "goodPassword": "Tu contraseña cumple con todos los requisitos necesarios.",
+ "notEnough": "Tu contraseña no es lo suficientemente segura.",
+ "suggestions": {
+ "allUppercase": "Pon en mayúscula algunas letras, pero no todas.",
+ "anotherWord": "Añade más palabras menos comunes.",
+ "associatedYears": "Evita años asociados contigo.",
+ "capitalization": "Pon en mayúscula más de la primera letra.",
+ "dates": "Evita fechas y años asociados contigo.",
+ "l33t": "Evita sustituciones predecibles como '@' por 'a'.",
+ "longerKeyboardPattern": "Usa patrones de teclado más largos y cambia la dirección de escritura varias veces.",
+ "noNeed": "Puedes crear contraseñas seguras sin usar símbolos, números o letras mayúsculas.",
+ "pwned": "Si usas esta contraseña en otro lugar, deberías cambiarla.",
+ "recentYears": "Evita años recientes.",
+ "repeated": "Evita palabras y caracteres repetidos.",
+ "reverseWords": "Evita deletrear al revés palabras comunes.",
+ "sequences": "Evita secuencias de caracteres comunes.",
+ "useWords": "Usa varias palabras, pero evita frases comunes."
+ },
+ "warnings": {
+ "common": "Esta es una contraseña comúnmente utilizada.",
+ "commonNames": "Nombres y apellidos comunes son fáciles de adivinar.",
+ "dates": "Las fechas son fáciles de adivinar.",
+ "extendedRepeat": "Patrones de caracteres repetidos como \"abcabcabc\" son fáciles de adivinar.",
+ "keyPattern": "Patrones de teclado cortos son fáciles de adivinar.",
+ "namesByThemselves": "Nombres o apellidos solos son fáciles de adivinar.",
+ "pwned": "Tu contraseña fue expuesta en una filtración de datos en Internet.",
+ "recentYears": "Años recientes son fáciles de adivinar.",
+ "sequences": "Secuencias de caracteres comunes como \"abc\" son fáciles de adivinar.",
+ "similarToCommon": "Esto es similar a una contraseña comúnmente utilizada.",
+ "simpleRepeat": "Caracteres repetidos como \"aaa\" son fáciles de adivinar.",
+ "straightRow": "Filas rectas de teclas en tu teclado son fáciles de adivinar.",
+ "topHundred": "Esta es una contraseña frecuentemente utilizada.",
+ "topTen": "Esta es una contraseña muy utilizada.",
+ "userInputs": "No debería haber datos personales o relacionados con la página.",
+ "wordByItself": "Palabras solas son fáciles de adivinar."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Añadir cuenta",
+ "action__manageAccount": "Gestionar cuenta",
+ "action__signOut": "Cerrar sesión",
+ "action__signOutAll": "Cerrar sesión en todas las cuentas"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "¡Copiado!",
+ "actionLabel__copy": "Copiar todo",
+ "actionLabel__download": "Descargar .txt",
+ "actionLabel__print": "Imprimir",
+ "infoText1": "Los códigos de respaldo se activarán para esta cuenta.",
+ "infoText2": "Mantén los códigos de respaldo en secreto y guárdalos de forma segura. Puedes regenerar los códigos de respaldo si sospechas que han sido comprometidos.",
+ "subtitle__codelist": "Guárdalos de forma segura y mantenlos en secreto.",
+ "successMessage": "Los códigos de respaldo están ahora activados. Puedes usar uno de estos para iniciar sesión en tu cuenta si pierdes acceso a tu dispositivo de autenticación. Cada código solo se puede usar una vez.",
+ "successSubtitle": "Puedes usar uno de estos para iniciar sesión en tu cuenta si pierdes acceso a tu dispositivo de autenticación.",
+ "title": "Agregar verificación de código de respaldo",
+ "title__codelist": "Códigos de respaldo"
+ },
+ "connectedAccountPage": {
+ "formHint": "Selecciona un proveedor para conectar tu cuenta.",
+ "formHint__noAccounts": "No hay proveedores de cuentas externas disponibles.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} será eliminado de esta cuenta.",
+ "messageLine2": "Ya no podrás usar esta cuenta conectada y cualquier función dependiente dejará de funcionar.",
+ "successMessage": "{{connectedAccount}} ha sido eliminado de tu cuenta.",
+ "title": "Eliminar cuenta conectada"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "El proveedor ha sido añadido a tu cuenta",
+ "title": "Agregar cuenta conectada"
+ },
+ "deletePage": {
+ "actionDescription": "Escribe \"Eliminar cuenta\" abajo para continuar.",
+ "confirm": "Eliminar cuenta",
+ "messageLine1": "¿Estás seguro de que deseas eliminar tu cuenta?",
+ "messageLine2": "Esta acción es permanente e irreversible.",
+ "title": "Eliminar cuenta"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "Se enviará un correo electrónico con un código de verificación a esta dirección de correo electrónico.",
+ "formSubtitle": "Ingresa el código de verificación enviado a {{identifier}}",
+ "formTitle": "Código de verificación",
+ "resendButton": "¿No recibiste un código? Reenviar",
+ "successMessage": "El correo electrónico {{identifier}} ha sido añadido a tu cuenta."
+ },
+ "emailLink": {
+ "formHint": "Se enviará un correo electrónico con un enlace de verificación a esta dirección de correo electrónico.",
+ "formSubtitle": "Haz clic en el enlace de verificación en el correo electrónico enviado a {{identifier}}",
+ "formTitle": "Enlace de verificación",
+ "resendButton": "¿No recibiste un enlace? Reenviar",
+ "successMessage": "El correo electrónico {{identifier}} ha sido añadido a tu cuenta."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} será eliminado de esta cuenta.",
+ "messageLine2": "Ya no podrás iniciar sesión usando esta dirección de correo electrónico.",
+ "successMessage": "{{emailAddress}} ha sido eliminado de tu cuenta.",
+ "title": "Eliminar dirección de correo electrónico"
+ },
+ "title": "Agregar dirección de correo electrónico",
+ "verifyTitle": "Verificar dirección de correo electrónico"
+ },
+ "formButtonPrimary__add": "Añadir",
+ "formButtonPrimary__continue": "Continuar",
+ "formButtonPrimary__finish": "Finalizar",
+ "formButtonPrimary__remove": "Eliminar",
+ "formButtonPrimary__save": "Guardar",
+ "formButtonReset": "Cancelar",
+ "mfaPage": {
+ "formHint": "Selecciona un método para añadir.",
+ "title": "Agregar verificación en dos pasos"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Usar número existente",
+ "primaryButton__addPhoneNumber": "Añadir número de teléfono",
+ "removeResource": {
+ "messageLine1": "{{identifier}} ya no recibirá códigos de verificación al iniciar sesión.",
+ "messageLine2": "Tu cuenta puede no ser tan segura. ¿Estás seguro de que quieres continuar?",
+ "successMessage": "La verificación en dos pasos con código SMS ha sido eliminada para {{mfaPhoneCode}}",
+ "title": "Eliminar verificación en dos pasos"
+ },
+ "subtitle__availablePhoneNumbers": "Selecciona un número de teléfono existente para registrarte en la verificación en dos pasos con código SMS o añade uno nuevo.",
+ "subtitle__unavailablePhoneNumbers": "No hay números de teléfono disponibles para registrarte en la verificación en dos pasos con código SMS, por favor añade uno nuevo.",
+ "successMessage1": "Al iniciar sesión, deberás ingresar un código de verificación enviado a este número de teléfono como paso adicional.",
+ "successMessage2": "Guarda estos códigos de respaldo en un lugar seguro. Si pierdes acceso a tu dispositivo de autenticación, puedes usar los códigos de respaldo para iniciar sesión.",
+ "successTitle": "Verificación con código SMS habilitada",
+ "title": "Agregar verificación con código SMS"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "Escanear código QR en su lugar",
+ "buttonUnableToScan__nonPrimary": "¿No puedes escanear el código QR?",
+ "infoText__ableToScan": "Configura un nuevo método de inicio de sesión en tu aplicación de autenticación y escanea el siguiente código QR para vincularlo a tu cuenta.",
+ "infoText__unableToScan": "Configura un nuevo método de inicio de sesión en tu autenticador e ingresa la clave proporcionada a continuación.",
+ "inputLabel__unableToScan1": "Asegúrate de que las contraseñas basadas en el tiempo o de un solo uso estén habilitadas, luego finaliza la vinculación de tu cuenta.",
+ "inputLabel__unableToScan2": "Alternativamente, si tu autenticador admite URIs TOTP, también puedes copiar el URI completo."
+ },
+ "removeResource": {
+ "messageLine1": "Los códigos de verificación de este autenticador ya no serán necesarios al iniciar sesión.",
+ "messageLine2": "Tu cuenta puede no ser tan segura. ¿Estás seguro de que quieres continuar?",
+ "successMessage": "La verificación en dos pasos a través de la aplicación de autenticación ha sido eliminada.",
+ "title": "Eliminar verificación en dos pasos"
+ },
+ "successMessage": "La verificación en dos pasos está ahora habilitada. Al iniciar sesión, deberás ingresar un código de verificación de este autenticador como paso adicional.",
+ "title": "Agregar aplicación de autenticación",
+ "verifySubtitle": "Ingresa el código de verificación generado por tu autenticador",
+ "verifyTitle": "Código de verificación"
+ },
+ "mobileButton__menu": "Menú",
+ "navbar": {
+ "account": "Perfil",
+ "description": "Administra la información de tu cuenta.",
+ "security": "Seguridad",
+ "title": "Cuenta"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} será eliminado de esta cuenta.",
+ "title": "Eliminar clave de acceso"
+ },
+ "subtitle__rename": "Puedes cambiar el nombre de la clave de acceso para que sea más fácil de encontrar.",
+ "title__rename": "Renombrar clave de acceso"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "Se recomienda cerrar sesión en todos los demás dispositivos que puedan haber utilizado tu contraseña anterior.",
+ "readonly": "Actualmente no puedes editar tu contraseña porque solo puedes iniciar sesión a través de la conexión empresarial.",
+ "successMessage__set": "Tu contraseña ha sido establecida.",
+ "successMessage__signOutOfOtherSessions": "Todos los demás dispositivos han cerrado sesión.",
+ "successMessage__update": "Tu contraseña ha sido actualizada.",
+ "title__set": "Establecer contraseña",
+ "title__update": "Actualizar contraseña"
+ },
+ "phoneNumberPage": {
+ "infoText": "Se enviará un mensaje de texto con un código de verificación a este número de teléfono. Pueden aplicarse tarifas por mensajes y datos.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} será eliminado de esta cuenta.",
+ "messageLine2": "Ya no podrás iniciar sesión usando este número de teléfono.",
+ "successMessage": "{{phoneNumber}} ha sido eliminado de tu cuenta.",
+ "title": "Eliminar número de teléfono"
+ },
+ "successMessage": "{{identifier}} se ha añadido a tu cuenta.",
+ "title": "Añadir número de teléfono",
+ "verifySubtitle": "Ingresa el código de verificación enviado a {{identifier}}",
+ "verifyTitle": "Verificar número de teléfono"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Tamaño recomendado 1:1, hasta 10MB.",
+ "imageFormDestructiveActionSubtitle": "Eliminar",
+ "imageFormSubtitle": "Subir",
+ "imageFormTitle": "Imagen de perfil",
+ "readonly": "Tu información de perfil ha sido proporcionada por la conexión empresarial y no se puede editar.",
+ "successMessage": "Tu perfil ha sido actualizado.",
+ "title": "Actualizar perfil"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Cerrar sesión en el dispositivo",
+ "title": "Dispositivos activos"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Intentar de nuevo",
+ "actionLabel__reauthorize": "Autorizar ahora",
+ "destructiveActionTitle": "Eliminar",
+ "primaryButton": "Conectar cuenta",
+ "subtitle__reauthorize": "Los permisos requeridos han sido actualizados, y es posible que experimentes funcionalidad limitada. Por favor, vuelve a autorizar esta aplicación para evitar problemas",
+ "title": "Cuentas conectadas"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Eliminar cuenta",
+ "title": "Eliminar cuenta"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "Eliminar correo electrónico",
+ "detailsAction__nonPrimary": "Establecer como principal",
+ "detailsAction__primary": "Completar verificación",
+ "detailsAction__unverified": "Verificar",
+ "primaryButton": "Agregar dirección de correo electrónico",
+ "title": "Direcciones de correo electrónico"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Cuentas empresariales"
+ },
+ "headerTitle__account": "Detalles del perfil",
+ "headerTitle__security": "Seguridad",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Regenerar",
+ "headerTitle": "Códigos de respaldo",
+ "subtitle__regenerate": "Obtener un nuevo conjunto de códigos de respaldo seguros. Los códigos anteriores serán eliminados y no podrán ser utilizados.",
+ "title__regenerate": "Regenerar códigos de respaldo"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Establecer como predeterminado",
+ "destructiveActionLabel": "Eliminar"
+ },
+ "primaryButton": "Agregar verificación en dos pasos",
+ "title": "Verificación en dos pasos",
+ "totp": {
+ "destructiveActionTitle": "Eliminar",
+ "headerTitle": "Aplicación autenticadora"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Eliminar",
+ "menuAction__rename": "Renombrar",
+ "title": "Claves de acceso"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Establecer contraseña",
+ "primaryButton__updatePassword": "Actualizar contraseña",
+ "title": "Contraseña"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Eliminar número de teléfono",
+ "detailsAction__nonPrimary": "Establecer como principal",
+ "detailsAction__primary": "Completar verificación",
+ "detailsAction__unverified": "Verificar número de teléfono",
+ "primaryButton": "Agregar número de teléfono",
+ "title": "Números de teléfono"
+ },
+ "profileSection": {
+ "primaryButton": "Actualizar perfil",
+ "title": "Perfil"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Establecer nombre de usuario",
+ "primaryButton__updateUsername": "Actualizar nombre de usuario",
+ "title": "Nombre de usuario"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Eliminar billetera",
+ "primaryButton": "Billeteras Web3",
+ "title": "Billeteras Web3"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Tu nombre de usuario ha sido actualizado.",
+ "title__set": "Establecer nombre de usuario",
+ "title__update": "Actualizar nombre de usuario"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} será eliminado de esta cuenta.",
+ "messageLine2": "Ya no podrás iniciar sesión usando esta billetera web3.",
+ "successMessage": "{{web3Wallet}} ha sido eliminado de tu cuenta.",
+ "title": "Eliminar billetera web3"
+ },
+ "subtitle__availableWallets": "Selecciona una billetera web3 para conectar a tu cuenta.",
+ "subtitle__unavailableWallets": "No hay billeteras web3 disponibles.",
+ "successMessage": "La billetera ha sido añadida a tu cuenta.",
+ "title": "Añadir billetera web3"
+ }
+ }
+}
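Many of the strings above carry `{{placeholder}}` interpolation slots, including a modifier form such as `{{provider|titleize}}` in `socialButtonsBlockButton`. As a rough, self-contained illustration of how such templates get expanded (this is a sketch, not Clerk's actual implementation; the `interpolate` helper and `modifiers` table are hypothetical):

```typescript
// Minimal sketch of {{key}} and {{key|modifier}} interpolation as used by
// these locale strings, e.g. "Continuar con {{provider|titleize}}".
const modifiers: Record<string, (s: string) => string> = {
  // "titleize" capitalizes the first letter: "google" -> "Google"
  titleize: (s) => s.charAt(0).toUpperCase() + s.slice(1),
};

function interpolate(
  template: string,
  params: Record<string, string>,
): string {
  // Match {{key}} or {{key|modifier}} and substitute the param value,
  // applying the named modifier when one is present.
  return template.replace(/\{\{(\w+)(?:\|(\w+))?\}\}/g, (_, key, mod) => {
    const value = params[key] ?? "";
    return mod && modifiers[mod] ? modifiers[mod](value) : value;
  });
}
```

With this sketch, `interpolate("Continuar con {{provider|titleize}}", { provider: "google" })` yields `"Continuar con Google"`, which is why the Spanish strings can leave the provider name as a placeholder.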
diff --git a/DigitalHumanWeb/locales/es-ES/common.json b/DigitalHumanWeb/locales/es-ES/common.json
new file mode 100644
index 0000000..f7742da
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "Acerca de",
+ "advanceSettings": "Configuración avanzada",
+ "alert": {
+ "cloud": {
+ "action": "Prueba gratuita",
+ "desc": "Ofrecemos {{credit}} créditos de cálculo gratuitos para todos los usuarios registrados, sin necesidad de configuración complicada, listos para usar, con soporte para historial de conversaciones ilimitado y sincronización en la nube global. ¡Exploremos juntos más funciones avanzadas!",
+ "descOnMobile": "Ofrecemos {{credit}} créditos de cálculo gratuitos para todos los usuarios registrados, sin necesidad de configuraciones complicadas, listos para usar.",
+ "title": "Bienvenido a {{name}}"
+ }
+ },
+ "appInitializing": "Iniciando la aplicación...",
+ "autoGenerate": "Generación automática",
+ "autoGenerateTooltip": "Completar automáticamente la descripción del asistente basándose en las sugerencias",
+ "autoGenerateTooltipDisabled": "Por favor, complete la palabra clave antes de usar la función de autocompletar",
+ "back": "Volver",
+ "batchDelete": "Eliminar en lote",
+ "blog": "Blog de productos",
+ "cancel": "Cancelar",
+ "changelog": "Registro de cambios",
+ "close": "Cerrar",
+ "contact": "Contacto",
+ "copy": "Copiar",
+ "copyFail": "Fallo al copiar",
+ "copySuccess": "¡Copia exitosa!",
+ "dataStatistics": {
+ "messages": "Mensajes",
+ "sessions": "Sesiones",
+ "today": "Hoy",
+ "topics": "Temas"
+ },
+ "defaultAgent": "Asistente predeterminado",
+ "defaultSession": "Sesión predeterminada",
+ "delete": "Eliminar",
+ "document": "Documento de uso",
+ "download": "Descargar",
+ "duplicate": "Duplicar",
+ "edit": "Editar",
+ "export": "Exportar configuración",
+ "exportType": {
+ "agent": "Exportar configuración del asistente",
+ "agentWithMessage": "Exportar asistente y mensajes",
+ "all": "Exportar configuración global y todos los datos de los asistentes",
+ "allAgent": "Exportar todas las configuraciones de los asistentes",
+ "allAgentWithMessage": "Exportar todos los asistentes y mensajes",
+ "globalSetting": "Exportar configuración global"
+ },
+ "feedback": "Comentarios y sugerencias",
+ "follow": "Síguenos en {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Comparte tus valiosas sugerencias",
+ "star": "Agrega una estrella en GitHub"
+ },
+ "and": "y",
+ "feedback": {
+ "action": "Compartir retroalimentación",
+ "desc": "Cada idea y sugerencia que nos brindes es invaluable. ¡Estamos ansiosos por conocer tu opinión! Siéntete libre de contactarnos para proporcionar comentarios sobre las funciones del producto y la experiencia de uso, ¡ayúdanos a mejorar LobeChat!",
+ "title": "Comparte tu valiosa retroalimentación en GitHub"
+ },
+ "later": "Más tarde",
+ "star": {
+ "action": "Destacar con una estrella",
+ "desc": "¿Te encanta nuestro producto y deseas apoyarnos? ¿Podrías agregar una estrella en GitHub? Este pequeño gesto significa mucho para nosotros y nos motiva a seguir brindándote una experiencia de características excepcional.",
+ "title": "Destaca con una estrella en GitHub"
+ },
+ "title": "¿Te gusta nuestro producto?"
+ },
+ "fullscreen": "Pantalla completa",
+ "historyRange": "Rango de historial",
+ "import": "Importar configuración",
+ "importModal": {
+ "error": {
+ "desc": "Lo sentimos mucho, se produjo un error durante el proceso de importación de datos. Inténtalo de nuevo o <1>envía un informe</1>, y te ayudaremos a solucionar el problema lo antes posible.",
+ "title": "Error en la importación de datos"
+ },
+ "finish": {
+ "onlySettings": "La importación de la configuración del sistema se ha completado",
+ "start": "Comenzar a usar",
+ "subTitle": "Importación de datos completada en {{duration}} segundos. Detalles de la importación:",
+ "title": "Importación de datos completada"
+ },
+ "loading": "Importando datos, por favor espere...",
+ "preparing": "Preparando el módulo de importación de datos...",
+ "result": {
+ "added": "Importación exitosa",
+ "errors": "Errores de importación",
+ "messages": "Mensajes",
+ "sessionGroups": "Grupos de sesión",
+ "sessions": "Asistentes",
+ "skips": "Duplicados omitidos",
+ "topics": "Temas",
+ "type": "Tipo de datos"
+ },
+ "title": "Importar datos",
+ "uploading": {
+ "desc": "El archivo actual es grande, se está subiendo...",
+ "restTime": "Tiempo restante",
+ "speed": "Velocidad de carga"
+ }
+ },
+ "information": "Comunidad e Información",
+ "installPWA": "Instalar la aplicación del navegador",
+ "lang": {
+ "ar": "Árabe",
+ "bg-BG": "Búlgaro",
+ "bn": "Bengalí",
+ "cs-CZ": "Checo",
+ "da-DK": "Danés",
+ "de-DE": "Alemán",
+ "el-GR": "Griego",
+ "en": "Inglés",
+ "en-US": "Inglés",
+ "es-ES": "Español",
+ "fi-FI": "Finlandés",
+ "fr-FR": "Francés",
+ "hi-IN": "Hindi",
+ "hu-HU": "Húngaro",
+ "id-ID": "Indonesio",
+ "it-IT": "Italiano",
+ "ja-JP": "Japonés",
+ "ko-KR": "Coreano",
+ "nl-NL": "Neerlandés",
+ "no-NO": "Noruego",
+ "pl-PL": "Polaco",
+ "pt-BR": "Portugués (Brasil)",
+ "pt-PT": "Portugués (Portugal)",
+ "ro-RO": "Rumano",
+ "ru-RU": "Ruso",
+ "sk-SK": "Eslovaco",
+ "sr-RS": "Serbio",
+ "sv-SE": "Sueco",
+ "th-TH": "Tailandés",
+ "tr-TR": "Turco",
+ "uk-UA": "Ucraniano",
+ "vi-VN": "Vietnamita",
+ "zh": "Chino",
+ "zh-CN": "Chino simplificado",
+ "zh-TW": "Chino tradicional"
+ },
+ "layoutInitializing": "Inicializando diseño...",
+ "legal": "Aviso Legal",
+ "loading": "Cargando...",
+ "mail": {
+ "business": "Colaboración Comercial",
+ "support": "Soporte por Correo"
+ },
+ "oauth": "Inicio de sesión SSO",
+ "officialSite": "Sitio oficial",
+ "ok": "Aceptar",
+ "password": "Contraseña",
+ "pin": "Fijar",
+ "pinOff": "Quitar fijación",
+ "privacy": "Política de privacidad",
+ "regenerate": "Regenerar",
+ "rename": "Renombrar",
+ "reset": "Restablecer",
+ "retry": "Reintentar",
+ "send": "Enviar",
+ "setting": "Configuración",
+ "share": "Compartir",
+ "stop": "Detener",
+ "sync": {
+ "actions": {
+ "settings": "Configuración de sincronización",
+ "sync": "Sincronizar ahora"
+ },
+ "awareness": {
+ "current": "Dispositivo actual"
+ },
+ "channel": "Canal",
+ "disabled": {
+ "actions": {
+ "enable": "Habilitar sincronización en la nube",
+ "settings": "Configurar parámetros de sincronización"
+ },
+ "desc": "Los datos de esta sesión se almacenan solo en este navegador. Si necesitas sincronizar datos entre varios dispositivos, configura y habilita la sincronización en la nube.",
+ "title": "Sincronización de datos deshabilitada"
+ },
+ "enabled": {
+ "title": "Sincronización de datos"
+ },
+ "status": {
+ "connecting": "Conectando",
+ "disabled": "Sincronización deshabilitada",
+ "ready": "Listo",
+ "synced": "Sincronizado",
+ "syncing": "Sincronizando",
+ "unconnected": "Sin conexión"
+ },
+ "title": "Estado de sincronización",
+ "unconnected": {
+ "tip": "Fallo al conectar con el servidor de señal. No se podrá establecer un canal de comunicación punto a punto. Por favor, verifica la red e inténtalo de nuevo."
+ }
+ },
+ "tab": {
+ "chat": "Chat",
+ "discover": "Descubrir",
+ "files": "Archivos",
+ "me": "Yo",
+ "setting": "Configuración"
+ },
+ "telemetry": {
+ "allow": "Permitir",
+ "deny": "Denegar",
+ "desc": "Queremos recopilar información sobre tu uso de forma anónima para ayudarnos a mejorar LobeChat y ofrecerte una mejor experiencia de producto. Puedes desactivarlo en 'Configuración' - 'Acerca de' en cualquier momento.",
+ "learnMore": "Más información",
+ "title": "Ayuda a mejorar LobeChat"
+ },
+ "temp": "Temporal",
+ "terms": "Términos de servicio",
+ "updateAgent": "Actualizar información del asistente",
+ "upgradeVersion": {
+ "action": "Actualizar",
+ "hasNew": "Hay una nueva actualización disponible",
+ "newVersion": "Nueva versión disponible: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Usuario Anónimo",
+ "billing": "Gestión de facturación",
+ "cloud": "Prueba {{name}}",
+ "data": "Almacenamiento de datos",
+ "defaultNickname": "Usuario de la comunidad",
+ "discord": "Soporte de la comunidad",
+ "docs": "Documentación de uso",
+ "email": "Soporte por correo electrónico",
+ "feedback": "Comentarios y sugerencias",
+ "help": "Centro de ayuda",
+ "moveGuide": "El botón de configuración se ha movido aquí",
+ "plans": "Planes de suscripción",
+ "preview": "Vista previa",
+ "profile": "Gestión de cuenta",
+ "setting": "Configuración de la aplicación",
+ "usages": "Estadísticas de uso"
+ },
+ "version": "Versión"
+}
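Besides plain placeholders, `common.json` uses numbered element tags (the `<1>…</1>` pair in `importModal.error.desc`), in the style of react-i18next's `Trans` component, where each number maps to a child element supplied by the UI. A plain-string sketch of that mechanism (hypothetical helper, not the real react-i18next API):

```typescript
// Expand numbered tags like "<1>inner</1>" by delegating each numbered
// segment to a caller-supplied renderer, as a Trans-style component would.
function renderNumberedTags(
  template: string,
  render: Record<string, (inner: string) => string>,
): string {
  // \1 backreference ensures the closing tag number matches the opening one.
  return template.replace(/<(\d+)>(.*?)<\/\1>/g, (_, n, inner) =>
    render[n] ? render[n](inner) : inner,
  );
}
```

For example, rendering tag `1` as a link label: `renderNumberedTags("envía un <1>informe</1>", { "1": (s) => "[" + s + "]" })` produces `"envía un [informe]"`; the translation controls the wording while the application controls the markup.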
diff --git a/DigitalHumanWeb/locales/es-ES/components.json b/DigitalHumanWeb/locales/es-ES/components.json
new file mode 100644
index 0000000..b1619b6
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Arrastra los archivos aquí, se admite la carga de múltiples imágenes.",
+ "dragFileDesc": "Arrastra imágenes y archivos aquí, se admite la carga de múltiples imágenes y archivos.",
+ "dragFileTitle": "Subir archivo",
+ "dragTitle": "Subir imagen"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Agregar a la base de conocimientos",
+ "addToOtherKnowledgeBase": "Agregar a otra base de conocimientos",
+ "batchChunking": "División por lotes",
+ "chunking": "División",
+ "chunkingTooltip": "Divida el archivo en múltiples bloques de texto y vectorícelos para su uso en búsqueda semántica y diálogo de archivos",
+ "confirmDelete": "Está a punto de eliminar este archivo. Una vez eliminado, no podrá recuperarlo. Por favor, confirme su acción.",
+ "confirmDeleteMultiFiles": "Está a punto de eliminar los {{count}} archivos seleccionados. Una vez eliminados, no podrá recuperarlos. Por favor, confirme su acción.",
+ "confirmRemoveFromKnowledgeBase": "Está a punto de eliminar los {{count}} archivos seleccionados de la base de conocimientos. Los archivos seguirán siendo visibles en todos los archivos. Por favor, confirme su acción.",
+ "copyUrl": "Copiar enlace",
+ "copyUrlSuccess": "Dirección del archivo copiada con éxito",
+ "createChunkingTask": "Preparando...",
+ "deleteSuccess": "Archivo eliminado con éxito",
+ "downloading": "Descargando archivo...",
+ "removeFromKnowledgeBase": "Eliminar de la base de conocimientos",
+ "removeFromKnowledgeBaseSuccess": "Archivo eliminado con éxito"
+ },
+ "bottom": "Ya has llegado al final",
+ "config": {
+ "showFilesInKnowledgeBase": "Mostrar contenido en la base de conocimientos"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Subir archivo",
+ "folder": "Subir carpeta",
+ "knowledgeBase": "Crear nueva base de conocimientos"
+ },
+ "or": "o",
+ "title": "Arrastra archivos o carpetas aquí"
+ },
+ "title": {
+ "createdAt": "Fecha de creación",
+ "size": "Tamaño",
+ "title": "Archivo"
+ },
+ "total": {
+ "fileCount": "Total {{count}} elementos",
+ "selectedCount": "Seleccionados {{count}} elementos"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Los bloques de texto aún no están completamente vectorizados, lo que hará que la función de búsqueda semántica no esté disponible. Para mejorar la calidad de búsqueda, por favor vectorice los bloques de texto.",
+ "error": "Error de vectorización",
+ "errorResult": "Error de vectorización, por favor verifica y vuelve a intentarlo. Motivo del fallo:",
+ "processing": "Los bloques de texto están siendo vectorizados, por favor, tenga paciencia.",
+ "success": "Todos los bloques de texto actuales han sido vectorizados."
+ },
+ "embeddings": "Vectorización",
+ "status": {
+ "error": "Error en la división",
+ "errorResult": "Error en la división, por favor revise y vuelva a intentarlo. Razón del fallo:",
+ "processing": "Dividiendo",
+ "processingTip": "El servidor está dividiendo los bloques de texto, cerrar la página no afectará el progreso de la división."
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Regresar"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Modelo personalizado: admite llamadas de función y reconocimiento visual. Verifique la disponibilidad de estas capacidades según sea necesario.",
+ "file": "Este modelo admite la carga y reconocimiento de archivos.",
+ "functionCall": "Este modelo admite llamadas de función.",
+ "tokens": "Este modelo admite un máximo de {{tokens}} tokens por sesión.",
+ "vision": "Este modelo admite el reconocimiento visual."
+ },
+ "removed": "El modelo no está en la lista, se eliminará automáticamente si se cancela la selección"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "No hay modelos habilitados. Vaya a la configuración para habilitarlos.",
+ "provider": "Proveedor"
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/discover.json b/DigitalHumanWeb/locales/es-ES/discover.json
new file mode 100644
index 0000000..42386cb
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Agregar asistente",
+ "addAgentAndConverse": "Agregar asistente y conversar",
+ "addAgentSuccess": "Agregado con éxito",
+ "conversation": {
+ "l1": "Hola, soy **{{name}}**, puedes preguntarme cualquier cosa y haré lo posible por responderte ~",
+ "l2": "Aquí tienes una introducción a mis capacidades: ",
+ "l3": "¡Comencemos la conversación!"
+ },
+ "description": "Introducción al asistente",
+ "detail": "Detalles",
+ "list": "Lista de asistentes",
+ "more": "Más",
+ "plugins": "Integrar complementos",
+ "recentSubmits": "Actualizaciones recientes",
+ "suggestions": "Recomendaciones relacionadas",
+ "systemRole": "Configuración del asistente",
+ "try": "Prueba"
+ },
+ "back": "Volver a Descubrir",
+ "category": {
+ "assistant": {
+ "academic": "Académico",
+ "all": "Todo",
+ "career": "Carrera",
+ "copywriting": "Redacción",
+ "design": "Diseño",
+ "education": "Educación",
+ "emotions": "Emociones",
+ "entertainment": "Entretenimiento",
+ "games": "Juegos",
+ "general": "General",
+ "life": "Vida",
+ "marketing": "Marketing",
+ "office": "Oficina",
+ "programming": "Programación",
+ "translation": "Traducción"
+ },
+ "plugin": {
+ "all": "Todo",
+ "gaming-entertainment": "Juegos y entretenimiento",
+ "life-style": "Estilo de vida",
+ "media-generate": "Generación de medios",
+ "science-education": "Ciencia y educación",
+ "social": "Redes sociales",
+ "stocks-finance": "Acciones y finanzas",
+ "tools": "Herramientas útiles",
+ "web-search": "Búsqueda en la web"
+ }
+ },
+ "cleanFilter": "Limpiar filtro",
+ "create": "Crear",
+ "createGuide": {
+ "func1": {
+ "desc1": "En la ventana de conversación, accede a la página de configuración del asistente que deseas enviar a través de la esquina superior derecha;",
+ "desc2": "Haz clic en el botón de enviar al mercado de asistentes en la esquina superior derecha.",
+ "tag": "Método uno",
+ "title": "Enviar a través de LobeChat"
+ },
+ "func2": {
+ "button": "Ir al repositorio de asistentes de Github",
+ "desc": "Si deseas agregar un asistente al índice, utiliza agent-template.json o agent-template-full.json para crear una entrada en el directorio de plugins, escribe una breve descripción y etiquétala adecuadamente, luego crea una solicitud de extracción.",
+ "tag": "Método dos",
+ "title": "Enviar a través de Github"
+ }
+ },
+ "dislike": "No me gusta",
+ "filter": "Filtrar",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Todos los autores",
+ "followed": "Autores seguidos",
+ "title": "Rango de autores"
+ },
+ "contentLength": "Longitud mínima del contexto",
+ "maxToken": {
+ "title": "Establecer longitud máxima (Token)",
+ "unlimited": "Sin límite"
+ },
+ "other": {
+ "functionCall": "Soporte para llamadas a funciones",
+ "title": "Otros",
+ "vision": "Soporte para reconocimiento visual",
+ "withKnowledge": "Con base de conocimientos",
+ "withTool": "Con plugins"
+ },
+ "pricing": "Precio del modelo",
+ "timePeriod": {
+ "all": "Todo el tiempo",
+ "day": "Últimas 24 horas",
+ "month": "Últimos 30 días",
+ "title": "Rango de tiempo",
+ "week": "Últimos 7 días",
+ "year": "Último año"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Asistentes recomendados",
+ "featuredModels": "Modelos recomendados",
+ "featuredProviders": "Proveedores de modelos recomendados",
+ "featuredTools": "Plugins recomendados",
+ "more": "Descubre más"
+ },
+ "like": "Me gusta",
+ "models": {
+ "chat": "Iniciar conversación",
+ "contentLength": "Longitud máxima del contexto",
+ "free": "Gratis",
+ "guide": "Guía de configuración",
+ "list": "Lista de modelos",
+ "more": "Más",
+ "parameterList": {
+ "defaultValue": "Valor por defecto",
+ "docs": "Ver documentación",
+ "frequency_penalty": {
+ "desc": "Esta configuración ajusta la frecuencia con la que el modelo reutiliza vocabulario específico que ya ha aparecido en la entrada. Un valor más alto reduce la probabilidad de que esto ocurra, mientras que un valor negativo produce el efecto contrario. La penalización de vocabulario no aumenta con la frecuencia de aparición. Un valor negativo alentará la reutilización del vocabulario.",
+ "title": "Penalización de frecuencia"
+ },
+ "max_tokens": {
+ "desc": "Esta configuración define la longitud máxima que el modelo puede generar en una sola respuesta. Establecer un valor más alto permite al modelo generar respuestas más largas, mientras que un valor más bajo limita la longitud de la respuesta, haciéndola más concisa. Ajustar este valor de manera razonable según el contexto de la aplicación puede ayudar a alcanzar la longitud y el nivel de detalle de respuesta deseados.",
+ "title": "Límite de Respuesta Única"
+ },
+ "presence_penalty": {
+ "desc": "Esta configuración está diseñada para controlar la reutilización del vocabulario según la frecuencia con la que aparece en la entrada. Intenta usar menos aquellas palabras que aparecen con más frecuencia en la entrada, siendo su uso proporcional a la frecuencia de aparición. La penalización de vocabulario aumenta con la frecuencia de aparición. Un valor negativo alentará la reutilización del vocabulario.",
+ "title": "Novedad del tema"
+ },
+ "range": "Rango",
+ "temperature": {
+ "desc": "Esta configuración afecta la diversidad de las respuestas del modelo. Un valor más bajo resultará en respuestas más predecibles y típicas, mientras que un valor más alto alentará respuestas más diversas y menos comunes. Cuando el valor se establece en 0, el modelo siempre dará la misma respuesta para una entrada dada.",
+ "title": "Aleatoriedad"
+ },
+ "title": "Parámetros del modelo",
+ "top_p": {
+ "desc": "Esta configuración limita la selección del modelo a un cierto porcentaje de vocabulario con la mayor probabilidad: solo selecciona aquellas palabras que alcanzan una probabilidad acumulativa de P. Un valor más bajo hace que las respuestas del modelo sean más predecibles, mientras que la configuración predeterminada permite al modelo elegir de todo el rango de vocabulario.",
+ "title": "Muestreo de núcleo"
+ },
+ "type": "Tipo"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat admite el uso de claves API personalizadas para este proveedor.",
+ "input": "Precio de entrada",
+ "inputTooltip": "Costo por millón de Tokens",
+ "latency": "Latencia",
+ "latencyTooltip": "Tiempo promedio de respuesta del proveedor para enviar el primer Token",
+ "maxOutput": "Longitud máxima de salida",
+ "maxOutputTooltip": "Número máximo de Tokens que este punto final puede generar",
+ "officialTooltip": "Servicio oficial de LobeHub",
+ "output": "Precio de salida",
+ "outputTooltip": "Costo por millón de Tokens",
+ "streamCancellationTooltip": "Este proveedor admite la función de cancelación de flujo.",
+ "throughput": "Rendimiento",
+ "throughputTooltip": "Número promedio de Tokens transmitidos por segundo en solicitudes de flujo"
+ },
+ "suggestions": "Modelos relacionados",
+ "supportedProviders": "Proveedores que admiten este modelo"
+ },
+ "plugins": {
+ "community": "Complementos de la comunidad",
+ "install": "Instalar complemento",
+ "installed": "Instalado",
+ "list": "Lista de complementos",
+ "meta": {
+ "description": "Descripción",
+ "parameter": "Parámetro",
+ "title": "Parámetros de la herramienta",
+ "type": "Tipo"
+ },
+ "more": "Más",
+ "official": "Complementos oficiales",
+ "recentSubmits": "Actualizaciones recientes",
+ "suggestions": "Recomendaciones relacionadas"
+ },
+ "providers": {
+ "config": "Configurar proveedor",
+ "list": "Lista de proveedores de modelos",
+ "modelCount": "{{count}} modelos",
+ "modelSite": "Documentación del modelo",
+ "more": "Más",
+ "officialSite": "Sitio web oficial",
+ "showAllModels": "Mostrar todos los modelos",
+ "suggestions": "Proveedores relacionados",
+ "supportedModels": "Modelos soportados"
+ },
+ "search": {
+ "placeholder": "Buscar nombre, descripción o palabras clave...",
+ "result": "{{count}} resultados de búsqueda sobre {{keyword}}",
+ "searching": "Buscando..."
+ },
+ "sort": {
+ "mostLiked": "Más gustados",
+ "mostUsed": "Más utilizados",
+ "newest": "De nuevo a viejo",
+ "oldest": "De viejo a nuevo",
+ "recommended": "Recomendado"
+ },
+ "tab": {
+ "assistants": "Asistentes",
+ "home": "Inicio",
+ "models": "Modelos",
+ "plugins": "Complementos",
+ "providers": "Proveedores de modelos"
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/error.json b/DigitalHumanWeb/locales/es-ES/error.json
new file mode 100644
index 0000000..9aa7424
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Continuar la sesión",
+ "desc": "{{greeting}}, es un placer poder seguir asistiéndote. Continuemos con el tema que estábamos tratando.",
+ "title": "Bienvenido de nuevo, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Volver a la página de inicio",
+ "desc": "Inténtalo de nuevo más tarde, o regresa al mundo conocido",
+ "retry": "Reintentar",
+ "title": "Se ha producido un problema en la página.."
+ },
+ "fetchError": "Error en la solicitud",
+ "fetchErrorDetail": "Detalles del error",
+ "notFound": {
+ "backHome": "Volver a la página de inicio",
+ "check": "Por favor, verifica si tu URL es correcta",
+ "desc": "No podemos encontrar la página que estás buscando",
+ "title": "¿Has entrado en un área desconocida?"
+ },
+ "pluginSettings": {
+ "desc": "Complete la siguiente configuración para comenzar a usar este complemento",
+ "title": "Configuración del complemento {{name}}"
+ },
+ "response": {
+ "400": "Lo sentimos, el servidor no comprende su solicitud. Por favor, asegúrese de que los parámetros de su solicitud sean correctos",
+ "401": "Lo sentimos, el servidor ha rechazado su solicitud, posiblemente debido a permisos insuficientes o falta de autenticación válida",
+ "403": "Lo sentimos, el servidor ha rechazado su solicitud. No tiene permiso para acceder a este contenido",
+ "404": "Lo sentimos, el servidor no puede encontrar la página o recurso solicitado. Por favor, verifique si la URL es correcta",
+ "405": "Lo sentimos, el servidor no admite el método de solicitud que está utilizando. Por favor, verifique si el método de solicitud es correcto",
+ "406": "Lo sentimos, el servidor no puede completar la solicitud basándose en las características de contenido que ha proporcionado",
+ "407": "Lo sentimos, debe autenticarse con el proxy antes de continuar con esta solicitud",
+ "408": "Lo sentimos, el servidor ha agotado el tiempo de espera mientras esperaba la solicitud. Por favor, verifique su conexión de red e inténtelo de nuevo",
+ "409": "Lo sentimos, la solicitud no se puede procesar debido a un conflicto, posiblemente porque el estado del recurso es incompatible con la solicitud",
+ "410": "Lo sentimos, el recurso solicitado ha sido eliminado permanentemente y no se puede encontrar",
+ "411": "Lo sentimos, el servidor no puede procesar la solicitud porque no incluye una longitud de contenido válida",
+ "412": "Lo sentimos, su solicitud no cumple con las condiciones del servidor y no se puede completar",
+ "413": "Lo sentimos, su solicitud es demasiado grande para ser procesada por el servidor",
+ "414": "Lo sentimos, la URI de su solicitud es demasiado larga para ser procesada por el servidor",
+ "415": "Lo sentimos, el servidor no puede procesar el formato de medios adjunto en la solicitud",
+ "416": "Lo sentimos, el servidor no puede satisfacer el rango de su solicitud",
+ "417": "Lo sentimos, el servidor no puede cumplir con sus expectativas",
+ "422": "Lo sentimos, su solicitud tiene el formato correcto, pero debido a errores semánticos no puede ser procesada",
+ "423": "Lo sentimos, el recurso solicitado está bloqueado",
+ "424": "Lo sentimos, debido a una solicitud previa fallida, la solicitud actual no se puede completar",
+ "426": "Lo sentimos, el servidor requiere que su cliente se actualice a una versión de protocolo más alta",
+ "428": "Lo sentimos, el servidor requiere una condición previa y solicita que su solicitud incluya encabezados de condición correctos",
+ "429": "Lo sentimos, ha realizado demasiadas solicitudes y el servidor está un poco cansado. Por favor, inténtelo de nuevo más tarde",
+ "431": "Lo sentimos, el campo de encabezado de su solicitud es demasiado grande para ser procesado por el servidor",
+ "451": "Lo sentimos, el servidor se niega a proporcionar este recurso debido a razones legales",
+ "500": "Lo sentimos, el servidor parece estar experimentando dificultades y no puede completar su solicitud en este momento. Por favor, inténtelo de nuevo más tarde",
+ "502": "Lo sentimos, el servidor parece estar desorientado y no puede proporcionar servicio en este momento. Por favor, inténtelo de nuevo más tarde",
+ "503": "Lo sentimos, el servidor no puede procesar su solicitud en este momento, posiblemente debido a una sobrecarga o mantenimiento. Por favor, inténtelo de nuevo más tarde",
+ "504": "Lo sentimos, el servidor no recibió respuesta del servidor upstream. Por favor, inténtelo de nuevo más tarde",
+ "AgentRuntimeError": "Se produjo un error en la ejecución del tiempo de ejecución del modelo de lenguaje Lobe, por favor, verifica la siguiente información o inténtalo de nuevo",
+ "FreePlanLimit": "Actualmente eres un usuario gratuito y no puedes utilizar esta función. Por favor, actualiza a un plan de pago para seguir utilizando.",
+ "InvalidAccessCode": "La contraseña no es válida o está vacía. Por favor, introduce una contraseña de acceso válida o añade una clave API personalizada",
+ "InvalidBedrockCredentials": "La autenticación de Bedrock no se ha completado con éxito, por favor, verifica AccessKeyId/SecretAccessKey e inténtalo de nuevo",
+ "InvalidClerkUser": "Lo siento mucho, actualmente no has iniciado sesión. Por favor, inicia sesión o regístrate antes de continuar.",
+ "InvalidGithubToken": "El token de acceso personal de Github es incorrecto o está vacío. Por favor, verifica el token de acceso personal de Github y vuelve a intentarlo.",
+ "InvalidOllamaArgs": "La configuración de Ollama no es válida, por favor revisa la configuración de Ollama e inténtalo de nuevo",
+ "InvalidProviderAPIKey": "{{provider}} API Key incorrecta o vacía, por favor revisa tu {{provider}} API Key e intenta de nuevo",
+ "LocationNotSupportError": "Lo sentimos, tu ubicación actual no es compatible con este servicio de modelo, puede ser debido a restricciones geográficas o a que el servicio no está disponible. Por favor, verifica si tu ubicación actual es compatible con este servicio o intenta usar otra información de ubicación.",
+ "NoOpenAIAPIKey": "La clave de API de OpenAI está vacía. Agregue una clave de API de OpenAI personalizada",
+ "OllamaBizError": "Error al solicitar el servicio de Ollama, por favor verifica la siguiente información o inténtalo de nuevo",
+ "OllamaServiceUnavailable": "El servicio Ollama no está disponible. Por favor, verifica si Ollama está funcionando correctamente o si la configuración de Ollama para el acceso entre dominios está configurada correctamente.",
+ "OpenAIBizError": "Se produjo un error al solicitar el servicio de OpenAI, por favor, revise la siguiente información o inténtelo de nuevo",
+ "PluginApiNotFound": "Lo sentimos, el API especificado no existe en el manifiesto del complemento. Verifique si su método de solicitud coincide con el API del manifiesto del complemento",
+ "PluginApiParamsError": "Lo sentimos, la validación de los parámetros de entrada de la solicitud del complemento no ha pasado. Verifique si los parámetros de entrada coinciden con la información de descripción del API",
+ "PluginFailToTransformArguments": "Lo siento, no se pudieron transformar los argumentos de la llamada al plugin. Por favor, intenta generar de nuevo el mensaje del asistente o prueba con un modelo de IA de Tools Calling más potente.",
+ "PluginGatewayError": "Lo sentimos, se ha producido un error en la puerta de enlace del complemento. Verifique si la configuración de la puerta de enlace del complemento es correcta",
+ "PluginManifestInvalid": "Lo sentimos, la validación del manifiesto del complemento no ha pasado. Por favor, verifique si el formato del manifiesto es correcto",
+ "PluginManifestNotFound": "Lo sentimos, el servidor no puede encontrar el manifiesto de descripción del complemento (manifest.json). Verifique si la dirección del archivo de descripción del complemento es correcta",
+ "PluginMarketIndexInvalid": "Lo sentimos, la validación del índice del complemento no ha pasado. Por favor, verifique si el formato del archivo de índice es correcto",
+ "PluginMarketIndexNotFound": "Lo sentimos, el servidor no puede encontrar el índice del complemento. Por favor, verifique si la dirección del índice es correcta",
+ "PluginMetaInvalid": "Lo sentimos, la validación de la meta del complemento no ha pasado. Por favor, verifique si el formato de la meta del complemento es correcto",
+ "PluginMetaNotFound": "Lo sentimos, no se encontró la meta del complemento en el índice. Verifique la información de configuración del complemento en el índice",
+ "PluginOpenApiInitError": "Lo sentimos, la inicialización del cliente OpenAPI ha fallado. Verifique si la información de configuración de OpenAPI es correcta",
+ "PluginServerError": "Error al recibir la respuesta del servidor del complemento. Verifique el archivo de descripción del complemento, la configuración del complemento o la implementación del servidor según la información de error a continuación",
+ "PluginSettingsInvalid": "Este complemento necesita una configuración correcta antes de poder usarse. Verifique si su configuración es correcta",
+ "ProviderBizError": "Se produjo un error al solicitar el servicio de {{provider}}, por favor, revise la siguiente información o inténtelo de nuevo",
+ "StreamChunkError": "Error de análisis del bloque de mensajes de la solicitud en streaming. Por favor, verifica si la API actual cumple con las normas estándar o contacta a tu proveedor de API para más información.",
+ "SubscriptionPlanLimit": "Has alcanzado el límite de tu suscripción y no puedes utilizar esta función. Por favor, actualiza a un plan superior o compra un paquete de recursos para seguir utilizando.",
+ "UnknownChatFetchError": "Lo sentimos, se ha producido un error desconocido en la solicitud. Por favor, verifica la información a continuación o intenta de nuevo."
+ },
+ "stt": {
+ "responseError": "Error en la solicitud de servicio. Verifique la configuración o reintente"
+ },
+ "tts": {
+ "responseError": "Error en la solicitud de servicio. Verifique la configuración o reintente"
+ },
+ "unlock": {
+ "addProxyUrl": "Agregar URL de proxy de OpenAI (opcional)",
+ "apiKey": {
+ "description": "Ingresa tu API Key de {{name}} para comenzar la sesión",
+ "title": "Usar tu propia API Key de {{name}}"
+ },
+ "closeMessage": "Cerrar mensaje",
+ "confirm": "Confirmar y volver a intentar",
+ "oauth": {
+ "description": "El administrador ha habilitado la autenticación de inicio de sesión única. Haz clic en el botón a continuación para iniciar sesión y desbloquear la aplicación.",
+ "success": "Inicio de sesión exitoso",
+ "title": "Iniciar sesión",
+ "welcome": "¡Bienvenido!"
+ },
+ "password": {
+ "description": "El administrador ha activado el cifrado de la aplicación. Ingresa la contraseña de la aplicación para desbloquearla. La contraseña solo se necesita ingresar una vez",
+ "placeholder": "Ingresa la contraseña",
+ "title": "Ingresar contraseña para desbloquear la aplicación"
+ },
+ "tabs": {
+ "apiKey": "Clave de API personalizada",
+ "password": "Contraseña"
+ }
+ },
+ "upload": {
+ "desc": "Detalles: {{detail}}",
+ "fileOnlySupportInServerMode": "El modo de implementación actual no admite la carga de archivos que no sean imágenes. Si necesita cargar un archivo en formato {{ext}}, cambie a la implementación de base de datos en servidor o utilice el servicio {{cloud}}.",
+ "networkError": "Por favor, verifica que tu red esté funcionando correctamente y comprueba si la configuración de CORS del servicio de almacenamiento de archivos es correcta.",
+ "title": "Error al subir el archivo, por favor verifica la conexión a internet o inténtalo de nuevo más tarde",
+ "unknownError": "Razón del error: {{reason}}",
+ "uploadFailed": "La carga del archivo ha fallado."
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/file.json b/DigitalHumanWeb/locales/es-ES/file.json
new file mode 100644
index 0000000..521a7e8
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Gestiona tus archivos y tu base de conocimientos",
+ "detail": {
+ "basic": {
+ "createdAt": "Fecha de creación",
+ "filename": "Nombre del archivo",
+ "size": "Tamaño del archivo",
+ "title": "Información básica",
+ "type": "Formato",
+ "updatedAt": "Fecha de actualización"
+ },
+ "data": {
+ "chunkCount": "Número de fragmentos",
+ "embedding": {
+ "default": "No vectorizado aún",
+ "error": "Error",
+ "pending": "Pendiente de inicio",
+ "processing": "En proceso",
+ "success": "Completado"
+ },
+ "embeddingStatus": "Vectorización"
+ }
+ },
+ "empty": "No hay archivos/carpetas subidos aún",
+ "header": {
+ "actions": {
+ "newFolder": "Nueva carpeta",
+ "uploadFile": "Subir archivo",
+ "uploadFolder": "Subir carpeta"
+ },
+ "uploadButton": "Subir"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Estás a punto de eliminar esta base de conocimientos. Los archivos no se eliminarán, se moverán a Todos los archivos. Una vez eliminada, la base de conocimientos no se podrá recuperar, por favor actúa con precaución.",
+ "empty": "Haz clic en <1>+1> para comenzar a crear una base de conocimientos"
+ },
+ "new": "Nueva base de conocimientos",
+ "title": "Base de conocimientos"
+ },
+ "networkError": "Error al obtener la base de conocimientos, por favor verifica la conexión a internet y vuelve a intentarlo",
+ "notSupportGuide": {
+ "desc": "La instancia de despliegue actual está en modo de base de datos cliente, no se puede utilizar la función de gestión de archivos. Por favor, cambia a <1>modo de despliegue de base de datos en servidor1>, o utiliza directamente <3>LobeChat Cloud3>",
+ "features": {
+ "allKind": {
+ "desc": "Soporta los tipos de archivos más comunes, incluyendo formatos de documentos como Word, PPT, Excel, PDF, TXT, así como archivos de código como JS, Python, etc.",
+ "title": "Análisis de múltiples tipos de archivos"
+ },
+ "embeddings": {
+ "desc": "Utiliza modelos de vectores de alto rendimiento para vectorizar fragmentos de texto, permitiendo la búsqueda semántica del contenido del archivo",
+ "title": "Semantización de vectores"
+ },
+ "repos": {
+ "desc": "Soporta la creación de bases de conocimientos y permite añadir diferentes tipos de archivos, construyendo tu propio conocimiento en el área",
+ "title": "Base de conocimientos"
+ }
+ },
+ "title": "El modo de despliegue actual no soporta la gestión de archivos"
+ },
+ "preview": {
+ "downloadFile": "Descargar archivo",
+ "unsupportedFileAndContact": "Este formato de archivo no es compatible con la vista previa en línea. Si desea solicitar una vista previa, no dude en <1>contactarnos1>."
+ },
+ "searchFilePlaceholder": "Buscar archivo",
+ "tab": {
+ "all": "Todos los archivos",
+ "audios": "Audios",
+ "documents": "Documentos",
+ "images": "Imágenes",
+ "videos": "Videos",
+ "websites": "Sitios web"
+ },
+ "title": "Archivos",
+ "uploadDock": {
+ "body": {
+ "collapse": "Colapsar",
+ "item": {
+ "done": "Subido",
+ "error": "Error en la subida, por favor intenta de nuevo",
+ "pending": "Preparando para subir...",
+ "processing": "Procesando archivo...",
+ "restTime": "Tiempo restante {{time}}"
+ }
+ },
+ "totalCount": "Total {{count}} elementos",
+ "uploadStatus": {
+ "error": "Error en la subida",
+ "pending": "Esperando para subir",
+ "processing": "Subiendo",
+ "success": "Subida completada",
+ "uploading": "Subiendo"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/knowledgeBase.json b/DigitalHumanWeb/locales/es-ES/knowledgeBase.json
new file mode 100644
index 0000000..0d0646f
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "Archivo añadido con éxito, <1>ver ahora1>",
+ "confirm": "Añadir",
+ "id": {
+ "placeholder": "Seleccione la base de conocimiento a añadir",
+ "required": "Seleccione la base de conocimiento",
+ "title": "Base de conocimiento objetivo"
+ },
+ "title": "Añadir a la base de conocimiento",
+ "totalFiles": "Se han seleccionado {{count}} archivos"
+ },
+ "createNew": {
+ "confirm": "Crear nuevo",
+ "description": {
+ "placeholder": "Descripción de la base de conocimiento (opcional)"
+ },
+ "formTitle": "Información básica",
+ "name": {
+ "placeholder": "Nombre de la base de conocimiento",
+ "required": "Por favor, introduzca el nombre de la base de conocimiento"
+ },
+ "title": "Crear nueva base de conocimiento"
+ },
+ "tab": {
+ "evals": "Evaluaciones",
+ "files": "Documentos",
+ "settings": "Configuraciones",
+ "testing": "Prueba de recuperación"
+ },
+ "title": "Base de conocimiento"
+}
diff --git a/DigitalHumanWeb/locales/es-ES/market.json b/DigitalHumanWeb/locales/es-ES/market.json
new file mode 100644
index 0000000..52d7ea7
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Añadir asistente",
+ "addAgentAndConverse": "Agregar agente y conversar",
+ "addAgentSuccess": "Agente agregado con éxito",
+ "guide": {
+ "func1": {
+ "desc1": "En la ventana de chat, accede a la página de configuración del asistente a través del icono de ajustes en la esquina superior derecha.",
+ "desc2": "Haz clic en el botón 'Enviar al mercado de asistentes' en la esquina superior derecha.",
+ "tag": "Método 1",
+ "title": "Enviar a través de LobeChat"
+ },
+ "func2": {
+ "button": "Ir al repositorio de asistentes en Github",
+ "desc": "Si deseas agregar un asistente al índice, utiliza agent-template.json o agent-template-full.json para crear una entrada en el directorio de complementos, escribe una breve descripción y etiquétala adecuadamente, luego crea una solicitud de extracción.",
+ "tag": "Método 2",
+ "title": "Enviar a través de Github"
+ }
+ },
+ "search": {
+ "placeholder": "Buscar nombre, descripción o palabras clave del asistente..."
+ },
+ "sidebar": {
+ "comment": "Comentarios",
+ "prompt": "Sugerencias",
+ "title": "Detalles del asistente"
+ },
+ "submitAgent": "Enviar asistente",
+ "title": {
+ "allAgents": "Todos los asistentes",
+ "recentSubmits": "Envíos recientes"
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/metadata.json b/DigitalHumanWeb/locales/es-ES/metadata.json
new file mode 100644
index 0000000..f7fb594
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} te ofrece la mejor experiencia de uso de ChatGPT, Claude, Gemini y OLLaMA WebUI",
+ "title": "{{appName}}: Herramienta de productividad personal de IA, dale a tu cerebro un impulso más inteligente"
+ },
+ "discover": {
+ "assistants": {
+ "description": "Creación de contenido, redacción, preguntas y respuestas, generación de imágenes, generación de videos, generación de voz, Agentes inteligentes, flujos de trabajo automatizados, personaliza tu asistente inteligente AI / GPTs / OLLaMA",
+ "title": "Asistentes de IA"
+ },
+ "description": "Creación de contenido, redacción, preguntas y respuestas, generación de imágenes, generación de videos, generación de voz, Agentes inteligentes, flujos de trabajo automatizados, aplicaciones de IA personalizadas, personaliza tu espacio de trabajo de aplicaciones AI",
+ "models": {
+ "description": "Explora los modelos de IA más populares OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "Modelos de IA"
+ },
+ "plugins": {
+ "description": "Explora la generación de gráficos, académicos, imágenes, videos, voces y flujos de trabajo automatizados, integrando capacidades ricas de plugins para tu asistente.",
+ "title": "Complementos de IA"
+ },
+ "providers": {
+ "description": "Explora los principales proveedores de modelos OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "Proveedores de servicios de modelos de IA"
+ },
+ "search": "Buscar",
+ "title": "Descubrir"
+ },
+ "plugins": {
+ "description": "Búsqueda, generación de gráficos, académico, generación de imágenes, generación de videos, generación de voz, flujos de trabajo automatizados, personaliza las capacidades de los plugins ToolCall exclusivos de ChatGPT / Claude",
+ "title": "Mercado de Plugins"
+ },
+ "welcome": {
+ "description": "{{appName}} te ofrece la mejor experiencia de uso de ChatGPT, Claude, Gemini y OLLaMA WebUI",
+ "title": "Bienvenido a {{appName}}: Herramienta de productividad personal de IA, dale a tu cerebro un impulso más inteligente"
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/migration.json b/DigitalHumanWeb/locales/es-ES/migration.json
new file mode 100644
index 0000000..444a559
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Limpiar datos locales",
+ "downloadBackup": "Descargar copia de seguridad",
+ "reUpgrade": "Volver a actualizar",
+ "start": "Comenzar",
+ "upgrade": "Actualizar"
+ },
+ "clear": {
+ "confirm": "Estás a punto de borrar los datos locales (la configuración global no se verá afectada). Por favor, asegúrate de haber descargado una copia de seguridad de los datos."
+ },
+ "description": "En la nueva versión, el almacenamiento de datos de {{appName}} ha dado un gran salto. Por lo tanto, vamos a actualizar los datos de la versión anterior para ofrecerte una mejor experiencia de uso.",
+ "features": {
+ "capability": {
+ "desc": "Basado en la tecnología IndexedDB, suficiente para almacenar todos los mensajes de tu vida.",
+ "title": "Gran capacidad"
+ },
+ "performance": {
+ "desc": "Indexación automática de millones de mensajes, con respuestas de búsqueda en milisegundos.",
+ "title": "Alto rendimiento"
+ },
+ "use": {
+ "desc": "Soporta la búsqueda de títulos, descripciones, etiquetas, contenido de mensajes e incluso textos traducidos, mejorando significativamente la eficiencia de búsqueda diaria.",
+ "title": "Más fácil de usar"
+ }
+ },
+ "title": "Evolución de datos de {{appName}}",
+ "upgrade": {
+ "error": {
+ "subTitle": "Lo sentimos, ha ocurrido un error durante el proceso de actualización de la base de datos. Por favor, intenta las siguientes soluciones: A. Borra los datos locales y vuelve a importar los datos de respaldo; B. Haz clic en el botón 'Reactualizar'.
Si el problema persiste, por favor <1>informa del problema1>, y te ayudaremos a resolverlo lo antes posible.",
+ "title": "Error en la actualización de la base de datos"
+ },
+ "success": {
+ "subTitle": "La base de datos de {{appName}} se ha actualizado a la última versión, ¡comienza a disfrutarla ahora!",
+ "title": "Actualización de la base de datos exitosa"
+ }
+ },
+ "upgradeTip": "La actualización tomará aproximadamente de 10 a 20 segundos, por favor no cierres {{appName}} durante el proceso."
+ },
+ "migrateError": {
+ "missVersion": "La importación de datos no incluye el número de versión. Por favor, verifica el archivo e inténtalo de nuevo",
+ "noMigration": "No se encontró un plan de migración correspondiente a la versión actual. Por favor, verifica el número de versión e inténtalo de nuevo. Si el problema persiste, por favor envía un informe de problema"
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/modelProvider.json b/DigitalHumanWeb/locales/es-ES/modelProvider.json
new file mode 100644
index 0000000..cadb4e2
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "La versión de la API de Azure, siguiendo el formato AAAA-MM-DD, consulta la [última versión](https://learn.microsoft.com/es-es/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Obtener lista",
+ "title": "Versión de la API de Azure"
+ },
+ "empty": "Introduce el ID del modelo para agregar el primer modelo",
+ "endpoint": {
+ "desc": "Puedes encontrar este valor en la sección 'Claves y endpoint' al revisar tus recursos en el portal de Azure",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Dirección de la API de Azure"
+ },
+ "modelListPlaceholder": "Selecciona o agrega el modelo de OpenAI que has implementado",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Puedes encontrar este valor en la sección 'Claves y endpoint' al revisar tus recursos en el portal de Azure. Puedes usar KEY1 o KEY2",
+ "placeholder": "Clave API de Azure",
+ "title": "Clave API"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Introduce tu AWS Access Key Id",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "Prueba si el AccessKeyId / SecretAccessKey se ha introducido correctamente"
+ },
+ "region": {
+ "desc": "Introduce tu región de AWS",
+ "placeholder": "Región de AWS",
+ "title": "Región de AWS"
+ },
+ "secretAccessKey": {
+ "desc": "Introduce tu AWS Secret Access Key",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "Si estás utilizando AWS SSO/STS, introduce tu Token de Sesión de AWS",
+ "placeholder": "Token de Sesión de AWS",
+ "title": "Token de Sesión de AWS (opcional)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Región de servicio personalizada",
+ "customSessionToken": "Token de sesión personalizado",
+ "description": "Introduce tu AWS AccessKeyId / SecretAccessKey para comenzar la sesión. La aplicación no guardará tu configuración de autenticación.",
+ "title": "Usar información de autenticación de Bedrock personalizada"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Introduce tu PAT de Github, haz clic [aquí](https://github.com/settings/tokens) para crear uno",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Prueba si la dirección del proxy de la interfaz se ha introducido correctamente",
+ "title": "Comprobación de conectividad"
+ },
+ "customModelName": {
+ "desc": "Añade modelos personalizados, separa múltiples modelos con comas (,)",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Nombre de modelos personalizados"
+ },
+ "download": {
+ "desc": "Ollama is downloading the model. Please try not to close this page. The download will resume from where it left off if interrupted.",
+ "remainingTime": "Remaining Time",
+ "speed": "Download Speed",
+ "title": "Downloading model {{model}}"
+ },
+ "endpoint": {
+ "desc": "Introduce la dirección del proxy de la interfaz de Ollama, déjalo en blanco si no se ha especificado localmente",
+ "title": "Dirección del proxy de la interfaz"
+ },
+ "setup": {
+ "cors": {
+ "description": "Debido a restricciones de seguridad del navegador, es necesario configurar Ollama para permitir el acceso entre dominios.",
+ "linux": {
+ "env": "En la sección [Service], agrega `Environment` y añade la variable de entorno OLLAMA_ORIGINS:",
+ "reboot": "Recarga systemd y reinicia Ollama.",
+ "systemd": "Edita el servicio ollama llamando a systemd:"
+ },
+ "macos": "Abre la aplicación 'Terminal', pega y ejecuta el siguiente comando, luego presiona Enter.",
+ "reboot": "Reinicia el servicio de Ollama una vez completada la ejecución.",
+ "title": "Configuración para permitir el acceso entre dominios en Ollama",
+ "windows": "En Windows, ve a 'Panel de control', edita las variables de entorno del sistema. Crea una nueva variable de entorno llamada 'OLLAMA_ORIGINS' para tu cuenta de usuario, con el valor '*', y haz clic en 'OK/Aplicar' para guardar los cambios."
+ },
+ "install": {
+ "description": "Por favor, asegúrate de que has activado Ollama. Si no has descargado Ollama, por favor visita el sitio web oficial para <1>descargarlo1>.",
+ "docker": "Si prefieres usar Docker, Ollama también ofrece una imagen oficial en Docker. Puedes obtenerla con el siguiente comando:",
+ "linux": {
+ "command": "Instala con el siguiente comando:",
+ "manual": "O también puedes consultar la <1>Guía de instalación manual en Linux1> para instalarlo por tu cuenta."
+ },
+ "title": "Instalación local y activación de la aplicación Ollama",
+ "windowsTab": "Windows (Versión de vista previa)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Cancel Download",
+ "confirm": "Download",
+ "description": "Enter your Ollama model tag to continue the session",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Starting download...",
+ "title": "Download specified Ollama model"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Cero Uno Todo"
+ },
+ "zhipu": {
+ "title": "Inteligencia de Mapa"
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/models.json b/DigitalHumanWeb/locales/es-ES/models.json
new file mode 100644
index 0000000..f6d8cee
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B, con un rico conjunto de muestras de entrenamiento, ofrece un rendimiento superior en aplicaciones industriales."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B soporta 16K Tokens, proporcionando una capacidad de generación de lenguaje eficiente y fluida."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro, como un miembro importante de la serie de modelos de IA de 360, satisface diversas aplicaciones de procesamiento de lenguaje natural con su eficiente capacidad de manejo de textos, soportando la comprensión de textos largos y funciones de diálogo en múltiples turnos."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo ofrece potentes capacidades de cálculo y diálogo, con una excelente comprensión semántica y eficiencia de generación, siendo la solución ideal para empresas y desarrolladores como asistente inteligente."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K enfatiza la seguridad semántica y la responsabilidad, diseñado específicamente para aplicaciones que requieren altos estándares de seguridad de contenido, asegurando la precisión y robustez de la experiencia del usuario."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro es un modelo avanzado de procesamiento de lenguaje natural lanzado por la empresa 360, con una excelente capacidad de generación y comprensión de textos, destacándose especialmente en la generación y creación de contenido, capaz de manejar tareas complejas de conversión de lenguaje y representación de roles."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra es la versión más poderosa de la serie de modelos grandes de Xinghuo, mejorando la comprensión y capacidad de resumen de contenido textual al actualizar la conexión de búsqueda en línea. Es una solución integral para mejorar la productividad en la oficina y responder con precisión a las necesidades, siendo un producto inteligente líder en la industria."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Utiliza tecnología de búsqueda mejorada para lograr un enlace completo entre el gran modelo y el conocimiento del dominio, así como el conocimiento de toda la red. Soporta la carga de documentos en PDF, Word y otros formatos, así como la entrada de URL, proporcionando información oportuna y completa, con resultados precisos y profesionales."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Optimizado para escenarios de alta frecuencia empresarial, con mejoras significativas en el rendimiento y una excelente relación calidad-precio. En comparación con el modelo Baichuan2, la creación de contenido mejora un 20%, las preguntas y respuestas de conocimiento un 17%, y la capacidad de interpretación de roles un 40%. En general, su rendimiento es superior al de GPT-3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "Con una ventana de contexto ultra larga de 128K, optimizado para escenarios de alta frecuencia empresarial, con mejoras significativas en el rendimiento y una excelente relación calidad-precio. En comparación con el modelo Baichuan2, la creación de contenido mejora un 20%, las preguntas y respuestas de conocimiento un 17%, y la capacidad de interpretación de roles un 40%. En general, su rendimiento es superior al de GPT-3.5."
+ },
+ "Baichuan4": {
+ "description": "El modelo tiene la mejor capacidad en el país, superando a los modelos principales extranjeros en tareas en chino como enciclopedias, textos largos y creación generativa. También cuenta con capacidades multimodales líderes en la industria, destacándose en múltiples evaluaciones de referencia autorizadas."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) es un modelo innovador, adecuado para aplicaciones en múltiples campos y tareas complejas."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K está equipado con una gran capacidad de procesamiento de contexto, una comprensión de contexto más fuerte y habilidades de razonamiento lógico, soporta entradas de texto de 32K tokens, adecuado para la lectura de documentos largos, preguntas y respuestas de conocimiento privado y otros escenarios."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO es una fusión de múltiples modelos altamente flexible, diseñada para ofrecer una experiencia creativa excepcional."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) es un modelo de instrucciones de alta precisión, adecuado para cálculos complejos."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) ofrece salidas de lenguaje optimizadas y diversas posibilidades de aplicación."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Actualización del modelo Phi-3-mini."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "El mismo modelo Phi-3-medium, pero con un tamaño de contexto más grande para RAG o indicaciones de pocos disparos."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "Un modelo de 14B parámetros, que demuestra mejor calidad que Phi-3-mini, con un enfoque en datos densos de razonamiento de alta calidad."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "El mismo modelo Phi-3-mini, pero con un tamaño de contexto más grande para RAG o indicaciones de pocos disparos."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "El miembro más pequeño de la familia Phi-3. Optimizado tanto para calidad como para baja latencia."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "El mismo modelo Phi-3-small, pero con un tamaño de contexto más grande para RAG o indicaciones de pocos disparos."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "Un modelo de 7B parámetros, que demuestra mejor calidad que Phi-3-mini, con un enfoque en datos densos de razonamiento de alta calidad."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K está configurado con una capacidad de procesamiento de contexto extremadamente grande, capaz de manejar hasta 128K de información contextual, especialmente adecuado para contenido largo que requiere análisis completo y manejo de relaciones lógicas a largo plazo, proporcionando una lógica fluida y consistente y un soporte diverso de citas en comunicaciones de texto complejas."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Como versión beta de Qwen2, Qwen1.5 utiliza datos a gran escala para lograr funciones de conversación más precisas."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) ofrece respuestas rápidas y capacidades de conversación natural, adecuado para entornos multilingües."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 es un modelo de lenguaje general avanzado, que soporta múltiples tipos de instrucciones."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 es una nueva serie de modelos de lenguaje a gran escala, diseñada para optimizar el procesamiento de tareas de instrucción."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 es una nueva serie de modelos de lenguaje a gran escala, diseñada para optimizar el procesamiento de tareas de instrucción."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 es una nueva serie de modelos de lenguaje a gran escala, con una mayor capacidad de comprensión y generación."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 es una nueva serie de modelos de lenguaje a gran escala, diseñada para optimizar el procesamiento de tareas de instrucción."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder se centra en la escritura de código."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math se centra en la resolución de problemas en el ámbito de las matemáticas, proporcionando respuestas profesionales a preguntas de alta dificultad."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B es una versión de código abierto, que proporciona una experiencia de conversación optimizada para aplicaciones de diálogo."
+ },
+ "abab5.5-chat": {
+ "description": "Orientado a escenarios de productividad, admite el procesamiento de tareas complejas y la generación eficiente de texto, adecuado para aplicaciones en campos profesionales."
+ },
+ "abab5.5s-chat": {
+ "description": "Diseñado para escenarios de diálogo de personajes en chino, ofrece capacidades de generación de diálogos de alta calidad en chino, adecuado para diversas aplicaciones."
+ },
+ "abab6.5g-chat": {
+ "description": "Diseñado para diálogos de personajes multilingües, admite generación de diálogos de alta calidad en inglés y otros idiomas."
+ },
+ "abab6.5s-chat": {
+ "description": "Adecuado para una amplia gama de tareas de procesamiento de lenguaje natural, incluyendo generación de texto, sistemas de diálogo, etc."
+ },
+ "abab6.5t-chat": {
+ "description": "Optimizado para escenarios de diálogo de personajes en chino, ofrece capacidades de generación de diálogos fluidos y acordes con las expresiones chinas."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Modelo de llamada de función de código abierto de Fireworks, que ofrece capacidades de ejecución de instrucciones sobresalientes y características personalizables."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Firefunction-v2, lanzado por Fireworks, es un modelo de llamada de función de alto rendimiento, desarrollado sobre Llama-3 y optimizado para escenarios como llamadas de función, diálogos y seguimiento de instrucciones."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b es un modelo de lenguaje visual que puede recibir entradas de imagen y texto simultáneamente, entrenado con datos de alta calidad, adecuado para tareas multimodales."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "El modelo de instrucciones Gemma 2 9B, basado en la tecnología anterior de Google, es adecuado para responder preguntas, resumir y razonar en diversas tareas de generación de texto."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "El modelo de instrucciones Llama 3 70B está optimizado para diálogos multilingües y comprensión del lenguaje natural, superando el rendimiento de la mayoría de los modelos competidores."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "El modelo de instrucciones Llama 3 70B (versión HF) es consistente con los resultados de la implementación oficial, adecuado para tareas de seguimiento de instrucciones de alta calidad."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "El modelo de instrucciones Llama 3 8B está optimizado para diálogos y tareas multilingües, ofreciendo un rendimiento excepcional y eficiente."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "El modelo de instrucciones Llama 3 8B (versión HF) es consistente con los resultados de la implementación oficial, ofreciendo alta consistencia y compatibilidad multiplataforma."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "El modelo de instrucciones Llama 3.1 405B, con parámetros de gran escala, es adecuado para tareas complejas y seguimiento de instrucciones en escenarios de alta carga."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "El modelo de instrucciones Llama 3.1 70B ofrece una capacidad excepcional de comprensión y generación de lenguaje, siendo la elección ideal para tareas de diálogo y análisis."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "El modelo de instrucciones Llama 3.1 8B está optimizado para diálogos multilingües, capaz de superar la mayoría de los modelos de código abierto y cerrado en estándares de la industria."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "El modelo de instrucciones Mixtral MoE 8x22B, con parámetros a gran escala y arquitectura de múltiples expertos, soporta de manera integral el procesamiento eficiente de tareas complejas."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "El modelo de instrucciones Mixtral MoE 8x7B, con una arquitectura de múltiples expertos, ofrece un seguimiento y ejecución de instrucciones eficientes."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "El modelo de instrucciones Mixtral MoE 8x7B (versión HF) tiene un rendimiento consistente con la implementación oficial, adecuado para una variedad de escenarios de tareas eficientes."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "El modelo MythoMax L2 13B combina técnicas de fusión innovadoras, destacándose en narración y juegos de rol."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "El modelo de instrucciones Phi 3 Vision es un modelo multimodal ligero, capaz de manejar información visual y textual compleja, con una fuerte capacidad de razonamiento."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "El modelo StarCoder 15.5B soporta tareas de programación avanzadas, con capacidades multilingües mejoradas, adecuado para la generación y comprensión de código complejo."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "El modelo StarCoder 7B está entrenado en más de 80 lenguajes de programación, con una excelente capacidad de completado de código y comprensión del contexto."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "El modelo Yi-Large ofrece una capacidad de procesamiento multilingüe excepcional, adecuado para diversas tareas de generación y comprensión de lenguaje."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "Un modelo multilingüe de 398B parámetros (94B activos), que ofrece una ventana de contexto larga de 256K, llamada a funciones, salida estructurada y generación fundamentada."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "Un modelo multilingüe de 52B parámetros (12B activos), que ofrece una ventana de contexto larga de 256K, llamada a funciones, salida estructurada y generación fundamentada."
+ },
+ "ai21-jamba-instruct": {
+ "description": "Un modelo LLM basado en Mamba de calidad de producción para lograr un rendimiento, calidad y eficiencia de costos de primera clase."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet eleva el estándar de la industria, superando a modelos competidores y a Claude 3 Opus, destacándose en evaluaciones amplias, mientras mantiene la velocidad y costo de nuestros modelos de nivel medio."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku es el modelo más rápido y compacto de Anthropic, ofreciendo una velocidad de respuesta casi instantánea. Puede responder rápidamente a consultas y solicitudes simples. Los clientes podrán construir experiencias de IA sin costuras que imiten la interacción humana. Claude 3 Haiku puede manejar imágenes y devolver salidas de texto, con una ventana de contexto de 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus es el modelo de IA más potente de Anthropic, con un rendimiento de vanguardia en tareas altamente complejas. Puede manejar indicaciones abiertas y escenarios no vistos, con una fluidez y comprensión humana excepcionales. Claude 3 Opus muestra la vanguardia de las posibilidades de la IA generativa. Claude 3 Opus puede manejar imágenes y devolver salidas de texto, con una ventana de contexto de 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Claude 3 Sonnet de Anthropic logra un equilibrio ideal entre inteligencia y velocidad, especialmente adecuado para cargas de trabajo empresariales. Ofrece la máxima utilidad a un costo inferior al de los competidores, diseñado para ser un modelo confiable y duradero, apto para implementaciones de IA a gran escala. Claude 3 Sonnet puede manejar imágenes y devolver salidas de texto, con una ventana de contexto de 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Un modelo rápido, económico y aún muy capaz, que puede manejar una variedad de tareas, incluyendo conversaciones cotidianas, análisis de texto, resúmenes y preguntas y respuestas de documentos."
+ },
+ "anthropic.claude-v2": {
+ "description": "Anthropic muestra un modelo con alta capacidad en una amplia gama de tareas, desde diálogos complejos y generación de contenido creativo hasta el seguimiento detallado de instrucciones."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "La versión actualizada de Claude 2, con el doble de ventana de contexto, así como mejoras en la fiabilidad, tasa de alucinaciones y precisión basada en evidencia en contextos de documentos largos y RAG."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku es el modelo más rápido y compacto de Anthropic, diseñado para lograr respuestas casi instantáneas. Tiene un rendimiento de orientación rápido y preciso."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus es el modelo más potente de Anthropic para manejar tareas altamente complejas. Destaca en rendimiento, inteligencia, fluidez y comprensión."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet ofrece capacidades que superan a Opus y una velocidad más rápida que Sonnet, manteniendo el mismo precio que Sonnet. Sonnet es especialmente hábil en programación, ciencia de datos, procesamiento visual y tareas de agente."
+ },
+ "aya": {
+ "description": "Aya 23 es un modelo multilingüe lanzado por Cohere, que admite 23 idiomas, facilitando aplicaciones de lenguaje diversas."
+ },
+ "aya:35b": {
+ "description": "Aya 23 es un modelo multilingüe lanzado por Cohere, que admite 23 idiomas, facilitando aplicaciones de lenguaje diversas."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3 está diseñado para juegos de rol y acompañamiento emocional, soportando memoria de múltiples rondas y diálogos personalizados, con aplicaciones amplias."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o es un modelo dinámico que se actualiza en tiempo real para mantener la versión más actual. Combina una poderosa comprensión y generación de lenguaje, adecuado para aplicaciones a gran escala, incluyendo servicio al cliente, educación y soporte técnico."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 ofrece avances en capacidades clave para empresas, incluyendo un contexto líder en la industria de 200K tokens, una reducción significativa en la tasa de alucinaciones del modelo, indicaciones del sistema y una nueva función de prueba: llamadas a herramientas."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 ofrece avances en capacidades clave para empresas, incluyendo un contexto líder en la industria de 200K tokens, una reducción significativa en la tasa de alucinaciones del modelo, indicaciones del sistema y una nueva función de prueba: llamadas a herramientas."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet ofrece capacidades que superan a Opus y una velocidad más rápida que Sonnet, manteniendo el mismo precio que Sonnet. Sonnet es especialmente bueno en programación, ciencia de datos, procesamiento visual y tareas de agentes."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku es el modelo más rápido y compacto de Anthropic, diseñado para lograr respuestas casi instantáneas. Tiene un rendimiento de orientación rápido y preciso."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus es el modelo más potente de Anthropic para manejar tareas altamente complejas. Destaca en rendimiento, inteligencia, fluidez y comprensión."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet proporciona un equilibrio ideal entre inteligencia y velocidad para cargas de trabajo empresariales. Ofrece la máxima utilidad a un costo más bajo, siendo fiable y adecuado para implementaciones a gran escala."
+ },
+ "claude-instant-1.2": {
+ "description": "El modelo de Anthropic está diseñado para generación de texto de baja latencia y alto rendimiento, soportando la generación de cientos de páginas de texto."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 es un potente asistente de programación AI, que admite preguntas y respuestas inteligentes y autocompletado de código en varios lenguajes de programación, mejorando la eficiencia del desarrollo."
+ },
+ "codegemma": {
+ "description": "CodeGemma es un modelo de lenguaje ligero especializado en diversas tareas de programación, que admite iteraciones rápidas e integración."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma es un modelo de lenguaje ligero especializado en diversas tareas de programación, que admite iteraciones rápidas e integración."
+ },
+ "codellama": {
+ "description": "Code Llama es un LLM enfocado en la generación y discusión de código, combinando un amplio soporte para lenguajes de programación, adecuado para entornos de desarrolladores."
+ },
+ "codellama:13b": {
+ "description": "Code Llama es un LLM enfocado en la generación y discusión de código, combinando un amplio soporte para lenguajes de programación, adecuado para entornos de desarrolladores."
+ },
+ "codellama:34b": {
+ "description": "Code Llama es un LLM enfocado en la generación y discusión de código, combinando un amplio soporte para lenguajes de programación, adecuado para entornos de desarrolladores."
+ },
+ "codellama:70b": {
+ "description": "Code Llama es un LLM enfocado en la generación y discusión de código, combinando un amplio soporte para lenguajes de programación, adecuado para entornos de desarrolladores."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 es un modelo de lenguaje a gran escala entrenado con una gran cantidad de datos de código, diseñado para resolver tareas de programación complejas."
+ },
+ "codestral": {
+ "description": "Codestral es el primer modelo de código de Mistral AI, que proporciona un excelente soporte para tareas de generación de código."
+ },
+ "codestral-latest": {
+ "description": "Codestral es un modelo generativo de vanguardia enfocado en la generación de código, optimizado para tareas de completado de código y relleno intermedio."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B es un modelo diseñado para seguir instrucciones, diálogos y programación."
+ },
+ "cohere-command-r": {
+ "description": "Command R es un modelo generativo escalable dirigido a RAG y uso de herramientas para habilitar IA a escala de producción para empresas."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ es un modelo optimizado para RAG de última generación diseñado para abordar cargas de trabajo de nivel empresarial."
+ },
+ "command-r": {
+ "description": "Command R es un LLM optimizado para tareas de diálogo y contexto largo, especialmente adecuado para interacciones dinámicas y gestión del conocimiento."
+ },
+ "command-r-plus": {
+ "description": "Command R+ es un modelo de lenguaje de gran tamaño de alto rendimiento, diseñado para escenarios empresariales reales y aplicaciones complejas."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct ofrece capacidades de procesamiento de instrucciones de alta fiabilidad, soportando aplicaciones en múltiples industrias."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 combina las excelentes características de versiones anteriores, mejorando la capacidad general y de codificación."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B es un modelo avanzado entrenado para diálogos de alta complejidad."
+ },
+ "deepseek-chat": {
+ "description": "Un nuevo modelo de código abierto que fusiona capacidades generales y de codificación, que no solo conserva la capacidad de diálogo general del modelo Chat original y la potente capacidad de procesamiento de código del modelo Coder, sino que también se alinea mejor con las preferencias humanas. Además, DeepSeek-V2.5 ha logrado mejoras significativas en tareas de escritura, seguimiento de instrucciones y más."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 es un modelo de código de expertos híbrido de código abierto, que destaca en tareas de codificación, comparable a GPT4-Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 es un modelo de código de expertos híbrido de código abierto, que destaca en tareas de codificación, comparable a GPT4-Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 es un modelo de lenguaje Mixture-of-Experts eficiente, adecuado para necesidades de procesamiento económico."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B es el modelo de código de diseño de DeepSeek, que ofrece una potente capacidad de generación de código."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Un nuevo modelo de código abierto que fusiona capacidades generales y de codificación, no solo conserva la capacidad de diálogo general del modelo Chat original y la potente capacidad de procesamiento de código del modelo Coder, sino que también se alinea mejor con las preferencias humanas. Además, DeepSeek-V2.5 ha logrado mejoras significativas en tareas de escritura, seguimiento de instrucciones y más."
+ },
+ "emohaa": {
+ "description": "Emohaa es un modelo psicológico con capacidades de consulta profesional, ayudando a los usuarios a comprender problemas emocionales."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Ajuste) ofrece un rendimiento estable y ajustable, siendo una opción ideal para soluciones de tareas complejas."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Ajuste) proporciona un excelente soporte multimodal, centrado en la resolución efectiva de tareas complejas."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro es el modelo de IA de alto rendimiento de Google, diseñado para la escalabilidad en una amplia gama de tareas."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 es un modelo multimodal eficiente, que admite la escalabilidad para aplicaciones amplias."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002 es un modelo multimodal eficiente, que admite una amplia gama de aplicaciones."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 está diseñado para manejar escenarios de tareas a gran escala, ofreciendo una velocidad de procesamiento inigualable."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924 es el último modelo experimental, con mejoras significativas en el rendimiento tanto en casos de uso de texto como multimodal."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 ofrece capacidades de procesamiento multimodal optimizadas, adecuadas para una variedad de escenarios de tareas complejas."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash es el último modelo de IA multimodal de Google, con capacidades de procesamiento rápido, que admite entradas de texto, imagen y video, adecuado para la escalabilidad eficiente en diversas tareas."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 es una solución de IA multimodal escalable, que admite una amplia gama de tareas complejas."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002 es el último modelo listo para producción, que ofrece una calidad de salida superior, especialmente en tareas matemáticas, contextos largos y tareas visuales."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 ofrece una excelente capacidad de procesamiento multimodal, brindando mayor flexibilidad para el desarrollo de aplicaciones."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 combina las últimas tecnologías de optimización, ofreciendo una capacidad de procesamiento de datos multimodal más eficiente."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro admite hasta 2 millones de tokens, siendo una opción ideal para modelos multimodales de tamaño medio, adecuados para un soporte multifacético en tareas complejas."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B es adecuado para el procesamiento de tareas de pequeña y mediana escala, combinando rentabilidad."
+ },
+ "gemma2": {
+ "description": "Gemma 2 es un modelo eficiente lanzado por Google, que abarca una variedad de escenarios de aplicación desde aplicaciones pequeñas hasta procesamiento de datos complejos."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B es un modelo optimizado para la integración de tareas y herramientas específicas."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 es un modelo eficiente lanzado por Google, que abarca una variedad de escenarios de aplicación desde aplicaciones pequeñas hasta procesamiento de datos complejos."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 es un modelo eficiente lanzado por Google, que abarca una variedad de escenarios de aplicación desde aplicaciones pequeñas hasta procesamiento de datos complejos."
+ },
+ "general": {
+ "description": "Spark Lite es un modelo de lenguaje grande y ligero, con una latencia extremadamente baja y una capacidad de procesamiento eficiente, completamente gratuito y abierto, que soporta funciones de búsqueda en línea en tiempo real. Su característica de respuesta rápida lo hace destacar en aplicaciones de inferencia y ajuste de modelos en dispositivos de baja potencia, brindando a los usuarios una excelente relación costo-beneficio y una experiencia inteligente, especialmente en escenarios de preguntas y respuestas, generación de contenido y búsqueda."
+ },
+ "generalv3": {
+ "description": "Spark Pro es un modelo de lenguaje grande de alto rendimiento optimizado para campos profesionales, enfocado en matemáticas, programación, medicina, educación y más, y soporta búsqueda en línea y plugins integrados como clima y fecha. Su modelo optimizado muestra un rendimiento excepcional y eficiente en preguntas y respuestas complejas, comprensión del lenguaje y creación de textos de alto nivel, siendo la opción ideal para escenarios de aplicación profesional."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max es la versión más completa, soportando búsqueda en línea y numerosos plugins integrados. Su capacidad central completamente optimizada, así como la configuración de roles del sistema y la función de llamada a funciones, hacen que su rendimiento en diversos escenarios de aplicación complejos sea excepcional y sobresaliente."
+ },
+ "glm-4": {
+ "description": "GLM-4 es la versión anterior lanzada en enero de 2024, actualmente ha sido reemplazada por el más potente GLM-4-0520."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 es la última versión del modelo, diseñada para tareas altamente complejas y diversas, con un rendimiento excepcional."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air es una versión de alto costo-beneficio, con un rendimiento cercano al GLM-4, ofreciendo velocidad y precios asequibles."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX ofrece una versión eficiente de GLM-4-Air, con velocidades de inferencia de hasta 2.6 veces."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools es un modelo de agente multifuncional, optimizado para soportar planificación de instrucciones complejas y llamadas a herramientas, como navegación web, interpretación de código y generación de texto, adecuado para la ejecución de múltiples tareas."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash es la opción ideal para tareas simples, con la velocidad más rápida y el precio más bajo."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long admite entradas de texto extremadamente largas, adecuado para tareas de memoria y procesamiento de documentos a gran escala."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus, como buque insignia de alta inteligencia, tiene una poderosa capacidad para manejar textos largos y tareas complejas, con un rendimiento mejorado en general."
+ },
+ "glm-4v": {
+ "description": "GLM-4V proporciona una poderosa capacidad de comprensión e inferencia de imágenes, soportando diversas tareas visuales."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus tiene la capacidad de entender contenido de video y múltiples imágenes, adecuado para tareas multimodales."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 ofrece capacidades de procesamiento multimodal optimizadas, adecuadas para una variedad de escenarios de tareas complejas."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 combina las últimas tecnologías de optimización, ofreciendo una capacidad de procesamiento de datos multimodal más eficiente."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 continúa con el concepto de diseño ligero y eficiente."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 es una serie de modelos de texto de código abierto y ligeros de Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 es una serie de modelos de texto de código abierto y livianos de Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) ofrece capacidades básicas de procesamiento de instrucciones, adecuado para aplicaciones ligeras."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo, adecuado para diversas tareas de generación y comprensión de texto, actualmente apunta a gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo, adecuado para diversas tareas de generación y comprensión de texto, actualmente apunta a gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo, adecuado para diversas tareas de generación y comprensión de texto, actualmente apunta a gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo, adecuado para diversas tareas de generación y comprensión de texto, actualmente apunta a gpt-3.5-turbo-0125."
+ },
+ "gpt-4": {
+ "description": "GPT-4 ofrece una ventana de contexto más grande, capaz de manejar entradas de texto más largas, adecuado para escenarios que requieren integración de información amplia y análisis de datos."
+ },
+ "gpt-4-0125-preview": {
+ "description": "El último modelo GPT-4 Turbo cuenta con funciones visuales. Ahora, las solicitudes visuales pueden utilizar el modo JSON y llamadas a funciones. GPT-4 Turbo es una versión mejorada que ofrece soporte rentable para tareas multimodales. Encuentra un equilibrio entre precisión y eficiencia, adecuado para aplicaciones que requieren interacción en tiempo real."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4 ofrece una ventana de contexto más grande, capaz de manejar entradas de texto más largas, adecuado para escenarios que requieren integración de información amplia y análisis de datos."
+ },
+ "gpt-4-1106-preview": {
+ "description": "El último modelo GPT-4 Turbo cuenta con funciones visuales. Ahora, las solicitudes visuales pueden utilizar el modo JSON y llamadas a funciones. GPT-4 Turbo es una versión mejorada que ofrece soporte rentable para tareas multimodales. Encuentra un equilibrio entre precisión y eficiencia, adecuado para aplicaciones que requieren interacción en tiempo real."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "El último modelo GPT-4 Turbo cuenta con funciones visuales. Ahora, las solicitudes visuales pueden utilizar el modo JSON y llamadas a funciones. GPT-4 Turbo es una versión mejorada que ofrece soporte rentable para tareas multimodales. Encuentra un equilibrio entre precisión y eficiencia, adecuado para aplicaciones que requieren interacción en tiempo real."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4 ofrece una ventana de contexto más grande, capaz de manejar entradas de texto más largas, adecuado para escenarios que requieren integración de información amplia y análisis de datos."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4 ofrece una ventana de contexto más grande, capaz de manejar entradas de texto más largas, adecuado para escenarios que requieren integración de información amplia y análisis de datos."
+ },
+ "gpt-4-turbo": {
+ "description": "El último modelo GPT-4 Turbo cuenta con funciones visuales. Ahora, las solicitudes visuales pueden utilizar el modo JSON y llamadas a funciones. GPT-4 Turbo es una versión mejorada que ofrece soporte rentable para tareas multimodales. Encuentra un equilibrio entre precisión y eficiencia, adecuado para aplicaciones que requieren interacción en tiempo real."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "El último modelo GPT-4 Turbo cuenta con funciones visuales. Ahora, las solicitudes visuales pueden utilizar el modo JSON y llamadas a funciones. GPT-4 Turbo es una versión mejorada que ofrece soporte rentable para tareas multimodales. Encuentra un equilibrio entre precisión y eficiencia, adecuado para aplicaciones que requieren interacción en tiempo real."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "El último modelo GPT-4 Turbo cuenta con funciones visuales. Ahora, las solicitudes visuales pueden utilizar el modo JSON y llamadas a funciones. GPT-4 Turbo es una versión mejorada que ofrece soporte rentable para tareas multimodales. Encuentra un equilibrio entre precisión y eficiencia, adecuado para aplicaciones que requieren interacción en tiempo real."
+ },
+ "gpt-4-vision-preview": {
+ "description": "El último modelo GPT-4 Turbo cuenta con funciones visuales. Ahora, las solicitudes visuales pueden utilizar el modo JSON y llamadas a funciones. GPT-4 Turbo es una versión mejorada que ofrece soporte rentable para tareas multimodales. Encuentra un equilibrio entre precisión y eficiencia, adecuado para aplicaciones que requieren interacción en tiempo real."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o es un modelo dinámico que se actualiza en tiempo real para mantener la versión más actual. Combina una poderosa comprensión y generación de lenguaje, adecuado para aplicaciones a gran escala, incluyendo servicio al cliente, educación y soporte técnico."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o es un modelo dinámico que se actualiza en tiempo real para mantener la versión más actual. Combina una poderosa comprensión y generación de lenguaje, adecuado para aplicaciones a gran escala, incluyendo servicio al cliente, educación y soporte técnico."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o es un modelo dinámico que se actualiza en tiempo real para mantener la versión más actual. Combina una poderosa comprensión y generación de lenguaje, adecuado para aplicaciones a gran escala, incluyendo servicio al cliente, educación y soporte técnico."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini es el último modelo lanzado por OpenAI después de GPT-4 Omni, que admite entradas de texto e imagen y genera texto como salida. Como su modelo más avanzado de menor tamaño, es mucho más económico que otros modelos de vanguardia recientes y es más de un 60% más barato que GPT-3.5 Turbo. Mantiene una inteligencia de vanguardia mientras ofrece una relación calidad-precio significativa. GPT-4o mini obtuvo un puntaje del 82% en la prueba MMLU y actualmente se clasifica por encima de GPT-4 en preferencias de chat."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B es un modelo de lenguaje que combina creatividad e inteligencia, fusionando múltiples modelos de vanguardia."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "El innovador modelo de código abierto InternLM2.5 mejora la inteligencia del diálogo mediante un gran número de parámetros."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 ofrece soluciones de diálogo inteligente en múltiples escenarios."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "El modelo Llama 3.1 70B Instruct, con 70B de parámetros, puede ofrecer un rendimiento excepcional en tareas de generación de texto y de instrucciones a gran escala."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B ofrece una capacidad de razonamiento AI más potente, adecuada para aplicaciones complejas, soportando un procesamiento computacional extenso y garantizando eficiencia y precisión."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B es un modelo de alto rendimiento que ofrece una rápida capacidad de generación de texto, ideal para aplicaciones que requieren eficiencia a gran escala y rentabilidad."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "El modelo Llama 3.1 8B Instruct, con 8B de parámetros, soporta la ejecución eficiente de tareas de instrucciones visuales, ofreciendo una excelente capacidad de generación de texto."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "El modelo Llama 3.1 Sonar Huge Online, con 405B de parámetros, soporta una longitud de contexto de aproximadamente 127,000 tokens, diseñado para aplicaciones de chat en línea complejas."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "El modelo Llama 3.1 Sonar Large Chat, con 70B de parámetros, soporta una longitud de contexto de aproximadamente 127,000 tokens, adecuado para tareas de chat fuera de línea complejas."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "El modelo Llama 3.1 Sonar Large Online, con 70B de parámetros, soporta una longitud de contexto de aproximadamente 127,000 tokens, adecuado para tareas de chat de alta capacidad y diversidad."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "El modelo Llama 3.1 Sonar Small Chat, con 8B de parámetros, está diseñado para chat fuera de línea, soportando una longitud de contexto de aproximadamente 127,000 tokens."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "El modelo Llama 3.1 Sonar Small Online, con 8B de parámetros, soporta una longitud de contexto de aproximadamente 127,000 tokens, diseñado para chat en línea, capaz de manejar eficientemente diversas interacciones textuales."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B proporciona una capacidad de procesamiento de complejidad inigualable, diseñado a medida para proyectos de alta demanda."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B ofrece un rendimiento de razonamiento de alta calidad, adecuado para diversas necesidades de aplicación."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use ofrece una potente capacidad de invocación de herramientas, apoyando el procesamiento eficiente de tareas complejas."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use es un modelo optimizado para el uso eficiente de herramientas, que admite cálculos paralelos rápidos."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 es el modelo líder lanzado por Meta, que admite hasta 405B de parámetros, aplicable en diálogos complejos, traducción multilingüe y análisis de datos."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 es el modelo líder lanzado por Meta, que admite hasta 405B de parámetros, aplicable en diálogos complejos, traducción multilingüe y análisis de datos."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 es el modelo líder lanzado por Meta, que admite hasta 405B de parámetros, aplicable en diálogos complejos, traducción multilingüe y análisis de datos."
+ },
+ "llava": {
+ "description": "LLaVA es un modelo multimodal que combina un codificador visual y Vicuna, utilizado para una poderosa comprensión visual y lingüística."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B proporciona capacidades de procesamiento visual integradas, generando salidas complejas a partir de entradas de información visual."
+ },
+ "llava:13b": {
+ "description": "LLaVA es un modelo multimodal que combina un codificador visual y Vicuna, utilizado para una poderosa comprensión visual y lingüística."
+ },
+ "llava:34b": {
+ "description": "LLaVA es un modelo multimodal que combina un codificador visual y Vicuna, utilizado para una poderosa comprensión visual y lingüística."
+ },
+ "mathstral": {
+ "description": "MathΣtral está diseñado para la investigación científica y el razonamiento matemático, proporcionando capacidades de cálculo efectivas y explicación de resultados."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Un poderoso modelo de 70 mil millones de parámetros que sobresale en razonamiento, codificación y amplias aplicaciones de lenguaje."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Un modelo versátil de 8 mil millones de parámetros optimizado para tareas de diálogo y generación de texto."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Los modelos de texto solo ajustados por instrucciones Llama 3.1 están optimizados para casos de uso de diálogo multilingüe y superan muchos de los modelos de chat de código abierto y cerrados disponibles en los benchmarks de la industria."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Los modelos de texto solo ajustados por instrucciones Llama 3.1 están optimizados para casos de uso de diálogo multilingüe y superan muchos de los modelos de chat de código abierto y cerrados disponibles en los benchmarks de la industria."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Los modelos de texto solo ajustados por instrucciones Llama 3.1 están optimizados para casos de uso de diálogo multilingüe y superan muchos de los modelos de chat de código abierto y cerrados disponibles en los benchmarks de la industria."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) ofrece una excelente capacidad de procesamiento de lenguaje y una experiencia de interacción sobresaliente."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) es un modelo de chat potente, que soporta necesidades de conversación complejas."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) ofrece soporte multilingüe, abarcando un amplio conocimiento en diversos campos."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite es ideal para entornos que requieren alto rendimiento y baja latencia."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo ofrece una capacidad excepcional de comprensión y generación de lenguaje, ideal para las tareas de cálculo más exigentes."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite es adecuado para entornos con recursos limitados, ofreciendo un excelente equilibrio de rendimiento."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo es un modelo de lenguaje de alto rendimiento, adecuado para una amplia gama de escenarios de aplicación."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B es un potente modelo de preentrenamiento y ajuste de instrucciones."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "El modelo Llama 3.1 Turbo de 405B proporciona un soporte de contexto de gran capacidad para el procesamiento de grandes datos, destacándose en aplicaciones de inteligencia artificial a gran escala."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B proporciona soporte de conversación eficiente en múltiples idiomas."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "El modelo Llama 3.1 70B está finamente ajustado para aplicaciones de alta carga, cuantificado a FP8 para ofrecer una capacidad de cálculo y precisión más eficientes, asegurando un rendimiento excepcional en escenarios complejos."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 ofrece soporte multilingüe y es uno de los modelos generativos líderes en la industria."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "El modelo Llama 3.1 8B utiliza cuantificación FP8, soportando hasta 131,072 tokens de contexto, destacándose entre los modelos de código abierto, ideal para tareas complejas y superando muchos estándares de la industria."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct está optimizado para escenarios de conversación de alta calidad, destacándose en diversas evaluaciones humanas."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct optimiza los escenarios de conversación de alta calidad, con un rendimiento superior a muchos modelos cerrados."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct es la última versión lanzada por Meta, optimizada para generar diálogos de alta calidad, superando a muchos modelos cerrados líderes."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct está diseñado para conversaciones de alta calidad, destacándose en evaluaciones humanas, especialmente en escenarios de alta interacción."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct es la última versión lanzada por Meta, optimizada para escenarios de conversación de alta calidad, superando a muchos modelos cerrados líderes."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 ofrece soporte multilingüe y es uno de los modelos generativos más avanzados de la industria."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct es el modelo más grande y potente de la serie Llama 3.1 Instruct, un modelo de generación de datos de diálogo y razonamiento altamente avanzado, que también puede servir como base para un preentrenamiento o ajuste fino especializado en dominios específicos. Los modelos de lenguaje de gran tamaño (LLMs) multilingües que ofrece Llama 3.1 son un conjunto de modelos generativos preentrenados y ajustados por instrucciones, que incluyen tamaños de 8B, 70B y 405B (entrada/salida de texto). Los modelos de texto ajustados por instrucciones de Llama 3.1 (8B, 70B, 405B) están optimizados para casos de uso de diálogo multilingüe y superan a muchos modelos de chat de código abierto disponibles en pruebas de referencia de la industria. Llama 3.1 está diseñado para usos comerciales y de investigación en múltiples idiomas. Los modelos de texto ajustados por instrucciones son adecuados para chats similares a asistentes, mientras que los modelos preentrenados pueden adaptarse a diversas tareas de generación de lenguaje natural. El modelo Llama 3.1 también admite el uso de su salida para mejorar otros modelos, incluida la generación de datos sintéticos y el refinamiento. Llama 3.1 es un modelo de lenguaje autorregresivo que utiliza una arquitectura de transformador optimizada. Las versiones ajustadas utilizan ajuste fino supervisado (SFT) y aprendizaje por refuerzo con retroalimentación humana (RLHF) para alinearse con las preferencias humanas de ayuda y seguridad."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "La versión actualizada de Meta Llama 3.1 70B Instruct incluye una longitud de contexto ampliada de 128K, multilingüismo y capacidades de razonamiento mejoradas. Los modelos de lenguaje a gran escala (LLMs) de Llama 3.1 son un conjunto de modelos generativos preentrenados y ajustados por instrucciones, que incluyen tamaños de 8B, 70B y 405B (entrada/salida de texto). Los modelos de texto ajustados por instrucciones de Llama 3.1 (8B, 70B, 405B) están optimizados para casos de uso de diálogo multilingüe y superan muchos modelos de chat de código abierto disponibles en pruebas de referencia de la industria comunes. Llama 3.1 está diseñado para usos comerciales y de investigación en múltiples idiomas. Los modelos de texto ajustados por instrucciones son adecuados para chats similares a asistentes, mientras que los modelos preentrenados pueden adaptarse a diversas tareas de generación de lenguaje natural. El modelo Llama 3.1 también admite el uso de su salida de modelo para mejorar otros modelos, incluyendo la generación de datos sintéticos y refinamiento. Llama 3.1 es un modelo de lenguaje autoregresivo utilizando una arquitectura de transformador optimizada. La versión ajustada utiliza ajuste fino supervisado (SFT) y aprendizaje por refuerzo con retroalimentación humana (RLHF) para alinearse con las preferencias humanas de utilidad y seguridad."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "La versión actualizada de Meta Llama 3.1 8B Instruct incluye una longitud de contexto ampliada de 128K, multilingüismo y capacidades de razonamiento mejoradas. Los modelos de lenguaje a gran escala (LLMs) de Llama 3.1 son un conjunto de modelos generativos preentrenados y ajustados por instrucciones, que incluyen tamaños de 8B, 70B y 405B (entrada/salida de texto). Los modelos de texto ajustados por instrucciones de Llama 3.1 (8B, 70B, 405B) están optimizados para casos de uso de diálogo multilingüe y superan muchos modelos de chat de código abierto disponibles en pruebas de referencia de la industria comunes. Llama 3.1 está diseñado para usos comerciales y de investigación en múltiples idiomas. Los modelos de texto ajustados por instrucciones son adecuados para chats similares a asistentes, mientras que los modelos preentrenados pueden adaptarse a diversas tareas de generación de lenguaje natural. El modelo Llama 3.1 también admite el uso de su salida de modelo para mejorar otros modelos, incluyendo la generación de datos sintéticos y refinamiento. Llama 3.1 es un modelo de lenguaje autoregresivo utilizando una arquitectura de transformador optimizada. La versión ajustada utiliza ajuste fino supervisado (SFT) y aprendizaje por refuerzo con retroalimentación humana (RLHF) para alinearse con las preferencias humanas de utilidad y seguridad."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 es un modelo de lenguaje de gran tamaño (LLM) abierto dirigido a desarrolladores, investigadores y empresas, diseñado para ayudarles a construir, experimentar y escalar de manera responsable sus ideas de IA generativa. Como parte de un sistema base para la innovación de la comunidad global, es ideal para la creación de contenido, IA de diálogo, comprensión del lenguaje, I+D y aplicaciones empresariales."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 es un modelo de lenguaje de gran tamaño (LLM) abierto dirigido a desarrolladores, investigadores y empresas, diseñado para ayudarles a construir, experimentar y escalar de manera responsable sus ideas de IA generativa. Como parte de un sistema base para la innovación de la comunidad global, es ideal para dispositivos de borde con recursos y capacidades computacionales limitadas, así como para tiempos de entrenamiento más rápidos."
+ },
+ "microsoft/wizardlm-2-7b": {
+ "description": "WizardLM 2 7B es el modelo ligero y rápido más reciente de Microsoft AI, con un rendimiento comparable al de modelos líderes de código abierto existentes hasta 10 veces más grandes."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B es el modelo Wizard más avanzado de Microsoft AI, mostrando un rendimiento extremadamente competitivo."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V es la nueva generación de modelos multimodales lanzada por OpenBMB, que cuenta con una excelente capacidad de reconocimiento OCR y comprensión multimodal, soportando una amplia gama de escenarios de aplicación."
+ },
+ "mistral": {
+ "description": "Mistral es un modelo de 7B lanzado por Mistral AI, adecuado para necesidades de procesamiento de lenguaje variables."
+ },
+ "mistral-large": {
+ "description": "Mistral Large es el modelo insignia de Mistral, combinando capacidades de generación de código, matemáticas y razonamiento, soportando una ventana de contexto de 128k."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) es un modelo de lenguaje grande (LLM) avanzado con capacidades de razonamiento, conocimiento y codificación de última generación."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large es el modelo insignia, especializado en tareas multilingües, razonamiento complejo y generación de código, ideal para aplicaciones de alta gama."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo, desarrollado en colaboración entre Mistral AI y NVIDIA, es un modelo de 12B de alto rendimiento."
+ },
+ "mistral-small": {
+ "description": "Mistral Small se puede utilizar en cualquier tarea basada en lenguaje que requiera alta eficiencia y baja latencia."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small es una opción rentable, rápida y confiable, adecuada para casos de uso como traducción, resumen y análisis de sentimientos."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct es conocido por su alto rendimiento, adecuado para diversas tareas de lenguaje."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B es un modelo ajustado bajo demanda, proporcionando respuestas optimizadas para tareas."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 ofrece una capacidad de cálculo eficiente y comprensión del lenguaje natural, adecuado para una amplia gama de aplicaciones."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) es un modelo de lenguaje de gran tamaño, que soporta demandas de procesamiento extremadamente altas."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B es un modelo de expertos dispersos preentrenado, utilizado para tareas de texto de uso general."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct es un modelo de estándar industrial de alto rendimiento, optimizado para velocidad y soporte de contexto largo."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo es un modelo de 12B parámetros con soporte multilingüe y programación de alto rendimiento."
+ },
+ "mixtral": {
+ "description": "Mixtral es el modelo de expertos de Mistral AI, con pesos de código abierto, que ofrece soporte en generación de código y comprensión del lenguaje."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B ofrece una capacidad de cálculo paralelo de alta tolerancia a fallos, adecuada para tareas complejas."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral es el modelo de expertos de Mistral AI, con pesos de código abierto, que ofrece soporte en generación de código y comprensión del lenguaje."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K es un modelo con capacidad de procesamiento de contexto ultra largo, adecuado para generar textos extensos, satisfaciendo las demandas de tareas de generación complejas, capaz de manejar hasta 128,000 tokens, ideal para aplicaciones en investigación, académicas y generación de documentos grandes."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K ofrece capacidad de procesamiento de contexto de longitud media, capaz de manejar 32,768 tokens, especialmente adecuado para generar diversos documentos largos y diálogos complejos, aplicable en creación de contenido, generación de informes y sistemas de diálogo."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K está diseñado para tareas de generación de texto corto, con un rendimiento de procesamiento eficiente, capaz de manejar 8,192 tokens, ideal para diálogos breves, toma de notas y generación rápida de contenido."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B es una versión mejorada de Nous Hermes 2, que incluye los conjuntos de datos más recientes desarrollados internamente."
+ },
+ "o1-mini": {
+ "description": "o1-mini es un modelo de inferencia rápido y rentable diseñado para aplicaciones de programación, matemáticas y ciencias. Este modelo tiene un contexto de 128K y una fecha de corte de conocimiento en octubre de 2023."
+ },
+ "o1-preview": {
+ "description": "o1 es el nuevo modelo de inferencia de OpenAI, adecuado para tareas complejas que requieren un amplio conocimiento general. Este modelo tiene un contexto de 128K y una fecha de corte de conocimiento en octubre de 2023."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba es un modelo de lenguaje Mamba 2 enfocado en la generación de código, que proporciona un fuerte apoyo para tareas avanzadas de codificación y razonamiento."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B es un modelo compacto pero de alto rendimiento, especializado en el procesamiento por lotes y tareas simples, como clasificación y generación de texto, con buenas capacidades de razonamiento."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo es un modelo de 12B desarrollado en colaboración con Nvidia, que ofrece un rendimiento de razonamiento y codificación excepcional, fácil de integrar y reemplazar."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B es un modelo de expertos más grande, enfocado en tareas complejas, que ofrece una excelente capacidad de razonamiento y un mayor rendimiento."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B es un modelo de expertos dispersos que utiliza múltiples parámetros para mejorar la velocidad de razonamiento, adecuado para el procesamiento de tareas de múltiples idiomas y generación de código."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o es un modelo dinámico que se actualiza en tiempo real para mantener la versión más actual. Combina una poderosa capacidad de comprensión y generación de lenguaje, adecuado para escenarios de aplicación a gran escala, incluyendo servicio al cliente, educación y soporte técnico."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini es el modelo más reciente de OpenAI, lanzado después de GPT-4 Omni, que admite entradas de texto e imagen y genera texto como salida. Como su modelo más avanzado de tamaño pequeño, es mucho más económico que otros modelos de vanguardia recientes y más de un 60% más barato que GPT-3.5 Turbo. Mantiene una inteligencia de vanguardia mientras ofrece una relación calidad-precio notable. GPT-4o mini obtuvo un puntaje del 82% en la prueba MMLU y actualmente se clasifica por encima de GPT-4 en preferencias de chat."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini es un modelo de inferencia rápido y rentable diseñado para aplicaciones de programación, matemáticas y ciencias. Este modelo tiene un contexto de 128K y una fecha de corte de conocimiento en octubre de 2023."
+ },
+ "openai/o1-preview": {
+ "description": "o1 es el nuevo modelo de inferencia de OpenAI, adecuado para tareas complejas que requieren un amplio conocimiento general. Este modelo tiene un contexto de 128K y una fecha de corte de conocimiento en octubre de 2023."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B es una familia de modelos de lenguaje de código abierto ajustada mediante la estrategia de 'C-RLFT (ajuste fino de refuerzo condicional)'."
+ },
+ "openrouter/auto": {
+ "description": "Según la longitud del contexto, el tema y la complejidad, tu solicitud se enviará a Llama 3 70B Instruct, Claude 3.5 Sonnet (automoderado) o GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 es un modelo abierto ligero lanzado por Microsoft, adecuado para una integración eficiente y razonamiento de conocimiento a gran escala."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 es un modelo abierto ligero lanzado por Microsoft, adecuado para una integración eficiente y razonamiento de conocimiento a gran escala."
+ },
+ "pixtral-12b-2409": {
+ "description": "El modelo Pixtral muestra una fuerte capacidad en tareas como comprensión de gráficos e imágenes, preguntas y respuestas de documentos, razonamiento multimodal y seguimiento de instrucciones, capaz de ingerir imágenes en resolución y proporción natural, y manejar una cantidad arbitraria de imágenes en una ventana de contexto larga de hasta 128K tokens."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "El modelo de código Tongyi Qwen."
+ },
+ "qwen-long": {
+ "description": "Qwen es un modelo de lenguaje a gran escala que admite contextos de texto largos y funciones de conversación basadas en documentos largos y múltiples."
+ },
+ "qwen-math-plus-latest": {
+ "description": "El modelo de matemáticas Tongyi Qwen está diseñado específicamente para resolver problemas matemáticos."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "El modelo de matemáticas Tongyi Qwen está diseñado específicamente para resolver problemas matemáticos."
+ },
+ "qwen-max-latest": {
+ "description": "El modelo de lenguaje a gran escala Tongyi Qwen de nivel de cientos de miles de millones, que admite entradas en diferentes idiomas como chino e inglés, es el modelo API detrás de la versión del producto Tongyi Qwen 2.5."
+ },
+ "qwen-plus-latest": {
+ "description": "La versión mejorada del modelo de lenguaje a gran escala Tongyi Qwen, que admite entradas en diferentes idiomas como chino e inglés."
+ },
+ "qwen-turbo-latest": {
+ "description": "El modelo de lenguaje a gran escala Tongyi Qwen, que admite entradas en diferentes idiomas como chino e inglés."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Qwen VL admite formas de interacción flexibles, incluyendo múltiples imágenes, preguntas y respuestas en múltiples rondas, y capacidades creativas."
+ },
+ "qwen-vl-max": {
+ "description": "Qwen es un modelo de lenguaje visual a gran escala. En comparación con la versión mejorada, mejora aún más la capacidad de razonamiento visual y la capacidad de seguir instrucciones, proporcionando un mayor nivel de percepción y cognición visual."
+ },
+ "qwen-vl-plus": {
+ "description": "Qwen es una versión mejorada del modelo de lenguaje visual a gran escala. Mejora significativamente la capacidad de reconocimiento de detalles y de texto, admite imágenes con resolución de más de un millón de píxeles y proporciones de aspecto de cualquier tamaño."
+ },
+ "qwen-vl-v1": {
+ "description": "Inicializado a partir del modelo de lenguaje Qwen-7B, con un modelo de imagen añadido; es un modelo preentrenado con una resolución de entrada de imagen de 448."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 es una nueva serie de modelos de lenguaje de gran tamaño, con una mayor capacidad de comprensión y generación."
+ },
+ "qwen2": {
+ "description": "Qwen2 es el nuevo modelo de lenguaje a gran escala de Alibaba, que ofrece un rendimiento excepcional para satisfacer diversas necesidades de aplicación."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "El modelo de 14B de Tongyi Qwen 2.5, de código abierto."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "El modelo de 32B de Tongyi Qwen 2.5, de código abierto."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "El modelo de 72B de Tongyi Qwen 2.5, de código abierto."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "El modelo de 7B de Tongyi Qwen 2.5, de código abierto."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "La versión de código abierto del modelo de código Tongyi Qwen."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "La versión de código abierto del modelo de código Tongyi Qwen."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "El modelo Qwen-Math tiene una poderosa capacidad para resolver problemas matemáticos."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "El modelo Qwen-Math tiene una poderosa capacidad para resolver problemas matemáticos."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "El modelo Qwen-Math tiene una poderosa capacidad para resolver problemas matemáticos."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 es el nuevo modelo de lenguaje a gran escala de Alibaba, que ofrece un rendimiento excepcional para satisfacer diversas necesidades de aplicación."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 es el nuevo modelo de lenguaje a gran escala de Alibaba, que ofrece un rendimiento excepcional para satisfacer diversas necesidades de aplicación."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 es el nuevo modelo de lenguaje a gran escala de Alibaba, que ofrece un rendimiento excepcional para satisfacer diversas necesidades de aplicación."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini es un LLM compacto, con un rendimiento superior al de GPT-3.5, que cuenta con potentes capacidades multilingües, soportando inglés y coreano, ofreciendo una solución eficiente y compacta."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) amplía las capacidades de Solar Mini, enfocándose en el japonés, mientras mantiene un rendimiento eficiente y sobresaliente en el uso del inglés y el coreano."
+ },
+ "solar-pro": {
+ "description": "Solar Pro es un LLM de alta inteligencia lanzado por Upstage, enfocado en la capacidad de seguimiento de instrucciones en un solo GPU, con una puntuación IFEval superior a 80. Actualmente soporta inglés, y se planea lanzar la versión oficial en noviembre de 2024, ampliando el soporte de idiomas y la longitud del contexto."
+ },
+ "step-1-128k": {
+ "description": "Equilibrio entre rendimiento y costo, adecuado para escenarios generales."
+ },
+ "step-1-256k": {
+ "description": "Capacidad de procesamiento de contexto de longitud ultra larga, especialmente adecuada para análisis de documentos largos."
+ },
+ "step-1-32k": {
+ "description": "Soporta diálogos de longitud media, adecuado para diversas aplicaciones."
+ },
+ "step-1-8k": {
+ "description": "Modelo pequeño, adecuado para tareas ligeras."
+ },
+ "step-1-flash": {
+ "description": "Modelo de alta velocidad, adecuado para diálogos en tiempo real."
+ },
+ "step-1v-32k": {
+ "description": "Soporta entradas visuales, mejorando la experiencia de interacción multimodal."
+ },
+ "step-1v-8k": {
+ "description": "Modelo visual pequeño, adecuado para tareas básicas de texto e imagen."
+ },
+ "step-2-16k": {
+ "description": "Soporta interacciones de contexto a gran escala, adecuado para escenarios de diálogo complejos."
+ },
+ "taichu_llm": {
+ "description": "El modelo de lenguaje Taichu de Zīdōng tiene una poderosa capacidad de comprensión del lenguaje, así como habilidades en creación de textos, preguntas y respuestas, programación de código, cálculos matemáticos, razonamiento lógico, análisis de sentimientos y resúmenes de texto. Combina de manera innovadora el preentrenamiento con grandes datos y un conocimiento rico de múltiples fuentes, perfeccionando continuamente la tecnología algorítmica y absorbiendo nuevos conocimientos en vocabulario, estructura, gramática y semántica de grandes volúmenes de datos textuales, logrando una evolución constante del modelo. Proporciona a los usuarios información y servicios más convenientes, así como una experiencia más inteligente."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V combina capacidades de comprensión de imágenes, transferencia de conocimiento y atribución lógica, destacándose en el campo de preguntas y respuestas basadas en texto e imagen."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) proporciona una capacidad de cálculo mejorada a través de estrategias y arquitecturas de modelos eficientes."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) es adecuado para tareas de instrucciones detalladas, ofreciendo una excelente capacidad de procesamiento de lenguaje."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 es un modelo de lenguaje proporcionado por Microsoft AI, que destaca en diálogos complejos, multilingües, razonamiento y asistentes inteligentes."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 es un modelo de lenguaje proporcionado por Microsoft AI, que destaca en diálogos complejos, multilingües, razonamiento y asistentes inteligentes."
+ },
+ "yi-large": {
+ "description": "Modelo completamente nuevo de cien mil millones de parámetros, que ofrece capacidades excepcionales de preguntas y respuestas y generación de texto."
+ },
+ "yi-large-fc": {
+ "description": "Basado en el modelo yi-large, soporta y refuerza la capacidad de llamadas a herramientas, adecuado para diversos escenarios de negocio que requieren la construcción de agentes o flujos de trabajo."
+ },
+ "yi-large-preview": {
+ "description": "Versión inicial, se recomienda usar yi-large (nueva versión)."
+ },
+ "yi-large-rag": {
+ "description": "Servicio de alto nivel basado en el modelo yi-large, combinando técnicas de recuperación y generación para proporcionar respuestas precisas y servicios de búsqueda de información en tiempo real."
+ },
+ "yi-large-turbo": {
+ "description": "Excelente relación calidad-precio y rendimiento excepcional. Ajuste de alta precisión basado en el rendimiento, velocidad de razonamiento y costo."
+ },
+ "yi-medium": {
+ "description": "Modelo de tamaño mediano, ajustado y equilibrado, con una buena relación calidad-precio. Optimización profunda de la capacidad de seguimiento de instrucciones."
+ },
+ "yi-medium-200k": {
+ "description": "Ventana de contexto de 200K, que ofrece una profunda comprensión y generación de texto de largo formato."
+ },
+ "yi-spark": {
+ "description": "Pequeño y ágil, modelo ligero y rápido. Ofrece capacidades mejoradas de cálculo matemático y escritura de código."
+ },
+ "yi-vision": {
+ "description": "Modelo para tareas visuales complejas, que ofrece un alto rendimiento en comprensión y análisis de imágenes."
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/plugin.json b/DigitalHumanWeb/locales/es-ES/plugin.json
new file mode 100644
index 0000000..49d34d1
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Argumentos de llamada",
+ "function_call": "Llamada a función",
+ "off": "Desactivado",
+ "on": "Ver información de llamada de complemento",
+ "payload": "Carga del complemento",
+ "response": "Respuesta",
+ "tool_call": "Solicitud de llamada de herramienta"
+ },
+ "detailModal": {
+ "info": {
+ "description": "Descripción de la API",
+ "name": "Nombre de la API"
+ },
+ "tabs": {
+ "info": "Capacidad del complemento",
+ "manifest": "Archivo de instalación",
+ "settings": "Ajustes"
+ },
+ "title": "Detalles del complemento"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Está a punto de eliminar este complemento local. Una vez eliminado, no se podrá recuperar. ¿Desea eliminar este complemento?",
+ "customParams": {
+ "useProxy": {
+ "label": "Instalar a través de proxy (si encuentra errores de acceso entre dominios, intente habilitar esta opción y reinstalar)"
+ }
+ },
+ "deleteSuccess": "Complemento eliminado",
+ "manifest": {
+ "identifier": {
+ "desc": "Identificador único del complemento",
+ "label": "Identificador"
+ },
+ "mode": {
+ "local": "Configuración visual",
+ "local-tooltip": "La configuración visual no está disponible temporalmente",
+ "url": "Enlace en línea"
+ },
+ "name": {
+ "desc": "Título del complemento",
+ "label": "Título",
+ "placeholder": "Motor de búsqueda"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "Autor del complemento",
+ "label": "Autor"
+ },
+ "avatar": {
+ "desc": "Icono del complemento, se puede usar Emoji o URL",
+ "label": "Icono"
+ },
+ "description": {
+ "desc": "Descripción del complemento",
+ "label": "Descripción",
+ "placeholder": "Obtener información de búsqueda en el motor de búsqueda"
+ },
+ "formFieldRequired": "Este campo es obligatorio",
+ "homepage": {
+ "desc": "Página de inicio del complemento",
+ "label": "Página de inicio"
+ },
+ "identifier": {
+ "desc": "Identificador único del complemento, se reconocerá automáticamente desde el manifiesto",
+ "errorDuplicate": "El identificador del complemento ya existe, modifique el identificador",
+ "label": "Identificador",
+ "pattenErrorMessage": "Solo se pueden ingresar caracteres alfanuméricos, - y _"
+ },
+ "manifest": {
+ "desc": "{{appName}} instalará el complemento a través de este enlace.",
+ "label": "URL del archivo de descripción del complemento (Manifest)",
+ "preview": "Vista previa del Manifest",
+ "refresh": "Actualizar"
+ },
+ "title": {
+ "desc": "Título del complemento",
+ "label": "Título",
+ "placeholder": "Motor de búsqueda"
+ }
+ },
+ "metaConfig": "Configuración de metadatos del complemento",
+ "modalDesc": "Después de agregar un complemento personalizado, se puede utilizar para validar el desarrollo del complemento o se puede usar directamente en la conversación. Consulte el <1>documento de desarrollo↗</1> para el desarrollo del complemento.",
+ "openai": {
+ "importUrl": "Importar desde enlace URL",
+ "schema": "Esquema"
+ },
+ "preview": {
+ "card": "Vista previa del efecto del complemento",
+ "desc": "Vista previa de la descripción del complemento",
+ "title": "Vista previa del nombre del complemento"
+ },
+ "save": "Instalar complemento",
+ "saveSuccess": "Configuración del complemento guardada con éxito",
+ "tabs": {
+ "manifest": "Lista de descripción de funciones (Manifest)",
+ "meta": "Metadatos del complemento"
+ },
+ "title": {
+ "create": "Agregar complemento personalizado",
+ "edit": "Editar complemento personalizado"
+ },
+ "type": {
+ "lobe": "Complemento LobeChat",
+ "openai": "Complemento OpenAI"
+ },
+ "update": "Actualizar",
+ "updateSuccess": "Configuración del complemento actualizada con éxito"
+ },
+ "error": {
+ "fetchError": "Error al recuperar el enlace del manifiesto. Asegúrese de que el enlace sea válido y permita el acceso entre dominios.",
+ "installError": "Error al instalar el complemento {{name}}.",
+ "manifestInvalid": "El manifiesto no cumple con las normas. Resultado de la validación: \n\n {{error}}",
+ "noManifest": "No se encontró el archivo de descripción",
+ "openAPIInvalid": "Error al analizar OpenAPI. Error: \n\n {{error}}",
+ "reinstallError": "Error al volver a instalar el complemento {{name}}.",
+ "urlError": "El enlace no devuelve contenido en formato JSON. Asegúrese de que sea un enlace válido."
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Obsoleto",
+ "local.config": "Configuración",
+ "local.title": "Personalizado"
+ }
+ },
+ "loading": {
+ "content": "Cargando complemento...",
+ "plugin": "Ejecutando complemento..."
+ },
+ "pluginList": "Lista de complementos",
+ "setting": "Configuración de complementos",
+ "settings": {
+ "indexUrl": {
+ "title": "Índice de mercado",
+ "tooltip": "No se admite la edición en línea. Configure a través de variables de entorno al implementar."
+ },
+ "modalDesc": "Después de configurar la dirección del mercado de complementos, puede utilizar un mercado personalizado de complementos.",
+ "title": "Configuración del mercado de complementos"
+ },
+ "showInPortal": "Por favor, consulte los detalles en el portal de trabajo",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Está a punto de desinstalar este complemento. Se eliminará la configuración del complemento. Confirme su acción.",
+ "detail": "Detalles",
+ "install": "Instalar",
+ "manifest": "Editar archivo de instalación",
+ "settings": "Configuración",
+ "uninstall": "Desinstalar"
+ },
+ "communityPlugin": "Comunidad",
+ "customPlugin": "Personalizado",
+ "empty": "No hay complementos instalados",
+ "installAllPlugins": "Instalar todos",
+ "networkError": "Error al obtener la tienda de complementos. Verifique la conexión a internet e inténtelo de nuevo.",
+ "placeholder": "Buscar por nombre, descripción o palabra clave del complemento...",
+ "releasedAt": "Publicado el {{createdAt}}",
+ "tabs": {
+ "all": "Todos",
+ "installed": "Instalados"
+ },
+ "title": "Tienda de complementos"
+ },
+ "unknownPlugin": "Complemento desconocido"
+}
diff --git a/DigitalHumanWeb/locales/es-ES/portal.json b/DigitalHumanWeb/locales/es-ES/portal.json
new file mode 100644
index 0000000..4dec581
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Artefactos",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Fragmento",
+ "file": "Archivo"
+ }
+ },
+ "Plugins": "Complementos",
+ "actions": {
+ "genAiMessage": "Crear mensaje de IA",
+ "summary": "Resumen",
+ "summaryTooltip": "Resumir el contenido actual"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Código",
+ "preview": "Vista previa"
+ },
+ "svg": {
+ "copyAsImage": "Copiar como imagen",
+ "copyFail": "Error al copiar, motivo del error: {{error}}",
+ "copySuccess": "Imagen copiada con éxito",
+ "download": {
+ "png": "Descargar como PNG",
+ "svg": "Descargar como SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "La lista de Artefactos actual está vacía. Por favor, utilice los complementos en la conversación y vuelva a intentarlo.",
+ "emptyKnowledgeList": "La lista de conocimientos actual está vacía. Por favor, active la base de conocimientos según sea necesario en la conversación antes de volver a revisar.",
+ "files": "archivos",
+ "messageDetail": "Detalles del mensaje",
+ "title": "Ventana de expansión"
+}
diff --git a/DigitalHumanWeb/locales/es-ES/providers.json b/DigitalHumanWeb/locales/es-ES/providers.json
new file mode 100644
index 0000000..37a5e6d
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI es una plataforma de modelos y servicios de IA lanzada por la empresa 360, que ofrece una variedad de modelos avanzados de procesamiento del lenguaje natural, incluidos 360GPT2 Pro, 360GPT Pro, 360GPT Turbo y 360GPT Turbo Responsibility 8K. Estos modelos combinan parámetros a gran escala y capacidades multimodales, siendo ampliamente utilizados en generación de texto, comprensión semántica, sistemas de diálogo y generación de código. A través de una estrategia de precios flexible, 360 AI satisface diversas necesidades de los usuarios, apoyando la integración de desarrolladores y promoviendo la innovación y desarrollo de aplicaciones inteligentes."
+ },
+ "anthropic": {
+ "description": "Anthropic es una empresa centrada en la investigación y desarrollo de inteligencia artificial, que ofrece una serie de modelos de lenguaje avanzados, como Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus y Claude 3 Haiku. Estos modelos logran un equilibrio ideal entre inteligencia, velocidad y costo, adecuados para una variedad de escenarios de aplicación, desde cargas de trabajo empresariales hasta respuestas rápidas. Claude 3.5 Sonnet, como su modelo más reciente, ha demostrado un rendimiento excepcional en múltiples evaluaciones, manteniendo una alta relación calidad-precio."
+ },
+ "azure": {
+ "description": "Azure ofrece una variedad de modelos de IA avanzados, incluidos GPT-3.5 y la última serie GPT-4, que admiten múltiples tipos de datos y tareas complejas, comprometidos con soluciones de IA seguras, confiables y sostenibles."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligent es una empresa centrada en el desarrollo de modelos de gran tamaño de inteligencia artificial, cuyos modelos han demostrado un rendimiento excepcional en tareas en chino como enciclopedias de conocimiento, procesamiento de textos largos y creación de contenido, superando a los modelos principales extranjeros. Baichuan Intelligent también posee capacidades multimodales líderes en la industria, destacándose en múltiples evaluaciones de autoridad. Sus modelos incluyen Baichuan 4, Baichuan 3 Turbo y Baichuan 3 Turbo 128k, optimizados para diferentes escenarios de aplicación, ofreciendo soluciones de alta relación calidad-precio."
+ },
+ "bedrock": {
+ "description": "Bedrock es un servicio proporcionado por Amazon AWS, enfocado en ofrecer modelos de lenguaje y visuales avanzados para empresas. Su familia de modelos incluye la serie Claude de Anthropic, la serie Llama 3.1 de Meta, entre otros, abarcando una variedad de opciones desde ligeras hasta de alto rendimiento, apoyando tareas como generación de texto, diálogos y procesamiento de imágenes, adecuadas para aplicaciones empresariales de diferentes escalas y necesidades."
+ },
+ "deepseek": {
+ "description": "DeepSeek es una empresa centrada en la investigación y aplicación de tecnologías de inteligencia artificial, cuyo modelo más reciente, DeepSeek-V2.5, combina capacidades de diálogo general y procesamiento de código, logrando mejoras significativas en alineación con preferencias humanas, tareas de escritura y seguimiento de instrucciones."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI es un proveedor líder de servicios de modelos de lenguaje avanzados, enfocado en la llamada de funciones y el procesamiento multimodal. Su modelo más reciente, Firefunction V2, basado en Llama-3, está optimizado para llamadas de funciones, diálogos y seguimiento de instrucciones. El modelo de lenguaje visual FireLLaVA-13B admite entradas mixtas de imágenes y texto. Otros modelos notables incluyen la serie Llama y la serie Mixtral, que ofrecen un soporte eficiente para el seguimiento y generación de instrucciones multilingües."
+ },
+ "github": {
+ "description": "Con los Modelos de GitHub, los desarrolladores pueden convertirse en ingenieros de IA y construir con los modelos de IA líderes en la industria."
+ },
+ "google": {
+ "description": "La serie Gemini de Google es su modelo de IA más avanzado y versátil, desarrollado por Google DeepMind, diseñado para ser multimodal, apoyando la comprensión y procesamiento sin fisuras de texto, código, imágenes, audio y video. Es adecuado para una variedad de entornos, desde centros de datos hasta dispositivos móviles, mejorando enormemente la eficiencia y la aplicabilidad de los modelos de IA."
+ },
+ "groq": {
+ "description": "El motor de inferencia LPU de Groq ha demostrado un rendimiento excepcional en las pruebas de referencia de modelos de lenguaje de gran tamaño (LLM), redefiniendo los estándares de soluciones de IA con su asombrosa velocidad y eficiencia. Groq es un referente en velocidad de inferencia instantánea, mostrando un buen rendimiento en implementaciones basadas en la nube."
+ },
+ "minimax": {
+ "description": "MiniMax es una empresa de tecnología de inteligencia artificial general fundada en 2021, dedicada a co-crear inteligencia con los usuarios. MiniMax ha desarrollado de forma independiente modelos de gran tamaño de diferentes modalidades, que incluyen un modelo de texto MoE de un billón de parámetros, un modelo de voz y un modelo de imagen. También ha lanzado aplicaciones como Conch AI."
+ },
+ "mistral": {
+ "description": "Mistral ofrece modelos avanzados generales, especializados y de investigación, ampliamente utilizados en razonamiento complejo, tareas multilingües, generación de código, etc. A través de interfaces de llamada de funciones, los usuarios pueden integrar funciones personalizadas para aplicaciones específicas."
+ },
+ "moonshot": {
+ "description": "Moonshot es una plataforma de código abierto lanzada por Beijing Dark Side Technology Co., que ofrece una variedad de modelos de procesamiento del lenguaje natural, con aplicaciones en campos amplios, incluyendo pero no limitado a creación de contenido, investigación académica, recomendaciones inteligentes y diagnóstico médico, apoyando el procesamiento de textos largos y tareas de generación complejas."
+ },
+ "novita": {
+ "description": "Novita AI es una plataforma que ofrece servicios API para múltiples modelos de lenguaje de gran tamaño y generación de imágenes de IA, siendo flexible, confiable y rentable. Soporta los últimos modelos de código abierto como Llama3 y Mistral, proporcionando soluciones API completas, amigables para el usuario y autoescalables para el desarrollo de aplicaciones de IA, adecuadas para el rápido crecimiento de startups de IA."
+ },
+ "ollama": {
+ "description": "Los modelos ofrecidos por Ollama abarcan ampliamente áreas como la generación de código, cálculos matemáticos, procesamiento multilingüe e interacciones conversacionales, apoyando diversas necesidades de implementación empresarial y local."
+ },
+ "openai": {
+ "description": "OpenAI es una de las principales instituciones de investigación en inteligencia artificial a nivel mundial, cuyos modelos, como la serie GPT, están a la vanguardia del procesamiento del lenguaje natural. OpenAI se dedica a transformar múltiples industrias a través de soluciones de IA innovadoras y eficientes. Sus productos ofrecen un rendimiento y una rentabilidad significativos, siendo ampliamente utilizados en investigación, negocios y aplicaciones innovadoras."
+ },
+ "openrouter": {
+ "description": "OpenRouter es una plataforma de servicio que ofrece interfaces para diversos modelos de vanguardia, apoyando OpenAI, Anthropic, LLaMA y más, adecuada para diversas necesidades de desarrollo y aplicación. Los usuarios pueden elegir de manera flexible el modelo y precio óptimos según sus necesidades, mejorando la experiencia de IA."
+ },
+ "perplexity": {
+ "description": "Perplexity es un proveedor líder de modelos de generación de diálogos, ofreciendo varios modelos avanzados de Llama 3.1, que son adecuados para aplicaciones en línea y fuera de línea, especialmente para tareas complejas de procesamiento del lenguaje natural."
+ },
+ "qwen": {
+ "description": "Tongyi Qianwen es un modelo de lenguaje de gran escala desarrollado de forma independiente por Alibaba Cloud, con potentes capacidades de comprensión y generación de lenguaje natural. Puede responder a diversas preguntas, crear contenido escrito, expresar opiniones y redactar código, desempeñando un papel en múltiples campos."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow se dedica a acelerar la AGI para beneficiar a la humanidad, mejorando la eficiencia de la IA a gran escala a través de un stack GenAI fácil de usar y de bajo costo."
+ },
+ "spark": {
+ "description": "El modelo de gran tamaño Xinghuo de iFlytek ofrece potentes capacidades de IA en múltiples campos y lenguajes, utilizando tecnologías avanzadas de procesamiento del lenguaje natural para construir aplicaciones innovadoras adecuadas para diversos escenarios verticales como hardware inteligente, atención médica inteligente y finanzas inteligentes."
+ },
+ "stepfun": {
+ "description": "El modelo de gran tamaño de StepFun cuenta con capacidades de razonamiento complejo y multimodal líderes en la industria, apoyando la comprensión de textos extremadamente largos y funciones potentes de motor de búsqueda autónomo."
+ },
+ "taichu": {
+ "description": "El Instituto de Automatización de la Academia de Ciencias de China y el Instituto de Investigación de Inteligencia Artificial de Wuhan han lanzado una nueva generación de modelos de gran tamaño multimodal, que apoyan tareas de preguntas y respuestas de múltiples rondas, creación de texto, generación de imágenes, comprensión 3D, análisis de señales y más, con capacidades de cognición, comprensión y creación más fuertes, ofreciendo una nueva experiencia de interacción."
+ },
+ "togetherai": {
+ "description": "Together AI se dedica a lograr un rendimiento líder a través de modelos de IA innovadores, ofreciendo amplias capacidades de personalización, incluyendo soporte para escalado rápido y procesos de implementación intuitivos, satisfaciendo diversas necesidades empresariales."
+ },
+ "upstage": {
+ "description": "Upstage se centra en desarrollar modelos de IA para diversas necesidades comerciales, incluidos Solar LLM y Document AI, con el objetivo de lograr una inteligencia general artificial (AGI) que trabaje para las personas. Crea agentes de diálogo simples a través de la API de Chat y admite llamadas de funciones, traducción, incrustaciones y aplicaciones de dominio específico."
+ },
+ "zeroone": {
+ "description": "01.AI se centra en la tecnología de inteligencia artificial de la era 2.0, promoviendo enérgicamente la innovación y aplicación de 'humano + inteligencia artificial', utilizando modelos extremadamente potentes y tecnologías de IA avanzadas para mejorar la productividad humana y lograr el empoderamiento tecnológico."
+ },
+ "zhipu": {
+ "description": "Zhipu AI ofrece una plataforma abierta para modelos multimodales y de lenguaje, apoyando una amplia gama de escenarios de aplicación de IA, incluyendo procesamiento de texto, comprensión de imágenes y asistencia en programación."
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/ragEval.json b/DigitalHumanWeb/locales/es-ES/ragEval.json
new file mode 100644
index 0000000..71fbdad
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Nuevo",
+ "description": {
+ "placeholder": "Descripción del conjunto de datos (opcional)"
+ },
+ "name": {
+ "placeholder": "Nombre del conjunto de datos",
+ "required": "Por favor, complete el nombre del conjunto de datos"
+ },
+ "title": "Agregar conjunto de datos"
+ },
+ "dataset": {
+ "addNewButton": "Crear conjunto de datos",
+ "emptyGuide": "El conjunto de datos actual está vacío, por favor crea un conjunto de datos.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Importar datos"
+ },
+ "columns": {
+ "actions": "Acciones",
+ "ideal": {
+ "title": "Respuesta ideal"
+ },
+ "question": {
+ "title": "Pregunta"
+ },
+ "referenceFiles": {
+ "title": "Archivos de referencia"
+ }
+ },
+ "notSelected": "Por favor, selecciona un conjunto de datos a la izquierda",
+ "title": "Detalles del conjunto de datos"
+ },
+ "title": "Conjunto de datos"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Nuevo",
+ "datasetId": {
+ "placeholder": "Por favor, selecciona tu conjunto de datos de evaluación",
+ "required": "Por favor, selecciona un conjunto de datos de evaluación"
+ },
+ "description": {
+ "placeholder": "Descripción de la tarea de evaluación (opcional)"
+ },
+ "name": {
+ "placeholder": "Nombre de la tarea de evaluación",
+ "required": "Por favor, complete el nombre de la tarea de evaluación"
+ },
+ "title": "Agregar tarea de evaluación"
+ },
+ "addNewButton": "Crear evaluación",
+ "emptyGuide": "La tarea de evaluación actual está vacía, comienza a crear una evaluación.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Verificar estado",
+ "confirmDelete": "¿Deseas eliminar esta evaluación?",
+ "confirmRun": "¿Deseas comenzar a ejecutar? Al comenzar, la tarea de evaluación se ejecutará de forma asíncrona en segundo plano, cerrar la página no afectará la ejecución de la tarea asíncrona.",
+ "downloadRecords": "Descargar evaluación",
+ "retry": "Reintentar",
+ "run": "Ejecutar",
+ "title": "Acciones"
+ },
+ "datasetId": {
+ "title": "Conjunto de datos"
+ },
+ "name": {
+ "title": "Nombre de la tarea de evaluación"
+ },
+ "records": {
+ "title": "Número de registros de evaluación"
+ },
+ "referenceFiles": {
+ "title": "Archivos de referencia"
+ },
+ "status": {
+ "error": "Error en la ejecución",
+ "pending": "Pendiente de ejecución",
+ "processing": "Ejecutando",
+ "success": "Ejecución exitosa",
+ "title": "Estado"
+ }
+ },
+ "title": "Lista de tareas de evaluación"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/setting.json b/DigitalHumanWeb/locales/es-ES/setting.json
new file mode 100644
index 0000000..90461c4
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "Acerca de"
+ },
+ "agentTab": {
+ "chat": "Preferencias de chat",
+ "meta": "Información del asistente",
+ "modal": "Configuración del modelo",
+ "plugin": "Configuración de complementos",
+ "prompt": "Configuración de roles",
+ "tts": "Servicio de voz"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "Al seleccionar enviar datos de telemetría, puedes ayudarnos a mejorar la experiencia general del usuario en {{appName}}",
+ "title": "Enviar datos de uso anónimos"
+ },
+ "title": "Análisis de datos"
+ },
+ "danger": {
+ "clear": {
+ "action": "Limpiar ahora",
+ "confirm": "¿Confirmar el borrado de todos los datos de chat?",
+ "desc": "Esto eliminará todos los datos de la conversación, incluyendo asistentes, archivos, mensajes, complementos, etc.",
+ "success": "Todos los mensajes de la conversación han sido eliminados",
+ "title": "Borrar todos los mensajes de la conversación"
+ },
+ "reset": {
+ "action": "Restablecer ahora",
+ "confirm": "¿Confirmar el restablecimiento de todas las configuraciones?",
+ "currentVersion": "Versión actual",
+ "desc": "Restablecer todas las opciones de configuración a sus valores predeterminados",
+ "success": "Se han restablecido todas las configuraciones",
+ "title": "Restablecer todas las configuraciones"
+ }
+ },
+ "header": {
+ "desc": "Preferencias y configuración del modelo.",
+ "global": "Configuración global",
+ "session": "Configuración de la sesión",
+ "sessionDesc": "Configuración de roles y preferencias de sesión.",
+ "sessionWithName": "Configuración de la sesión · {{name}}",
+ "title": "Configuración"
+ },
+ "llm": {
+ "aesGcm": "Su clave y dirección del agente se cifrarán utilizando el algoritmo de cifrado <1>AES-GCM</1>",
+ "apiKey": {
+ "desc": "Por favor, introduce tu clave de API de {{name}}",
+ "placeholder": "Clave de API de {{name}}",
+ "title": "Clave de API"
+ },
+ "checker": {
+ "button": "Comprobar",
+ "desc": "Comprueba si la clave API y la dirección del proxy están escritas correctamente",
+ "pass": "Comprobación exitosa",
+ "title": "Comprobación de conectividad"
+ },
+ "customModelCards": {
+ "addNew": "Crear y agregar el modelo {{id}}",
+ "config": "Configurar modelo",
+ "confirmDelete": "Estás a punto de eliminar este modelo personalizado. Una vez eliminado, no se podrá recuperar. Por favor, procede con precaución.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Campo utilizado en las solicitudes reales en Azure OpenAI",
+ "placeholder": "Ingresa el nombre de implementación del modelo en Azure",
+ "title": "Nombre de implementación del modelo"
+ },
+ "displayName": {
+ "placeholder": "Ingresa el nombre de visualización del modelo, por ejemplo, ChatGPT, GPT-4, etc.",
+ "title": "Nombre de visualización del modelo"
+ },
+ "files": {
+ "extra": "La implementación actual de carga de archivos es solo una solución temporal, limitada a pruebas personales. Por favor, espera la implementación completa de la capacidad de carga de archivos.",
+ "title": "Soporte de carga de archivos"
+ },
+ "functionCall": {
+ "extra": "Esta configuración solo habilitará la capacidad de llamada a funciones en la aplicación; si se admite la llamada a funciones depende completamente del modelo en sí. Por favor, prueba la disponibilidad de la capacidad de llamada a funciones de este modelo.",
+ "title": "Soporte de llamadas a funciones"
+ },
+ "id": {
+ "extra": "Se mostrará como etiqueta del modelo",
+ "placeholder": "Ingresa el ID del modelo, por ejemplo, gpt-4-turbo-preview o claude-2.1",
+ "title": "ID del modelo"
+ },
+ "modalTitle": "Configuración del modelo personalizado",
+ "tokens": {
+ "title": "Número máximo de tokens",
+ "unlimited": "ilimitado"
+ },
+ "vision": {
+ "extra": "Esta configuración solo habilitará la configuración de carga de imágenes en la aplicación; si se admite el reconocimiento depende completamente del modelo en sí. Por favor, prueba la disponibilidad de la capacidad de reconocimiento visual de este modelo.",
+ "title": "Soporte de reconocimiento visual"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "El modo de solicitud en el cliente iniciará directamente la solicitud de sesión desde el navegador, lo que puede mejorar la velocidad de respuesta",
+ "title": "Usar el modo de solicitud en el cliente"
+ },
+ "fetcher": {
+ "fetch": "Obtener lista de modelos",
+ "fetching": "Obteniendo lista de modelos...",
+ "latestTime": "Última actualización: {{time}}",
+ "noLatestTime": "Lista no disponible actualmente"
+ },
+ "helpDoc": "Tutorial de configuración",
+ "modelList": {
+ "desc": "Selecciona los modelos que se mostrarán en la conversación. Los modelos seleccionados se mostrarán en la lista de modelos.",
+ "placeholder": "Selecciona un modelo de la lista",
+ "title": "Lista de modelos",
+ "total": "Total de {{count}} modelos disponibles"
+ },
+ "proxyUrl": {
+ "desc": "Además de la dirección predeterminada, debe incluir http(s)://",
+ "title": "Dirección del proxy de la API"
+ },
+ "waitingForMore": "Más modelos están en <1>planificación para su incorporación</1>, ¡estén atentos!"
+ },
+ "plugin": {
+ "addTooltip": "Agregar complemento personalizado",
+ "clearDeprecated": "Eliminar complementos obsoletos",
+ "empty": "No hay complementos instalados actualmente, visita la <1>tienda de complementos</1> para explorar",
+ "installStatus": {
+ "deprecated": "Desinstalado"
+ },
+ "settings": {
+ "hint": "Por favor completa la siguiente configuración según la descripción",
+ "title": "Configuración del complemento {{id}}",
+ "tooltip": "Configuración del complemento"
+ },
+ "store": "Tienda de complementos"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "backgroundColor": {
+ "title": "Color de fondo"
+ },
+ "description": {
+ "placeholder": "Ingresa la descripción del asistente",
+ "title": "Descripción del asistente"
+ },
+ "name": {
+ "placeholder": "Ingresa el nombre del asistente",
+ "title": "Nombre"
+ },
+ "prompt": {
+ "placeholder": "Ingresa la palabra de aviso del rol",
+ "title": "Configuración del rol"
+ },
+ "tag": {
+ "placeholder": "Ingresa la etiqueta",
+ "title": "Etiqueta"
+ },
+ "title": "Información del asistente"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Se creará automáticamente un tema cuando el número de mensajes actuales supere este valor",
+ "title": "Umbral de mensajes"
+ },
+ "chatStyleType": {
+ "title": "Estilo de la ventana de chat",
+ "type": {
+ "chat": "Modo de conversación",
+ "docs": "Modo de documentos"
+ }
+ },
+ "compressThreshold": {
+ "desc": "Se comprimirán los mensajes históricos cuando el valor no comprimido supere este umbral",
+ "title": "Umbral de compresión de longitud de mensajes históricos"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Indica si se debe crear automáticamente un tema durante la conversación, solo se aplica en temas temporales",
+ "title": "Crear tema automáticamente"
+ },
+ "enableCompressThreshold": {
+ "title": "Activar umbral de compresión de longitud de mensajes históricos"
+ },
+ "enableHistoryCount": {
+ "alias": "Sin límite",
+ "limited": "Incluye solo {{number}} mensajes de conversación",
+ "setlimited": "Establecer cantidad de mensajes históricos",
+ "title": "Limitar número de mensajes históricos",
+ "unlimited": "Sin límite de mensajes históricos"
+ },
+ "historyCount": {
+ "desc": "Número de mensajes incluidos en cada solicitud (incluyendo las preguntas más recientes. Cada pregunta y respuesta se cuenta como 1)",
+ "title": "Número de mensajes incluidos"
+ },
+ "inputTemplate": {
+ "desc": "El último mensaje del usuario se completará en esta plantilla",
+ "placeholder": "La plantilla de preprocesamiento {{text}} se reemplazará por la información de entrada en tiempo real",
+ "title": "Preprocesamiento de entrada del usuario"
+ },
+ "title": "Configuración de chat"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Activar límite de tokens por respuesta"
+ },
+ "frequencyPenalty": {
+ "desc": "Cuanto mayor sea el valor, más probable es que se reduzcan las repeticiones de palabras",
+ "title": "Penalización de frecuencia"
+ },
+ "maxTokens": {
+ "desc": "Número máximo de tokens utilizados en una interacción",
+ "title": "Límite de tokens por respuesta"
+ },
+ "model": {
+ "desc": "{{provider}} modelo",
+ "title": "Modelo"
+ },
+ "presencePenalty": {
+ "desc": "Cuanto mayor sea el valor, más probable es que se amplíe a nuevos temas",
+ "title": "Penalización de novedad del tema"
+ },
+ "temperature": {
+ "desc": "Cuanto mayor sea el valor, más aleatoria será la respuesta",
+ "title": "Temperatura",
+ "titleWithValue": "Temperatura {{value}}"
+ },
+ "title": "Configuración del modelo",
+ "topP": {
+ "desc": "Similar a la temperatura, pero no se debe cambiar junto con la temperatura",
+ "title": "Muestreo de núcleo"
+ }
+ },
+ "settingPlugin": {
+ "title": "Lista de complementos"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "El administrador ha habilitado el acceso cifrado",
+ "placeholder": "Ingrese la contraseña de acceso",
+ "title": "Contraseña de acceso"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Sesión iniciada",
+ "title": "Información de la cuenta"
+ },
+ "signin": {
+ "action": "Iniciar sesión",
+ "desc": "Inicia sesión con SSO para desbloquear la aplicación",
+ "title": "Iniciar sesión en la cuenta"
+ },
+ "signout": {
+ "action": "Cerrar sesión",
+ "confirm": "¿Confirmar cierre de sesión?",
+ "success": "Sesión cerrada con éxito"
+ }
+ },
+ "title": "Configuración del sistema"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "Modelo de reconocimiento de voz de OpenAI",
+ "title": "OpenAI",
+ "ttsModel": "Modelo de síntesis de voz de OpenAI"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Si está desactivado, solo se mostrarán las voces del idioma actual",
+ "title": "Mostrar todas las voces del idioma"
+ },
+ "stt": "Configuración de reconocimiento de voz",
+ "sttAutoStop": {
+ "desc": "Si está desactivado, el reconocimiento de voz no se detendrá automáticamente y deberá hacer clic en el botón de detener manualmente",
+ "title": "Detención automática del reconocimiento de voz"
+ },
+ "sttLocale": {
+ "desc": "Idioma de entrada de voz, esta opción puede mejorar la precisión del reconocimiento de voz",
+ "title": "Idioma de reconocimiento de voz"
+ },
+ "sttService": {
+ "desc": "El servicio de reconocimiento de voz, donde el navegador es el servicio nativo de reconocimiento de voz del navegador",
+ "title": "Servicio de reconocimiento de voz"
+ },
+ "title": "Servicio de voz",
+ "tts": "Configuración de síntesis de voz",
+ "ttsService": {
+ "desc": "Si utiliza el servicio de síntesis de voz de OpenAI, asegúrese de que el servicio de modelo de OpenAI esté habilitado",
+ "title": "Servicio de síntesis de voz"
+ },
+ "voice": {
+ "desc": "Seleccione una voz para el asistente actual, diferentes servicios de TTS admiten diferentes voces",
+ "preview": "Vista previa de la voz",
+ "title": "Voz de síntesis de voz"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "fontSize": {
+ "desc": "Tamaño de fuente para el contenido del chat",
+ "marks": {
+ "normal": "Normal"
+ },
+ "title": "Tamaño de fuente"
+ },
+ "lang": {
+ "autoMode": "Seguir sistema",
+ "title": "Idioma"
+ },
+ "neutralColor": {
+ "desc": "Personalización de la escala de grises para diferentes tendencias de color",
+ "title": "Color neutro"
+ },
+ "primaryColor": {
+ "desc": "Color principal personalizado",
+ "title": "Color principal"
+ },
+ "themeMode": {
+ "auto": "Automático",
+ "dark": "Oscuro",
+ "light": "Claro",
+ "title": "Tema"
+ },
+ "title": "Configuración de tema"
+ },
+ "submitAgentModal": {
+ "button": "Enviar asistente",
+ "identifier": "Identificador del asistente",
+ "metaMiss": "Por favor complete la información del asistente antes de enviar, debe incluir nombre, descripción y etiquetas",
+ "placeholder": "Ingrese el identificador único del asistente, por ejemplo desarrollo-web",
+ "tooltips": "Compartir en el mercado de asistentes"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Agrega un nombre para identificar el dispositivo",
+ "placeholder": "Introduce el nombre del dispositivo",
+ "title": "Nombre del dispositivo"
+ },
+ "title": "Información del dispositivo",
+ "unknownBrowser": "Navegador desconocido",
+ "unknownOS": "Sistema operativo desconocido"
+ },
+ "warning": {
+ "tip": "Después de un largo período de pruebas comunitarias, la sincronización WebRTC puede no ser capaz de satisfacer de manera estable las demandas generales de sincronización de datos. Por favor, <1>implementa tu propio servidor de señalización</1> antes de usarlo."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC utilizará este nombre para crear un canal de sincronización. Asegúrate de que el nombre del canal sea único",
+ "placeholder": "Introduce el nombre del canal de sincronización",
+ "shuffle": "Generar aleatoriamente",
+ "title": "Nombre del canal de sincronización"
+ },
+ "channelPassword": {
+ "desc": "Agrega una contraseña para garantizar la privacidad del canal. Solo los dispositivos con la contraseña correcta podrán unirse al canal",
+ "placeholder": "Introduce la contraseña del canal de sincronización",
+ "title": "Contraseña del canal de sincronización"
+ },
+ "desc": "Comunicación de datos en tiempo real y punto a punto. Los dispositivos deben estar en línea simultáneamente para sincronizarse",
+ "enabled": {
+ "invalid": "Por favor, completa la información del servidor de señalización y el nombre del canal de sincronización antes de habilitarlo",
+ "title": "Activar sincronización"
+ },
+ "signaling": {
+ "desc": "WebRTC utilizará esta dirección para la sincronización",
+ "placeholder": "Introduce la dirección del servidor de señalización",
+ "title": "Servidor de señalización"
+ },
+ "title": "Sincronización WebRTC"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Modelo de generación de metadatos de asistente",
+ "modelDesc": "Modelo designado para generar el nombre, descripción, avatar y etiquetas del asistente",
+ "title": "Generación automática de información del asistente"
+ },
+ "queryRewrite": {
+ "label": "Modelo de reescritura de preguntas",
+ "modelDesc": "Modelo designado para optimizar las preguntas de los usuarios",
+ "title": "Base de conocimientos"
+ },
+ "title": "Asistente del sistema",
+ "topic": {
+ "label": "Modelo de nombramiento de temas",
+ "modelDesc": "Modelo designado para el renombramiento automático de temas",
+ "title": "Renombramiento automático de temas"
+ },
+ "translation": {
+ "label": "Modelo de traducción",
+ "modelDesc": "Especifica el modelo a utilizar para la traducción",
+ "title": "Configuración del asistente de traducción"
+ }
+ },
+ "tab": {
+ "about": "Acerca de",
+ "agent": "Asistente predeterminado",
+ "common": "Configuración común",
+ "experiment": "Experimento",
+ "llm": "Modelo de lenguaje",
+ "sync": "Sincronización en la nube",
+ "system-agent": "Asistente del sistema",
+ "tts": "Servicio de voz"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Incorporados"
+ },
+ "disabled": "El modelo actual no admite llamadas de función y no se puede utilizar el complemento",
+ "plugins": {
+ "enabled": "Habilitados {{num}}",
+ "groupName": "Complementos",
+ "noEnabled": "No hay complementos habilitados por el momento",
+ "store": "Tienda de complementos"
+ },
+ "title": "Herramientas de extensión"
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/tool.json b/DigitalHumanWeb/locales/es-ES/tool.json
new file mode 100644
index 0000000..2677a4d
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Auto-generar",
+ "downloading": "El enlace de la imagen generada por DALL·E 3 solo es válido durante 1 hora, descargando la imagen al dispositivo local...",
+ "generate": "Generar",
+ "generating": "Generando...",
+ "images": "Imágenes:",
+ "prompt": "Palabra de aviso"
+ }
+}
diff --git a/DigitalHumanWeb/locales/es-ES/welcome.json b/DigitalHumanWeb/locales/es-ES/welcome.json
new file mode 100644
index 0000000..50f9c55
--- /dev/null
+++ b/DigitalHumanWeb/locales/es-ES/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Importar configuración",
+ "market": "Explorar el mercado",
+ "start": "Comenzar ahora"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Cambiar grupo",
+ "title": "Recomendación de asistentes nuevos:"
+ },
+ "defaultMessage": "Soy su asistente inteligente personal {{appName}}. ¿En qué puedo ayudarle ahora?\nSi necesita un asistente más profesional o personalizado, puede hacer clic en `+` para crear un asistente personalizado.",
+ "defaultMessageWithoutCreate": "Soy su asistente inteligente personal {{appName}}. ¿En qué puedo ayudarle ahora?",
+ "qa": {
+ "q01": "¿Qué es LobeHub?",
+ "q02": "¿Qué es {{appName}}?",
+ "q03": "¿{{appName}} tiene soporte comunitario?",
+ "q04": "¿Qué funciones soporta {{appName}}?",
+ "q05": "¿Cómo se despliega y utiliza {{appName}}?",
+ "q06": "¿Cuál es el precio de {{appName}}?",
+ "q07": "¿{{appName}} es gratuito?",
+ "q08": "¿Hay una versión en la nube?",
+ "q09": "¿Soporta modelos de lenguaje locales?",
+ "q10": "¿Soporta reconocimiento y generación de imágenes?",
+ "q11": "¿Soporta síntesis de voz y reconocimiento de voz?",
+ "q12": "¿Soporta un sistema de plugins?",
+ "q13": "¿Tiene su propio mercado para obtener GPTs?",
+ "q14": "¿Soporta múltiples proveedores de servicios de IA?",
+ "q15": "¿Qué debo hacer si tengo problemas al usarlo?"
+ },
+ "questions": {
+ "moreBtn": "Saber más",
+ "title": "Preguntas frecuentes:"
+ },
+ "welcome": {
+ "afternoon": "Buenas tardes",
+ "morning": "Buenos días",
+ "night": "Buenas noches",
+ "noon": "Buen mediodía"
+ }
+ },
+ "header": "Bienvenido/a",
+ "pickAgent": "O elige una plantilla de asistente a continuación",
+ "skip": "Saltar",
+ "slogan": {
+ "desc1": "Despierta el poder de tu mente. Tu asistente inteligente siempre está aquí para avivar la chispa del pensamiento.",
+ "desc2": "Crea tu primer asistente. ¡Comencemos!",
+ "title": "Dale a tu mente una ventaja más inteligente"
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/auth.json b/DigitalHumanWeb/locales/fr-FR/auth.json
new file mode 100644
index 0000000..9f51164
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Connexion",
+ "loginOrSignup": "Connexion / Inscription",
+ "profile": "Profil",
+ "security": "Sécurité",
+ "signout": "Déconnexion",
+ "signup": "Inscription"
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/chat.json b/DigitalHumanWeb/locales/fr-FR/chat.json
new file mode 100644
index 0000000..27f4ff9
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Modèle"
+ },
+ "agentDefaultMessage": "Bonjour, je suis **{{name}}**, vous pouvez commencer à discuter avec moi immédiatement ou vous rendre dans [Paramètres de l'assistant]({{url}}) pour compléter mes informations.",
+ "agentDefaultMessageWithSystemRole": "Bonjour, je suis **{{name}}**, {{systemRole}}. Commençons la conversation !",
+ "agentDefaultMessageWithoutEdit": "Bonjour, je suis **{{name}}**. Commençons notre conversation !",
+ "agents": "Assistant",
+ "artifact": {
+ "generating": "Génération en cours",
+ "thinking": "En réflexion",
+ "thought": "Processus de pensée",
+ "unknownTitle": "Œuvre sans nom"
+ },
+ "backToBottom": "Retour en bas",
+ "chatList": {
+ "longMessageDetail": "Voir les détails"
+ },
+ "clearCurrentMessages": "Effacer les messages actuels",
+ "confirmClearCurrentMessages": "Vous êtes sur le point d'effacer les messages de cette session. Cette action est irréversible. Veuillez confirmer.",
+ "confirmRemoveSessionItemAlert": "Vous êtes sur le point de supprimer cet agent. Cette action est irréversible. Veuillez confirmer.",
+ "confirmRemoveSessionSuccess": "Agent supprimé avec succès",
+ "defaultAgent": "Agent par défaut",
+ "defaultList": "Liste par défaut",
+ "defaultSession": "Session par défaut",
+ "duplicateSession": {
+ "loading": "Copie en cours...",
+ "success": "Copie réussie",
+ "title": "{{title}} Copie"
+ },
+ "duplicateTitle": "{{title}} Copie",
+ "emptyAgent": "Aucun assistant disponible",
+ "historyRange": "Plage d'historique",
+ "inbox": {
+ "desc": "Débloquez le potentiel de votre esprit. Votre agent intelligent est là pour discuter avec vous de tout et de rien.",
+ "title": "Discutons un peu"
+ },
+ "input": {
+ "addAi": "Ajouter un message AI",
+ "addUser": "Ajouter un message utilisateur",
+ "more": "Plus",
+ "send": "Envoyer",
+ "sendWithCmdEnter": "Envoyer avec {{meta}} + Entrée",
+ "sendWithEnter": "Envoyer avec Entrée",
+ "stop": "Arrêter",
+ "warp": "Saut de ligne"
+ },
+ "knowledgeBase": {
+ "all": "Tout le contenu",
+ "allFiles": "Tous les fichiers",
+ "allKnowledgeBases": "Toutes les bases de connaissances",
+ "disabled": "Le mode de déploiement actuel ne prend pas en charge les dialogues de la base de connaissances. Pour l'utiliser, veuillez passer à un déploiement de base de données sur serveur ou utiliser le service {{cloud}}.",
+ "library": {
+ "action": {
+ "add": "Ajouter",
+ "detail": "Détails",
+ "remove": "Supprimer"
+ },
+ "title": "Fichiers/Bases de connaissances"
+ },
+ "relativeFilesOrKnowledgeBases": "Fichiers/Bases de connaissances associés",
+ "title": "Base de connaissances",
+ "uploadGuide": "Les fichiers téléchargés peuvent être consultés dans la « Base de connaissances ».",
+ "viewMore": "Voir plus"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Supprimer et régénérer",
+ "regenerate": "Régénérer"
+ },
+ "newAgent": "Nouvel agent",
+ "pin": "Épingler",
+ "pinOff": "Désépingler",
+ "rag": {
+ "referenceChunks": "Références",
+ "userQuery": {
+ "actions": {
+ "delete": "Supprimer la réécriture de la requête",
+ "regenerate": "Régénérer la requête"
+ }
+ }
+ },
+ "regenerate": "Regénérer",
+ "roleAndArchive": "Rôle et archivage",
+ "searchAgentPlaceholder": "Assistant de recherche...",
+ "sendPlaceholder": "Saisissez votre message...",
+ "sessionGroup": {
+ "config": "Gestion des groupes",
+ "confirmRemoveGroupAlert": "Vous êtes sur le point de supprimer ce groupe. Une fois supprimé, les agents de ce groupe seront déplacés vers la liste par défaut. Veuillez confirmer votre action.",
+ "createAgentSuccess": "Création de l'agent réussie",
+ "createGroup": "Créer un nouveau groupe",
+ "createSuccess": "Création réussie",
+ "creatingAgent": "Création de l'agent en cours...",
+ "inputPlaceholder": "Veuillez saisir le nom du groupe...",
+ "moveGroup": "Déplacer vers un groupe",
+ "newGroup": "Nouveau groupe",
+ "rename": "Renommer le groupe",
+ "renameSuccess": "Renommage réussi",
+ "sortSuccess": "Réorganisation réussie",
+ "sorting": "Mise à jour de la réorganisation des groupes en cours...",
+ "tooLong": "Le nom du groupe doit comporter entre 1 et 20 caractères"
+ },
+ "shareModal": {
+ "download": "Télécharger la capture d'écran",
+ "imageType": "Type d'image",
+ "screenshot": "Capture d'écran",
+ "settings": "Paramètres d'exportation",
+ "shareToShareGPT": "Générer un lien de partage ShareGPT",
+ "withBackground": "Avec image de fond",
+ "withFooter": "Avec pied de page",
+ "withPluginInfo": "Avec informations sur le plugin",
+ "withSystemRole": "Avec rôle de l'agent"
+ },
+ "stt": {
+ "action": "Entrée vocale",
+ "loading": "En cours de reconnaissance...",
+ "prettifying": "En cours d'embellissement..."
+ },
+ "temp": "Temporaire",
+ "tokenDetails": {
+ "chats": "Messages de discussion",
+ "rest": "Restant disponible",
+ "systemRole": "Rôle système",
+ "title": "Détails du jeton",
+ "tools": "Paramètres du plugin",
+ "total": "Total disponible",
+ "used": "Total utilisé"
+ },
+ "tokenTag": {
+ "overload": "Dépassement de limite",
+ "remained": "Restant",
+ "used": "Utilisé"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Renommer automatiquement",
+ "duplicate": "Créer une copie",
+ "export": "Exporter le sujet"
+ },
+ "checkOpenNewTopic": "Voulez-vous ouvrir un nouveau sujet ?",
+ "checkSaveCurrentMessages": "Voulez-vous enregistrer la conversation actuelle en tant que sujet ?",
+ "confirmRemoveAll": "Vous êtes sur le point de supprimer tous les sujets. Cette action est irréversible. Veuillez confirmer.",
+ "confirmRemoveTopic": "Vous êtes sur le point de supprimer ce sujet. Cette action est irréversible. Veuillez confirmer.",
+ "confirmRemoveUnstarred": "Vous êtes sur le point de supprimer les sujets non favoris. Cette action est irréversible. Veuillez confirmer.",
+ "defaultTitle": "Sujet par défaut",
+ "duplicateLoading": "Duplication du sujet en cours...",
+ "duplicateSuccess": "Sujet dupliqué avec succès",
+ "guide": {
+ "desc": "Cliquez sur le bouton à gauche pour enregistrer la conversation actuelle comme un sujet historique et démarrer une nouvelle session.",
+ "title": "Liste des sujets"
+ },
+ "openNewTopic": "Ouvrir un nouveau sujet",
+ "removeAll": "Supprimer tous les sujets",
+ "removeUnstarred": "Supprimer les sujets non favoris",
+ "saveCurrentMessages": "Enregistrer la conversation actuelle en tant que sujet",
+ "searchPlaceholder": "Rechercher un sujet...",
+ "title": "Liste des sujets"
+ },
+ "translate": {
+ "action": "Traduire",
+ "clear": "Effacer la traduction"
+ },
+ "tts": {
+ "action": "Lecture vocale",
+ "clear": "Effacer la voix"
+ },
+ "updateAgent": "Mettre à jour les informations de l'agent",
+ "upload": {
+ "action": {
+ "fileUpload": "Télécharger un fichier",
+ "folderUpload": "Télécharger un dossier",
+ "imageDisabled": "Le modèle actuel ne prend pas en charge la reconnaissance visuelle, veuillez changer de modèle pour l'utiliser",
+ "imageUpload": "Télécharger une image",
+ "tooltip": "Télécharger"
+ },
+ "clientMode": {
+ "actionFiletip": "Télécharger un fichier",
+ "actionTooltip": "Télécharger",
+ "disabled": "Le modèle actuel ne prend pas en charge la reconnaissance visuelle et l'analyse de fichiers, veuillez changer de modèle pour l'utiliser"
+ },
+ "preview": {
+ "prepareTasks": "Préparation des morceaux...",
+ "status": {
+ "pending": "Préparation du téléchargement...",
+ "processing": "Traitement du fichier..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/clerk.json b/DigitalHumanWeb/locales/fr-FR/clerk.json
new file mode 100644
index 0000000..0293aa7
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Retour",
+ "badge__default": "Par défaut",
+ "badge__otherImpersonatorDevice": "Autre appareil usurpateur",
+ "badge__primary": "Principal",
+ "badge__requiresAction": "Nécessite une action",
+ "badge__thisDevice": "Cet appareil",
+ "badge__unverified": "Non vérifié",
+ "badge__userDevice": "Appareil de l'utilisateur",
+ "badge__you": "Vous",
+ "createOrganization": {
+ "formButtonSubmit": "Créer une organisation",
+ "invitePage": {
+ "formButtonReset": "Passer"
+ },
+ "title": "Créer une organisation"
+ },
+ "dates": {
+ "lastDay": "Hier à {{ date | timeString('fr-FR') }}",
+ "next6Days": "{{ date | weekday('fr-FR','long') }} à {{ date | timeString('fr-FR') }}",
+ "nextDay": "Demain à {{ date | timeString('fr-FR') }}",
+ "numeric": "{{ date | numeric('fr-FR') }}",
+ "previous6Days": "Dernier {{ date | weekday('fr-FR','long') }} à {{ date | timeString('fr-FR') }}",
+ "sameDay": "Aujourd'hui à {{ date | timeString('fr-FR') }}"
+ },
+ "dividerText": "ou",
+ "footerActionLink__useAnotherMethod": "Utiliser une autre méthode",
+ "footerPageLink__help": "Aide",
+ "footerPageLink__privacy": "Confidentialité",
+ "footerPageLink__terms": "Conditions",
+ "formButtonPrimary": "Continuer",
+ "formButtonPrimary__verify": "Vérifier",
+ "formFieldAction__forgotPassword": "Mot de passe oublié ?",
+ "formFieldError__matchingPasswords": "Les mots de passe correspondent.",
+ "formFieldError__notMatchingPasswords": "Les mots de passe ne correspondent pas.",
+ "formFieldError__verificationLinkExpired": "Le lien de vérification a expiré. Veuillez demander un nouveau lien.",
+ "formFieldHintText__optional": "Facultatif",
+ "formFieldHintText__slug": "Un slug est un identifiant lisible par l'homme qui doit être unique. Il est souvent utilisé dans les URL.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Supprimer le compte",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "exemple@email.com, exemple2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "mon-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Activer les invitations automatiques pour ce domaine",
+ "formFieldLabel__backupCode": "Code de secours",
+ "formFieldLabel__confirmDeletion": "Confirmation",
+ "formFieldLabel__confirmPassword": "Confirmer le mot de passe",
+ "formFieldLabel__currentPassword": "Mot de passe actuel",
+ "formFieldLabel__emailAddress": "Adresse e-mail",
+ "formFieldLabel__emailAddress_username": "Adresse e-mail ou nom d'utilisateur",
+ "formFieldLabel__emailAddresses": "Adresses e-mail",
+ "formFieldLabel__firstName": "Prénom",
+ "formFieldLabel__lastName": "Nom de famille",
+ "formFieldLabel__newPassword": "Nouveau mot de passe",
+ "formFieldLabel__organizationDomain": "Domaine",
+ "formFieldLabel__organizationDomainDeletePending": "Supprimer les invitations et suggestions en attente",
+ "formFieldLabel__organizationDomainEmailAddress": "Adresse e-mail de vérification",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Saisissez une adresse e-mail sous ce domaine pour recevoir un code et vérifier ce domaine.",
+ "formFieldLabel__organizationName": "Nom",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Nom de la clé d'accès",
+ "formFieldLabel__password": "Mot de passe",
+ "formFieldLabel__phoneNumber": "Numéro de téléphone",
+ "formFieldLabel__role": "Rôle",
+ "formFieldLabel__signOutOfOtherSessions": "Se déconnecter de tous les autres appareils",
+ "formFieldLabel__username": "Nom d'utilisateur",
+ "impersonationFab": {
+ "action__signOut": "Se déconnecter",
+ "title": "Connecté en tant que {{identifier}}"
+ },
+ "locale": "fr-FR",
+ "maintenanceMode": "Nous sommes actuellement en maintenance, mais ne vous inquiétez pas, cela ne devrait pas prendre plus que quelques minutes.",
+ "membershipRole__admin": "Administrateur",
+ "membershipRole__basicMember": "Membre",
+ "membershipRole__guestMember": "Invité",
+ "organizationList": {
+ "action__createOrganization": "Créer une organisation",
+ "action__invitationAccept": "Rejoindre",
+ "action__suggestionsAccept": "Demander à rejoindre",
+ "createOrganization": "Créer une organisation",
+ "invitationAcceptedLabel": "Rejoint",
+ "subtitle": "pour continuer sur {{applicationName}}",
+ "suggestionsAcceptedLabel": "En attente d'approbation",
+ "title": "Choisir un compte",
+ "titleWithoutPersonal": "Choisir une organisation"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Invitations automatiques",
+ "badge__automaticSuggestion": "Suggestions automatiques",
+ "badge__manualInvitation": "Pas d'inscription automatique",
+ "badge__unverified": "Non vérifié",
+ "createDomainPage": {
+ "subtitle": "Ajoutez le domaine à vérifier. Les utilisateurs avec des adresses e-mail de ce domaine peuvent rejoindre l'organisation automatiquement ou demander à rejoindre.",
+ "title": "Ajouter un domaine"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "Les invitations n'ont pas pu être envoyées. Il existe déjà des invitations en attente pour les adresses e-mail suivantes : {{email_addresses}}.",
+ "formButtonPrimary__continue": "Envoyer les invitations",
+ "selectDropdown__role": "Sélectionner le rôle",
+ "subtitle": "Saisissez ou collez une ou plusieurs adresses e-mail, séparées par des espaces ou des virgules.",
+ "successMessage": "Invitations envoyées avec succès",
+ "title": "Inviter de nouveaux membres"
+ },
+ "membersPage": {
+ "action__invite": "Inviter",
+ "activeMembersTab": {
+ "menuAction__remove": "Supprimer le membre",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Rejoint",
+ "tableHeader__role": "Rôle",
+ "tableHeader__user": "Utilisateur"
+ },
+ "detailsTitle__emptyRow": "Aucun membre à afficher",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Invitez des utilisateurs en connectant un domaine e-mail à votre organisation. Toute personne qui s'inscrit avec un domaine e-mail correspondant pourra rejoindre l'organisation à tout moment.",
+ "headerTitle": "Invitations automatiques",
+ "primaryButton": "Gérer les domaines vérifiés"
+ },
+ "table__emptyRow": "Aucune invitation à afficher"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Révoquer l'invitation",
+ "tableHeader__invited": "Invité"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Les utilisateurs qui s'inscrivent avec un domaine e-mail correspondant pourront voir une suggestion pour demander à rejoindre votre organisation.",
+ "headerTitle": "Suggestions automatiques",
+ "primaryButton": "Gérer les domaines vérifiés"
+ },
+ "menuAction__approve": "Approuver",
+ "menuAction__reject": "Rejeter",
+ "tableHeader__requested": "Demande d'accès",
+ "table__emptyRow": "Aucune demande à afficher"
+ },
+ "start": {
+ "headerTitle__invitations": "Invitations",
+ "headerTitle__members": "Membres",
+ "headerTitle__requests": "Demandes"
+ }
+ },
+ "navbar": {
+ "description": "Gérez votre organisation.",
+ "general": "Général",
+ "members": "Membres",
+ "title": "Organisation"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Saisissez \"{{organizationName}}\" ci-dessous pour continuer.",
+ "messageLine1": "Êtes-vous sûr de vouloir supprimer cette organisation ?",
+ "messageLine2": "Cette action est définitive et irréversible.",
+ "successMessage": "Vous avez supprimé l'organisation.",
+ "title": "Supprimer l'organisation"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Saisissez \"{{organizationName}}\" ci-dessous pour continuer.",
+ "messageLine1": "Êtes-vous sûr de vouloir quitter cette organisation ? Vous perdrez l'accès à cette organisation et à ses applications.",
+ "messageLine2": "Cette action est définitive et irréversible.",
+ "successMessage": "Vous avez quitté l'organisation.",
+ "title": "Quitter l'organisation"
+ },
+ "title": "Danger"
+ },
+ "domainSection": {
+ "menuAction__manage": "Gérer",
+ "menuAction__remove": "Supprimer",
+ "menuAction__verify": "Vérifier",
+ "primaryButton": "Ajouter un domaine",
+ "subtitle": "Permettez aux utilisateurs de rejoindre l'organisation automatiquement ou de demander à rejoindre en fonction d'un domaine e-mail vérifié.",
+ "title": "Domaines vérifiés"
+ },
+ "successMessage": "L'organisation a été mise à jour.",
+ "title": "Mettre à jour le profil"
+ },
+ "removeDomainPage": {
+ "messageLine1": "Le domaine e-mail {{domain}} sera supprimé.",
+ "messageLine2": "Les utilisateurs ne pourront plus rejoindre l'organisation automatiquement après cela.",
+ "successMessage": "{{domain}} a été supprimé.",
+ "title": "Supprimer le domaine"
+ },
+ "start": {
+ "headerTitle__general": "Général",
+ "headerTitle__members": "Membres",
+ "profileSection": {
+ "primaryButton": "Mettre à jour le profil",
+ "title": "Profil de l'organisation",
+ "uploadAction__title": "Logo"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "La suppression de ce domaine affectera les utilisateurs invités.",
+ "removeDomainActionLabel__remove": "Supprimer le domaine",
+ "removeDomainSubtitle": "Supprimer ce domaine de vos domaines vérifiés",
+ "removeDomainTitle": "Supprimer le domaine"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Les utilisateurs sont automatiquement invités à rejoindre l'organisation lorsqu'ils s'inscrivent et peuvent rejoindre à tout moment.",
+ "automaticInvitationOption__label": "Invitations automatiques",
+ "automaticSuggestionOption__description": "Les utilisateurs reçoivent une suggestion pour demander à rejoindre, mais doivent être approuvés par un administrateur avant de pouvoir rejoindre l'organisation.",
+ "automaticSuggestionOption__label": "Suggestions automatiques",
+ "calloutInfoLabel": "Le changement du mode d'inscription n'affectera que les nouveaux utilisateurs.",
+ "calloutInvitationCountLabel": "Invitations en attente envoyées aux utilisateurs : {{count}}",
+ "calloutSuggestionCountLabel": "Suggestions en attente envoyées aux utilisateurs : {{count}}",
+ "manualInvitationOption__description": "Les utilisateurs ne peuvent être invités à l'organisation que manuellement.",
+ "manualInvitationOption__label": "Pas d'inscription automatique",
+ "subtitle": "Choisissez comment les utilisateurs de ce domaine peuvent rejoindre l'organisation."
+ },
+ "start": {
+ "headerTitle__danger": "Danger",
+ "headerTitle__enrollment": "Options d'inscription"
+ },
+ "subtitle": "Le domaine {{domain}} est désormais vérifié. Poursuivez en sélectionnant le mode d'inscription.",
+ "title": "Mettre à jour {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Saisissez le code de vérification envoyé à votre adresse e-mail",
+ "formTitle": "Code de vérification",
+ "resendButton": "Vous n'avez pas reçu de code ? Renvoyer",
+ "subtitle": "Le domaine {{domainName}} doit être vérifié via e-mail.",
+ "subtitleVerificationCodeScreen": "Un code de vérification a été envoyé à {{emailAddress}}. Saisissez le code pour continuer.",
+ "title": "Vérifier le domaine"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Créer une organisation",
+ "action__invitationAccept": "Rejoindre",
+ "action__manageOrganization": "Gérer",
+ "action__suggestionsAccept": "Demander à rejoindre",
+ "notSelected": "Aucune organisation sélectionnée",
+ "personalWorkspace": "Compte personnel",
+ "suggestionsAcceptedLabel": "En attente d'approbation"
+ },
+ "paginationButton__next": "Suivant",
+ "paginationButton__previous": "Précédent",
+ "paginationRowText__displaying": "Affichage",
+ "paginationRowText__of": "de",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Ajouter un compte",
+ "action__signOutAll": "Se déconnecter de tous les comptes",
+ "subtitle": "Sélectionnez le compte avec lequel vous souhaitez continuer.",
+ "title": "Choisissez un compte"
+ },
+ "alternativeMethods": {
+ "actionLink": "Obtenir de l'aide",
+ "actionText": "Vous n'avez aucun de ces comptes ?",
+ "blockButton__backupCode": "Utiliser un code de secours",
+ "blockButton__emailCode": "Envoyer un code par e-mail à {{identifier}}",
+ "blockButton__emailLink": "Envoyer un lien par e-mail à {{identifier}}",
+ "blockButton__passkey": "Se connecter avec votre passkey",
+ "blockButton__password": "Se connecter avec votre mot de passe",
+ "blockButton__phoneCode": "Envoyer un code SMS à {{identifier}}",
+ "blockButton__totp": "Utiliser votre application d'authentification",
+ "getHelp": {
+ "blockButton__emailSupport": "Assistance par e-mail",
+ "content": "Si vous rencontrez des difficultés pour vous connecter à votre compte, envoyez-nous un e-mail et nous travaillerons avec vous pour restaurer l'accès dès que possible.",
+ "title": "Obtenir de l'aide"
+ },
+ "subtitle": "Rencontrez-vous des problèmes ? Vous pouvez utiliser l'une de ces méthodes pour vous connecter.",
+ "title": "Utiliser une autre méthode"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Votre code de secours est celui que vous avez reçu lors de la configuration de l'authentification à deux facteurs.",
+ "title": "Saisir un code de secours"
+ },
+ "emailCode": {
+ "formTitle": "Code de vérification",
+ "resendButton": "Vous n'avez pas reçu de code ? Renvoyer",
+ "subtitle": "pour continuer vers {{applicationName}}",
+ "title": "Vérifiez votre e-mail"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Retournez à l'onglet d'origine pour continuer.",
+ "title": "Ce lien de vérification a expiré"
+ },
+ "failed": {
+ "subtitle": "Retournez à l'onglet d'origine pour continuer.",
+ "title": "Ce lien de vérification est invalide"
+ },
+ "formSubtitle": "Utilisez le lien de vérification envoyé à votre e-mail",
+ "formTitle": "Lien de vérification",
+ "loading": {
+ "subtitle": "Vous serez bientôt redirigé",
+ "title": "Connexion en cours..."
+ },
+ "resendButton": "Vous n'avez pas reçu de lien ? Renvoyer",
+ "subtitle": "pour continuer vers {{applicationName}}",
+ "title": "Vérifiez votre e-mail",
+ "unusedTab": {
+ "title": "Vous pouvez fermer cet onglet"
+ },
+ "verified": {
+ "subtitle": "Vous serez bientôt redirigé",
+ "title": "Connexion réussie"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Retournez à l'onglet d'origine pour continuer",
+ "subtitleNewTab": "Retournez à l'onglet nouvellement ouvert pour continuer",
+ "titleNewTab": "Connecté sur un autre onglet"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Code de réinitialisation du mot de passe",
+ "resendButton": "Vous n'avez pas reçu de code ? Renvoyer",
+ "subtitle": "pour réinitialiser votre mot de passe",
+ "subtitle_email": "Tout d'abord, saisissez le code envoyé à votre adresse e-mail",
+ "subtitle_phone": "Tout d'abord, saisissez le code envoyé à votre téléphone",
+ "title": "Réinitialiser le mot de passe"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Réinitialiser votre mot de passe",
+ "label__alternativeMethods": "Ou, connectez-vous avec une autre méthode",
+ "title": "Mot de passe oublié ?"
+ },
+ "noAvailableMethods": {
+ "message": "Impossible de procéder à la connexion. Aucun facteur d'authentification disponible.",
+ "subtitle": "Une erreur s'est produite",
+ "title": "Connexion impossible"
+ },
+ "passkey": {
+ "subtitle": "L'utilisation de votre passkey confirme que c'est bien vous. Votre appareil peut vous demander votre empreinte digitale, votre visage ou votre verrouillage d'écran.",
+ "title": "Utilisez votre passkey"
+ },
+ "password": {
+ "actionLink": "Utiliser une autre méthode",
+ "subtitle": "Entrez le mot de passe associé à votre compte",
+ "title": "Entrez votre mot de passe"
+ },
+ "passwordPwned": {
+ "title": "Mot de passe compromis"
+ },
+ "phoneCode": {
+ "formTitle": "Code de vérification",
+ "resendButton": "Vous n'avez pas reçu de code ? Renvoyer",
+ "subtitle": "pour continuer vers {{applicationName}}",
+ "title": "Vérifiez votre téléphone"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Code de vérification",
+ "resendButton": "Vous n'avez pas reçu de code ? Renvoyer",
+ "subtitle": "Pour continuer, veuillez saisir le code de vérification envoyé à votre téléphone",
+ "title": "Vérifiez votre téléphone"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Réinitialiser le mot de passe",
+ "requiredMessage": "Pour des raisons de sécurité, il est nécessaire de réinitialiser votre mot de passe.",
+ "successMessage": "Votre mot de passe a été changé avec succès. Nous vous connectons, veuillez patienter un instant.",
+ "title": "Définir un nouveau mot de passe"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "Nous devons vérifier votre identité avant de réinitialiser votre mot de passe."
+ },
+ "start": {
+ "actionLink": "S'inscrire",
+ "actionLink__use_email": "Utiliser un e-mail",
+ "actionLink__use_email_username": "Utiliser un e-mail ou un nom d'utilisateur",
+ "actionLink__use_passkey": "Utiliser le passkey à la place",
+ "actionLink__use_phone": "Utiliser le téléphone",
+ "actionLink__use_username": "Utiliser un nom d'utilisateur",
+ "actionText": "Vous n'avez pas de compte ?",
+ "subtitle": "Bienvenue ! Veuillez vous connecter pour continuer",
+ "title": "Se connecter à {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "Code de vérification",
+ "subtitle": "Pour continuer, veuillez saisir le code de vérification généré par votre application d'authentification",
+ "title": "Vérification en deux étapes"
+ }
+ },
+ "signInEnterPasswordTitle": "Entrez votre mot de passe",
+ "signUp": {
+ "continue": {
+ "actionLink": "Se connecter",
+ "actionText": "Vous avez déjà un compte ?",
+ "subtitle": "Veuillez remplir les détails restants pour continuer",
+ "title": "Remplissez les champs manquants"
+ },
+ "emailCode": {
+ "formSubtitle": "Saisissez le code de vérification envoyé à votre adresse e-mail",
+ "formTitle": "Code de vérification",
+ "resendButton": "Vous n'avez pas reçu de code ? Renvoyer",
+ "subtitle": "Saisissez le code de vérification envoyé à votre e-mail",
+ "title": "Vérifiez votre e-mail"
+ },
+ "emailLink": {
+ "formSubtitle": "Utilisez le lien de vérification envoyé à votre adresse e-mail",
+ "formTitle": "Lien de vérification",
+ "loading": {
+ "title": "Inscription en cours..."
+ },
+ "resendButton": "Vous n'avez pas reçu de lien ? Renvoyer",
+ "subtitle": "pour continuer vers {{applicationName}}",
+ "title": "Vérifiez votre e-mail",
+ "verified": {
+ "title": "Inscription réussie"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Retournez à l'onglet nouvellement ouvert pour continuer",
+ "subtitleNewTab": "Retournez à l'onglet précédent pour continuer",
+ "title": "E-mail vérifié avec succès"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Saisissez le code de vérification envoyé à votre numéro de téléphone",
+ "formTitle": "Code de vérification",
+ "resendButton": "Vous n'avez pas reçu de code ? Renvoyer",
+ "subtitle": "Saisissez le code de vérification envoyé à votre téléphone",
+ "title": "Vérifiez votre téléphone"
+ },
+ "start": {
+ "actionLink": "Se connecter",
+ "actionText": "Vous avez déjà un compte ?",
+ "subtitle": "Bienvenue ! Veuillez remplir les détails pour commencer",
+ "title": "Créez votre compte"
+ }
+ },
+ "socialButtonsBlockButton": "Continuer avec {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "Inscription échouée en raison d'échecs de validation de sécurité. Veuillez rafraîchir la page pour réessayer ou contacter le support pour plus d'aide.",
+ "captcha_unavailable": "Inscription échouée en raison d'une validation de bot échouée. Veuillez rafraîchir la page pour réessayer ou contacter le support pour plus d'aide.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Cette adresse e-mail est déjà utilisée. Veuillez en essayer une autre.",
+ "form_identifier_exists__phone_number": "Ce numéro de téléphone est déjà utilisé. Veuillez en essayer un autre.",
+ "form_identifier_exists__username": "Ce nom d'utilisateur est déjà pris. Veuillez en essayer un autre.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "L'adresse e-mail doit être une adresse e-mail valide.",
+ "form_param_format_invalid__phone_number": "Le numéro de téléphone doit être au format international valide.",
+ "form_param_max_length_exceeded__first_name": "Le prénom ne doit pas dépasser 256 caractères.",
+ "form_param_max_length_exceeded__last_name": "Le nom de famille ne doit pas dépasser 256 caractères.",
+ "form_param_max_length_exceeded__name": "Le nom ne doit pas dépasser 256 caractères.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Votre mot de passe n'est pas assez fort.",
+ "form_password_pwned": "Ce mot de passe a été trouvé dans une violation et ne peut pas être utilisé, veuillez en essayer un autre à la place.",
+ "form_password_pwned__sign_in": "Ce mot de passe a été trouvé dans une violation et ne peut pas être utilisé, veuillez réinitialiser votre mot de passe.",
+ "form_password_size_in_bytes_exceeded": "Votre mot de passe a dépassé le nombre maximum d'octets autorisé, veuillez le raccourcir ou supprimer certains caractères spéciaux.",
+ "form_password_validation_failed": "Mot de passe incorrect",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "Vous ne pouvez pas supprimer votre dernière identification.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Une clé d'accès est déjà enregistrée sur cet appareil.",
+ "passkey_not_supported": "Les clés d'accès ne sont pas prises en charge sur cet appareil.",
+ "passkey_pa_not_supported": "L'inscription nécessite un authentificateur de plateforme mais l'appareil ne le prend pas en charge.",
+ "passkey_registration_cancelled": "L'inscription de la clé d'accès a été annulée ou a expiré.",
+ "passkey_retrieval_cancelled": "La vérification de la clé d'accès a été annulée ou a expiré.",
+ "passwordComplexity": {
+ "maximumLength": "moins de {{length}} caractères",
+ "minimumLength": "{{length}} ou plus de caractères",
+ "requireLowercase": "une lettre minuscule",
+ "requireNumbers": "un chiffre",
+ "requireSpecialCharacter": "un caractère spécial",
+ "requireUppercase": "une lettre majuscule",
+ "sentencePrefix": "Votre mot de passe doit contenir"
+ },
+ "phone_number_exists": "Ce numéro de téléphone est déjà utilisé. Veuillez en essayer un autre.",
+ "zxcvbn": {
+ "couldBeStronger": "Votre mot de passe fonctionne, mais pourrait être plus fort. Essayez d'ajouter plus de caractères.",
+ "goodPassword": "Votre mot de passe répond à toutes les exigences nécessaires.",
+ "notEnough": "Votre mot de passe n'est pas assez fort.",
+ "suggestions": {
+ "allUppercase": "Mettez des majuscules sur certaines lettres, mais pas sur toutes.",
+ "anotherWord": "Ajoutez des mots moins communs.",
+ "associatedYears": "Évitez les années qui vous sont associées.",
+ "capitalization": "Mettez des majuscules sur plus que la première lettre.",
+ "dates": "Évitez les dates et années qui vous sont associées.",
+ "l33t": "Évitez les substitutions de lettres prévisibles comme '@' pour 'a'.",
+ "longerKeyboardPattern": "Utilisez des motifs de clavier plus longs et changez de direction de frappe plusieurs fois.",
+ "noNeed": "Vous pouvez créer des mots de passe forts sans utiliser de symboles, de chiffres ou de lettres majuscules.",
+ "pwned": "Si vous utilisez ce mot de passe ailleurs, vous devriez le changer.",
+ "recentYears": "Évitez les années récentes.",
+ "repeated": "Évitez les mots et caractères répétés.",
+ "reverseWords": "Évitez les mots communs écrits à l'envers.",
+ "sequences": "Évitez les séquences de caractères communes.",
+ "useWords": "Utilisez plusieurs mots, mais évitez les phrases courantes."
+ },
+ "warnings": {
+ "common": "C'est un mot de passe couramment utilisé.",
+ "commonNames": "Les noms et prénoms courants sont faciles à deviner.",
+ "dates": "Les dates sont faciles à deviner.",
+ "extendedRepeat": "Les motifs de caractères répétés comme \"abcabcabc\" sont faciles à deviner.",
+ "keyPattern": "Les motifs de clavier courts sont faciles à deviner.",
+ "namesByThemselves": "Les noms ou prénoms seuls sont faciles à deviner.",
+ "pwned": "Votre mot de passe a été exposé lors d'une violation de données sur Internet.",
+ "recentYears": "Les années récentes sont faciles à deviner.",
+ "sequences": "Les séquences de caractères communes comme \"abc\" sont faciles à deviner.",
+ "similarToCommon": "Cela ressemble à un mot de passe couramment utilisé.",
+ "simpleRepeat": "Les caractères répétés comme \"aaa\" sont faciles à deviner.",
+ "straightRow": "Les rangées de touches consécutives sur votre clavier sont faciles à deviner.",
+ "topHundred": "C'est un mot de passe fréquemment utilisé.",
+ "topTen": "C'est un mot de passe très utilisé.",
+ "userInputs": "Il ne devrait y avoir aucune donnée personnelle ou liée à la page.",
+ "wordByItself": "Les mots seuls sont faciles à deviner."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Ajouter un compte",
+ "action__manageAccount": "Gérer le compte",
+ "action__signOut": "Déconnexion",
+ "action__signOutAll": "Déconnexion de tous les comptes"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Copié !",
+ "actionLabel__copy": "Copier tout",
+ "actionLabel__download": "Télécharger .txt",
+ "actionLabel__print": "Imprimer",
+ "infoText1": "Des codes de secours seront activés pour ce compte.",
+ "infoText2": "Gardez les codes de secours secrets et stockez-les en toute sécurité. Vous pouvez régénérer des codes de secours si vous soupçonnez qu'ils ont été compromis.",
+ "subtitle__codelist": "Stockez-les en toute sécurité et gardez-les secrets.",
+ "successMessage": "Les codes de secours sont maintenant activés. Vous pouvez utiliser l'un d'eux pour vous connecter à votre compte si vous perdez l'accès à votre appareil d'authentification. Chaque code ne peut être utilisé qu'une seule fois.",
+ "successSubtitle": "Vous pouvez utiliser l'un d'eux pour vous connecter à votre compte si vous perdez l'accès à votre appareil d'authentification.",
+ "title": "Ajouter la vérification par code de secours",
+ "title__codelist": "Codes de secours"
+ },
+ "connectedAccountPage": {
+ "formHint": "Sélectionnez un fournisseur pour connecter votre compte.",
+ "formHint__noAccounts": "Aucun fournisseur de compte externe disponible.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} sera supprimé de ce compte.",
+ "messageLine2": "Vous ne pourrez plus utiliser ce compte connecté et toutes les fonctionnalités dépendantes ne fonctionneront plus.",
+ "successMessage": "{{connectedAccount}} a été supprimé de votre compte.",
+ "title": "Supprimer le compte connecté"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "Le fournisseur a été ajouté à votre compte",
+ "title": "Ajouter un compte connecté"
+ },
+ "deletePage": {
+ "actionDescription": "Tapez \"Supprimer le compte\" ci-dessous pour continuer.",
+ "confirm": "Supprimer le compte",
+ "messageLine1": "Êtes-vous sûr de vouloir supprimer votre compte ?",
+ "messageLine2": "Cette action est permanente et irréversible.",
+ "title": "Supprimer le compte"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "Un e-mail contenant un code de vérification sera envoyé à cette adresse e-mail.",
+ "formSubtitle": "Entrez le code de vérification envoyé à {{identifier}}",
+ "formTitle": "Code de vérification",
+ "resendButton": "Vous n'avez pas reçu de code ? Renvoyer",
+ "successMessage": "L'e-mail {{identifier}} a été ajouté à votre compte."
+ },
+ "emailLink": {
+ "formHint": "Un e-mail contenant un lien de vérification sera envoyé à cette adresse e-mail.",
+ "formSubtitle": "Cliquez sur le lien de vérification dans l'e-mail envoyé à {{identifier}}",
+ "formTitle": "Lien de vérification",
+ "resendButton": "Vous n'avez pas reçu de lien ? Renvoyer",
+ "successMessage": "L'e-mail {{identifier}} a été ajouté à votre compte."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} sera supprimé de ce compte.",
+ "messageLine2": "Vous ne pourrez plus vous connecter en utilisant cette adresse e-mail.",
+ "successMessage": "{{emailAddress}} a été supprimé de votre compte.",
+ "title": "Supprimer l'adresse e-mail"
+ },
+ "title": "Ajouter une adresse e-mail",
+ "verifyTitle": "Vérifier l'adresse e-mail"
+ },
+ "formButtonPrimary__add": "Ajouter",
+ "formButtonPrimary__continue": "Continuer",
+ "formButtonPrimary__finish": "Terminer",
+ "formButtonPrimary__remove": "Supprimer",
+ "formButtonPrimary__save": "Enregistrer",
+ "formButtonReset": "Annuler",
+ "mfaPage": {
+ "formHint": "Sélectionnez une méthode à ajouter.",
+ "title": "Ajouter la vérification en deux étapes"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Utiliser le numéro existant",
+ "primaryButton__addPhoneNumber": "Ajouter un numéro de téléphone",
+ "removeResource": {
+ "messageLine1": "{{identifier}} ne recevra plus de codes de vérification lors de la connexion.",
+ "messageLine2": "Votre compte peut ne pas être aussi sécurisé. Êtes-vous sûr de vouloir continuer ?",
+ "successMessage": "La vérification en deux étapes par code SMS a été supprimée pour {{mfaPhoneCode}}",
+ "title": "Supprimer la vérification en deux étapes"
+ },
+ "subtitle__availablePhoneNumbers": "Sélectionnez un numéro de téléphone existant pour vous inscrire à la vérification en deux étapes par code SMS ou en ajouter un nouveau.",
+ "subtitle__unavailablePhoneNumbers": "Aucun numéro de téléphone disponible pour vous inscrire à la vérification en deux étapes par code SMS, veuillez en ajouter un nouveau.",
+ "successMessage1": "Lors de la connexion, vous devrez entrer un code de vérification envoyé à ce numéro de téléphone en tant qu'étape supplémentaire.",
+ "successMessage2": "Sauvegardez ces codes de secours et stockez-les en lieu sûr. Si vous perdez l'accès à votre appareil d'authentification, vous pourrez utiliser les codes de secours pour vous connecter.",
+ "successTitle": "Vérification par code SMS activée",
+ "title": "Ajouter la vérification par code SMS"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "Scanner plutôt le code QR",
+ "buttonUnableToScan__nonPrimary": "Impossible de scanner le code QR ?",
+ "infoText__ableToScan": "Configurez une nouvelle méthode de connexion dans votre application d'authentification et scannez le code QR suivant pour le lier à votre compte.",
+ "infoText__unableToScan": "Configurez une nouvelle méthode de connexion dans votre application d'authentification et saisissez la clé fournie ci-dessous.",
+ "inputLabel__unableToScan1": "Assurez-vous que les mots de passe basés sur le temps ou à usage unique sont activés, puis terminez la liaison de votre compte.",
+ "inputLabel__unableToScan2": "Alternativement, si votre application d'authentification prend en charge les URI TOTP, vous pouvez également copier l'URI complet."
+ },
+ "removeResource": {
+ "messageLine1": "Les codes de vérification de cet authentificateur ne seront plus nécessaires lors de la connexion.",
+ "messageLine2": "Votre compte peut ne pas être aussi sécurisé. Êtes-vous sûr de vouloir continuer ?",
+ "successMessage": "La vérification en deux étapes via l'application d'authentification a été supprimée.",
+ "title": "Supprimer la vérification en deux étapes"
+ },
+ "successMessage": "La vérification en deux étapes est maintenant activée. Lors de la connexion, vous devrez entrer un code de vérification de cet authentificateur en tant qu'étape supplémentaire.",
+ "title": "Ajouter l'application d'authentification",
+ "verifySubtitle": "Saisissez le code de vérification généré par votre authentificateur",
+ "verifyTitle": "Code de vérification"
+ },
+ "mobileButton__menu": "Menu",
+ "navbar": {
+ "account": "Profil",
+ "description": "Gérez les informations de votre compte.",
+ "security": "Sécurité",
+ "title": "Compte"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} sera supprimé de ce compte.",
+ "title": "Supprimer le passkey"
+ },
+ "subtitle__rename": "Vous pouvez changer le nom du passkey pour le rendre plus facile à trouver.",
+ "title__rename": "Renommer Passkey"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "Il est recommandé de vous déconnecter de tous les autres appareils qui ont pu utiliser votre ancien mot de passe.",
+ "readonly": "Votre mot de passe ne peut actuellement pas être modifié car vous ne pouvez vous connecter que via la connexion d'entreprise.",
+ "successMessage__set": "Votre mot de passe a été défini.",
+ "successMessage__signOutOfOtherSessions": "Tous les autres appareils ont été déconnectés.",
+ "successMessage__update": "Votre mot de passe a été mis à jour.",
+ "title__set": "Définir le mot de passe",
+ "title__update": "Mettre à jour le mot de passe"
+ },
+ "phoneNumberPage": {
+ "infoText": "Un message texte contenant un code de vérification sera envoyé à ce numéro de téléphone. Des frais de messagerie et de données peuvent s'appliquer.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} sera supprimé de ce compte.",
+ "messageLine2": "Vous ne pourrez plus vous connecter en utilisant ce numéro de téléphone.",
+ "successMessage": "{{phoneNumber}} a été supprimé de votre compte.",
+ "title": "Supprimer le numéro de téléphone"
+ },
+ "successMessage": "{{identifier}} a été ajouté à votre compte.",
+ "title": "Ajouter un numéro de téléphone",
+ "verifySubtitle": "Entrez le code de vérification envoyé à {{identifier}}",
+ "verifyTitle": "Vérifier le numéro de téléphone"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Taille recommandée 1:1, jusqu'à 10 Mo.",
+ "imageFormDestructiveActionSubtitle": "Supprimer",
+ "imageFormSubtitle": "Télécharger",
+ "imageFormTitle": "Image de profil",
+ "readonly": "Vos informations de profil ont été fournies par la connexion d'entreprise et ne peuvent pas être modifiées.",
+ "successMessage": "Votre profil a été mis à jour.",
+ "title": "Mettre à jour le profil"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Déconnexion de l'appareil",
+ "title": "Appareils actifs"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Réessayer",
+ "actionLabel__reauthorize": "Autoriser maintenant",
+ "destructiveActionTitle": "Supprimer",
+ "primaryButton": "Connecter un compte",
+ "subtitle__reauthorize": "Les autorisations requises ont été mises à jour, et vous pourriez rencontrer des fonctionnalités limitées. Veuillez réautoriser cette application pour éviter tout problème",
+ "title": "Comptes connectés"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Supprimer le compte",
+ "title": "Supprimer le compte"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "Supprimer l'adresse e-mail",
+ "detailsAction__nonPrimary": "Définir comme principale",
+        "detailsAction__primary": "Terminer la vérification",
+ "detailsAction__unverified": "Vérifier",
+ "primaryButton": "Ajouter une adresse e-mail",
+ "title": "Adresses e-mail"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Comptes d'entreprise"
+ },
+ "headerTitle__account": "Détails du profil",
+ "headerTitle__security": "Sécurité",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Régénérer",
+ "headerTitle": "Codes de secours",
+ "subtitle__regenerate": "Obtenez un nouvel ensemble de codes de secours sécurisés. Les codes de secours précédents seront supprimés et ne pourront pas être utilisés.",
+ "title__regenerate": "Régénérer les codes de secours"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Définir comme par défaut",
+ "destructiveActionLabel": "Supprimer"
+ },
+ "primaryButton": "Ajouter une vérification en deux étapes",
+ "title": "Vérification en deux étapes",
+ "totp": {
+ "destructiveActionTitle": "Supprimer",
+ "headerTitle": "Application d'authentification"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Supprimer",
+ "menuAction__rename": "Renommer",
+ "title": "Passkeys"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Définir le mot de passe",
+ "primaryButton__updatePassword": "Mettre à jour le mot de passe",
+ "title": "Mot de passe"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Supprimer le numéro de téléphone",
+ "detailsAction__nonPrimary": "Définir comme principal",
+        "detailsAction__primary": "Terminer la vérification",
+ "detailsAction__unverified": "Vérifier le numéro de téléphone",
+ "primaryButton": "Ajouter un numéro de téléphone",
+ "title": "Numéros de téléphone"
+ },
+ "profileSection": {
+ "primaryButton": "Mettre à jour le profil",
+ "title": "Profil"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Définir le nom d'utilisateur",
+ "primaryButton__updateUsername": "Mettre à jour le nom d'utilisateur",
+ "title": "Nom d'utilisateur"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Supprimer le portefeuille",
+ "primaryButton": "Portefeuilles Web3",
+ "title": "Portefeuilles Web3"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Votre nom d'utilisateur a été mis à jour.",
+ "title__set": "Définir le nom d'utilisateur",
+ "title__update": "Mettre à jour le nom d'utilisateur"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} sera supprimé de ce compte.",
+ "messageLine2": "Vous ne pourrez plus vous connecter en utilisant ce portefeuille web3.",
+ "successMessage": "{{web3Wallet}} a été supprimé de votre compte.",
+ "title": "Supprimer le portefeuille web3"
+ },
+ "subtitle__availableWallets": "Sélectionnez un portefeuille web3 pour vous connecter à votre compte.",
+ "subtitle__unavailableWallets": "Il n'y a pas de portefeuilles web3 disponibles.",
+ "successMessage": "Le portefeuille a été ajouté à votre compte.",
+ "title": "Ajouter un portefeuille web3"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/common.json b/DigitalHumanWeb/locales/fr-FR/common.json
new file mode 100644
index 0000000..f65b733
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "À propos",
+ "advanceSettings": "Paramètres avancés",
+ "alert": {
+ "cloud": {
+ "action": "Découvrir gratuitement",
+ "desc": "Nous offrons {{credit}} crédits de calcul gratuits à tous les utilisateurs inscrits, sans configuration compliquée, prêts à l'emploi, prenant en charge l'historique des conversations illimité et la synchronisation cloud globale. Découvrez davantage de fonctionnalités avancées ensemble.",
+ "descOnMobile": "Nous offrons {{credit}} crédits de calcul gratuits à tous les utilisateurs inscrits, sans configuration compliquée, prêts à l'emploi.",
+ "title": "Bienvenue à {{name}}"
+ }
+ },
+ "appInitializing": "L'application est en cours de démarrage...",
+ "autoGenerate": "Générer automatiquement",
+ "autoGenerateTooltip": "Générer automatiquement la description de l'agent basée sur les suggestions",
+ "autoGenerateTooltipDisabled": "Veuillez saisir un mot-clé avant d'activer la fonction de complétion automatique",
+ "back": "Retour",
+ "batchDelete": "Suppression en masse",
+ "blog": "Blog des produits",
+ "cancel": "Annuler",
+ "changelog": "Journal des modifications",
+ "close": "Fermer",
+ "contact": "Nous contacter",
+ "copy": "Copier",
+ "copyFail": "Échec de la copie",
+ "copySuccess": "Copie réussie",
+ "dataStatistics": {
+ "messages": "Messages",
+ "sessions": "Sessions",
+ "today": "Aujourd'hui",
+ "topics": "Sujets"
+ },
+ "defaultAgent": "Agent par défaut",
+ "defaultSession": "Session par défaut",
+ "delete": "Supprimer",
+ "document": "Document d'utilisation",
+ "download": "Télécharger",
+ "duplicate": "Dupliquer",
+ "edit": "Modifier",
+ "export": "Exporter",
+ "exportType": {
+ "agent": "Exporter les paramètres de l'agent",
+ "agentWithMessage": "Exporter l'agent et les messages",
+ "all": "Exporter les paramètres globaux et toutes les données des agents",
+ "allAgent": "Exporter tous les paramètres de l'agent",
+ "allAgentWithMessage": "Exporter tous les agents et les messages",
+ "globalSetting": "Exporter les paramètres globaux"
+ },
+ "feedback": "Retour d'information et suggestions",
+ "follow": "Suivez-nous sur {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Partager vos précieux commentaires",
+ "star": "Ajouter une étoile sur GitHub"
+ },
+ "and": "et",
+ "feedback": {
+ "action": "Partager des commentaires",
+ "desc": "Chaque idée et suggestion que vous avez est précieuse pour nous. Nous sommes impatients de connaître votre avis ! N'hésitez pas à nous contacter pour nous faire part de vos retours sur les fonctionnalités et l'expérience d'utilisation du produit, afin de nous aider à améliorer LobeChat.",
+ "title": "Partagez vos précieux commentaires sur GitHub"
+ },
+ "later": "Plus tard",
+ "star": {
+ "action": "Ajouter une étoile",
+ "desc": "Si vous aimez notre produit et souhaitez nous soutenir, pourriez-vous nous donner une étoile sur GitHub ? Ce petit geste est très important pour nous et nous encourage à continuer à vous offrir une expérience exceptionnelle.",
+ "title": "Ajoutez une étoile sur GitHub pour nous"
+ },
+ "title": "Vous aimez notre produit ?"
+ },
+ "fullscreen": "Mode plein écran",
+ "historyRange": "Plage d'historique",
+ "import": "Importer",
+ "importModal": {
+ "error": {
+      "desc": "Désolé, une erreur s'est produite lors de l'importation des données. Veuillez réessayer l'importation ou <1>soumettre un problème</1>, nous vous aiderons à résoudre le problème dès que possible.",
+ "title": "Échec de l'importation des données"
+ },
+ "finish": {
+ "onlySettings": "Importation des paramètres système réussie",
+ "start": "Commencer à utiliser",
+ "subTitle": "Importation des données réussie, durée : {{duration}} secondes. Détails de l'importation :",
+ "title": "Importation des données terminée"
+ },
+ "loading": "Importation des données en cours, veuillez patienter...",
+ "preparing": "Préparation du module d'importation des données en cours...",
+ "result": {
+ "added": "Importation réussie",
+ "errors": "Erreurs d'importation",
+ "messages": "Messages",
+ "sessionGroups": "Groupes de session",
+ "sessions": "Agents",
+ "skips": "Éléments ignorés en double",
+ "topics": "Sujets",
+ "type": "Type de données"
+ },
+ "title": "Importer des données",
+ "uploading": {
+ "desc": "Le fichier en cours est volumineux, veuillez patienter pendant le téléchargement...",
+ "restTime": "Temps restant",
+ "speed": "Vitesse de téléchargement"
+ }
+ },
+ "information": "Communauté et Informations",
+ "installPWA": "Installer l'application du navigateur",
+ "lang": {
+    "ar": "Arabe",
+ "bg-BG": "Bulgare",
+ "bn": "Bengali",
+ "cs-CZ": "Tchèque",
+ "da-DK": "Danois",
+ "de-DE": "Allemand",
+ "el-GR": "Grec",
+ "en": "Anglais",
+ "en-US": "Anglais",
+ "es-ES": "Espagnol",
+ "fi-FI": "Finnois",
+    "fr-FR": "Français",
+ "hi-IN": "Hindi",
+ "hu-HU": "Hongrois",
+ "id-ID": "Indonésien",
+ "it-IT": "Italien",
+ "ja-JP": "Japonais",
+ "ko-KR": "Coréen",
+ "nl-NL": "Néerlandais",
+ "no-NO": "Norvégien",
+ "pl-PL": "Polonais",
+ "pt-BR": "Portugais",
+ "pt-PT": "Portugais",
+ "ro-RO": "Roumain",
+ "ru-RU": "Russe",
+ "sk-SK": "Slovaque",
+ "sr-RS": "Serbe",
+ "sv-SE": "Suédois",
+ "th-TH": "Thaï",
+    "tr-TR": "Turc",
+ "uk-UA": "Ukrainien",
+ "vi-VN": "Vietnamien",
+ "zh": "Chinois",
+ "zh-CN": "Chinois simplifié",
+ "zh-TW": "Chinois traditionnel"
+ },
+ "layoutInitializing": "Initialisation de la mise en page en cours...",
+ "legal": "Mentions légales",
+ "loading": "Chargement en cours...",
+ "mail": {
+ "business": "Partenariats commerciaux",
+ "support": "Support par e-mail"
+ },
+ "oauth": "Connexion SSO",
+ "officialSite": "Site officiel",
+ "ok": "OK",
+ "password": "Mot de passe",
+ "pin": "Épingler",
+ "pinOff": "Désactiver l'épinglage",
+ "privacy": "Politique de confidentialité",
+ "regenerate": "Régénérer",
+ "rename": "Renommer",
+ "reset": "Réinitialiser",
+ "retry": "Réessayer",
+ "send": "Envoyer",
+ "setting": "Paramètre",
+ "share": "Partager",
+ "stop": "Arrêter",
+ "sync": {
+ "actions": {
+ "settings": "Paramètres de synchronisation",
+ "sync": "Synchroniser maintenant"
+ },
+ "awareness": {
+ "current": "Appareil actuel"
+ },
+ "channel": "Canal",
+ "disabled": {
+ "actions": {
+ "enable": "Activer la synchronisation cloud",
+ "settings": "Paramètres de configuration"
+ },
+ "desc": "Les données de cette session sont uniquement stockées dans ce navigateur. Si vous avez besoin de synchroniser les données entre plusieurs appareils, veuillez configurer et activer la synchronisation cloud.",
+ "title": "La synchronisation des données n'est pas activée"
+ },
+ "enabled": {
+ "title": "Synchronisation des données"
+ },
+ "status": {
+ "connecting": "Connexion en cours",
+ "disabled": "Synchronisation désactivée",
+ "ready": "Connecté",
+ "synced": "Synchronisé",
+ "syncing": "Synchronisation en cours",
+ "unconnected": "Échec de la connexion"
+ },
+ "title": "État de synchronisation",
+ "unconnected": {
+ "tip": "Échec de la connexion au serveur de signalisation. Impossible d'établir un canal de communication peer-to-peer. Veuillez vérifier votre réseau et réessayer."
+ }
+ },
+ "tab": {
+ "chat": "Conversation",
+ "discover": "Découvrir",
+ "files": "Fichiers",
+    "me": "Moi",
+ "setting": "Paramètre"
+ },
+ "telemetry": {
+ "allow": "Autoriser",
+ "deny": "Refuser",
+ "desc": "Nous aimerions recueillir anonymement des informations sur votre utilisation afin de nous aider à améliorer LobeChat et à vous offrir une meilleure expérience produit. Vous pouvez désactiver cette fonctionnalité à tout moment dans les paramètres - À propos.",
+ "learnMore": "En savoir plus",
+ "title": "Aider LobeChat à s'améliorer"
+ },
+ "temp": "Temporaire",
+ "terms": "Conditions de service",
+ "updateAgent": "Mettre à jour les informations de l'agent",
+ "upgradeVersion": {
+ "action": "Mettre à jour",
+ "hasNew": "Nouvelle mise à jour disponible",
+ "newVersion": "Nouvelle version disponible : {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Utilisateur anonyme",
+ "billing": "Gestion de la facturation",
+ "cloud": "Découvrir {{name}}",
+ "data": "Stockage des données",
+ "defaultNickname": "Utilisateur de la version communautaire",
+ "discord": "Support de la communauté",
+ "docs": "Documentation d'utilisation",
+ "email": "Support par e-mail",
+ "feedback": "Retours et suggestions",
+ "help": "Centre d'aide",
+ "moveGuide": "Le bouton de configuration a été déplacé ici",
+ "plans": "Forfaits d'abonnement",
+ "preview": "Aperçu",
+ "profile": "Gestion du compte",
+ "setting": "Paramètres de l'application",
+ "usages": "Statistiques d'utilisation"
+ },
+ "version": "Version"
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/components.json b/DigitalHumanWeb/locales/fr-FR/components.json
new file mode 100644
index 0000000..5c1c1c4
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Faites glisser des fichiers ici, plusieurs images peuvent être téléchargées.",
+ "dragFileDesc": "Faites glisser des images et des fichiers ici, plusieurs images et fichiers peuvent être téléchargés.",
+ "dragFileTitle": "Télécharger des fichiers",
+ "dragTitle": "Télécharger des images"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Ajouter à la base de connaissances",
+ "addToOtherKnowledgeBase": "Ajouter à une autre base de connaissances",
+ "batchChunking": "Découpage par lots",
+ "chunking": "Découpage",
+ "chunkingTooltip": "Divisez le fichier en plusieurs blocs de texte et vectorisez-les pour une recherche sémantique et un dialogue sur le fichier",
+ "confirmDelete": "Vous allez supprimer ce fichier. Une fois supprimé, il ne pourra pas être récupéré. Veuillez confirmer votre action.",
+ "confirmDeleteMultiFiles": "Vous allez supprimer les {{count}} fichiers sélectionnés. Une fois supprimés, ils ne pourront pas être récupérés. Veuillez confirmer votre action.",
+ "confirmRemoveFromKnowledgeBase": "Vous allez retirer les {{count}} fichiers sélectionnés de la base de connaissances. Une fois retirés, les fichiers resteront visibles dans tous les fichiers. Veuillez confirmer votre action.",
+ "copyUrl": "Copier le lien",
+ "copyUrlSuccess": "Adresse du fichier copiée avec succès",
+ "createChunkingTask": "Préparation en cours...",
+ "deleteSuccess": "Fichier supprimé avec succès",
+ "downloading": "Téléchargement du fichier en cours...",
+ "removeFromKnowledgeBase": "Retirer de la base de connaissances",
+ "removeFromKnowledgeBaseSuccess": "Fichier retiré avec succès"
+ },
+ "bottom": "C'est tout",
+ "config": {
+ "showFilesInKnowledgeBase": "Afficher le contenu dans la base de connaissances"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Télécharger un fichier",
+ "folder": "Télécharger un dossier",
+ "knowledgeBase": "Créer une nouvelle base de connaissances"
+ },
+ "or": "ou",
+ "title": "Faites glisser un fichier ou un dossier ici"
+ },
+ "title": {
+ "createdAt": "Date de création",
+ "size": "Taille",
+ "title": "Fichier"
+ },
+ "total": {
+ "fileCount": "Total {{count}} éléments",
+ "selectedCount": "Sélectionné {{count}} éléments"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Les blocs de texte n'ont pas encore été entièrement vectorisés, ce qui rendra la fonction de recherche sémantique indisponible. Pour améliorer la qualité de la recherche, veuillez vectoriser les blocs de texte.",
+ "error": "Échec de la vectorisation",
+ "errorResult": "Échec de la vectorisation, veuillez vérifier et réessayer. Raison de l'échec :",
+ "processing": "Les blocs de texte sont en cours de vectorisation, veuillez patienter.",
+ "success": "Tous les blocs de texte sont maintenant vectorisés."
+ },
+ "embeddings": "Vectorisation",
+ "status": {
+ "error": "Échec du découpage",
+ "errorResult": "Échec du découpage, veuillez vérifier et réessayer. Raison de l'échec :",
+ "processing": "Découpage en cours",
+        "processingTip": "Le serveur est en train de diviser les blocs de texte, fermer la page n'affectera pas la progression du découpage."
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Retour"
+ },
+ "ModelSelect": {
+ "featureTag": {
+      "custom": "Le modèle personnalisé prend en charge par défaut les appels de fonction et la reconnaissance visuelle. Veuillez vérifier la disponibilité de ces capacités en fonction de vos besoins réels.",
+ "file": "Ce modèle prend en charge la lecture et la reconnaissance de fichiers téléchargés.",
+ "functionCall": "Ce modèle prend en charge les appels de fonction.",
+ "tokens": "Ce modèle prend en charge jusqu'à {{tokens}} jetons par session.",
+ "vision": "Ce modèle prend en charge la reconnaissance visuelle."
+ },
+ "removed": "Le modèle n'est pas dans la liste, il sera automatiquement supprimé si vous annulez la sélection"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "Aucun modèle activé. Veuillez vous rendre dans les paramètres pour l'activer.",
+ "provider": "Fournisseur"
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/discover.json b/DigitalHumanWeb/locales/fr-FR/discover.json
new file mode 100644
index 0000000..508b8d0
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Ajouter un assistant",
+ "addAgentAndConverse": "Ajouter un assistant et discuter",
+ "addAgentSuccess": "Ajout réussi",
+ "conversation": {
+ "l1": "Bonjour, je suis **{{name}}**, vous pouvez me poser n'importe quelle question, je ferai de mon mieux pour y répondre ~",
+ "l2": "Voici un aperçu de mes capacités : ",
+ "l3": "Commençons la conversation !"
+ },
+ "description": "Présentation de l'assistant",
+ "detail": "Détails",
+ "list": "Liste des assistants",
+ "more": "Plus",
+ "plugins": "Intégrer des plugins",
+ "recentSubmits": "Mises à jour récentes",
+ "suggestions": "Suggestions connexes",
+ "systemRole": "Paramètres de l'assistant",
+ "try": "Essayer"
+ },
+ "back": "Retour à la découverte",
+ "category": {
+ "assistant": {
+ "academic": "Académique",
+ "all": "Tout",
+ "career": "Carrière",
+ "copywriting": "Rédaction",
+ "design": "Design",
+ "education": "Éducation",
+ "emotions": "Émotions",
+ "entertainment": "Divertissement",
+ "games": "Jeux",
+ "general": "Général",
+ "life": "Vie",
+ "marketing": "Marketing",
+ "office": "Bureau",
+ "programming": "Programmation",
+ "translation": "Traduction"
+ },
+ "plugin": {
+ "all": "Tout",
+ "gaming-entertainment": "Jeux et divertissement",
+ "life-style": "Style de vie",
+ "media-generate": "Génération de médias",
+ "science-education": "Science et éducation",
+ "social": "Médias sociaux",
+ "stocks-finance": "Actions et finances",
+ "tools": "Outils pratiques",
+ "web-search": "Recherche sur le web"
+ }
+ },
+ "cleanFilter": "Effacer le filtre",
+ "create": "Créer",
+ "createGuide": {
+ "func1": {
+ "desc1": "Accédez à la page de paramètres de l'assistant que vous souhaitez soumettre via le coin supérieur droit de la fenêtre de conversation ;",
+ "desc2": "Cliquez sur le bouton de soumission au marché des assistants dans le coin supérieur droit.",
+ "tag": "Méthode un",
+ "title": "Soumettre via LobeChat"
+ },
+ "func2": {
+ "button": "Aller au dépôt d'assistants Github",
+ "desc": "Si vous souhaitez ajouter un assistant à l'index, créez une entrée dans le répertoire plugins avec agent-template.json ou agent-template-full.json, rédigez une brève description et marquez-la de manière appropriée, puis créez une demande de tirage.",
+ "tag": "Méthode deux",
+ "title": "Soumettre via Github"
+ }
+ },
+ "dislike": "Je n'aime pas",
+ "filter": "Filtrer",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Tous les auteurs",
+ "followed": "Auteurs suivis",
+ "title": "Plage d'auteurs"
+ },
+ "contentLength": "Longueur minimale du contexte",
+ "maxToken": {
+ "title": "Définir la longueur maximale (Token)",
+ "unlimited": "Illimité"
+ },
+ "other": {
+ "functionCall": "Appel de fonction pris en charge",
+ "title": "Autres",
+ "vision": "Reconnaissance visuelle prise en charge",
+ "withKnowledge": "Avec base de connaissances",
+ "withTool": "Avec plugin"
+ },
+ "pricing": "Prix du modèle",
+ "timePeriod": {
+ "all": "Tout le temps",
+ "day": "Dernières 24 heures",
+ "month": "Derniers 30 jours",
+ "title": "Plage de temps",
+      "week": "Derniers 7 jours",
+ "year": "Dernière année"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Assistants recommandés",
+ "featuredModels": "Modèles recommandés",
+ "featuredProviders": "Fournisseurs de modèles recommandés",
+ "featuredTools": "Plugins recommandés",
+ "more": "Découvrez plus"
+ },
+ "like": "J'aime",
+ "models": {
+ "chat": "Commencer la conversation",
+ "contentLength": "Longueur maximale du contexte",
+ "free": "Gratuit",
+ "guide": "Guide de configuration",
+ "list": "Liste des modèles",
+ "more": "Plus",
+ "parameterList": {
+ "defaultValue": "Valeur par défaut",
+ "docs": "Voir la documentation",
+ "frequency_penalty": {
+ "desc": "Ce paramètre ajuste la fréquence à laquelle le modèle réutilise des mots spécifiques déjà présents dans l'entrée. Des valeurs plus élevées réduisent la probabilité de répétition, tandis que des valeurs négatives produisent l'effet inverse. La pénalité de vocabulaire n'augmente pas avec le nombre d'occurrences. Les valeurs négatives encouragent la réutilisation des mots.",
+ "title": "Pénalité de fréquence"
+ },
+ "max_tokens": {
+ "desc": "Ce paramètre définit la longueur maximale que le modèle peut générer dans une seule réponse. Un réglage plus élevé permet au modèle de produire des réponses plus longues, tandis qu'un réglage plus bas limite la longueur de la réponse, la rendant plus concise. Ajuster ce paramètre de manière appropriée en fonction des différents scénarios d'application peut aider à atteindre la longueur et le niveau de détail souhaités dans la réponse.",
+ "title": "Limite de réponse unique"
+ },
+ "presence_penalty": {
+ "desc": "Ce paramètre vise à contrôler la réutilisation des mots en fonction de leur fréquence d'apparition dans l'entrée. Il essaie d'utiliser moins de mots qui apparaissent fréquemment dans l'entrée, en proportion de leur fréquence d'apparition. La pénalité de vocabulaire augmente avec le nombre d'occurrences. Les valeurs négatives encouragent la réutilisation des mots.",
+ "title": "Fraîcheur des sujets"
+ },
+ "range": "Plage",
+ "temperature": {
+ "desc": "Ce paramètre influence la diversité des réponses du modèle. Des valeurs plus basses entraînent des réponses plus prévisibles et typiques, tandis que des valeurs plus élevées encouragent des réponses plus variées et moins courantes. Lorsque la valeur est fixée à 0, le modèle donne toujours la même réponse pour une entrée donnée.",
+ "title": "Aléatoire"
+ },
+ "title": "Paramètres du modèle",
+ "top_p": {
+ "desc": "Ce paramètre limite le choix du modèle à un certain pourcentage de mots ayant la plus haute probabilité : seuls les mots de pointe dont la probabilité cumulée atteint P sont sélectionnés. Des valeurs plus basses rendent les réponses du modèle plus prévisibles, tandis que les paramètres par défaut permettent au modèle de choisir parmi l'ensemble du vocabulaire.",
+ "title": "Échantillonnage nucléaire"
+ },
+ "type": "Type"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat prend en charge l'utilisation de clés API personnalisées pour ce fournisseur.",
+ "input": "Prix d'entrée",
+ "inputTooltip": "Coût par million de tokens",
+ "latency": "Latence",
+ "latencyTooltip": "Temps de réponse moyen pour que le fournisseur envoie le premier token",
+ "maxOutput": "Longueur de sortie maximale",
+ "maxOutputTooltip": "Nombre maximal de tokens que ce point de terminaison peut générer",
+ "officialTooltip": "Service officiel de LobeHub",
+ "output": "Prix de sortie",
+ "outputTooltip": "Coût par million de tokens",
+ "streamCancellationTooltip": "Ce fournisseur prend en charge la fonction d'annulation de flux.",
+ "throughput": "Débit",
+ "throughputTooltip": "Nombre moyen de tokens transmis par seconde pour les requêtes de flux"
+ },
+ "suggestions": "Modèles connexes",
+ "supportedProviders": "Fournisseurs prenant en charge ce modèle"
+ },
+ "plugins": {
+ "community": "Plugins communautaires",
+ "install": "Installer le plugin",
+ "installed": "Installé",
+ "list": "Liste des plugins",
+ "meta": {
+ "description": "Description",
+ "parameter": "Paramètre",
+ "title": "Paramètres de l'outil",
+ "type": "Type"
+ },
+ "more": "Plus",
+ "official": "Plugins officiels",
+ "recentSubmits": "Mises à jour récentes",
+ "suggestions": "Suggestions connexes"
+ },
+ "providers": {
+ "config": "Configurer le fournisseur",
+ "list": "Liste des fournisseurs de modèles",
+ "modelCount": "{{count}} modèles",
+ "modelSite": "Documentation des modèles",
+ "more": "Plus",
+ "officialSite": "Site officiel",
+ "showAllModels": "Afficher tous les modèles",
+ "suggestions": "Fournisseurs connexes",
+ "supportedModels": "Modèles pris en charge"
+ },
+ "search": {
+ "placeholder": "Rechercher par nom, description ou mot-clé...",
+ "result": "{{count}} résultats de recherche concernant {{keyword}}",
+ "searching": "Recherche en cours..."
+ },
+ "sort": {
+ "mostLiked": "Le plus aimé",
+ "mostUsed": "Le plus utilisé",
+ "newest": "Du plus récent au plus ancien",
+ "oldest": "Du plus ancien au plus récent",
+ "recommended": "Recommandé"
+ },
+ "tab": {
+ "assistants": "Assistants",
+ "home": "Accueil",
+ "models": "Modèles",
+ "plugins": "Plugins",
+ "providers": "Fournisseurs de modèles"
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/error.json b/DigitalHumanWeb/locales/fr-FR/error.json
new file mode 100644
index 0000000..5602d68
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Continuer la session",
+ "desc": "{{greeting}}, je suis ravi de pouvoir continuer à vous aider. Reprenons là où nous nous étions arrêtés.",
+ "title": "Bienvenue de retour, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Retour à la page d'accueil",
+ "desc": "Réessayez plus tard, ou retournez au monde connu",
+ "retry": "Recharger",
+    "title": "Un problème est survenu sur la page..."
+ },
+ "fetchError": "Échec de la requête",
+ "fetchErrorDetail": "Détails de l'erreur",
+ "notFound": {
+ "backHome": "Retour à la page d'accueil",
+ "check": "Veuillez vérifier si votre URL est correcte",
+ "desc": "Nous n'avons pas pu trouver la page que vous recherchez.",
+ "title": "Êtes-vous entré dans un domaine inconnu ?"
+ },
+ "pluginSettings": {
+    "desc": "Complétez la configuration suivante pour commencer à utiliser ce plugin",
+ "title": "Configuration du plugin {{name}}"
+ },
+ "response": {
+ "400": "Désolé, le serveur ne comprend pas votre requête. Veuillez vérifier la validité de vos paramètres de requête",
+ "401": "Désolé, le serveur a refusé votre requête, probablement en raison d'autorisations insuffisantes ou d'une authentification invalide",
+ "403": "Désolé, le serveur a refusé votre requête. Vous n'avez pas l'autorisation d'accéder à ce contenu",
+ "404": "Désolé, le serveur n'a pas pu trouver la page ou la ressource demandée. Veuillez vérifier l'URL",
+ "405": "Désolé, le serveur ne prend pas en charge la méthode de requête que vous utilisez. Veuillez vérifier votre méthode de requête",
+ "406": "Désolé, le serveur n'a pas pu répondre à la demande en raison des caractéristiques de contenu spécifiées",
+ "407": "Désolé, une authentification de proxy est requise pour poursuivre cette demande",
+ "408": "Désolé, le serveur a expiré en attendant la demande, veuillez vérifier votre connexion réseau et réessayer",
+ "409": "Désolé, la demande ne peut être traitée en raison d'un conflit, peut-être que l'état de la ressource est incompatible avec la demande",
+ "410": "Désolé, la ressource demandée a été définitivement supprimée et est introuvable",
+ "411": "Désolé, le serveur ne peut pas traiter une demande sans une longueur de contenu valide",
+ "412": "Désolé, votre demande ne remplit pas les conditions requises par le serveur pour être traitée",
+ "413": "Désolé, votre demande contient une quantité de données trop importante pour être traitée par le serveur",
+ "414": "Désolé, l'URI de votre demande est trop longue pour être traitée par le serveur",
+ "415": "Désolé, le serveur ne peut pas traiter le format de média inclus dans la demande",
+ "416": "Désolé, le serveur ne peut pas satisfaire la plage de la demande",
+ "417": "Désolé, le serveur ne peut pas satisfaire vos attentes",
+ "422": "Désolé, votre demande est correctement formatée, mais contient des erreurs sémantiques qui empêchent une réponse",
+ "423": "Désolé, la ressource demandée est verrouillée",
+ "424": "Désolé, en raison d'une demande précédente infructueuse, la demande actuelle ne peut pas être complétée",
+ "426": "Désolé, le serveur exige que votre client soit mis à niveau vers une version de protocole supérieure",
+ "428": "Désolé, le serveur exige une condition préalable, votre demande doit inclure des en-têtes de condition corrects",
+ "429": "Désolé, votre requête est trop fréquente et le serveur est un peu fatigué. Veuillez réessayer plus tard",
+ "431": "Désolé, les en-têtes de votre demande sont trop volumineux pour être traités par le serveur",
+ "451": "Désolé, pour des raisons légales, le serveur refuse de fournir cette ressource",
+ "500": "Désolé, le serveur semble rencontrer des difficultés et ne peut temporairement pas traiter votre requête. Veuillez réessayer plus tard",
+ "502": "Désolé, le serveur semble perdu et ne peut temporairement pas fournir de service. Veuillez réessayer plus tard",
+ "503": "Désolé, le serveur ne peut actuellement pas traiter votre requête, probablement en raison d'une surcharge ou de travaux de maintenance. Veuillez réessayer plus tard",
+ "504": "Désolé, le serveur n'a pas reçu de réponse de la part du serveur amont. Veuillez réessayer plus tard",
+ "AgentRuntimeError": "Erreur d'exécution du modèle linguistique Lobe, veuillez vérifier les informations ci-dessous ou réessayer",
+ "FreePlanLimit": "Vous êtes actuellement un utilisateur gratuit et ne pouvez pas utiliser cette fonction. Veuillez passer à un plan payant pour continuer à l'utiliser.",
+ "InvalidAccessCode": "Le mot de passe est incorrect ou vide. Veuillez saisir le mot de passe d'accès correct ou ajouter une clé API personnalisée.",
+ "InvalidBedrockCredentials": "L'authentification Bedrock a échoué, veuillez vérifier AccessKeyId/SecretAccessKey et réessayer",
+ "InvalidClerkUser": "Désolé, vous n'êtes pas actuellement connecté. Veuillez vous connecter ou vous inscrire avant de continuer.",
+ "InvalidGithubToken": "Le jeton d'accès personnel GitHub est incorrect ou vide. Veuillez vérifier le jeton d'accès personnel GitHub et réessayer.",
+ "InvalidOllamaArgs": "La configuration d'Ollama n'est pas valide, veuillez vérifier la configuration d'Ollama et réessayer",
+ "InvalidProviderAPIKey": "{{provider}} API Key incorrect or missing, please check {{provider}} API Key and try again",
+ "LocationNotSupportError": "Désolé, votre emplacement actuel ne prend pas en charge ce service de modèle, peut-être en raison de restrictions géographiques ou de services non disponibles. Veuillez vérifier si votre emplacement actuel prend en charge ce service ou essayer avec une autre localisation.",
+ "NoOpenAIAPIKey": "La clé API OpenAI est vide. Veuillez ajouter une clé API OpenAI personnalisée",
+ "OllamaBizError": "Erreur commerciale lors de la demande de service Ollama, veuillez vérifier les informations ci-dessous ou réessayer",
+ "OllamaServiceUnavailable": "Le service Ollama n'est pas disponible. Veuillez vérifier si Ollama fonctionne correctement ou si la configuration de la communication inter-domaines d'Ollama est correcte.",
+ "OpenAIBizError": "Erreur de service OpenAI. Veuillez vérifier les informations suivantes ou réessayer.",
+ "PluginApiNotFound": "Désolé, l'API spécifiée n'existe pas dans le manifeste du plugin. Veuillez vérifier que votre méthode de requête correspond à l'API du manifeste du plugin",
+ "PluginApiParamsError": "Désolé, la validation des paramètres d'entrée de la requête de ce plugin a échoué. Veuillez vérifier que les paramètres d'entrée correspondent aux informations de l'API",
+ "PluginFailToTransformArguments": "Désolé, échec de la transformation des arguments de l'appel du plugin. Veuillez essayer de régénérer le message d'assistance ou de changer de modèle d'IA avec une capacité d'appel d'outils plus puissante, puis réessayer.",
+ "PluginGatewayError": "Désolé, une erreur est survenue avec la passerelle du plugin. Veuillez vérifier la configuration de la passerelle du plugin.",
+ "PluginManifestInvalid": "Désolé, la validation du manifeste de ce plugin a échoué. Veuillez vérifier le format du manifeste",
+ "PluginManifestNotFound": "Désolé, le serveur n'a pas trouvé le manifeste de description de ce plugin (manifest.json). Veuillez vérifier l'adresse du fichier de description du plugin",
+ "PluginMarketIndexInvalid": "Désolé, la validation de l'index du plugin a échoué. Veuillez vérifier le format du fichier d'index",
+ "PluginMarketIndexNotFound": "Désolé, le serveur n'a pas trouvé l'index du plugin. Veuillez vérifier l'adresse de l'index",
+ "PluginMetaInvalid": "Désolé, la validation des métadonnées de ce plugin a échoué. Veuillez vérifier le format des métadonnées du plugin",
+ "PluginMetaNotFound": "Désolé, aucune métadonnée de plugin n'a été trouvée dans l'index",
+ "PluginOpenApiInitError": "Désolé, l'initialisation du client OpenAPI a échoué. Veuillez vérifier les informations de configuration d'OpenAPI.",
+ "PluginServerError": "Erreur de réponse du serveur du plugin. Veuillez vérifier le fichier de description du plugin, la configuration du plugin ou la mise en œuvre côté serveur en fonction des informations d'erreur ci-dessous",
+ "PluginSettingsInvalid": "Ce plugin doit être correctement configuré avant de pouvoir être utilisé. Veuillez vérifier votre configuration",
+ "ProviderBizError": "Erreur de service {{provider}}. Veuillez vérifier les informations suivantes ou réessayer.",
+ "StreamChunkError": "Erreur de parsing du bloc de message de la requête en streaming. Veuillez vérifier si l'API actuelle respecte les normes ou contacter votre fournisseur d'API pour des conseils.",
+ "SubscriptionPlanLimit": "Vous avez atteint votre limite d'abonnement et ne pouvez pas utiliser cette fonction. Veuillez passer à un plan supérieur ou acheter un pack de ressources pour continuer à l'utiliser.",
+ "UnknownChatFetchError": "Désolé, une erreur de requête inconnue s'est produite. Veuillez vérifier les informations ci-dessous ou réessayer."
+ },
+ "stt": {
+ "responseError": "Échec de la requête de service. Veuillez vérifier la configuration ou réessayer"
+ },
+ "tts": {
+ "responseError": "Échec de la requête de service. Veuillez vérifier la configuration ou réessayer"
+ },
+ "unlock": {
+ "addProxyUrl": "Ajouter une adresse de proxy OpenAI (facultatif)",
+ "apiKey": {
+ "description": "Enter your {{name}} API Key to start the session",
+ "title": "Use custom {{name}} API Key"
+ },
+ "closeMessage": "Fermer le message",
+ "confirm": "Confirmer et réessayer",
+ "oauth": {
+ "description": "L'administrateur a activé l'authentification de connexion unique. Cliquez sur le bouton ci-dessous pour vous connecter et déverrouiller l'application.",
+ "success": "Connexion réussie",
+ "title": "Se connecter",
+ "welcome": "Bienvenue !"
+ },
+ "password": {
+ "description": "L'administrateur a activé le cryptage de l'application. Entrez le mot de passe de l'application pour déverrouiller. Le mot de passe ne doit être saisi qu'une seule fois.",
+ "placeholder": "Entrez le mot de passe",
+ "title": "Entrez le mot de passe pour déverrouiller l'application"
+ },
+ "tabs": {
+ "apiKey": "Clé API personnalisée",
+ "password": "Mot de passe"
+ }
+ },
+ "upload": {
+ "desc": "Détails : {{detail}}",
+ "fileOnlySupportInServerMode": "Le mode de déploiement actuel ne prend pas en charge le téléchargement de fichiers non image. Pour télécharger un format {{ext}}, veuillez passer à un déploiement de base de données côté serveur ou utiliser le service {{cloud}}.",
+ "networkError": "Veuillez vérifier si votre réseau fonctionne correctement et assurez-vous que la configuration CORS du service de stockage de fichiers est correcte.",
+ "title": "Échec de l'envoi du fichier, veuillez vérifier votre connexion réseau ou réessayer plus tard",
+ "unknownError": "Raison de l'erreur : {{reason}}",
+ "uploadFailed": "Échec du téléchargement du fichier."
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/file.json b/DigitalHumanWeb/locales/fr-FR/file.json
new file mode 100644
index 0000000..4c49cf6
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Gérez vos fichiers et votre base de connaissances",
+ "detail": {
+ "basic": {
+ "createdAt": "Date de création",
+ "filename": "Nom du fichier",
+ "size": "Taille du fichier",
+ "title": "Informations de base",
+ "type": "Format",
+ "updatedAt": "Date de mise à jour"
+ },
+ "data": {
+ "chunkCount": "Nombre de segments",
+ "embedding": {
+ "default": "Non vectorisé",
+ "error": "Échec",
+ "pending": "En attente de démarrage",
+ "processing": "En cours",
+ "success": "Terminé"
+ },
+ "embeddingStatus": "Vectorisation"
+ }
+ },
+ "empty": "Aucun fichier/dossier téléchargé pour le moment",
+ "header": {
+ "actions": {
+ "newFolder": "Nouveau dossier",
+ "uploadFile": "Télécharger un fichier",
+ "uploadFolder": "Télécharger un dossier"
+ },
+ "uploadButton": "Télécharger"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Vous allez supprimer cette base de connaissances. Les fichiers ne seront pas supprimés, mais déplacés dans tous les fichiers. La suppression de la base de connaissances est irréversible, veuillez agir avec prudence.",
+ "empty": "Cliquez sur <1>+1> pour commencer à créer une base de connaissances"
+ },
+ "new": "Nouvelle base de connaissances",
+ "title": "Base de connaissances"
+ },
+ "networkError": "Échec de l'accès à la base de connaissances, veuillez vérifier votre connexion réseau et réessayer",
+ "notSupportGuide": {
+ "desc": "L'instance déployée actuellement est en mode base de données client, la fonction de gestion des fichiers n'est pas disponible. Veuillez passer en <1>mode de déploiement de base de données serveur1>, ou utilisez directement <3>LobeChat Cloud3>",
+ "features": {
+ "allKind": {
+ "desc": "Prend en charge les types de fichiers courants, y compris Word, PPT, Excel, PDF, TXT et d'autres formats de documents courants, ainsi que des fichiers de code populaires comme JS et Python",
+ "title": "Analyse de divers types de fichiers"
+ },
+ "embeddings": {
+ "desc": "Utilise des modèles vectoriels haute performance pour vectoriser les segments de texte, permettant une recherche sémantique du contenu des fichiers",
+ "title": "Sémantisation vectorielle"
+ },
+ "repos": {
+ "desc": "Prend en charge la création de bases de connaissances et permet d'ajouter différents types de fichiers, construisant ainsi votre propre savoir-faire",
+ "title": "Base de connaissances"
+ }
+ },
+ "title": "Le mode de déploiement actuel ne prend pas en charge la gestion des fichiers"
+ },
+ "preview": {
+ "downloadFile": "Télécharger le fichier",
+ "unsupportedFileAndContact": "Ce format de fichier n'est pas encore pris en charge pour l'aperçu en ligne. Si vous souhaitez un aperçu, n'hésitez pas à <1>nous contacter1>."
+ },
+ "searchFilePlaceholder": "Rechercher un fichier",
+ "tab": {
+ "all": "Tous les fichiers",
+ "audios": "Audio",
+ "documents": "Documents",
+ "images": "Images",
+ "videos": "Vidéos",
+ "websites": "Sites web"
+ },
+ "title": "Fichiers",
+ "uploadDock": {
+ "body": {
+ "collapse": "Réduire",
+ "item": {
+ "done": "Téléchargé",
+ "error": "Échec du téléchargement, veuillez réessayer",
+ "pending": "Préparation au téléchargement...",
+ "processing": "Traitement du fichier...",
+ "restTime": "Temps restant {{time}}"
+ }
+ },
+ "totalCount": "Total {{count}} éléments",
+ "uploadStatus": {
+ "error": "Erreur de téléchargement",
+ "pending": "En attente de téléchargement",
+ "processing": "Téléchargement en cours",
+ "success": "Téléchargement terminé",
+ "uploading": "Téléchargement en cours"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/knowledgeBase.json b/DigitalHumanWeb/locales/fr-FR/knowledgeBase.json
new file mode 100644
index 0000000..605b1f3
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "Fichier ajouté avec succès, <1>voir immédiatement1>",
+ "confirm": "Ajouter",
+ "id": {
+ "placeholder": "Veuillez sélectionner la base de connaissances à ajouter",
+ "required": "Veuillez sélectionner une base de connaissances",
+ "title": "Base de connaissances cible"
+ },
+ "title": "Ajouter à la base de connaissances",
+ "totalFiles": "{{count}} fichier(s) sélectionné(s)"
+ },
+ "createNew": {
+ "confirm": "Créer",
+ "description": {
+ "placeholder": "Description de la base de connaissances (facultatif)"
+ },
+ "formTitle": "Informations de base",
+ "name": {
+ "placeholder": "Nom de la base de connaissances",
+ "required": "Veuillez remplir le nom de la base de connaissances"
+ },
+ "title": "Créer une nouvelle base de connaissances"
+ },
+ "tab": {
+ "evals": "Évaluations",
+ "files": "Documents",
+ "settings": "Paramètres",
+ "testing": "Test de rappel"
+ },
+ "title": "Base de connaissances"
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/market.json b/DigitalHumanWeb/locales/fr-FR/market.json
new file mode 100644
index 0000000..bf69580
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Ajouter un assistant",
+ "addAgentAndConverse": "Ajouter un agent et converser",
+ "addAgentSuccess": "Ajout réussi",
+ "guide": {
+ "func1": {
+ "desc1": "Accédez à la page de configuration de l'assistant que vous souhaitez soumettre en cliquant sur l'icône Paramètres en haut à droite de la fenêtre de conversation.",
+ "desc2": "Cliquez sur le bouton Soumettre au marché des assistants en haut à droite.",
+ "tag": "Méthode 1",
+ "title": "Soumettre via LobeChat"
+ },
+ "func2": {
+ "button": "Accéder au référentiel d'assistants sur Github",
+ "desc": "Si vous souhaitez ajouter un assistant à l'index, créez une entrée dans le répertoire plugins en utilisant agent-template.json ou agent-template-full.json, rédigez une brève description et ajoutez des balises appropriées, puis créez une demande de tirage.",
+ "tag": "Méthode 2",
+ "title": "Soumettre via Github"
+ }
+ },
+ "search": {
+ "placeholder": "Rechercher par nom, description ou mot-clé de l'assistant..."
+ },
+ "sidebar": {
+ "comment": "Commentaires",
+ "prompt": "Consigne",
+ "title": "Détails de l'assistant"
+ },
+ "submitAgent": "Soumettre un assistant",
+ "title": {
+ "allAgents": "Tous les assistants",
+ "recentSubmits": "Soumissions récentes"
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/metadata.json b/DigitalHumanWeb/locales/fr-FR/metadata.json
new file mode 100644
index 0000000..ca1d976
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} vous offre la meilleure expérience d'utilisation de ChatGPT, Claude, Gemini et OLLaMA WebUI",
+ "title": "{{appName}} : un outil d'efficacité personnelle en IA pour vous donner un cerveau plus intelligent"
+ },
+ "discover": {
+ "assistants": {
+ "description": "Création de contenu, rédaction, questions-réponses, génération d'images, génération de vidéos, génération de voix, agents intelligents, flux de travail automatisés, personnalisez votre assistant intelligent AI / GPTs / OLLaMA.",
+ "title": "Assistants IA"
+ },
+ "description": "Création de contenu, rédaction, questions-réponses, génération d'images, génération de vidéos, génération de voix, agents intelligents, flux de travail automatisés, applications AI personnalisées, personnalisez votre espace de travail AI.",
+ "models": {
+ "description": "Explorez les modèles AI populaires OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek.",
+ "title": "Modèles IA"
+ },
+ "plugins": {
+ "description": "Découvrez des capacités enrichies pour votre assistant avec des plugins pour la génération de graphiques, la recherche académique, la génération d'images, la génération de vidéos, la génération de voix et l'automatisation des flux de travail.",
+ "title": "Plugins IA"
+ },
+ "providers": {
+ "description": "Explorez les principaux fournisseurs de modèles OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter.",
+ "title": "Fournisseurs de services de modèles IA"
+ },
+ "search": "Recherche",
+ "title": "Découvrir"
+ },
+ "plugins": {
+ "description": "Recherche, génération de graphiques, académique, génération d'images, génération de vidéos, génération de voix, flux de travail automatisés, personnalisez les capacités de plugins ToolCall pour ChatGPT / Claude",
+ "title": "Marché des plugins"
+ },
+ "welcome": {
+ "description": "{{appName}} vous offre la meilleure expérience d'utilisation de ChatGPT, Claude, Gemini et OLLaMA WebUI",
+ "title": "Bienvenue sur {{appName}} : un outil d'efficacité personnelle en IA pour vous donner un cerveau plus intelligent"
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/migration.json b/DigitalHumanWeb/locales/fr-FR/migration.json
new file mode 100644
index 0000000..79b708a
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Effacer la base de données locale",
+ "downloadBackup": "Télécharger la sauvegarde des données",
+ "reUpgrade": "Refaire la mise à niveau",
+ "start": "Commencer",
+ "upgrade": "Mise à niveau"
+ },
+ "clear": {
+ "confirm": "Vous êtes sur le point de vider les données locales (les paramètres globaux ne seront pas affectés). Veuillez confirmer que vous avez sauvegardé les données."
+ },
+ "description": "Dans cette nouvelle version, le stockage des données de {{appName}} a fait un bond en avant considérable. Nous devons donc mettre à niveau les données de l'ancienne version pour vous offrir une meilleure expérience d'utilisation.",
+ "features": {
+ "capability": {
+ "desc": "Basé sur la technologie IndexedDB, capable de stocker toutes vos conversations de toute une vie.",
+ "title": "Grande capacité"
+ },
+ "performance": {
+ "desc": "Indexation automatique de millions de messages, avec des réponses aux requêtes en millisecondes.",
+ "title": "Haute performance"
+ },
+ "use": {
+ "desc": "Prise en charge de la recherche par titre, description, étiquettes, contenu des messages et même texte traduit, rendant la recherche quotidienne beaucoup plus efficace.",
+ "title": "Plus facile à utiliser"
+ }
+ },
+ "title": "Évolution des données de {{appName}}",
+ "upgrade": {
+ "error": {
+ "subTitle": "Nous sommes désolés, une erreur s'est produite lors du processus de mise à niveau de la base de données. Veuillez essayer les solutions suivantes : A. Vider les données locales, puis réimporter les données de sauvegarde ; B. Cliquez sur le bouton « Réessayer la mise à niveau ».
Si l'erreur persiste, veuillez <1>soumettre un problème1>, nous vous aiderons à le résoudre dans les plus brefs délais.",
+ "title": "Échec de la mise à niveau de la base de données"
+ },
+ "success": {
+ "subTitle": "La base de données de {{appName}} a été mise à niveau vers la dernière version, commencez à en profiter dès maintenant.",
+ "title": "Mise à niveau de la base de données réussie"
+ }
+ },
+ "upgradeTip": "La mise à niveau prend environ 10 à 20 secondes, veuillez ne pas fermer {{appName}} pendant le processus."
+ },
+ "migrateError": {
+ "missVersion": "Les données importées ne comportent pas de numéro de version. Veuillez vérifier le fichier et réessayer.",
+ "noMigration": "Aucune solution de migration correspondant à la version actuelle n'a été trouvée. Veuillez vérifier le numéro de version et réessayer. Si le problème persiste, veuillez soumettre un rapport de problème."
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/modelProvider.json b/DigitalHumanWeb/locales/fr-FR/modelProvider.json
new file mode 100644
index 0000000..87e1045
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Version de l'API Azure, au format YYYY-MM-DD, consultez [la dernière version](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Obtenir la liste",
+ "title": "Version de l'API Azure"
+ },
+ "empty": "Veuillez saisir l'ID du modèle pour ajouter le premier modèle",
+ "endpoint": {
+ "desc": "Lors de l'inspection des ressources sur le portail Azure, vous pouvez trouver cette valeur dans la section 'Clés et points de terminaison'",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Adresse de l'API Azure"
+ },
+ "modelListPlaceholder": "Sélectionnez ou ajoutez le modèle OpenAI que vous avez déployé",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Lors de l'inspection des ressources sur le portail Azure, vous pouvez trouver cette valeur dans la section 'Clés et points de terminaison'. Vous pouvez utiliser KEY1 ou KEY2",
+ "placeholder": "Clé API Azure",
+ "title": "Clé API"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Saisissez l'ID de clé d'accès AWS",
+ "placeholder": "ID de clé d'accès AWS",
+ "title": "ID de clé d'accès AWS"
+ },
+ "checker": {
+ "desc": "Vérifiez si l'AccessKeyId / SecretAccessKey est correctement saisi"
+ },
+ "region": {
+ "desc": "Saisissez la région AWS",
+ "placeholder": "Région AWS",
+ "title": "Région AWS"
+ },
+ "secretAccessKey": {
+ "desc": "Saisissez la clé d'accès secrète AWS",
+ "placeholder": "Clé d'accès secrète AWS",
+ "title": "Clé d'accès secrète AWS"
+ },
+ "sessionToken": {
+ "desc": "Si vous utilisez AWS SSO/STS, veuillez entrer votre jeton de session AWS",
+ "placeholder": "Jeton de session AWS",
+ "title": "Jeton de session AWS (facultatif)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Région de service personnalisée",
+ "customSessionToken": "Jeton de session personnalisé",
+ "description": "Entrez votre ID de clé d'accès AWS / SecretAccessKey pour commencer la session. L'application ne stockera pas votre configuration d'authentification.",
+ "title": "Utiliser des informations d'authentification Bedrock personnalisées"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Entrez votre PAT GitHub, cliquez [ici](https://github.com/settings/tokens) pour en créer un.",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Vérifiez si l'adresse du proxy est correctement saisie",
+ "title": "Vérification de la connectivité"
+ },
+ "customModelName": {
+ "desc": "Ajoutez un modèle personnalisé, séparez les modèles multiples par des virgules (,)",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Nom du modèle personnalisé"
+ },
+ "download": {
+ "desc": "Ollama is downloading the model. Please try not to close this page. The download will resume from where it left off if interrupted.",
+ "remainingTime": "Remaining Time",
+ "speed": "Download Speed",
+ "title": "Downloading model {{model}}"
+ },
+ "endpoint": {
+ "desc": "Saisissez l'adresse du proxy Ollama, laissez vide si non spécifié localement",
+ "title": "Adresse du proxy"
+ },
+ "setup": {
+ "cors": {
+ "description": "Due to browser security restrictions, you need to configure cross-origin settings for Ollama to function properly.",
+ "linux": {
+ "env": "Add `Environment` under [Service] section, and set the OLLAMA_ORIGINS environment variable:",
+ "reboot": "Reload systemd and restart Ollama.",
+ "systemd": "Invoke systemd to edit the ollama service:"
+ },
+ "macos": "Open the Terminal application, paste the following command, and press Enter.",
+ "reboot": "Restart the Ollama service after the execution is complete.",
+ "title": "Configure Ollama for Cross-Origin Access",
+ "windows": "On Windows, go to 'Control Panel' and edit system environment variables. Create a new environment variable named 'OLLAMA_ORIGINS' for your user account, set the value to '*', and click 'OK/Apply' to save."
+ },
+ "install": {
+ "description": "Veuillez vous assurer que vous avez activé Ollama. Si vous n'avez pas encore téléchargé Ollama, veuillez vous rendre sur le site officiel pour le <1>télécharger1>.",
+ "docker": "If you prefer using Docker, Ollama also provides an official Docker image. You can pull it using the following command:",
+ "linux": {
+ "command": "Install using the following command:",
+ "manual": "Alternatively, you can refer to the <1>Linux Manual Installation Guide1> for manual installation."
+ },
+ "title": "Install and Start Ollama Locally",
+ "windowsTab": "Windows (Preview)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Cancel Download",
+ "confirm": "Download",
+ "description": "Enter your Ollama model tag to continue the session",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Starting download...",
+ "title": "Download specified Ollama model"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Zéro Un Tout"
+ },
+ "zhipu": {
+ "title": "Zhipu"
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/models.json b/DigitalHumanWeb/locales/fr-FR/models.json
new file mode 100644
index 0000000..73147b2
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B, avec un ensemble d'échantillons d'entraînement riche, offre des performances supérieures dans les applications sectorielles."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B supporte 16K Tokens, offrant une capacité de génération de langage efficace et fluide."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro, en tant que membre important de la série de modèles AI de 360, répond à des applications variées de traitement de texte avec une efficacité élevée, supportant la compréhension de longs textes et les dialogues multi-tours."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo offre de puissantes capacités de calcul et de dialogue, avec une excellente compréhension sémantique et une efficacité de génération, ce qui en fait une solution idéale pour les entreprises et les développeurs."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K met l'accent sur la sécurité sémantique et l'orientation vers la responsabilité, conçu pour des scénarios d'application exigeant une sécurité de contenu élevée, garantissant l'exactitude et la robustesse de l'expérience utilisateur."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro est un modèle avancé de traitement du langage naturel lancé par la société 360, offrant d'excellentes capacités de génération et de compréhension de texte, en particulier dans le domaine de la création et de la génération."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra est la version la plus puissante de la série de grands modèles Xinghuo, améliorant la compréhension et la capacité de résumé du contenu textuel tout en mettant à jour le lien de recherche en ligne. C'est une solution complète pour améliorer la productivité au bureau et répondre avec précision aux besoins, représentant un produit intelligent de premier plan dans l'industrie."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Utilise une technologie d'amélioration de recherche pour relier complètement le grand modèle aux connaissances sectorielles et aux connaissances du web. Supporte le téléchargement de divers documents tels que PDF, Word, et l'entrée d'URL, permettant une acquisition d'informations rapide et complète, avec des résultats précis et professionnels."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Optimisé pour des scénarios d'entreprise à haute fréquence, avec des améliorations significatives et un excellent rapport qualité-prix. Par rapport au modèle Baichuan2, la création de contenu a augmenté de 20%, les questions-réponses de 17%, et les capacités de jeu de rôle de 40%. Les performances globales surpassent celles de GPT-3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "Doté d'une fenêtre de contexte ultra-longue de 128K, optimisé pour des scénarios d'entreprise à haute fréquence, avec des améliorations significatives et un excellent rapport qualité-prix. Par rapport au modèle Baichuan2, la création de contenu a augmenté de 20%, les questions-réponses de 17%, et les capacités de jeu de rôle de 40%. Les performances globales surpassent celles de GPT-3.5."
+ },
+ "Baichuan4": {
+ "description": "Le modèle est le meilleur en Chine, surpassant les modèles étrangers dans des tâches en chinois telles que l'encyclopédie, les longs textes et la création. Il possède également des capacités multimodales de pointe, avec d'excellentes performances dans plusieurs évaluations de référence."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) est un modèle innovant, adapté à des applications dans plusieurs domaines et à des tâches complexes."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K est équipé d'une grande capacité de traitement de contexte, offrant une meilleure compréhension du contexte et des capacités de raisonnement logique, prenant en charge des entrées textuelles de 32K tokens, adapté à la lecture de longs documents, aux questions-réponses sur des connaissances privées et à d'autres scénarios."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO est une fusion de modèles hautement flexible, visant à offrir une expérience créative exceptionnelle."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) est un modèle d'instructions de haute précision, adapté aux calculs complexes."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) offre une sortie linguistique optimisée et des possibilités d'application diversifiées."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Rafraîchissement du modèle Phi-3-mini."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "Même modèle Phi-3-medium, mais avec une taille de contexte plus grande pour RAG ou un prompt à quelques exemples."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "Un modèle de 14 milliards de paramètres, prouvant une meilleure qualité que Phi-3-mini, avec un accent sur des données denses en raisonnement de haute qualité."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "Même modèle Phi-3-mini, mais avec une taille de contexte plus grande pour RAG ou un prompt à quelques exemples."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "Le plus petit membre de la famille Phi-3. Optimisé pour la qualité et la faible latence."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "Même modèle Phi-3-small, mais avec une taille de contexte plus grande pour RAG ou un prompt à quelques exemples."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "Un modèle de 7 milliards de paramètres, prouvant une meilleure qualité que Phi-3-mini, avec un accent sur des données denses en raisonnement de haute qualité."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K est configuré avec une capacité de traitement de contexte exceptionnelle, capable de gérer jusqu'à 128K d'informations contextuelles, particulièrement adapté à l'analyse complète et au traitement des relations logiques à long terme dans des contenus longs, offrant une logique fluide et cohérente ainsi qu'un support varié pour les références dans des communications textuelles complexes."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "En tant que version bêta de Qwen2, Qwen1.5 utilise des données à grande échelle pour réaliser des fonctionnalités de dialogue plus précises."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) offre des réponses rapides et des capacités de dialogue naturel, adapté aux environnements multilingues."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 est un modèle de langage général avancé, prenant en charge divers types d'instructions."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 est une toute nouvelle série de modèles de langage à grande échelle, conçue pour optimiser le traitement des tâches d'instruction."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 est une toute nouvelle série de modèles de langage à grande échelle, conçue pour optimiser le traitement des tâches d'instruction."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 est une toute nouvelle série de modèles de langage à grande échelle, avec une capacité de compréhension et de génération améliorée."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 est une toute nouvelle série de modèles de langage à grande échelle, conçue pour optimiser le traitement des tâches d'instruction."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder se concentre sur la rédaction de code."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math se concentre sur la résolution de problèmes dans le domaine des mathématiques, fournissant des réponses professionnelles pour des questions de haute difficulté."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B est une version open source, offrant une expérience de dialogue optimisée pour les applications de conversation."
+ },
+ "abab5.5-chat": {
+ "description": "Orienté vers des scénarios de productivité, prenant en charge le traitement de tâches complexes et la génération de texte efficace, adapté aux applications professionnelles."
+ },
+ "abab5.5s-chat": {
+ "description": "Conçu pour des scénarios de dialogue en chinois, offrant une capacité de génération de dialogues en chinois de haute qualité, adaptée à divers scénarios d'application."
+ },
+ "abab6.5g-chat": {
+ "description": "Conçu pour des dialogues de personnages multilingues, prenant en charge la génération de dialogues de haute qualité en anglais et dans d'autres langues."
+ },
+ "abab6.5s-chat": {
+ "description": "Adapté à une large gamme de tâches de traitement du langage naturel, y compris la génération de texte, les systèmes de dialogue, etc."
+ },
+ "abab6.5t-chat": {
+ "description": "Optimisé pour des scénarios de dialogue en chinois, offrant une capacité de génération de dialogues fluide et conforme aux habitudes d'expression en chinois."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Le modèle d'appel de fonction open source de Fireworks offre d'excellentes capacités d'exécution d'instructions et des caractéristiques personnalisables."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Firefunction-v2, récemment lancé par Fireworks, est un modèle d'appel de fonction performant, développé sur la base de Llama-3 et optimisé pour des scénarios tels que les appels de fonction, les dialogues et le suivi d'instructions."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b est un modèle de langage visuel capable de recevoir simultanément des entrées d'images et de texte, entraîné sur des données de haute qualité, adapté aux tâches multimodales."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Le modèle d'instructions Gemma 2 9B, basé sur la technologie antérieure de Google, est adapté à diverses tâches de génération de texte telles que la réponse aux questions, le résumé et le raisonnement."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Le modèle d'instructions Llama 3 70B est optimisé pour les dialogues multilingues et la compréhension du langage naturel, surpassant la plupart des modèles concurrents."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Le modèle d'instructions Llama 3 70B (version HF) est conforme aux résultats de l'implémentation officielle, adapté aux tâches de suivi d'instructions de haute qualité."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Le modèle d'instructions Llama 3 8B est optimisé pour les dialogues et les tâches multilingues, offrant des performances exceptionnelles et efficaces."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Le modèle d'instructions Llama 3 8B (version HF) est conforme aux résultats de l'implémentation officielle, offrant une grande cohérence et une compatibilité multiplateforme."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Le modèle d'instructions Llama 3.1 405B, avec des paramètres de très grande échelle, est adapté aux tâches complexes et au suivi d'instructions dans des scénarios à forte charge."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Le modèle d'instructions Llama 3.1 70B offre une compréhension et une génération de langage exceptionnelles, idéal pour les tâches de dialogue et d'analyse."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Le modèle d'instructions Llama 3.1 8B est optimisé pour les dialogues multilingues, capable de surpasser la plupart des modèles open source et fermés sur des benchmarks industriels courants."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Le modèle d'instructions Mixtral MoE 8x22B, avec des paramètres à grande échelle et une architecture multi-experts, prend en charge efficacement le traitement de tâches complexes."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Le modèle d'instructions Mixtral MoE 8x7B, avec une architecture multi-experts, offre un suivi et une exécution d'instructions efficaces."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Le modèle d'instructions Mixtral MoE 8x7B (version HF) offre des performances conformes à l'implémentation officielle, adapté à divers scénarios de tâches efficaces."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "Le modèle MythoMax L2 13B, combinant des techniques de fusion novatrices, excelle dans la narration et le jeu de rôle."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Le modèle d'instructions Phi 3 Vision est un modèle multimodal léger, capable de traiter des informations visuelles et textuelles complexes, avec une forte capacité de raisonnement."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "Le modèle StarCoder 15.5B prend en charge des tâches de programmation avancées, avec des capacités multilingues améliorées, adapté à la génération et à la compréhension de code complexes."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "Le modèle StarCoder 7B est entraîné sur plus de 80 langages de programmation, offrant d'excellentes capacités de complétion de code et de compréhension contextuelle."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Le modèle Yi-Large offre d'excellentes capacités de traitement multilingue, adapté à diverses tâches de génération et de compréhension de langage."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "Un modèle multilingue de 398 milliards de paramètres (94 milliards actifs), offrant une fenêtre de contexte longue de 256K, des appels de fonction, une sortie structurée et une génération ancrée."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "Un modèle multilingue de 52 milliards de paramètres (12 milliards actifs), offrant une fenêtre de contexte longue de 256K, des appels de fonction, une sortie structurée et une génération ancrée."
+ },
+ "ai21-jamba-instruct": {
+ "description": "Un LLM basé sur Mamba, de qualité production, offrant des performances, une qualité et un rapport coût-efficacité de premier ordre."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet élève les normes de l'industrie, surpassant les modèles concurrents et Claude 3 Opus, avec d'excellentes performances dans une large gamme d'évaluations, tout en offrant la vitesse et le coût de nos modèles de niveau intermédiaire."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku est le modèle le plus rapide et le plus compact d'Anthropic, offrant une vitesse de réponse quasi instantanée. Il peut répondre rapidement à des requêtes et demandes simples. Les clients pourront construire une expérience AI transparente imitant l'interaction humaine. Claude 3 Haiku peut traiter des images et retourner des sorties textuelles, avec une fenêtre contextuelle de 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus est le modèle d'IA le plus puissant d'Anthropic, avec des performances de pointe sur des tâches hautement complexes. Il peut traiter des invites ouvertes et des scénarios non vus, avec une fluidité et une compréhension humaine exceptionnelles. Claude 3 Opus démontre les possibilités de l'IA générative à la pointe. Il peut traiter des images et retourner des sorties textuelles, avec une fenêtre contextuelle de 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Claude 3 Sonnet d'Anthropic atteint un équilibre idéal entre intelligence et vitesse, particulièrement adapté aux charges de travail d'entreprise. Il offre une utilité maximale à un prix inférieur à celui des concurrents, conçu pour être un modèle fiable et durable, adapté aux déploiements AI à grande échelle. Claude 3 Sonnet peut traiter des images et retourner des sorties textuelles, avec une fenêtre contextuelle de 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Un modèle rapide, économique et toujours très capable, capable de traiter une série de tâches, y compris des conversations quotidiennes, l'analyse de texte, le résumé et les questions-réponses sur des documents."
+ },
+ "anthropic.claude-v2": {
+ "description": "Claude 2 d'Anthropic a démontré une grande capacité dans une large gamme de tâches, allant des dialogues complexes à la génération de contenu créatif, en passant par le suivi détaillé des instructions."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "Version mise à jour de Claude 2, avec une fenêtre contextuelle doublée, ainsi que des améliorations en fiabilité, taux d'hallucination et précision basée sur des preuves dans des documents longs et des contextes RAG."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku est le modèle le plus rapide et le plus compact d'Anthropic, conçu pour offrir des réponses quasi instantanées. Il fournit des performances ciblées, rapides et précises."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus est le modèle le plus puissant d'Anthropic pour traiter des tâches hautement complexes. Il excelle en termes de performance, d'intelligence, de fluidité et de compréhension."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet offre des capacités supérieures à celles d'Opus et une vitesse plus rapide que Sonnet, tout en maintenant le même prix que Sonnet. Sonnet excelle particulièrement dans la programmation, la science des données, le traitement visuel et les tâches d'agent."
+ },
+ "aya": {
+ "description": "Aya 23 est un modèle multilingue lancé par Cohere, prenant en charge 23 langues, facilitant les applications linguistiques diversifiées."
+ },
+ "aya:35b": {
+ "description": "Aya 23 est un modèle multilingue lancé par Cohere, prenant en charge 23 langues, facilitant les applications linguistiques diversifiées."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3 est conçu pour le jeu de rôle et l'accompagnement émotionnel, prenant en charge une mémoire multi-tours ultra-longue et des dialogues personnalisés, avec des applications variées."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o est un modèle dynamique, mis à jour en temps réel pour intégrer la version la plus récente. Il combine une compréhension et une génération de langage puissantes, adapté à des scénarios d'application à grande échelle, y compris le service client, l'éducation et le support technique."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 offre des avancées clés pour les entreprises, y compris un contexte de 200K tokens, une réduction significative du taux d'hallucination du modèle, des invites système et une nouvelle fonctionnalité en test : l'appel d'outils."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 offre des avancées clés pour les entreprises, y compris un contexte de 200K tokens, une réduction significative du taux d'hallucination du modèle, des invites système et une nouvelle fonctionnalité en test : l'appel d'outils."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet offre des capacités dépassant celles d'Opus et une vitesse plus rapide que Sonnet, tout en maintenant le même prix que Sonnet. Sonnet excelle particulièrement dans la programmation, la science des données, le traitement visuel et les tâches d'agent."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku est le modèle le plus rapide et le plus compact d'Anthropic, conçu pour des réponses quasi instantanées. Il fournit des performances ciblées, rapides et précises."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus est le modèle le plus puissant d'Anthropic pour traiter des tâches hautement complexes. Il excelle en performance, intelligence, fluidité et compréhension."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet offre un équilibre idéal entre intelligence et vitesse pour les charges de travail d'entreprise. Il fournit une utilité maximale à un coût inférieur, fiable et adapté à un déploiement à grande échelle."
+ },
+ "claude-instant-1.2": {
+ "description": "Le modèle d'Anthropic est conçu pour une génération de texte à faible latence et à haut débit, prenant en charge la génération de centaines de pages de texte."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 est un puissant assistant de programmation IA, prenant en charge les questions-réponses intelligentes et la complétion de code dans divers langages de programmation, améliorant l'efficacité du développement."
+ },
+ "codegemma": {
+ "description": "CodeGemma est un modèle de langage léger dédié à différentes tâches de programmation, prenant en charge une itération et une intégration rapides."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma est un modèle de langage léger dédié à différentes tâches de programmation, prenant en charge une itération et une intégration rapides."
+ },
+ "codellama": {
+ "description": "Code Llama est un LLM axé sur la génération et la discussion de code, combinant un large support de langages de programmation, adapté aux environnements de développement."
+ },
+ "codellama:13b": {
+ "description": "Code Llama est un LLM axé sur la génération et la discussion de code, combinant un large support de langages de programmation, adapté aux environnements de développement."
+ },
+ "codellama:34b": {
+ "description": "Code Llama est un LLM axé sur la génération et la discussion de code, combinant un large support de langages de programmation, adapté aux environnements de développement."
+ },
+ "codellama:70b": {
+ "description": "Code Llama est un LLM axé sur la génération et la discussion de code, combinant un large support de langages de programmation, adapté aux environnements de développement."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 est un modèle de langage à grande échelle entraîné sur une grande quantité de données de code, conçu pour résoudre des tâches de programmation complexes."
+ },
+ "codestral": {
+ "description": "Codestral est le premier modèle de code de Mistral AI, offrant un excellent soutien pour les tâches de génération de code."
+ },
+ "codestral-latest": {
+ "description": "Codestral est un modèle de génération de pointe axé sur la génération de code, optimisé pour les tâches de remplissage intermédiaire et de complétion de code."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B est un modèle conçu pour le suivi des instructions, le dialogue et la programmation."
+ },
+ "cohere-command-r": {
+ "description": "Command R est un modèle génératif évolutif ciblant RAG et l'utilisation d'outils pour permettre une IA à l'échelle de la production pour les entreprises."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ est un modèle optimisé RAG de pointe conçu pour traiter des charges de travail de niveau entreprise."
+ },
+ "command-r": {
+ "description": "Command R est un LLM optimisé pour les tâches de dialogue et de long contexte, particulièrement adapté à l'interaction dynamique et à la gestion des connaissances."
+ },
+ "command-r-plus": {
+ "description": "Command R+ est un modèle de langage de grande taille à haute performance, conçu pour des scénarios d'entreprise réels et des applications complexes."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct offre des capacités de traitement d'instructions hautement fiables, prenant en charge des applications dans divers secteurs."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 intègre les excellentes caractéristiques des versions précédentes, renforçant les capacités générales et de codage."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B est un modèle avancé entraîné pour des dialogues de haute complexité."
+ },
+ "deepseek-chat": {
+ "description": "Un nouveau modèle open source qui fusionne des capacités générales et de code, conservant non seulement la capacité de dialogue général du modèle Chat d'origine et la puissante capacité de traitement de code du modèle Coder, mais s'alignant également mieux sur les préférences humaines. De plus, DeepSeek-V2.5 a réalisé des améliorations significatives dans plusieurs domaines tels que les tâches d'écriture et le suivi des instructions."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 est un modèle de code open source de type expert mixte, performant dans les tâches de code, rivalisant avec GPT4-Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 est un modèle de code open source de type expert mixte, performant dans les tâches de code, rivalisant avec GPT4-Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 est un modèle de langage Mixture-of-Experts efficace, adapté aux besoins de traitement économique."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B est le modèle dédié au code de DeepSeek, offrant de puissantes capacités de génération de code."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Un nouveau modèle open source fusionnant des capacités générales et de codage, qui non seulement conserve les capacités de dialogue général du modèle Chat d'origine et la puissante capacité de traitement de code du modèle Coder, mais s'aligne également mieux sur les préférences humaines. De plus, DeepSeek-V2.5 a également réalisé des améliorations significatives dans plusieurs domaines tels que les tâches d'écriture et le suivi d'instructions."
+ },
+ "emohaa": {
+ "description": "Emohaa est un modèle psychologique, doté de compétences de conseil professionnel, aidant les utilisateurs à comprendre les problèmes émotionnels."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Ajustement) offre des performances stables et ajustables, ce qui en fait un choix idéal pour des solutions de tâches complexes."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Ajustement) offre un excellent soutien multimodal, se concentrant sur la résolution efficace de tâches complexes."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro est le modèle d'IA haute performance de Google, conçu pour une large extension des tâches."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 est un modèle multimodal efficace, prenant en charge l'extension d'applications variées."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002 est un modèle multimodal efficace, prenant en charge une large gamme d'applications."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 est conçu pour traiter des scénarios de tâches à grande échelle, offrant une vitesse de traitement inégalée."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924 est le dernier modèle expérimental, offrant des améliorations significatives en termes de performance dans les cas d'utilisation textuels et multimodaux."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 offre des capacités de traitement multimodal optimisées, adaptées à divers scénarios de tâches complexes."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash est le dernier modèle d'IA multimodal de Google, doté de capacités de traitement rapide, prenant en charge les entrées de texte, d'images et de vidéos, adapté à une large gamme de tâches pour une extension efficace."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 est une solution d'IA multimodale extensible, prenant en charge une large gamme de tâches complexes."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002 est le dernier modèle prêt pour la production, offrant une qualité de sortie supérieure, avec des améliorations notables dans les domaines des mathématiques, des contextes longs et des tâches visuelles."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 offre d'excellentes capacités de traitement multimodal, apportant une plus grande flexibilité au développement d'applications."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 combine les dernières technologies d'optimisation, offrant une capacité de traitement de données multimodales plus efficace."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro prend en charge jusqu'à 2 millions de tokens, ce qui en fait un choix idéal pour un modèle multimodal de taille moyenne, adapté à un soutien polyvalent pour des tâches complexes."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B est adapté au traitement de tâches de taille moyenne, alliant coût et efficacité."
+ },
+ "gemma2": {
+ "description": "Gemma 2 est un modèle efficace lancé par Google, couvrant une variété de scénarios d'application allant des petites applications au traitement de données complexes."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B est un modèle optimisé pour des tâches spécifiques et l'intégration d'outils."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 est un modèle efficace lancé par Google, couvrant une variété de scénarios d'application allant des petites applications au traitement de données complexes."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 est un modèle efficace lancé par Google, couvrant une variété de scénarios d'application allant des petites applications au traitement de données complexes."
+ },
+ "general": {
+ "description": "Spark Lite est un modèle de langage léger, offrant une latence extrêmement faible et une capacité de traitement efficace, entièrement gratuit et ouvert, supportant une fonction de recherche en temps réel. Sa rapidité de réponse le rend exceptionnel dans les applications d'inférence sur des appareils à faible puissance de calcul et dans l'ajustement des modèles, offrant aux utilisateurs un excellent rapport coût-efficacité et une expérience intelligente, en particulier dans les scénarios de questions-réponses, de génération de contenu et de recherche."
+ },
+ "generalv3": {
+ "description": "Spark Pro est un modèle de langage de haute performance optimisé pour des domaines professionnels, se concentrant sur les mathématiques, la programmation, la médecine, l'éducation, etc., et supportant la recherche en ligne ainsi que des plugins intégrés pour la météo, la date, etc. Son modèle optimisé affiche d'excellentes performances et une efficacité dans des tâches complexes de questions-réponses, de compréhension linguistique et de création de textes de haut niveau, en faisant un choix idéal pour des applications professionnelles."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max est la version la plus complète, supportant la recherche en ligne et de nombreux plugins intégrés. Ses capacités centrales entièrement optimisées, ainsi que la définition des rôles système et la fonction d'appel de fonctions, lui permettent d'exceller dans divers scénarios d'application complexes."
+ },
+ "glm-4": {
+ "description": "GLM-4 est l'ancienne version phare publiée en janvier 2024, actuellement remplacée par le plus puissant GLM-4-0520."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 est la dernière version du modèle, conçue pour des tâches hautement complexes et diversifiées, avec des performances exceptionnelles."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air est une version économique, offrant des performances proches de GLM-4, avec une rapidité et un prix abordable."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX est une version efficace de GLM-4-Air, avec une vitesse d'inférence pouvant atteindre 2,6 fois celle de la version standard."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools est un modèle d'agent multifonctionnel, optimisé pour prendre en charge la planification d'instructions complexes et les appels d'outils, tels que la navigation sur le web, l'interprétation de code et la génération de texte, adapté à l'exécution de multiples tâches."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash est le choix idéal pour traiter des tâches simples, avec la vitesse la plus rapide et le prix le plus avantageux."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long prend en charge des entrées de texte ultra-longues, adapté aux tâches de mémoire et au traitement de documents à grande échelle."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus, en tant que modèle phare de haute intelligence, possède de puissantes capacités de traitement de longs textes et de tâches complexes, avec des performances globalement améliorées."
+ },
+ "glm-4v": {
+ "description": "GLM-4V offre de puissantes capacités de compréhension et de raisonnement d'image, prenant en charge diverses tâches visuelles."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus possède la capacité de comprendre le contenu vidéo et plusieurs images, adapté aux tâches multimodales."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 offre des capacités de traitement multimodal optimisées, adaptées à divers scénarios de tâches complexes."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 combine les dernières technologies d'optimisation pour offrir des capacités de traitement de données multimodales plus efficaces."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 poursuit le concept de conception légère et efficace."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 est une série de modèles de texte open source allégés de Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 est une série de modèles de texte open source allégés de Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) offre des capacités de traitement d'instructions de base, adapté aux applications légères."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo, adapté à diverses tâches de génération et de compréhension de texte, pointe actuellement vers gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo, adapté à diverses tâches de génération et de compréhension de texte, pointe actuellement vers gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo, adapté à diverses tâches de génération et de compréhension de texte."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo, adapté à diverses tâches de génération et de compréhension de texte."
+ },
+ "gpt-4": {
+ "description": "GPT-4 offre une fenêtre contextuelle plus grande, capable de traiter des entrées textuelles plus longues, adapté aux scénarios nécessitant une intégration d'informations étendue et une analyse de données."
+ },
+ "gpt-4-0125-preview": {
+ "description": "Le dernier modèle GPT-4 Turbo dispose de fonctionnalités visuelles. Désormais, les requêtes visuelles peuvent être effectuées en utilisant le mode JSON et les appels de fonction. GPT-4 Turbo est une version améliorée, offrant un soutien rentable pour les tâches multimodales. Il trouve un équilibre entre précision et efficacité, adapté aux applications nécessitant des interactions en temps réel."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4 offre une fenêtre contextuelle plus grande, capable de traiter des entrées textuelles plus longues, adapté aux scénarios nécessitant une intégration d'informations étendue et une analyse de données."
+ },
+ "gpt-4-1106-preview": {
+ "description": "Le dernier modèle GPT-4 Turbo dispose de fonctionnalités visuelles. Désormais, les requêtes visuelles peuvent être effectuées en utilisant le mode JSON et les appels de fonction. GPT-4 Turbo est une version améliorée, offrant un soutien rentable pour les tâches multimodales. Il trouve un équilibre entre précision et efficacité, adapté aux applications nécessitant des interactions en temps réel."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "Le dernier modèle GPT-4 Turbo dispose de fonctionnalités visuelles. Désormais, les requêtes visuelles peuvent être effectuées en utilisant le mode JSON et les appels de fonction. GPT-4 Turbo est une version améliorée, offrant un soutien rentable pour les tâches multimodales. Il trouve un équilibre entre précision et efficacité, adapté aux applications nécessitant des interactions en temps réel."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4 offre une fenêtre contextuelle plus grande, capable de traiter des entrées textuelles plus longues, adapté aux scénarios nécessitant une intégration d'informations étendue et une analyse de données."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4 offre une fenêtre contextuelle plus grande, capable de traiter des entrées textuelles plus longues, adapté aux scénarios nécessitant une intégration d'informations étendue et une analyse de données."
+ },
+ "gpt-4-turbo": {
+ "description": "Le dernier modèle GPT-4 Turbo dispose de fonctionnalités visuelles. Désormais, les requêtes visuelles peuvent être effectuées en utilisant le mode JSON et les appels de fonction. GPT-4 Turbo est une version améliorée, offrant un soutien rentable pour les tâches multimodales. Il trouve un équilibre entre précision et efficacité, adapté aux applications nécessitant des interactions en temps réel."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "Le dernier modèle GPT-4 Turbo dispose de fonctionnalités visuelles. Désormais, les requêtes visuelles peuvent être effectuées en utilisant le mode JSON et les appels de fonction. GPT-4 Turbo est une version améliorée, offrant un soutien rentable pour les tâches multimodales. Il trouve un équilibre entre précision et efficacité, adapté aux applications nécessitant des interactions en temps réel."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "Le dernier modèle GPT-4 Turbo dispose de fonctionnalités visuelles. Désormais, les requêtes visuelles peuvent être effectuées en utilisant le mode JSON et les appels de fonction. GPT-4 Turbo est une version améliorée, offrant un soutien rentable pour les tâches multimodales. Il trouve un équilibre entre précision et efficacité, adapté aux applications nécessitant des interactions en temps réel."
+ },
+ "gpt-4-vision-preview": {
+ "description": "Le dernier modèle GPT-4 Turbo dispose de fonctionnalités visuelles. Désormais, les requêtes visuelles peuvent être effectuées en utilisant le mode JSON et les appels de fonction. GPT-4 Turbo est une version améliorée, offrant un soutien rentable pour les tâches multimodales. Il trouve un équilibre entre précision et efficacité, adapté aux applications nécessitant des interactions en temps réel."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o est un modèle dynamique, mis à jour en temps réel pour rester à jour avec la dernière version. Il combine une compréhension et une génération de langage puissantes, adapté à des scénarios d'application à grande échelle, y compris le service client, l'éducation et le support technique."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o est un modèle dynamique, mis à jour en temps réel pour rester à jour avec la dernière version. Il combine une compréhension et une génération de langage puissantes, adapté à des scénarios d'application à grande échelle, y compris le service client, l'éducation et le support technique."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o est un modèle dynamique, mis à jour en temps réel pour rester à jour avec la dernière version. Il combine une compréhension et une génération de langage puissantes, adapté à des scénarios d'application à grande échelle, y compris le service client, l'éducation et le support technique."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini est le dernier modèle lancé par OpenAI après le GPT-4 Omni, prenant en charge les entrées multimodales et produisant des sorties textuelles. En tant que leur modèle compact le plus avancé, il est beaucoup moins cher que d'autres modèles de pointe récents et coûte plus de 60 % de moins que le GPT-3.5 Turbo. Il maintient une intelligence de pointe tout en offrant un rapport qualité-prix significatif. Le GPT-4o mini a obtenu un score de 82 % au test MMLU et se classe actuellement au-dessus du GPT-4 en termes de préférences de chat."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B est un modèle linguistique combinant créativité et intelligence, intégrant plusieurs modèles de pointe."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "Le modèle open source innovant InternLM2.5 améliore l'intelligence des dialogues grâce à un grand nombre de paramètres."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 fournit des solutions de dialogue intelligent dans divers scénarios."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Le modèle Llama 3.1 70B Instruct, avec 70B de paramètres, offre des performances exceptionnelles dans la génération de texte et les tâches d'instructions."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B offre une capacité de raisonnement AI plus puissante, adaptée aux applications complexes, prenant en charge un traitement de calcul intensif tout en garantissant efficacité et précision."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B est un modèle à haute performance, offrant une capacité de génération de texte rapide, particulièrement adapté aux scénarios d'application nécessitant une efficacité à grande échelle et un rapport coût-efficacité."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Le modèle Llama 3.1 8B Instruct, avec 8B de paramètres, prend en charge l'exécution efficace des tâches d'instructions visuelles, offrant d'excellentes capacités de génération de texte."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Le modèle Llama 3.1 Sonar Huge Online, avec 405B de paramètres, prend en charge une longueur de contexte d'environ 127 000 jetons, conçu pour des applications de chat en ligne complexes."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Le modèle Llama 3.1 Sonar Large Chat, avec 70B de paramètres, prend en charge une longueur de contexte d'environ 127 000 jetons, adapté aux tâches de chat hors ligne complexes."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Le modèle Llama 3.1 Sonar Large Online, avec 70B de paramètres, prend en charge une longueur de contexte d'environ 127 000 jetons, adapté aux tâches de chat à haute capacité et diversifiées."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Le modèle Llama 3.1 Sonar Small Chat, avec 8B de paramètres, est conçu pour le chat hors ligne, prenant en charge une longueur de contexte d'environ 127 000 jetons."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Le modèle Llama 3.1 Sonar Small Online, avec 8B de paramètres, prend en charge une longueur de contexte d'environ 127 000 jetons, conçu pour le chat en ligne, capable de traiter efficacement diverses interactions textuelles."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B offre une capacité de traitement de complexité inégalée, sur mesure pour des projets exigeants."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B offre d'excellentes performances de raisonnement, adaptées à des besoins d'application variés."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use offre de puissantes capacités d'appel d'outils, prenant en charge le traitement efficace de tâches complexes."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use est un modèle optimisé pour une utilisation efficace des outils, prenant en charge un calcul parallèle rapide."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 est le modèle de pointe lancé par Meta, prenant en charge jusqu'à 405B de paramètres, applicable dans les domaines des dialogues complexes, de la traduction multilingue et de l'analyse de données."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 est le modèle de pointe lancé par Meta, prenant en charge jusqu'à 405B de paramètres, applicable dans les domaines des dialogues complexes, de la traduction multilingue et de l'analyse de données."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 est le modèle de pointe lancé par Meta, prenant en charge jusqu'à 405B de paramètres, applicable dans les domaines des dialogues complexes, de la traduction multilingue et de l'analyse de données."
+ },
+ "llava": {
+ "description": "LLaVA est un modèle multimodal combinant un encodeur visuel et Vicuna, utilisé pour une compréhension puissante du visuel et du langage."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B offre une capacité de traitement visuel intégrée, générant des sorties complexes à partir d'entrées d'informations visuelles."
+ },
+ "llava:13b": {
+ "description": "LLaVA est un modèle multimodal combinant un encodeur visuel et Vicuna, utilisé pour une compréhension puissante du visuel et du langage."
+ },
+ "llava:34b": {
+ "description": "LLaVA est un modèle multimodal combinant un encodeur visuel et Vicuna, utilisé pour une compréhension puissante du visuel et du langage."
+ },
+ "mathstral": {
+ "description": "MathΣtral est conçu pour la recherche scientifique et le raisonnement mathématique, offrant des capacités de calcul efficaces et des interprétations de résultats."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Un puissant modèle de 70 milliards de paramètres, excellant dans le raisonnement, le codage et un large éventail d'applications linguistiques."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Un modèle polyvalent de 8 milliards de paramètres optimisé pour les tâches de dialogue et de génération de texte."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Les modèles textuels uniquement ajustés par instruction Llama 3.1 sont optimisés pour les cas d'utilisation de dialogue multilingue et surpassent de nombreux modèles de chat open source et fermés disponibles sur les benchmarks industriels courants."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Les modèles textuels uniquement ajustés par instruction Llama 3.1 sont optimisés pour les cas d'utilisation de dialogue multilingue et surpassent de nombreux modèles de chat open source et fermés disponibles sur les benchmarks industriels courants."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Les modèles textuels uniquement ajustés par instruction Llama 3.1 sont optimisés pour les cas d'utilisation de dialogue multilingue et surpassent de nombreux modèles de chat open source et fermés disponibles sur les benchmarks industriels courants."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) offre d'excellentes capacités de traitement du langage et une expérience interactive exceptionnelle."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) est un modèle de chat puissant, prenant en charge des besoins de dialogue complexes."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) offre un support multilingue, couvrant un large éventail de connaissances."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite est adapté aux environnements nécessitant une haute performance et une faible latence."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo offre une compréhension et une génération de langage exceptionnelles, adapté aux tâches de calcul les plus exigeantes."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite est adapté aux environnements à ressources limitées, offrant un excellent équilibre de performance."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo est un modèle de langage à haute performance, prenant en charge une large gamme de scénarios d'application."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B est un modèle puissant pour le pré-entraînement et l'ajustement des instructions."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "Le modèle Llama 3.1 Turbo 405B offre un support de contexte de très grande capacité pour le traitement de grandes données, se distinguant dans les applications d'intelligence artificielle à très grande échelle."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B offre un support de dialogue efficace en plusieurs langues."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Le modèle Llama 3.1 70B est finement ajusté pour des applications à forte charge, quantifié en FP8 pour offrir une capacité de calcul et une précision plus efficaces, garantissant des performances exceptionnelles dans des scénarios complexes."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 offre un support multilingue, étant l'un des modèles génératifs les plus avancés de l'industrie."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Le modèle Llama 3.1 8B utilise la quantification FP8, prenant en charge jusqu'à 131 072 jetons de contexte, se distinguant parmi les modèles open source, adapté aux tâches complexes, surpassant de nombreux benchmarks industriels."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct est optimisé pour des scénarios de dialogue de haute qualité, affichant d'excellentes performances dans diverses évaluations humaines."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct optimise les scénarios de dialogue de haute qualité, avec des performances supérieures à de nombreux modèles fermés."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct est la dernière version lancée par Meta, optimisée pour générer des dialogues de haute qualité, surpassant de nombreux modèles fermés de premier plan."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct est conçu pour des dialogues de haute qualité, se distinguant dans les évaluations humaines, particulièrement adapté aux scénarios d'interaction élevée."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct est la dernière version lancée par Meta, optimisée pour des scénarios de dialogue de haute qualité, surpassant de nombreux modèles fermés de premier plan."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 offre un support multilingue et est l'un des modèles génératifs les plus avancés de l'industrie."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct est le modèle le plus grand et le plus puissant de la famille Llama 3.1 Instruct. C'est un modèle de génération de données de dialogue et de raisonnement hautement avancé, qui peut également servir de base pour un pré-entraînement ou un ajustement fin spécialisé dans des domaines spécifiques. Les modèles de langage multilingues (LLMs) fournis par Llama 3.1 sont un ensemble de modèles génératifs pré-entraînés et ajustés par instructions, comprenant des tailles de 8B, 70B et 405B (entrée/sortie de texte). Les modèles de texte ajustés par instructions de Llama 3.1 (8B, 70B, 405B) sont optimisés pour des cas d'utilisation de dialogue multilingue et ont surpassé de nombreux modèles de chat open source disponibles dans des benchmarks industriels courants. Llama 3.1 est conçu pour des usages commerciaux et de recherche dans plusieurs langues. Les modèles de texte ajustés par instructions conviennent aux chats de type assistant, tandis que les modèles pré-entraînés peuvent s'adapter à diverses tâches de génération de langage naturel. Le modèle Llama 3.1 prend également en charge l'amélioration d'autres modèles en utilisant sa sortie, y compris la génération de données synthétiques et le raffinement. Llama 3.1 est un modèle de langage autoregressif utilisant une architecture de transformateur optimisée. Les versions ajustées utilisent un ajustement fin supervisé (SFT) et un apprentissage par renforcement avec retour humain (RLHF) pour répondre aux préférences humaines en matière d'utilité et de sécurité."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 70B Instruct est une version mise à jour, incluant une longueur de contexte étendue de 128K, une prise en charge multilingue et des capacités de raisonnement améliorées. Les modèles de langage à grande échelle (LLMs) fournis par Llama 3.1 sont un ensemble de modèles génératifs pré-entraînés et ajustés par instruction, comprenant des tailles de 8B, 70B et 405B (entrée/sortie de texte). Les modèles de texte ajustés par instruction de Llama 3.1 (8B, 70B, 405B) sont optimisés pour des cas d'utilisation de dialogue multilingue et ont surpassé de nombreux modèles de chat open source disponibles dans des benchmarks industriels courants. Llama 3.1 est conçu pour des usages commerciaux et de recherche dans plusieurs langues. Les modèles de texte ajustés par instruction sont adaptés aux chats de type assistant, tandis que les modèles pré-entraînés peuvent s'adapter à diverses tâches de génération de langage naturel. Le modèle Llama 3.1 prend également en charge l'utilisation de ses sorties pour améliorer d'autres modèles, y compris la génération de données synthétiques et le raffinement. Llama 3.1 est un modèle de langage autoregressif utilisant une architecture de transformateur optimisée. La version ajustée utilise un affinement supervisé (SFT) et un apprentissage par renforcement avec retour humain (RLHF) pour répondre aux préférences humaines en matière d'utilité et de sécurité."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 8B Instruct est une version mise à jour, incluant une longueur de contexte étendue de 128K, une prise en charge multilingue et des capacités de raisonnement améliorées. Les modèles de langage à grande échelle (LLMs) fournis par Llama 3.1 sont un ensemble de modèles génératifs pré-entraînés et ajustés par instruction, comprenant des tailles de 8B, 70B et 405B (entrée/sortie de texte). Les modèles de texte ajustés par instruction de Llama 3.1 (8B, 70B, 405B) sont optimisés pour des cas d'utilisation de dialogue multilingue et ont surpassé de nombreux modèles de chat open source disponibles dans des benchmarks industriels courants. Llama 3.1 est conçu pour des usages commerciaux et de recherche dans plusieurs langues. Les modèles de texte ajustés par instruction sont adaptés aux chats de type assistant, tandis que les modèles pré-entraînés peuvent s'adapter à diverses tâches de génération de langage naturel. Le modèle Llama 3.1 prend également en charge l'utilisation de ses sorties pour améliorer d'autres modèles, y compris la génération de données synthétiques et le raffinement. Llama 3.1 est un modèle de langage autoregressif utilisant une architecture de transformateur optimisée. La version ajustée utilise un affinement supervisé (SFT) et un apprentissage par renforcement avec retour humain (RLHF) pour répondre aux préférences humaines en matière d'utilité et de sécurité."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 est un modèle de langage ouvert (LLM) destiné aux développeurs, chercheurs et entreprises, conçu pour les aider à construire, expérimenter et étendre de manière responsable leurs idées d'IA générative. En tant que partie intégrante d'un système de base pour l'innovation de la communauté mondiale, il est particulièrement adapté à la création de contenu, à l'IA de dialogue, à la compréhension du langage, à la recherche et aux applications d'entreprise."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 est un modèle de langage ouvert (LLM) destiné aux développeurs, chercheurs et entreprises, conçu pour les aider à construire, expérimenter et étendre de manière responsable leurs idées d'IA générative. En tant que partie intégrante d'un système de base pour l'innovation de la communauté mondiale, il est particulièrement adapté aux appareils à capacité de calcul et de ressources limitées, ainsi qu'à des temps d'entraînement plus rapides."
+ },
+ "microsoft/wizardlm-2-7b": {
+ "description": "WizardLM 2 7B est le modèle léger et rapide le plus récent de Microsoft AI, offrant des performances comparables à celles de modèles open source leaders dix fois plus grands."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B est le modèle Wizard le plus avancé de Microsoft AI, montrant des performances extrêmement compétitives."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V est un nouveau modèle multimodal de nouvelle génération lancé par OpenBMB, offrant d'excellentes capacités de reconnaissance OCR et de compréhension multimodale, prenant en charge une large gamme d'applications."
+ },
+ "mistral": {
+ "description": "Mistral est le modèle 7B lancé par Mistral AI, adapté aux besoins variés de traitement du langage."
+ },
+ "mistral-large": {
+ "description": "Mistral Large est le modèle phare de Mistral, combinant des capacités de génération de code, de mathématiques et de raisonnement, prenant en charge une fenêtre de contexte de 128k."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) est un modèle de langage avancé (LLM) avec des capacités de raisonnement, de connaissance et de codage à la pointe de la technologie."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large est le modèle phare, excellent pour les tâches multilingues, le raisonnement complexe et la génération de code, idéal pour des applications haut de gamme."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo, développé en collaboration entre Mistral AI et NVIDIA, est un modèle de 12B à performance efficace."
+ },
+ "mistral-small": {
+ "description": "Mistral Small peut être utilisé pour toute tâche basée sur le langage nécessitant une haute efficacité et une faible latence."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small est une option rentable, rapide et fiable, adaptée aux cas d'utilisation tels que la traduction, le résumé et l'analyse des sentiments."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct est réputé pour ses performances élevées, adapté à diverses tâches linguistiques."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B est un modèle fine-tuné à la demande, offrant des réponses optimisées pour les tâches."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 offre une capacité de calcul efficace et une compréhension du langage naturel, adapté à un large éventail d'applications."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) est un super grand modèle de langage, prenant en charge des besoins de traitement extrêmement élevés."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B est un modèle de mélange d'experts pré-entraîné, utilisé pour des tâches textuelles générales."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct est un modèle standard de l'industrie, alliant optimisation de la vitesse et support de longs contextes."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo est un modèle de 12 milliards de paramètres, offrant un support multilingue et une programmation haute performance."
+ },
+ "mixtral": {
+ "description": "Mixtral est le modèle d'expert de Mistral AI, avec des poids open source, offrant un soutien dans la génération de code et la compréhension du langage."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B offre une capacité de calcul parallèle à haute tolérance aux pannes, adaptée aux tâches complexes."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral est le modèle d'expert de Mistral AI, avec des poids open source, offrant un soutien dans la génération de code et la compréhension du langage."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K est un modèle doté d'une capacité de traitement de contexte ultra-long, adapté à la génération de textes très longs, répondant aux besoins de tâches de génération complexes, capable de traiter jusqu'à 128 000 tokens, idéal pour la recherche, l'académie et la génération de documents volumineux."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K offre une capacité de traitement de contexte de longueur moyenne, capable de traiter 32 768 tokens, particulièrement adapté à la génération de divers documents longs et de dialogues complexes, utilisé dans la création de contenu, la génération de rapports et les systèmes de dialogue."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K est conçu pour des tâches de génération de courts textes, avec des performances de traitement efficaces, capable de traiter 8 192 tokens, idéal pour des dialogues courts, des prises de notes et une génération rapide de contenu."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B est une version améliorée de Nous Hermes 2, intégrant les derniers ensembles de données développés en interne."
+ },
+ "o1-mini": {
+ "description": "o1-mini est un modèle de raisonnement rapide et économique conçu pour les applications de programmation, de mathématiques et de sciences. Ce modèle dispose d'un contexte de 128K et d'une date limite de connaissance en octobre 2023."
+ },
+ "o1-preview": {
+ "description": "o1 est le nouveau modèle de raisonnement d'OpenAI, adapté aux tâches complexes nécessitant une vaste connaissance générale. Ce modèle dispose d'un contexte de 128K et d'une date limite de connaissance en octobre 2023."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba est un modèle de langage Mamba 2 axé sur la génération de code, offrant un soutien puissant pour des tâches avancées de codage et de raisonnement."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B est un modèle compact mais performant, excellent pour le traitement par lots et les tâches simples, telles que la classification et la génération de texte, avec de bonnes capacités de raisonnement."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo est un modèle de 12B développé en collaboration avec Nvidia, offrant d'excellentes performances de raisonnement et de codage, facile à intégrer et à remplacer."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B est un modèle d'expert plus grand, axé sur des tâches complexes, offrant d'excellentes capacités de raisonnement et un débit plus élevé."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B est un modèle d'expert épars, utilisant plusieurs paramètres pour améliorer la vitesse de raisonnement, adapté au traitement de tâches multilingues et de génération de code."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o est un modèle dynamique, mis à jour en temps réel pour rester à jour avec la dernière version. Il combine une compréhension et une génération de langage puissantes, adapté à des scénarios d'application à grande échelle, y compris le service client, l'éducation et le support technique."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini est le dernier modèle d'OpenAI lancé après GPT-4 Omni, prenant en charge les entrées d'images et de texte et produisant du texte en sortie. En tant que leur modèle compact le plus avancé, il est beaucoup moins cher que d'autres modèles de pointe récents et coûte plus de 60 % de moins que GPT-3.5 Turbo. Il maintient une intelligence de pointe tout en offrant un rapport qualité-prix significatif. GPT-4o mini a obtenu un score de 82 % au test MMLU et se classe actuellement au-dessus de GPT-4 en termes de préférences de chat."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini est un modèle de raisonnement rapide et économique conçu pour les applications de programmation, de mathématiques et de sciences. Ce modèle dispose d'un contexte de 128K et d'une date limite de connaissance en octobre 2023."
+ },
+ "openai/o1-preview": {
+ "description": "o1 est le nouveau modèle de raisonnement d'OpenAI, adapté aux tâches complexes nécessitant une vaste connaissance générale. Ce modèle dispose d'un contexte de 128K et d'une date limite de connaissance en octobre 2023."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B est une bibliothèque de modèles linguistiques open source, affinée par la stratégie de 'C-RLFT (Conditioned Reinforcement Learning Fine-Tuning)'."
+ },
+ "openrouter/auto": {
+ "description": "En fonction de la longueur du contexte, du sujet et de la complexité, votre demande sera envoyée à Llama 3 70B Instruct, Claude 3.5 Sonnet (auto-modéré) ou GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 est un modèle ouvert léger lancé par Microsoft, adapté à une intégration efficace et à un raisonnement de connaissances à grande échelle."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 est un modèle ouvert léger lancé par Microsoft, adapté à une intégration efficace et à un raisonnement de connaissances à grande échelle."
+ },
+ "pixtral-12b-2409": {
+ "description": "Le modèle Pixtral montre de puissantes capacités dans des tâches telles que la compréhension des graphiques et des images, le questionnement de documents, le raisonnement multimodal et le respect des instructions, capable d'ingérer des images à résolution naturelle et à rapport d'aspect, tout en traitant un nombre quelconque d'images dans une fenêtre de contexte longue allant jusqu'à 128K tokens."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "Le modèle de code Tongyi Qwen."
+ },
+ "qwen-long": {
+ "description": "Qwen est un modèle de langage à grande échelle, prenant en charge un contexte de texte long, ainsi que des fonctionnalités de dialogue basées sur des documents longs et multiples."
+ },
+ "qwen-math-plus-latest": {
+ "description": "Le modèle de langage Tongyi Qwen pour les mathématiques, spécialement conçu pour résoudre des problèmes mathématiques."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "Le modèle de langage Tongyi Qwen pour les mathématiques, spécialement conçu pour résoudre des problèmes mathématiques."
+ },
+ "qwen-max-latest": {
+ "description": "Le modèle de langage à grande échelle Tongyi Qwen de niveau milliard, prenant en charge des entrées en chinois, en anglais et dans d'autres langues, actuellement le modèle API derrière la version produit Tongyi Qwen 2.5."
+ },
+ "qwen-plus-latest": {
+ "description": "La version améliorée du modèle de langage à grande échelle Tongyi Qwen, prenant en charge des entrées en chinois, en anglais et dans d'autres langues."
+ },
+ "qwen-turbo-latest": {
+ "description": "Le modèle de langage à grande échelle Tongyi Qwen, prenant en charge des entrées en chinois, en anglais et dans d'autres langues."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Qwen VL prend en charge des modes d'interaction flexibles, y compris la capacité de poser des questions à plusieurs images, des dialogues multi-tours, et plus encore."
+ },
+ "qwen-vl-max": {
+ "description": "Qwen est un modèle de langage visuel à très grande échelle. Par rapport à la version améliorée, il renforce encore la capacité de raisonnement visuel et de suivi des instructions, offrant un niveau de perception et de cognition visuelles plus élevé."
+ },
+ "qwen-vl-plus": {
+ "description": "Qwen est une version améliorée du modèle de langage visuel à grande échelle. Elle améliore considérablement la capacité de reconnaissance des détails et de reconnaissance de texte, prenant en charge des images avec une résolution de plus d'un million de pixels et des spécifications de rapport d'aspect arbitraire."
+ },
+ "qwen-vl-v1": {
+ "description": "Initialisé avec le modèle de langage Qwen-7B, ajoutant un modèle d'image, un modèle pré-entraîné avec une résolution d'entrée d'image de 448."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 est une toute nouvelle série de modèles de langage de grande taille, offrant des capacités de compréhension et de génération plus puissantes."
+ },
+ "qwen2": {
+ "description": "Qwen2 est le nouveau modèle de langage à grande échelle d'Alibaba, offrant d'excellentes performances pour des besoins d'application diversifiés."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "Le modèle de 14B de Tongyi Qwen 2.5, open source."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "Le modèle de 32B de Tongyi Qwen 2.5, open source."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "Le modèle de 72B de Tongyi Qwen 2.5, open source."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "Le modèle de 7B de Tongyi Qwen 2.5, open source."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "Version open source du modèle de code Tongyi Qwen."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "Version open source du modèle de code Tongyi Qwen."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Le modèle Qwen-Math possède de puissantes capacités de résolution de problèmes mathématiques."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Le modèle Qwen-Math possède de puissantes capacités de résolution de problèmes mathématiques."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Le modèle Qwen-Math possède de puissantes capacités de résolution de problèmes mathématiques."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 est le nouveau modèle de langage à grande échelle d'Alibaba, offrant d'excellentes performances pour des besoins d'application diversifiés."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 est le nouveau modèle de langage à grande échelle d'Alibaba, offrant d'excellentes performances pour des besoins d'application diversifiés."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 est le nouveau modèle de langage à grande échelle d'Alibaba, offrant d'excellentes performances pour des besoins d'application diversifiés."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini est un LLM compact, surpassant GPT-3.5, avec de puissantes capacités multilingues, supportant l'anglais et le coréen, offrant une solution efficace et compacte."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) étend les capacités de Solar Mini, se concentrant sur le japonais tout en maintenant une efficacité et des performances exceptionnelles en anglais et en coréen."
+ },
+ "solar-pro": {
+ "description": "Solar Pro est un LLM hautement intelligent lancé par Upstage, axé sur la capacité de suivi des instructions sur un seul GPU, avec un score IFEval supérieur à 80. Actuellement, il supporte l'anglais, et la version officielle est prévue pour novembre 2024, avec une extension du support linguistique et de la longueur du contexte."
+ },
+ "step-1-128k": {
+ "description": "Équilibre entre performance et coût, adapté à des scénarios généraux."
+ },
+ "step-1-256k": {
+ "description": "Capacité de traitement de contexte ultra long, particulièrement adapté à l'analyse de documents longs."
+ },
+ "step-1-32k": {
+ "description": "Prend en charge des dialogues de longueur moyenne, adapté à divers scénarios d'application."
+ },
+ "step-1-8k": {
+ "description": "Modèle de petite taille, adapté aux tâches légères."
+ },
+ "step-1-flash": {
+ "description": "Modèle à haute vitesse, adapté aux dialogues en temps réel."
+ },
+ "step-1v-32k": {
+ "description": "Prend en charge les entrées visuelles, améliorant l'expérience d'interaction multimodale."
+ },
+ "step-1v-8k": {
+ "description": "Modèle visuel compact, adapté aux tâches de base en texte et image."
+ },
+ "step-2-16k": {
+ "description": "Prend en charge des interactions contextuelles à grande échelle, adapté aux scénarios de dialogue complexes."
+ },
+ "taichu_llm": {
+ "description": "Le modèle de langage Taichu Zidong possède une forte capacité de compréhension linguistique ainsi que des compétences en création de texte, questions-réponses, programmation, calcul mathématique, raisonnement logique, analyse des sentiments, et résumé de texte. Il combine de manière innovante le pré-entraînement sur de grandes données avec des connaissances riches provenant de multiples sources, en perfectionnant continuellement la technologie algorithmique et en intégrant de nouvelles connaissances sur le vocabulaire, la structure, la grammaire et le sens à partir de vastes ensembles de données textuelles, offrant aux utilisateurs des informations et des services plus pratiques ainsi qu'une expérience plus intelligente."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V intègre des capacités de compréhension d'image, de transfert de connaissances et d'attribution logique, se distinguant dans le domaine des questions-réponses textuelles et visuelles."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) offre une capacité de calcul améliorée grâce à des stratégies et une architecture de modèle efficaces."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) est adapté aux tâches d'instructions détaillées, offrant d'excellentes capacités de traitement du langage."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 est un modèle de langage proposé par Microsoft AI, particulièrement performant dans les domaines des dialogues complexes, du multilinguisme, du raisonnement et des assistants intelligents."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 est un modèle de langage proposé par Microsoft AI, particulièrement performant dans les domaines des dialogues complexes, du multilinguisme, du raisonnement et des assistants intelligents."
+ },
+ "yi-large": {
+ "description": "Un modèle de nouvelle génération avec des milliards de paramètres, offrant des capacités de question-réponse et de génération de texte exceptionnelles."
+ },
+ "yi-large-fc": {
+ "description": "Basé sur le modèle yi-large, il prend en charge et renforce les capacités d'appel d'outils, adapté à divers scénarios d'affaires nécessitant la création d'agents ou de workflows."
+ },
+ "yi-large-preview": {
+ "description": "Version préliminaire, il est recommandé d'utiliser yi-large (nouvelle version)."
+ },
+ "yi-large-rag": {
+ "description": "Un service de haut niveau basé sur le modèle yi-large, combinant des techniques de recherche et de génération pour fournir des réponses précises, avec un service de recherche d'informations en temps réel sur le web."
+ },
+ "yi-large-turbo": {
+ "description": "Un excellent rapport qualité-prix avec des performances exceptionnelles. Optimisé pour un équilibre de haute précision en fonction des performances, de la vitesse de raisonnement et des coûts."
+ },
+ "yi-medium": {
+ "description": "Modèle de taille moyenne, optimisé et ajusté, offrant un équilibre de capacités et un bon rapport qualité-prix. Optimisation approfondie des capacités de suivi des instructions."
+ },
+ "yi-medium-200k": {
+ "description": "Fenêtre de contexte ultra longue de 200K, offrant une compréhension et une génération de texte en profondeur."
+ },
+ "yi-spark": {
+ "description": "Petit mais puissant, un modèle léger et rapide. Offre des capacités renforcées en calcul mathématique et en rédaction de code."
+ },
+ "yi-vision": {
+ "description": "Modèle pour des tâches visuelles complexes, offrant des capacités de compréhension et d'analyse d'images de haute performance."
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/plugin.json b/DigitalHumanWeb/locales/fr-FR/plugin.json
new file mode 100644
index 0000000..9e52584
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Arguments de l'appel",
+ "function_call": "Appel de fonction",
+ "off": "Désactivé",
+ "on": "Activer le débogage",
+ "payload": "charge du plugin",
+ "response": "Réponse",
+ "tool_call": "demande d'appel d'outil"
+ },
+ "detailModal": {
+ "info": {
+ "description": "Description de l'API",
+ "name": "Nom de l'API"
+ },
+ "tabs": {
+ "info": "Capacités du plugin",
+ "manifest": "Fichier d'installation",
+ "settings": "Paramètres"
+ },
+ "title": "Détails du plugin"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Êtes-vous sûr de vouloir supprimer ce plugin local ? Cette action est irréversible.",
+ "customParams": {
+ "useProxy": {
+ "label": "Installer via proxy (if encountering cross-origin access errors, try enabling this option and reinstalling)"
+ }
+ },
+ "deleteSuccess": "Suppression du plugin réussie",
+ "manifest": {
+ "identifier": {
+ "desc": "Identifiant unique du plugin",
+ "label": "Identifiant"
+ },
+ "mode": {
+ "local": "Configuration visuelle",
+ "local-tooltip": "Configuration visuelle non prise en charge pour le moment",
+ "url": "Lien en ligne"
+ },
+ "name": {
+ "desc": "Titre du plugin",
+ "label": "Titre",
+ "placeholder": "Moteur de recherche"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "Auteur du plugin",
+ "label": "Auteur"
+ },
+ "avatar": {
+ "desc": "Icône du plugin, peut être un Emoji ou une URL",
+ "label": "Icône"
+ },
+ "description": {
+ "desc": "Description du plugin",
+ "label": "Description",
+ "placeholder": "Rechercher un moteur de recherche pour obtenir des informations"
+ },
+ "formFieldRequired": "Ce champ est requis",
+ "homepage": {
+ "desc": "Page d'accueil du plugin",
+ "label": "Page d'accueil"
+ },
+ "identifier": {
+ "desc": "Identifiant unique du plugin, sera automatiquement reconnu à partir du manifest",
+ "errorDuplicate": "L'identifiant du plugin existe déjà, veuillez le modifier",
+ "label": "Identifiant",
+ "pattenErrorMessage": "Seuls les caractères alphanumériques, - et _ sont autorisés"
+ },
+ "manifest": {
+ "desc": "{{appName}} sera installé via ce lien pour ajouter le plugin.",
+ "label": "URL du fichier de description du plugin (Manifest)",
+ "preview": "Aperçu du Manifest",
+ "refresh": "Actualiser"
+ },
+ "title": {
+ "desc": "Titre du plugin",
+ "label": "Titre",
+ "placeholder": "Moteur de recherche"
+ }
+ },
+ "metaConfig": "Configuration des métadonnées du plugin",
+ "modalDesc": "Une fois le plugin personnalisé ajouté, il peut être utilisé pour valider le développement du plugin ou directement dans la session. Veuillez consulter le <1>guide de développement↗> pour le développement de plugins.",
+ "openai": {
+ "importUrl": "Importer depuis l'URL",
+ "schema": "Schéma"
+ },
+ "preview": {
+ "card": "Aperçu de l'interface du plugin",
+ "desc": "Aperçu de la description du plugin",
+ "title": "Aperçu du nom du plugin"
+ },
+ "save": "Installer le plugin",
+ "saveSuccess": "Paramètres du plugin enregistrés avec succès",
+ "tabs": {
+ "manifest": "Manifeste des fonctionnalités",
+ "meta": "Métadonnées du plugin"
+ },
+ "title": {
+ "create": "Ajouter un plugin personnalisé",
+ "edit": "Modifier un plugin personnalisé"
+ },
+ "type": {
+ "lobe": "Plugin LobeChat",
+ "openai": "Plugin OpenAI"
+ },
+ "update": "Mettre à jour",
+ "updateSuccess": "Paramètres du plugin mis à jour avec succès"
+ },
+ "error": {
+ "fetchError": "Échec de la requête vers ce lien de manifest. Veuillez vous assurer que le lien est valide et autorise les requêtes cross-origin.",
+ "installError": "Échec de l'installation du plugin {{name}}",
+ "manifestInvalid": "Le manifest ne respecte pas les normes. Résultat de la validation : \n\n {{error}}",
+ "noManifest": "Aucun fichier de description trouvé",
+ "openAPIInvalid": "Échec d'analyse de l'OpenAPI, erreur : \n\n {{error}}",
+ "reinstallError": "Échec de la mise à jour du plugin {{name}}",
+ "urlError": "Ce lien ne renvoie pas de contenu au format JSON. Veuillez vous assurer qu'il s'agit d'un lien valide."
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Obsolète",
+ "local.config": "Configuration",
+ "local.title": "Personnalisé"
+ }
+ },
+ "loading": {
+ "content": "Appel du plugin en cours...",
+ "plugin": "Exécution du plugin en cours..."
+ },
+ "pluginList": "Liste des plugins",
+ "setting": "Paramètres des plugins",
+ "settings": {
+ "indexUrl": {
+ "title": "Index du marché",
+ "tooltip": "L'édition en ligne n'est pas encore prise en charge. Veuillez configurer via les variables d'environnement lors du déploiement."
+ },
+ "modalDesc": "Une fois l'adresse du marché des plugins configurée, vous pourrez utiliser un marché de plugins personnalisé.",
+ "title": "Paramètres du marché des plugins"
+ },
+ "showInPortal": "Veuillez consulter les détails dans l'espace de travail",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Vous êtes sur le point de désinstaller ce plugin. Une fois désinstallé, sa configuration sera effacée. Veuillez confirmer votre action.",
+ "detail": "Détails",
+ "install": "Installer",
+ "manifest": "Modifier le fichier d'installation",
+ "settings": "Paramètres",
+ "uninstall": "Désinstaller"
+ },
+ "communityPlugin": "Plugin communautaire",
+ "customPlugin": "Plugin personnalisé",
+ "empty": "Aucun plugin installé pour le moment",
+ "installAllPlugins": "Installer tous les plugins",
+ "networkError": "Échec de la récupération de la boutique de plugins. Veuillez vérifier votre connexion réseau et réessayer.",
+ "placeholder": "Rechercher le nom ou les mots-clés de l'extension...",
+ "releasedAt": "Publié le {{createdAt}}",
+ "tabs": {
+ "all": "Tous",
+ "installed": "Installés"
+ },
+ "title": "Boutique de plugins"
+ },
+ "unknownPlugin": "Plugin inconnu"
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/portal.json b/DigitalHumanWeb/locales/fr-FR/portal.json
new file mode 100644
index 0000000..9c92513
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Artifacts",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Fragment",
+ "file": "Fichier"
+ }
+ },
+ "Plugins": "Plugins",
+ "actions": {
+ "genAiMessage": "Créer un message d'assistant",
+ "summary": "Résumé",
+ "summaryTooltip": "Résumé du contenu actuel"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Code",
+ "preview": "Aperçu"
+ },
+ "svg": {
+ "copyAsImage": "Copier en tant qu'image",
+ "copyFail": "Échec de la copie, raison de l'erreur : {{error}}",
+ "copySuccess": "Image copiée avec succès",
+ "download": {
+ "png": "Télécharger en tant que PNG",
+ "svg": "Télécharger en tant que SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "La liste des Artifacts est actuellement vide. Veuillez utiliser les plugins dans la conversation avant de consulter à nouveau.",
+ "emptyKnowledgeList": "La liste des connaissances est actuellement vide. Veuillez activer la base de connaissances selon vos besoins dans la conversation avant de consulter.",
+ "files": "Fichiers",
+ "messageDetail": "Détails du message",
+ "title": "Fenêtre d'extension"
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/providers.json b/DigitalHumanWeb/locales/fr-FR/providers.json
new file mode 100644
index 0000000..60e26b6
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI est une plateforme de modèles et de services IA lancée par la société 360, offrant divers modèles avancés de traitement du langage naturel, y compris 360GPT2 Pro, 360GPT Pro, 360GPT Turbo et 360GPT Turbo Responsibility 8K. Ces modèles combinent de grands paramètres et des capacités multimodales, largement utilisés dans la génération de texte, la compréhension sémantique, les systèmes de dialogue et la génération de code. Grâce à une stratégie de tarification flexible, 360 AI répond à des besoins variés des utilisateurs, soutenant l'intégration des développeurs et favorisant l'innovation et le développement des applications intelligentes."
+ },
+ "anthropic": {
+ "description": "Anthropic est une entreprise axée sur la recherche et le développement en intelligence artificielle, offrant une gamme de modèles linguistiques avancés, tels que Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus et Claude 3 Haiku. Ces modèles atteignent un équilibre idéal entre intelligence, rapidité et coût, adaptés à divers scénarios d'application, allant des charges de travail d'entreprise aux réponses rapides. Claude 3.5 Sonnet, en tant que dernier modèle, a excellé dans plusieurs évaluations tout en maintenant un bon rapport qualité-prix."
+ },
+ "azure": {
+ "description": "Azure propose une variété de modèles IA avancés, y compris GPT-3.5 et la dernière série GPT-4, prenant en charge divers types de données et tâches complexes, tout en s'engageant à fournir des solutions IA sécurisées, fiables et durables."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligent est une entreprise spécialisée dans le développement de grands modèles d'intelligence artificielle, dont les modèles excellent dans les tâches en chinois telles que l'encyclopédie de connaissances, le traitement de longs textes et la création, surpassant les modèles dominants étrangers. Baichuan Intelligent possède également des capacités multimodales de premier plan, se distinguant dans plusieurs évaluations autorisées. Ses modèles incluent Baichuan 4, Baichuan 3 Turbo et Baichuan 3 Turbo 128k, chacun optimisé pour différents scénarios d'application, offrant des solutions à bon rapport qualité-prix."
+ },
+ "bedrock": {
+ "description": "Bedrock est un service proposé par Amazon AWS, axé sur la fourniture de modèles linguistiques et visuels avancés pour les entreprises. Sa famille de modèles comprend la série Claude d'Anthropic, la série Llama 3.1 de Meta, etc., offrant une variété d'options allant des modèles légers aux modèles haute performance, prenant en charge des tâches telles que la génération de texte, les dialogues et le traitement d'images, adaptées aux applications d'entreprise de différentes tailles et besoins."
+ },
+ "deepseek": {
+ "description": "DeepSeek est une entreprise spécialisée dans la recherche et l'application des technologies d'intelligence artificielle, dont le dernier modèle, DeepSeek-V2.5, combine des capacités de dialogue général et de traitement de code, réalisant des améliorations significatives dans l'alignement des préférences humaines, les tâches d'écriture et le suivi des instructions."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI est un fournisseur de services de modèles linguistiques avancés, axé sur les appels de fonction et le traitement multimodal. Son dernier modèle, Firefunction V2, basé sur Llama-3, est optimisé pour les appels de fonction, les dialogues et le suivi des instructions. Le modèle de langage visuel FireLLaVA-13B prend en charge les entrées mixtes d'images et de texte. D'autres modèles notables incluent la série Llama et la série Mixtral, offrant un support efficace pour le suivi et la génération d'instructions multilingues."
+ },
+ "github": {
+ "description": "Avec les modèles GitHub, les développeurs peuvent devenir des ingénieurs en IA et créer avec les modèles d'IA les plus avancés de l'industrie."
+ },
+ "google": {
+ "description": "La série Gemini de Google est son modèle IA le plus avancé et polyvalent, développé par Google DeepMind, conçu pour le multimédia, prenant en charge la compréhension et le traitement sans couture de texte, code, images, audio et vidéo. Adapté à divers environnements, des centres de données aux appareils mobiles, il améliore considérablement l'efficacité et l'applicabilité des modèles IA."
+ },
+ "groq": {
+ "description": "Le moteur d'inférence LPU de Groq a excellé dans les derniers tests de référence des grands modèles de langage (LLM), redéfinissant les normes des solutions IA grâce à sa vitesse et son efficacité impressionnantes. Groq représente une vitesse d'inférence instantanée, montrant de bonnes performances dans les déploiements basés sur le cloud."
+ },
+ "minimax": {
+ "description": "MiniMax est une entreprise de technologie d'intelligence artificielle générale fondée en 2021, dédiée à la co-création d'intelligence avec les utilisateurs. MiniMax a développé de manière autonome différents modèles de grande taille, y compris un modèle de texte MoE à un trillion de paramètres, un modèle vocal et un modèle d'image. Elle a également lancé des applications telles que Conch AI."
+ },
+ "mistral": {
+ "description": "Mistral propose des modèles avancés généraux, professionnels et de recherche, largement utilisés dans des domaines tels que le raisonnement complexe, les tâches multilingues et la génération de code. Grâce à une interface d'appel de fonction, les utilisateurs peuvent intégrer des fonctionnalités personnalisées pour des applications spécifiques."
+ },
+ "moonshot": {
+ "description": "Moonshot est une plateforme open source lancée par Beijing Dark Side Technology Co., Ltd., offrant divers modèles de traitement du langage naturel, avec des applications dans des domaines variés, y compris mais sans s'y limiter, la création de contenu, la recherche académique, les recommandations intelligentes, le diagnostic médical, etc., prenant en charge le traitement de longs textes et des tâches de génération complexes."
+ },
+ "novita": {
+ "description": "Novita AI est une plateforme offrant des services API pour divers grands modèles de langage et la génération d'images IA, flexible, fiable et rentable. Elle prend en charge les derniers modèles open source tels que Llama3, Mistral, et fournit des solutions API complètes, conviviales et évolutives pour le développement d'applications IA, adaptées à la croissance rapide des startups IA."
+ },
+ "ollama": {
+ "description": "Les modèles proposés par Ollama couvrent largement des domaines tels que la génération de code, les calculs mathématiques, le traitement multilingue et les interactions conversationnelles, répondant à des besoins diversifiés pour le déploiement en entreprise et la localisation."
+ },
+ "openai": {
+ "description": "OpenAI est un institut de recherche en intelligence artificielle de premier plan au monde, dont les modèles, tels que la série GPT, font progresser les frontières du traitement du langage naturel. OpenAI s'engage à transformer plusieurs secteurs grâce à des solutions IA innovantes et efficaces. Leurs produits offrent des performances et une rentabilité remarquables, largement utilisés dans la recherche, le commerce et les applications innovantes."
+ },
+ "openrouter": {
+ "description": "OpenRouter est une plateforme de service fournissant des interfaces pour divers modèles de pointe, prenant en charge OpenAI, Anthropic, LLaMA et plus encore, adaptée à des besoins de développement et d'application diversifiés. Les utilisateurs peuvent choisir de manière flexible le modèle et le prix optimaux en fonction de leurs besoins, améliorant ainsi l'expérience IA."
+ },
+ "perplexity": {
+ "description": "Perplexity est un fournisseur de modèles de génération de dialogue de premier plan, offrant divers modèles avancés Llama 3.1, prenant en charge les applications en ligne et hors ligne, particulièrement adaptés aux tâches complexes de traitement du langage naturel."
+ },
+ "qwen": {
+ "description": "Tongyi Qianwen est un modèle de langage à grande échelle développé de manière autonome par Alibaba Cloud, doté de puissantes capacités de compréhension et de génération du langage naturel. Il peut répondre à diverses questions, créer du contenu écrit, exprimer des opinions, rédiger du code, etc., jouant un rôle dans plusieurs domaines."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow s'engage à accélérer l'AGI pour le bénéfice de l'humanité, en améliorant l'efficacité de l'IA à grande échelle grâce à une pile GenAI facile à utiliser et à faible coût."
+ },
+ "spark": {
+ "description": "Le modèle Spark de iFlytek offre de puissantes capacités IA dans plusieurs domaines et langues, utilisant des technologies avancées de traitement du langage naturel pour construire des applications innovantes adaptées à divers scénarios verticaux tels que le matériel intelligent, la santé intelligente et la finance intelligente."
+ },
+ "stepfun": {
+ "description": "Le modèle StepFun est doté de capacités multimodales et de raisonnement complexe de premier plan dans l'industrie, prenant en charge la compréhension de textes très longs et des fonctionnalités puissantes de moteur de recherche autonome."
+ },
+ "taichu": {
+ "description": "L'Institut d'automatisation de l'Académie chinoise des sciences et l'Institut de recherche en intelligence artificielle de Wuhan ont lancé une nouvelle génération de grands modèles multimodaux, prenant en charge des tâches de questions-réponses complètes, de création de texte, de génération d'images, de compréhension 3D, d'analyse de signaux, avec des capacités cognitives, de compréhension et de création renforcées, offrant une toute nouvelle expérience interactive."
+ },
+ "togetherai": {
+ "description": "Together AI s'engage à réaliser des performances de pointe grâce à des modèles IA innovants, offrant une large capacité de personnalisation, y compris un support d'évolutivité rapide et un processus de déploiement intuitif, répondant à divers besoins d'entreprise."
+ },
+ "upstage": {
+ "description": "Upstage se concentre sur le développement de modèles IA pour divers besoins commerciaux, y compris Solar LLM et Document AI, visant à réaliser une intelligence générale artificielle (AGI) pour le travail. Créez des agents de dialogue simples via l'API Chat, et prenez en charge les appels de fonction, la traduction, l'intégration et les applications spécifiques à un domaine."
+ },
+ "zeroone": {
+ "description": "01.AI se concentre sur les technologies d'intelligence artificielle de l'ère IA 2.0, promouvant activement l'innovation et l'application de \"l'homme + l'intelligence artificielle\", utilisant des modèles puissants et des technologies IA avancées pour améliorer la productivité humaine et réaliser l'autonomisation technologique."
+ },
+ "zhipu": {
+ "description": "Zhipu AI propose une plateforme ouverte de modèles multimodaux et linguistiques, prenant en charge une large gamme de scénarios d'application IA, y compris le traitement de texte, la compréhension d'images et l'assistance à la programmation."
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/ragEval.json b/DigitalHumanWeb/locales/fr-FR/ragEval.json
new file mode 100644
index 0000000..21c5ba8
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Créer",
+ "description": {
+ "placeholder": "Description du jeu de données (optionnel)"
+ },
+ "name": {
+ "placeholder": "Nom du jeu de données",
+ "required": "Veuillez remplir le nom du jeu de données"
+ },
+ "title": "Ajouter un jeu de données"
+ },
+ "dataset": {
+ "addNewButton": "Créer un jeu de données",
+ "emptyGuide": "Le jeu de données actuel est vide, veuillez en créer un.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Importer des données"
+ },
+ "columns": {
+ "actions": "Actions",
+ "ideal": {
+ "title": "Réponse idéale"
+ },
+ "question": {
+ "title": "Question"
+ },
+ "referenceFiles": {
+ "title": "Fichiers de référence"
+ }
+ },
+ "notSelected": "Veuillez sélectionner un jeu de données à gauche",
+ "title": "Détails du jeu de données"
+ },
+ "title": "Jeu de données"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Créer",
+ "datasetId": {
+ "placeholder": "Veuillez sélectionner votre jeu de données d'évaluation",
+ "required": "Veuillez sélectionner un jeu de données d'évaluation"
+ },
+ "description": {
+ "placeholder": "Description de la tâche d'évaluation (optionnel)"
+ },
+ "name": {
+ "placeholder": "Nom de la tâche d'évaluation",
+ "required": "Veuillez remplir le nom de la tâche d'évaluation"
+ },
+ "title": "Ajouter une tâche d'évaluation"
+ },
+ "addNewButton": "Créer une évaluation",
+ "emptyGuide": "La tâche d'évaluation actuelle est vide, commencez à créer une évaluation.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Vérifier l'état",
+ "confirmDelete": "Voulez-vous supprimer cette évaluation ?",
+ "confirmRun": "Voulez-vous commencer l'exécution ? L'exécution sera effectuée en arrière-plan de manière asynchrone, fermer la page n'affectera pas l'exécution de la tâche asynchrone.",
+ "downloadRecords": "Télécharger l'évaluation",
+ "retry": "Réessayer",
+ "run": "Exécuter",
+ "title": "Actions"
+ },
+ "datasetId": {
+ "title": "Jeu de données"
+ },
+ "name": {
+ "title": "Nom de la tâche d'évaluation"
+ },
+ "records": {
+ "title": "Nombre d'enregistrements d'évaluation"
+ },
+ "referenceFiles": {
+ "title": "Fichiers de référence"
+ },
+ "status": {
+ "error": "Erreur d'exécution",
+ "pending": "En attente d'exécution",
+ "processing": "En cours d'exécution",
+ "success": "Exécution réussie",
+ "title": "État"
+ }
+ },
+ "title": "Liste des tâches d'évaluation"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/setting.json b/DigitalHumanWeb/locales/fr-FR/setting.json
new file mode 100644
index 0000000..3d73f22
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "À propos"
+ },
+ "agentTab": {
+ "chat": "Préférences de discussion",
+ "meta": "Informations de l'agent",
+ "modal": "Paramètres du modèle",
+ "plugin": "Paramètres du plugin",
+ "prompt": "Configuration du rôle",
+ "tts": "Service vocal"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "En choisissant d'envoyer des données de télémétrie, vous pouvez nous aider à améliorer l'expérience utilisateur globale de {{appName}}.",
+ "title": "Envoyer des données d'utilisation anonymes"
+ },
+ "title": "Analytique"
+ },
+ "danger": {
+ "clear": {
+ "action": "Effacer immédiatement",
+ "confirm": "Confirmer la suppression de toutes les données de chat ?",
+ "desc": "Cela supprimera toutes les données de session, y compris les agents, les fichiers, les messages, les plugins, etc.",
+ "success": "Tous les messages de session ont été effacés",
+ "title": "Effacer tous les messages de session"
+ },
+ "reset": {
+ "action": "Réinitialiser immédiatement",
+ "confirm": "Confirmer la réinitialisation de tous les paramètres ?",
+ "currentVersion": "Version actuelle",
+ "desc": "Réinitialiser tous les paramètres aux valeurs par défaut",
+ "success": "Toutes les configurations ont été réinitialisées avec succès",
+ "title": "Réinitialiser tous les paramètres"
+ }
+ },
+ "header": {
+ "desc": "Préférences et paramètres du modèle.",
+ "global": "Paramètres globaux",
+ "session": "Paramètres de session",
+ "sessionDesc": "Paramètres de personnage et préférences de session.",
+ "sessionWithName": "Paramètres de session · {{name}}",
+ "title": "Paramètres"
+ },
+ "llm": {
+ "aesGcm": "Votre clé, votre adresse de proxy, etc. seront cryptées à l'aide de l'algorithme de chiffrement <1>AES-GCM1>",
+ "apiKey": {
+ "desc": "Veuillez saisir votre clé API {{name}}",
+ "placeholder": "Clé API {{name}}",
+ "title": "Clé API"
+ },
+ "checker": {
+ "button": "Vérifier",
+ "desc": "Vérifie si la clé API et l'adresse du proxy sont correctement renseignées",
+ "pass": "Vérification réussie",
+ "title": "Vérification de la connectivité"
+ },
+ "customModelCards": {
+ "addNew": "Créer et ajouter le modèle {{id}}",
+ "config": "Configurer le modèle",
+ "confirmDelete": "Vous êtes sur le point de supprimer ce modèle personnalisé. Cette action est irréversible, veuillez procéder avec prudence.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Champ réellement demandé dans Azure OpenAI",
+ "placeholder": "Veuillez saisir le nom du déploiement du modèle dans Azure",
+ "title": "Nom du déploiement du modèle"
+ },
+ "displayName": {
+ "placeholder": "Veuillez saisir le nom d'affichage du modèle, par exemple ChatGPT, GPT-4, etc.",
+ "title": "Nom d'affichage du modèle"
+ },
+ "files": {
+ "extra": "La mise en œuvre actuelle du téléchargement de fichiers n'est qu'une solution de contournement, réservée à un usage personnel. Veuillez attendre la mise en œuvre complète de la capacité de téléchargement de fichiers.",
+ "title": "Prise en charge du téléchargement de fichiers"
+ },
+ "functionCall": {
+ "extra": "Cette configuration n'activera que la capacité d'appel de fonctions dans l'application. La prise en charge des appels de fonctions dépend entièrement du modèle lui-même, veuillez tester la disponibilité des capacités d'appel de fonctions de ce modèle.",
+ "title": "Prise en charge de l'appel de fonctions"
+ },
+ "id": {
+ "extra": "Sera affiché comme libellé du modèle",
+ "placeholder": "Veuillez saisir l'identifiant du modèle, par exemple gpt-4-turbo-preview ou claude-2.1",
+ "title": "ID du modèle"
+ },
+ "modalTitle": "Configuration du modèle personnalisé",
+ "tokens": {
+ "title": "Nombre maximal de jetons",
+ "unlimited": "illimité"
+ },
+ "vision": {
+ "extra": "Cette configuration n'activera que la configuration de téléchargement d'images dans l'application. La prise en charge de la reconnaissance dépend entièrement du modèle lui-même, veuillez tester la disponibilité des capacités de reconnaissance visuelle de ce modèle.",
+ "title": "Prise en charge de la reconnaissance visuelle"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "Le mode de requête client lancera directement une session à partir du navigateur, améliorant ainsi la vitesse de réponse",
+ "title": "Utiliser le mode de requête client"
+ },
+ "fetcher": {
+ "fetch": "Obtenir la liste des modèles",
+ "fetching": "Récupération de la liste des modèles en cours...",
+ "latestTime": "Dernière mise à jour : {{time}}",
+ "noLatestTime": "Aucune mise à jour disponible"
+ },
+ "helpDoc": "Guide de configuration",
+ "modelList": {
+ "desc": "Sélectionnez les modèles à afficher dans la session. Les modèles sélectionnés seront affichés dans la liste des modèles.",
+ "placeholder": "Veuillez sélectionner un modèle dans la liste",
+ "title": "Liste des modèles",
+ "total": "{{count}} modèles disponibles au total"
+ },
+ "proxyUrl": {
+ "desc": "Doit inclure http(s):// en plus de l'adresse par défaut",
+ "title": "Adresse du proxy de l'API"
+ },
+ "waitingForMore": "Plus de modèles sont en cours de <1>planification pour être ajoutés</1>, restez à l'écoute"
+ },
+ "plugin": {
+ "addTooltip": "Ajouter un plugin personnalisé",
+ "clearDeprecated": "Effacer les plugins obsolètes",
+ "empty": "Aucun plugin installé pour le moment, veuillez visiter <1>la boutique de plugins</1> pour explorer",
+ "installStatus": {
+ "deprecated": "Désinstallé"
+ },
+ "settings": {
+ "hint": "Veuillez remplir les configurations suivantes en fonction de la description",
+ "title": "Configuration du plugin {{id}}",
+ "tooltip": "Configuration du plugin"
+ },
+ "store": "Boutique de plugins"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "backgroundColor": {
+ "title": "Couleur de fond"
+ },
+ "description": {
+ "placeholder": "Veuillez saisir la description de l'agent",
+ "title": "Description de l'agent"
+ },
+ "name": {
+ "placeholder": "Veuillez saisir le nom de l'agent",
+ "title": "Nom"
+ },
+ "prompt": {
+ "placeholder": "Veuillez saisir le mot de prompt du rôle",
+ "title": "Paramètre du rôle"
+ },
+ "tag": {
+ "placeholder": "Veuillez saisir l'étiquette",
+ "title": "Étiquette"
+ },
+ "title": "Informations sur l'agent"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Une fois que le nombre de messages atteint cette valeur, un sujet sera automatiquement créé",
+ "title": "Seuil de création automatique de sujet"
+ },
+ "chatStyleType": {
+ "title": "Style de la fenêtre de chat",
+ "type": {
+ "chat": "Mode conversation",
+ "docs": "Mode document"
+ }
+ },
+ "compressThreshold": {
+ "desc": "Lorsque la longueur des messages non compressés dépasse cette valeur, une compression sera effectuée",
+ "title": "Seuil de compression de la longueur des messages"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Activer la création automatique de sujets pendant la conversation, uniquement valable pour les sujets temporaires",
+ "title": "Activer la création automatique de sujets"
+ },
+ "enableCompressThreshold": {
+ "title": "Activer le seuil de compression de la longueur des messages"
+ },
+ "enableHistoryCount": {
+ "alias": "Illimité",
+ "limited": "Inclure uniquement {{number}} messages de conversation",
+ "setlimited": "Définir le nombre de messages d'historique",
+ "title": "Limite du nombre de messages historiques",
+ "unlimited": "Aucune limite sur le nombre de messages historiques"
+ },
+ "historyCount": {
+ "desc": "Nombre de messages historiques à inclure dans chaque requête",
+ "title": "Nombre de messages historiques inclus"
+ },
+ "inputTemplate": {
+ "desc": "Le dernier message de l'utilisateur sera rempli dans ce modèle",
+ "placeholder": "Le modèle de prétraitement {{text}} sera remplacé par les informations d'entrée en temps réel",
+ "title": "Modèle de prétraitement de l'entrée utilisateur"
+ },
+ "title": "Paramètres de chat"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Activer la limite de tokens par réponse"
+ },
+ "frequencyPenalty": {
+ "desc": "Plus la valeur est élevée, plus il est probable de réduire les mots répétés",
+ "title": "Pénalité de fréquence"
+ },
+ "maxTokens": {
+ "desc": "Nombre maximal de tokens utilisés par interaction",
+ "title": "Limite de tokens par réponse"
+ },
+ "model": {
+ "desc": "Modèle {{provider}}",
+ "title": "Modèle"
+ },
+ "presencePenalty": {
+ "desc": "Plus la valeur est élevée, plus il est probable d'explorer de nouveaux sujets",
+ "title": "Pénalité de présence"
+ },
+ "temperature": {
+ "desc": "Plus la valeur est élevée, plus la réponse est aléatoire",
+ "title": "Aléatoire",
+ "titleWithValue": "Aléatoire {{value}}"
+ },
+ "title": "Paramètres du modèle",
+ "topP": {
+ "desc": "Similaire à l'aléatoire, mais ne doit pas être modifié en même temps que l'aléatoire",
+ "title": "Échantillonnage topP"
+ }
+ },
+ "settingPlugin": {
+ "title": "Liste des plugins"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "L'administrateur a activé l'accès chiffré",
+ "placeholder": "Veuillez entrer le mot de passe d'accès",
+ "title": "Mot de passe d'accès"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Connecté",
+ "title": "Informations sur le compte"
+ },
+ "signin": {
+ "action": "Se connecter",
+ "desc": "Connectez-vous avec SSO pour débloquer l'application",
+ "title": "Se connecter"
+ },
+ "signout": {
+ "action": "Se déconnecter",
+ "confirm": "Confirmez-vous la déconnexion ?",
+ "success": "Déconnexion réussie"
+ }
+ },
+ "title": "Paramètres du système"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "Modèle de reconnaissance vocale OpenAI",
+ "title": "OpenAI",
+ "ttsModel": "Modèle de synthèse vocale OpenAI"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Si désactivé, seules les voix de la langue actuelle seront affichées",
+ "title": "Afficher toutes les voix locales"
+ },
+ "stt": "Paramètres de reconnaissance vocale",
+ "sttAutoStop": {
+ "desc": "Si désactivé, la reconnaissance vocale ne s'arrêtera pas automatiquement et devra être arrêtée manuellement en cliquant sur le bouton d'arrêt",
+ "title": "Arrêt automatique de la reconnaissance vocale"
+ },
+ "sttLocale": {
+ "desc": "Langue de l'entrée vocale, cette option peut améliorer la précision de la reconnaissance vocale",
+ "title": "Langue de reconnaissance vocale"
+ },
+ "sttService": {
+ "desc": "Le service de reconnaissance vocale, où 'browser' est le service natif de reconnaissance vocale du navigateur",
+ "title": "Service de reconnaissance vocale"
+ },
+ "title": "Services vocaux",
+ "tts": "Paramètres de synthèse vocale",
+ "ttsService": {
+ "desc": "Si vous utilisez le service de synthèse vocale OpenAI, assurez-vous que le service de modèle OpenAI est activé",
+ "title": "Service de synthèse vocale"
+ },
+ "voice": {
+ "desc": "Choisissez une voix pour l'agent actuel, les services TTS prennent en charge des voix différentes",
+ "preview": "Prévisualisation de la voix",
+ "title": "Voix de synthèse vocale"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "fontSize": {
+ "desc": "Taille de la police du contenu du chat",
+ "marks": {
+ "normal": "Normal"
+ },
+ "title": "Taille de la police"
+ },
+ "lang": {
+ "autoMode": "Suivre le système",
+ "title": "Langue"
+ },
+ "neutralColor": {
+ "desc": "Personnalisation des nuances de gris pour différentes tendances de couleur",
+ "title": "Couleur neutre"
+ },
+ "primaryColor": {
+ "desc": "Couleur de thème personnalisée",
+ "title": "Couleur principale"
+ },
+ "themeMode": {
+ "auto": "Automatique",
+ "dark": "Sombre",
+ "light": "Clair",
+ "title": "Thème"
+ },
+ "title": "Paramètres du thème"
+ },
+ "submitAgentModal": {
+ "button": "Soumettre l'agent",
+ "identifier": "Identifiant de l'agent",
+ "metaMiss": "Veuillez compléter les informations de l'agent avant de soumettre. Elles doivent inclure le nom, la description et les balises.",
+ "placeholder": "Veuillez entrer l'identifiant de l'agent, qui doit être unique, par exemple développement-web",
+ "tooltips": "Partager sur le marché des agents"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Ajoutez un nom pour l'identifier",
+ "placeholder": "Entrez le nom de l'appareil",
+ "title": "Nom de l'appareil"
+ },
+ "title": "Informations sur l'appareil",
+ "unknownBrowser": "Navigateur inconnu",
+ "unknownOS": "Système d'exploitation inconnu"
+ },
+ "warning": {
+ "tip": "Après une longue période de test communautaire, la synchronisation WebRTC peut ne pas répondre de manière stable aux besoins généraux de synchronisation des données. Veuillez <1>déployer votre propre serveur de signalisation</1> avant utilisation."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC utilisera ce nom pour créer un canal de synchronisation. Assurez-vous que le nom du canal est unique",
+ "placeholder": "Entrez le nom du canal de synchronisation",
+ "shuffle": "Générer aléatoirement",
+ "title": "Nom du canal de synchronisation"
+ },
+ "channelPassword": {
+ "desc": "Ajoutez un mot de passe pour assurer la confidentialité du canal. Seuls les appareils avec le bon mot de passe pourront rejoindre le canal",
+ "placeholder": "Entrez le mot de passe du canal de synchronisation",
+ "title": "Mot de passe du canal de synchronisation"
+ },
+ "desc": "Communication de données en temps réel et en pair-à-pair. Les appareils doivent être en ligne simultanément pour se synchroniser",
+ "enabled": {
+ "invalid": "Veuillez saisir l'adresse du serveur de signalisation et le nom du canal de synchronisation avant d'activer.",
+ "title": "Activer la synchronisation"
+ },
+ "signaling": {
+ "desc": "WebRTC utilisera cette adresse pour la synchronisation",
+ "placeholder": "Veuillez entrer l'adresse du serveur de signalisation",
+ "title": "Serveur de signalisation"
+ },
+ "title": "Synchronisation WebRTC"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Modèle de génération de métadonnées d'assistant",
+ "modelDesc": "Modèle spécifié pour générer le nom, la description, l'avatar et les balises de l'assistant",
+ "title": "Génération automatique des informations de l'assistant"
+ },
+ "queryRewrite": {
+ "label": "Modèle de reformulation des questions",
+ "modelDesc": "Modèle utilisé pour optimiser les questions des utilisateurs",
+ "title": "Base de connaissances"
+ },
+ "title": "Agent système",
+ "topic": {
+ "label": "Modèle de nommage des sujets",
+ "modelDesc": "Modèle spécifié pour le renommage automatique des sujets",
+ "title": "Renommage automatique des sujets"
+ },
+ "translation": {
+ "label": "Modèle de traduction",
+ "modelDesc": "Modèle spécifié pour la traduction",
+ "title": "Paramètres de l'agent de traduction"
+ }
+ },
+ "tab": {
+ "about": "À propos",
+ "agent": "Agent par défaut",
+ "common": "Paramètres généraux",
+ "experiment": "Expérience",
+ "llm": "Modèle de langue",
+ "sync": "Synchronisation cloud",
+ "system-agent": "Agent système",
+ "tts": "Service vocal"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Intégré"
+ },
+ "disabled": "Ce modèle ne prend pas en charge les appels de fonction et ne peut pas utiliser de plugins",
+ "plugins": {
+ "enabled": "Activé {{num}}",
+ "groupName": "Plugins",
+ "noEnabled": "Aucun plugin activé pour le moment",
+ "store": "Boutique de plugins"
+ },
+ "title": "Outils supplémentaires"
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/tool.json b/DigitalHumanWeb/locales/fr-FR/tool.json
new file mode 100644
index 0000000..ed373d0
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Auto-générer",
+ "downloading": "Les liens d'image générés par DallE3 ne sont valides que pendant 1 heure. Le téléchargement de l'image est en cours...",
+ "generate": "Générer",
+ "generating": "En cours de génération...",
+ "images": "Images :",
+ "prompt": "Mot de rappel"
+ }
+}
diff --git a/DigitalHumanWeb/locales/fr-FR/welcome.json b/DigitalHumanWeb/locales/fr-FR/welcome.json
new file mode 100644
index 0000000..ebc3cff
--- /dev/null
+++ b/DigitalHumanWeb/locales/fr-FR/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Importer la configuration",
+ "market": "Parcourir le marché",
+ "start": "Démarrer maintenant"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Remplacer",
+ "title": "Nouvelles recommandations d'assistants :"
+ },
+ "defaultMessage": "Je suis votre assistant intelligent personnel {{appName}}. Que puis-je faire pour vous maintenant ?\nSi vous avez besoin d'un assistant plus professionnel ou personnalisé, vous pouvez cliquer sur `+` pour créer un assistant sur mesure.",
+ "defaultMessageWithoutCreate": "Je suis votre assistant intelligent personnel {{appName}}. Que puis-je faire pour vous maintenant ?",
+ "qa": {
+ "q01": "Qu'est-ce que LobeHub ?",
+ "q02": "Qu'est-ce que {{appName}} ?",
+ "q03": "{{appName}} a-t-il un support communautaire ?",
+ "q04": "Quelles fonctionnalités {{appName}} prend-il en charge ?",
+ "q05": "Comment déployer et utiliser {{appName}} ?",
+ "q06": "Quel est le prix de {{appName}} ?",
+ "q07": "{{appName}} est-il gratuit ?",
+ "q08": "Y a-t-il une version cloud ?",
+ "q09": "Prend-il en charge les modèles de langue locaux ?",
+ "q10": "Prend-il en charge la reconnaissance et la génération d'images ?",
+ "q11": "Prend-il en charge la synthèse vocale et la reconnaissance vocale ?",
+ "q12": "Prend-il en charge un système de plugins ?",
+ "q13": "Y a-t-il un marché pour obtenir des GPTs ?",
+ "q14": "Prend-il en charge plusieurs fournisseurs de services d'IA ?",
+ "q15": "Que dois-je faire si je rencontre des problèmes lors de l'utilisation ?"
+ },
+ "questions": {
+ "moreBtn": "En savoir plus",
+ "title": "Questions fréquentes :"
+ },
+ "welcome": {
+ "afternoon": "Bon après-midi",
+ "morning": "Bonjour",
+ "night": "Bonsoir",
+ "noon": "Bonjour"
+ }
+ },
+ "header": "Bienvenue",
+ "pickAgent": "Ou choisissez parmi les modèles d'agent suivants",
+ "skip": "Passer",
+ "slogan": {
+ "desc1": "Déployez un cluster cérébral, suscitez des étincelles de réflexion. Votre agent intelligent est toujours là.",
+ "desc2": "Créez votre premier agent, commençons maintenant~",
+ "title": "Offrez-vous un cerveau plus intelligent"
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/auth.json b/DigitalHumanWeb/locales/it-IT/auth.json
new file mode 100644
index 0000000..da60291
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Accedi",
+ "loginOrSignup": "Accedi / Registrati",
+ "profile": "Profilo",
+ "security": "Sicurezza",
+ "signout": "Esci",
+ "signup": "Registrati"
+}
diff --git a/DigitalHumanWeb/locales/it-IT/chat.json b/DigitalHumanWeb/locales/it-IT/chat.json
new file mode 100644
index 0000000..8c166b8
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Modelli"
+ },
+ "agentDefaultMessage": "Ciao, sono **{{name}}**, puoi iniziare subito a parlare con me oppure andare su [Impostazioni assistente]({{url}}) per completare le mie informazioni.",
+ "agentDefaultMessageWithSystemRole": "Ciao, sono **{{name}}**, {{systemRole}}, iniziamo a chattare!",
+ "agentDefaultMessageWithoutEdit": "Ciao, sono **{{name}}**. Cominciamo a chiacchierare!",
+ "agents": "Assistente",
+ "artifact": {
+ "generating": "Generazione in corso",
+ "thinking": "In fase di riflessione",
+ "thought": "Processo di pensiero",
+ "unknownTitle": "Opera non nominata"
+ },
+ "backToBottom": "Torna in fondo",
+ "chatList": {
+ "longMessageDetail": "Visualizza dettagli"
+ },
+ "clearCurrentMessages": "Cancella messaggi attuali",
+ "confirmClearCurrentMessages": "Stai per cancellare i messaggi attuali, questa operazione non potrà essere annullata. Confermi?",
+ "confirmRemoveSessionItemAlert": "Stai per rimuovere questo assistente, l'operazione non potrà essere annullata. Confermi?",
+ "confirmRemoveSessionSuccess": "Session eliminata con successo",
+ "defaultAgent": "Assistente predefinito",
+ "defaultList": "Lista predefinita",
+ "defaultSession": "Sessione predefinita",
+ "duplicateSession": {
+ "loading": "In corso di duplicazione...",
+ "success": "Duplicazione riuscita",
+ "title": "{{title}} Copia"
+ },
+ "duplicateTitle": "{{title}} Copia",
+ "emptyAgent": "Nessun assistente disponibile",
+ "historyRange": "Intervallo cronologico",
+ "inbox": {
+ "desc": "Attiva il cluster cerebrale, accendi la scintilla del pensiero. Il tuo assistente intelligente, qui per comunicare con te su tutto.",
+ "title": "Chiacchierata casuale"
+ },
+ "input": {
+ "addAi": "Aggiungi un messaggio AI",
+ "addUser": "Aggiungi un messaggio utente",
+ "more": "Ulteriori",
+ "send": "Invia",
+ "sendWithCmdEnter": "Invia premendo {{meta}} + Invio",
+ "sendWithEnter": "Invia premendo Invio",
+ "stop": "Ferma",
+ "warp": "A capo"
+ },
+ "knowledgeBase": {
+ "all": "Tutti i contenuti",
+ "allFiles": "Tutti i file",
+ "allKnowledgeBases": "Tutte le basi di conoscenza",
+ "disabled": "L'attuale modalità di distribuzione non supporta le conversazioni con la base di conoscenza. Se desideri utilizzarla, passa alla distribuzione del database server o utilizza il servizio {{cloud}}.",
+ "library": {
+ "action": {
+ "add": "Aggiungi",
+ "detail": "Dettagli",
+ "remove": "Rimuovi"
+ },
+ "title": "File/Basi di conoscenza"
+ },
+ "relativeFilesOrKnowledgeBases": "File/Basi di conoscenza correlate",
+ "title": "Base di conoscenza",
+ "uploadGuide": "I file caricati possono essere visualizzati nella 'Base di conoscenza'.",
+ "viewMore": "Visualizza di più"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Elimina e rigenera",
+ "regenerate": "Rigenera"
+ },
+ "newAgent": "Nuovo assistente",
+ "pin": "Fissa in alto",
+ "pinOff": "Annulla fissaggio in alto",
+ "rag": {
+ "referenceChunks": "Citazioni di riferimento",
+ "userQuery": {
+ "actions": {
+ "delete": "Elimina la Query riscritta",
+ "regenerate": "Rigenera la Query"
+ }
+ }
+ },
+ "regenerate": "Rigenera",
+ "roleAndArchive": "Ruolo e archivio",
+ "searchAgentPlaceholder": "Assistente di ricerca...",
+ "sendPlaceholder": "Inserisci il testo della chat...",
+ "sessionGroup": {
+ "config": "Gestione gruppi",
+ "confirmRemoveGroupAlert": "Stai per rimuovere questo gruppo. Dopo la rimozione, gli assistenti di questo gruppo verranno spostati nella lista predefinita. Confermi l'operazione?",
+ "createAgentSuccess": "Assistente creato con successo",
+ "createGroup": "Aggiungi nuovo gruppo",
+ "createSuccess": "Creazione riuscita",
+ "creatingAgent": "Creazione dell'assistente in corso...",
+ "inputPlaceholder": "Inserisci il nome del gruppo...",
+ "moveGroup": "Sposta nel gruppo",
+ "newGroup": "Nuovo gruppo",
+ "rename": "Rinomina gruppo",
+ "renameSuccess": "Rinominazione riuscita",
+ "sortSuccess": "Riordinamento riuscito",
+ "sorting": "Aggiornamento dell'ordinamento del gruppo in corso...",
+ "tooLong": "Il nome del gruppo deve essere lungo 1-20 caratteri"
+ },
+ "shareModal": {
+ "download": "Scarica screenshot",
+ "imageType": "Tipo di immagine",
+ "screenshot": "Screenshot",
+ "settings": "Impostazioni di esportazione",
+ "shareToShareGPT": "Genera link di condivisione ShareGPT",
+ "withBackground": "Con immagine di sfondo",
+ "withFooter": "Con piè di pagina",
+ "withPluginInfo": "Con informazioni sul plugin",
+ "withSystemRole": "Con impostazione del ruolo dell'assistente"
+ },
+ "stt": {
+ "action": "Input vocale",
+ "loading": "Riconoscimento in corso...",
+ "prettifying": "Miglioramento in corso..."
+ },
+ "temp": "Temporaneo",
+ "tokenDetails": {
+ "chats": "Chat",
+ "rest": "Rimanenti",
+ "systemRole": "Ruolo di sistema",
+ "title": "Dettagli del Token",
+ "tools": "Strumenti",
+ "total": "Totale",
+ "used": "Utilizzati"
+ },
+ "tokenTag": {
+ "overload": "Superamento limite",
+ "remained": "Rimasti",
+ "used": "Utilizzati"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Rinomina automaticamente",
+ "duplicate": "Crea copia",
+ "export": "Esporta argomento"
+ },
+ "checkOpenNewTopic": "Abilitare un nuovo argomento?",
+ "checkSaveCurrentMessages": "Vuoi salvare la conversazione attuale come argomento?",
+ "confirmRemoveAll": "Stai per rimuovere tutti gli argomenti, questa operazione non potrà essere annullata. Procedere con cautela.",
+ "confirmRemoveTopic": "Stai per rimuovere questo argomento, l'operazione non potrà essere annullata. Procedere con cautela.",
+ "confirmRemoveUnstarred": "Stai per rimuovere gli argomenti non contrassegnati, questa operazione non potrà essere annullata. Procedere con cautela.",
+ "defaultTitle": "Argomento predefinito",
+ "duplicateLoading": "Duplicazione dell'argomento in corso...",
+ "duplicateSuccess": "Argomento duplicato con successo",
+ "guide": {
+ "desc": "Fare clic sul pulsante a sinistra per salvare l'attuale sessione come argomento storico e avviare una nuova sessione",
+ "title": "Elenco argomenti"
+ },
+ "openNewTopic": "Apri nuovo argomento",
+ "removeAll": "Rimuovi tutti gli argomenti",
+ "removeUnstarred": "Rimuovi argomenti non contrassegnati",
+ "saveCurrentMessages": "Salva la conversazione attuale come argomento",
+ "searchPlaceholder": "Cerca argomenti...",
+ "title": "Elenco argomenti"
+ },
+ "translate": {
+ "action": "Traduci",
+ "clear": "Cancella traduzione"
+ },
+ "tts": {
+ "action": "Lettura vocale",
+ "clear": "Cancella lettura vocale"
+ },
+ "updateAgent": "Aggiorna informazioni assistente",
+ "upload": {
+ "action": {
+ "fileUpload": "Carica file",
+ "folderUpload": "Carica cartella",
+ "imageDisabled": "Il modello attuale non supporta il riconoscimento visivo, si prega di cambiare modello per utilizzare questa funzione",
+ "imageUpload": "Carica immagine",
+ "tooltip": "Carica"
+ },
+ "clientMode": {
+ "actionFiletip": "Carica file",
+ "actionTooltip": "Carica",
+ "disabled": "Il modello attuale non supporta il riconoscimento visivo e l'analisi dei file, si prega di cambiare modello per utilizzare questa funzione"
+ },
+ "preview": {
+ "prepareTasks": "Preparazione dei blocchi...",
+ "status": {
+ "pending": "Preparazione al caricamento...",
+ "processing": "Elaborazione del file..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/clerk.json b/DigitalHumanWeb/locales/it-IT/clerk.json
new file mode 100644
index 0000000..1cb454f
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Indietro",
+ "badge__default": "Predefinito",
+ "badge__otherImpersonatorDevice": "Altro dispositivo impersonato",
+ "badge__primary": "Primario",
+ "badge__requiresAction": "Richiede azione",
+ "badge__thisDevice": "Questo dispositivo",
+ "badge__unverified": "Non verificato",
+ "badge__userDevice": "Dispositivo dell'utente",
+ "badge__you": "Tu",
+ "createOrganization": {
+ "formButtonSubmit": "Crea organizzazione",
+ "invitePage": {
+ "formButtonReset": "Ignora"
+ },
+ "title": "Crea organizzazione"
+ },
+ "dates": {
+ "lastDay": "Ieri alle {{ date | timeString('it-IT') }}",
+ "next6Days": "{{ date | weekday('it-IT','long') }} alle {{ date | timeString('it-IT') }}",
+ "nextDay": "Domani alle {{ date | timeString('it-IT') }}",
+ "numeric": "{{ date | numeric('it-IT') }}",
+ "previous6Days": "Ultimo {{ date | weekday('it-IT','long') }} alle {{ date | timeString('it-IT') }}",
+ "sameDay": "Oggi alle {{ date | timeString('it-IT') }}"
+ },
+ "dividerText": "o",
+ "footerActionLink__useAnotherMethod": "Usa un altro metodo",
+ "footerPageLink__help": "Aiuto",
+ "footerPageLink__privacy": "Privacy",
+ "footerPageLink__terms": "Termini",
+ "formButtonPrimary": "Continua",
+ "formButtonPrimary__verify": "Verifica",
+ "formFieldAction__forgotPassword": "Password dimenticata?",
+ "formFieldError__matchingPasswords": "Le password corrispondono.",
+ "formFieldError__notMatchingPasswords": "Le password non corrispondono.",
+ "formFieldError__verificationLinkExpired": "Il link di verifica è scaduto. Si prega di richiedere un nuovo link.",
+ "formFieldHintText__optional": "Opzionale",
+ "formFieldHintText__slug": "Uno slug è un ID leggibile dall'uomo che deve essere univoco. Spesso viene utilizzato negli URL.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Elimina account",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "esempio@email.com, esempio2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "mia-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Abilita inviti automatici per questo dominio",
+ "formFieldLabel__backupCode": "Codice di backup",
+ "formFieldLabel__confirmDeletion": "Conferma",
+ "formFieldLabel__confirmPassword": "Conferma password",
+ "formFieldLabel__currentPassword": "Password attuale",
+ "formFieldLabel__emailAddress": "Indirizzo email",
+ "formFieldLabel__emailAddress_username": "Indirizzo email o username",
+ "formFieldLabel__emailAddresses": "Indirizzi email",
+ "formFieldLabel__firstName": "Nome",
+ "formFieldLabel__lastName": "Cognome",
+ "formFieldLabel__newPassword": "Nuova password",
+ "formFieldLabel__organizationDomain": "Dominio",
+ "formFieldLabel__organizationDomainDeletePending": "Elimina inviti e suggerimenti in sospeso",
+ "formFieldLabel__organizationDomainEmailAddress": "Indirizzo email di verifica",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Inserisci un indirizzo email sotto questo dominio per ricevere un codice e verificare questo dominio.",
+ "formFieldLabel__organizationName": "Nome",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Nome del passkey",
+ "formFieldLabel__password": "Password",
+ "formFieldLabel__phoneNumber": "Numero di telefono",
+ "formFieldLabel__role": "Ruolo",
+ "formFieldLabel__signOutOfOtherSessions": "Disconnetti da tutti gli altri dispositivi",
+ "formFieldLabel__username": "Username",
+ "impersonationFab": {
+ "action__signOut": "Disconnetti",
+ "title": "Accesso come {{identifier}}"
+ },
+ "locale": "it-IT",
+ "maintenanceMode": "Attualmente siamo in manutenzione, ma non preoccuparti, non dovrebbe richiedere più di qualche minuto.",
+ "membershipRole__admin": "Amministratore",
+ "membershipRole__basicMember": "Membro",
+ "membershipRole__guestMember": "Ospite",
+ "organizationList": {
+ "action__createOrganization": "Crea organizzazione",
+ "action__invitationAccept": "Unisciti",
+ "action__suggestionsAccept": "Richiedi di unirti",
+ "createOrganization": "Crea Organizzazione",
+ "invitationAcceptedLabel": "Unito",
+ "subtitle": "per continuare con {{applicationName}}",
+ "suggestionsAcceptedLabel": "In attesa di approvazione",
+ "title": "Scegli un account",
+ "titleWithoutPersonal": "Scegli un'organizzazione"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Inviti automatici",
+ "badge__automaticSuggestion": "Suggerimenti automatici",
+ "badge__manualInvitation": "Nessuna iscrizione automatica",
+ "badge__unverified": "Non verificato",
+ "createDomainPage": {
+ "subtitle": "Aggiungi il dominio da verificare. Gli utenti con indirizzi email presso questo dominio possono unirsi all'organizzazione automaticamente o richiedere di farlo.",
+ "title": "Aggiungi dominio"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "Gli inviti non possono essere inviati. Ci sono già inviti in sospeso per i seguenti indirizzi email: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Invia inviti",
+ "selectDropdown__role": "Seleziona ruolo",
+ "subtitle": "Inserisci o incolla uno o più indirizzi email, separati da spazi o virgole.",
+ "successMessage": "Inviti inviati con successo",
+ "title": "Invita nuovi membri"
+ },
+ "membersPage": {
+ "action__invite": "Invita",
+ "activeMembersTab": {
+ "menuAction__remove": "Rimuovi membro",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Unito",
+ "tableHeader__role": "Ruolo",
+ "tableHeader__user": "Utente"
+ },
+ "detailsTitle__emptyRow": "Nessun membro da visualizzare",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Invita gli utenti collegando un dominio email con la tua organizzazione. Chiunque si registri con un dominio email corrispondente potrà unirsi all'organizzazione in qualsiasi momento.",
+ "headerTitle": "Inviti automatici",
+ "primaryButton": "Gestisci domini verificati"
+ },
+ "table__emptyRow": "Nessun invito da visualizzare"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Revoca invito",
+ "tableHeader__invited": "Invitato"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Gli utenti che si registrano con un dominio email corrispondente, potranno vedere un suggerimento per richiedere di unirsi alla tua organizzazione.",
+ "headerTitle": "Suggerimenti automatici",
+ "primaryButton": "Gestisci domini verificati"
+ },
+ "menuAction__approve": "Approva",
+ "menuAction__reject": "Rifiuta",
+ "tableHeader__requested": "Accesso richiesto",
+ "table__emptyRow": "Nessuna richiesta da visualizzare"
+ },
+ "start": {
+ "headerTitle__invitations": "Inviti",
+ "headerTitle__members": "Membri",
+ "headerTitle__requests": "Richieste"
+ }
+ },
+ "navbar": {
+ "description": "Gestisci la tua organizzazione.",
+ "general": "Generale",
+ "members": "Membri",
+ "title": "Organizzazione"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Digita \"{{organizationName}}\" di seguito per continuare.",
+ "messageLine1": "Sei sicuro di voler eliminare questa organizzazione?",
+ "messageLine2": "Questa azione è permanente e irreversibile.",
+ "successMessage": "Hai eliminato l'organizzazione.",
+ "title": "Elimina organizzazione"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Digita \"{{organizationName}}\" di seguito per continuare.",
+ "messageLine1": "Sei sicuro di voler lasciare questa organizzazione? Perderai l'accesso a questa organizzazione e alle sue applicazioni.",
+ "messageLine2": "Questa azione è permanente e irreversibile.",
+ "successMessage": "Hai lasciato l'organizzazione.",
+ "title": "Lascia organizzazione"
+ },
+ "title": "Pericolo"
+ },
+ "domainSection": {
+ "menuAction__manage": "Gestisci",
+ "menuAction__remove": "Elimina",
+ "menuAction__verify": "Verifica",
+ "primaryButton": "Aggiungi dominio",
+ "subtitle": "Permetti agli utenti di unirsi all'organizzazione automaticamente o richiedere di farlo basandosi su un dominio email verificato.",
+ "title": "Domini verificati"
+ },
+ "successMessage": "L'organizzazione è stata aggiornata.",
+ "title": "Aggiorna profilo"
+ },
+ "removeDomainPage": {
+ "messageLine1": "Il dominio email {{domain}} verrà rimosso.",
+ "messageLine2": "Gli utenti non potranno più unirsi all'organizzazione automaticamente dopo questa operazione.",
+ "successMessage": "{{domain}} è stato rimosso.",
+ "title": "Rimuovi dominio"
+ },
+ "start": {
+ "headerTitle__general": "Generale",
+ "headerTitle__members": "Membri",
+ "profileSection": {
+ "primaryButton": "Aggiorna profilo",
+ "title": "Profilo organizzazione",
+ "uploadAction__title": "Logo"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "La rimozione di questo dominio influenzerà gli utenti invitati.",
+ "removeDomainActionLabel__remove": "Rimuovi dominio",
+ "removeDomainSubtitle": "Rimuovi questo dominio dai tuoi domini verificati",
+ "removeDomainTitle": "Rimuovi dominio"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Gli utenti sono automaticamente invitati a unirsi all'organizzazione quando si registrano e possono farlo in qualsiasi momento.",
+ "automaticInvitationOption__label": "Inviti automatici",
+ "automaticSuggestionOption__description": "Gli utenti ricevono un suggerimento per richiedere di unirsi, ma devono essere approvati da un amministratore prima di poter farlo.",
+ "automaticSuggestionOption__label": "Suggerimenti automatici",
+ "calloutInfoLabel": "La modifica della modalità di iscrizione influenzerà solo i nuovi utenti.",
+ "calloutInvitationCountLabel": "Inviti in sospeso inviati agli utenti: {{count}}",
+ "calloutSuggestionCountLabel": "Suggerimenti in sospeso inviati agli utenti: {{count}}",
+ "manualInvitationOption__description": "Gli utenti possono essere invitati manualmente all'organizzazione.",
+ "manualInvitationOption__label": "Nessuna iscrizione automatica",
+ "subtitle": "Scegli come gli utenti da questo dominio possono unirsi all'organizzazione."
+ },
+ "start": {
+ "headerTitle__danger": "Pericolo",
+ "headerTitle__enrollment": "Opzioni di iscrizione"
+ },
+ "subtitle": "Il dominio {{domain}} è ora verificato. Prosegui selezionando la modalità di iscrizione.",
+ "title": "Aggiorna {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Inserisci il codice di verifica inviato al tuo indirizzo email",
+ "formTitle": "Codice di verifica",
+ "resendButton": "Non hai ricevuto il codice? Invia di nuovo",
+ "subtitle": "Il dominio {{domainName}} deve essere verificato tramite email.",
+ "subtitleVerificationCodeScreen": "Un codice di verifica è stato inviato a {{emailAddress}}. Inserisci il codice per continuare.",
+ "title": "Verifica dominio"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Crea organizzazione",
+ "action__invitationAccept": "Unisciti",
+ "action__manageOrganization": "Gestisci",
+ "action__suggestionsAccept": "Richiedi di unirti",
+ "notSelected": "Nessuna organizzazione selezionata",
+ "personalWorkspace": "Account personale",
+ "suggestionsAcceptedLabel": "In attesa di approvazione"
+ },
+ "paginationButton__next": "Avanti",
+ "paginationButton__previous": "Indietro",
+ "paginationRowText__displaying": "Visualizzazione",
+ "paginationRowText__of": "di",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Aggiungi account",
+ "action__signOutAll": "Disconnetti da tutti gli account",
+ "subtitle": "Seleziona l'account con cui desideri continuare.",
+ "title": "Scegli un account"
+ },
+ "alternativeMethods": {
+ "actionLink": "Ottieni aiuto",
+ "actionText": "Non ne hai nessuno di questi?",
+ "blockButton__backupCode": "Usa un codice di backup",
+ "blockButton__emailCode": "Invia codice via email a {{identifier}}",
+ "blockButton__emailLink": "Invia link via email a {{identifier}}",
+ "blockButton__passkey": "Accedi con il tuo passkey",
+ "blockButton__password": "Accedi con la tua password",
+ "blockButton__phoneCode": "Invia codice SMS a {{identifier}}",
+ "blockButton__totp": "Usa la tua app di autenticazione",
+ "getHelp": {
+ "blockButton__emailSupport": "Supporto via email",
+ "content": "Se riscontri difficoltà nell'accesso al tuo account, inviaci un'email e lavoreremo con te per ripristinare l'accesso il prima possibile.",
+ "title": "Ottieni aiuto"
+ },
+ "subtitle": "Problemi? Puoi utilizzare uno di questi metodi per accedere.",
+ "title": "Usa un altro metodo"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Il tuo codice di backup è quello che hai ricevuto durante la configurazione dell'autenticazione a due fattori.",
+ "title": "Inserisci un codice di backup"
+ },
+ "emailCode": {
+ "formTitle": "Codice di verifica",
+ "resendButton": "Non hai ricevuto il codice? Rispedisci",
+ "subtitle": "per continuare su {{applicationName}}",
+ "title": "Controlla la tua email"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Torna alla scheda originale per continuare.",
+ "title": "Questo link di verifica è scaduto"
+ },
+ "failed": {
+ "subtitle": "Torna alla scheda originale per continuare.",
+ "title": "Questo link di verifica non è valido"
+ },
+ "formSubtitle": "Utilizza il link di verifica inviato alla tua email",
+ "formTitle": "Link di verifica",
+ "loading": {
+ "subtitle": "Verrai reindirizzato presto",
+ "title": "Accesso in corso..."
+ },
+ "resendButton": "Non hai ricevuto il link? Rispedisci",
+ "subtitle": "per continuare su {{applicationName}}",
+ "title": "Controlla la tua email",
+ "unusedTab": {
+ "title": "Puoi chiudere questa scheda"
+ },
+ "verified": {
+ "subtitle": "Verrai reindirizzato presto",
+ "title": "Accesso effettuato con successo"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Torna alla scheda originale per continuare",
+ "subtitleNewTab": "Torna alla scheda appena aperta per continuare",
+ "titleNewTab": "Accesso effettuato su un'altra scheda"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Codice di reset della password",
+ "resendButton": "Non hai ricevuto il codice? Rispedisci",
+ "subtitle": "per reimpostare la tua password",
+ "subtitle_email": "Inserisci prima il codice inviato al tuo indirizzo email",
+ "subtitle_phone": "Inserisci prima il codice inviato al tuo telefono",
+ "title": "Reimposta la password"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Reimposta la tua password",
+ "label__alternativeMethods": "Oppure accedi con un altro metodo",
+ "title": "Password dimenticata?"
+ },
+ "noAvailableMethods": {
+ "message": "Impossibile procedere con l'accesso. Non è disponibile alcun fattore di autenticazione.",
+ "subtitle": "Si è verificato un errore",
+ "title": "Impossibile accedere"
+ },
+ "passkey": {
+ "subtitle": "L'utilizzo del tuo passkey conferma la tua identità. Il tuo dispositivo potrebbe richiedere l'impronta digitale, il riconoscimento facciale o il blocco dello schermo.",
+ "title": "Usa il tuo passkey"
+ },
+ "password": {
+ "actionLink": "Usa un altro metodo",
+ "subtitle": "Inserisci la password associata al tuo account",
+ "title": "Inserisci la tua password"
+ },
+ "passwordPwned": {
+ "title": "Password compromessa"
+ },
+ "phoneCode": {
+ "formTitle": "Codice di verifica",
+ "resendButton": "Non hai ricevuto il codice? Rispedisci",
+ "subtitle": "per continuare su {{applicationName}}",
+ "title": "Controlla il tuo telefono"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Codice di verifica",
+ "resendButton": "Non hai ricevuto il codice? Rispedisci",
+ "subtitle": "Per continuare, inserisci il codice di verifica inviato al tuo telefono",
+ "title": "Controlla il tuo telefono"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Reimposta password",
+ "requiredMessage": "Per motivi di sicurezza, è necessario reimpostare la tua password.",
+ "successMessage": "La tua password è stata cambiata con successo. Ti stiamo accedendo, attendi un momento.",
+ "title": "Imposta una nuova password"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "Dobbiamo verificare la tua identità prima di reimpostare la tua password."
+ },
+ "start": {
+ "actionLink": "Registrati",
+ "actionLink__use_email": "Usa l'email",
+ "actionLink__use_email_username": "Usa l'email o il nome utente",
+ "actionLink__use_passkey": "Usa il passkey invece",
+ "actionLink__use_phone": "Usa il telefono",
+ "actionLink__use_username": "Usa il nome utente",
+ "actionText": "Non hai un account?",
+ "subtitle": "Bentornato! Effettua l'accesso per continuare",
+ "title": "Accedi a {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "Codice di verifica",
+ "subtitle": "Per continuare, inserisci il codice di verifica generato dalla tua app di autenticazione",
+ "title": "Verifica a due passaggi"
+ }
+ },
+ "signInEnterPasswordTitle": "Inserisci la tua password",
+ "signUp": {
+ "continue": {
+ "actionLink": "Accedi",
+ "actionText": "Hai già un account?",
+ "subtitle": "Completa i dettagli rimanenti per continuare",
+ "title": "Completa i campi mancanti"
+ },
+ "emailCode": {
+ "formSubtitle": "Inserisci il codice di verifica inviato al tuo indirizzo email",
+ "formTitle": "Codice di verifica",
+ "resendButton": "Non hai ricevuto il codice? Rispedisci",
+ "subtitle": "Inserisci il codice di verifica inviato alla tua email",
+ "title": "Verifica la tua email"
+ },
+ "emailLink": {
+ "formSubtitle": "Utilizza il link di verifica inviato al tuo indirizzo email",
+ "formTitle": "Link di verifica",
+ "loading": {
+ "title": "Registrazione in corso..."
+ },
+ "resendButton": "Non hai ricevuto il link? Rispedisci",
+ "subtitle": "per continuare su {{applicationName}}",
+ "title": "Verifica la tua email",
+ "verified": {
+ "title": "Registrazione completata"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Torna alla scheda appena aperta per continuare",
+ "subtitleNewTab": "Torna alla scheda precedente per continuare",
+ "title": "Email verificata con successo"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Inserisci il codice di verifica inviato al tuo numero di telefono",
+ "formTitle": "Codice di verifica",
+ "resendButton": "Non hai ricevuto il codice? Rispedisci",
+ "subtitle": "Inserisci il codice di verifica inviato al tuo telefono",
+ "title": "Verifica il tuo telefono"
+ },
+ "start": {
+ "actionLink": "Accedi",
+ "actionText": "Hai già un account?",
+ "subtitle": "Benvenuto! Completa i dettagli per iniziare",
+ "title": "Crea il tuo account"
+ }
+ },
+ "socialButtonsBlockButton": "Continua con {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "Registrazione non riuscita a causa di validazioni di sicurezza fallite. Si prega di aggiornare la pagina e riprovare o contattare il supporto per ulteriore assistenza.",
+ "captcha_unavailable": "Registrazione non riuscita a causa di validazione bot fallita. Si prega di aggiornare la pagina e riprovare o contattare il supporto per ulteriore assistenza.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Questo indirizzo email è già in uso. Si prega di provare un altro.",
+ "form_identifier_exists__phone_number": "Questo numero di telefono è già in uso. Si prega di provare un altro.",
+ "form_identifier_exists__username": "Questo nome utente è già in uso. Si prega di provare un altro.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "L'indirizzo email deve essere un indirizzo email valido.",
+ "form_param_format_invalid__phone_number": "Il numero di telefono deve essere in un formato internazionale valido.",
+ "form_param_max_length_exceeded__first_name": "Il nome non deve superare i 256 caratteri.",
+ "form_param_max_length_exceeded__last_name": "Il cognome non deve superare i 256 caratteri.",
+ "form_param_max_length_exceeded__name": "Il nome non deve superare i 256 caratteri.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "La tua password non è abbastanza sicura.",
+ "form_password_pwned": "Questa password è stata trovata in una violazione e non può essere utilizzata, si prega di provare un'altra password.",
+ "form_password_pwned__sign_in": "Questa password è stata trovata in una violazione e non può essere utilizzata, si prega di reimpostare la password.",
+ "form_password_size_in_bytes_exceeded": "La tua password ha superato il numero massimo di byte consentito, si prega di accorciarla o rimuovere alcuni caratteri speciali.",
+ "form_password_validation_failed": "Password incorretta.",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "Non puoi eliminare la tua ultima identificazione.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Un passkey è già registrato su questo dispositivo.",
+ "passkey_not_supported": "I passkey non sono supportati su questo dispositivo.",
+ "passkey_pa_not_supported": "La registrazione richiede un autenticatore di piattaforma ma il dispositivo non lo supporta.",
+ "passkey_registration_cancelled": "La registrazione del passkey è stata annullata o è scaduta.",
+ "passkey_retrieval_cancelled": "La verifica del passkey è stata annullata o è scaduta.",
+ "passwordComplexity": {
+ "maximumLength": "meno di {{length}} caratteri",
+ "minimumLength": "{{length}} o più caratteri",
+ "requireLowercase": "una lettera minuscola",
+ "requireNumbers": "un numero",
+ "requireSpecialCharacter": "un carattere speciale",
+ "requireUppercase": "una lettera maiuscola",
+ "sentencePrefix": "La tua password deve contenere"
+ },
+ "phone_number_exists": "Questo numero di telefono è già in uso. Si prega di provare un altro.",
+ "zxcvbn": {
+ "couldBeStronger": "La tua password funziona, ma potrebbe essere più sicura. Prova ad aggiungere più caratteri.",
+ "goodPassword": "La tua password soddisfa tutti i requisiti necessari.",
+ "notEnough": "La tua password non è abbastanza sicura.",
+ "suggestions": {
+ "allUppercase": "Maiuscolizzare alcune lettere, ma non tutte.",
+ "anotherWord": "Aggiungi più parole meno comuni.",
+ "associatedYears": "Evita anni associati a te.",
+ "capitalization": "Maiuscolizzare più della prima lettera.",
+ "dates": "Evita date e anni associati a te.",
+ "l33t": "Evita sostituzioni di lettere prevedibili come '@' per 'a'.",
+ "longerKeyboardPattern": "Usa pattern di tastiera più lunghi e cambia direzione di scrittura più volte.",
+ "noNeed": "Puoi creare password sicure senza simboli, numeri o lettere maiuscole.",
+ "pwned": "Se usi questa password altrove, dovresti cambiarla.",
+ "recentYears": "Evita anni recenti.",
+ "repeated": "Evita parole e caratteri ripetuti.",
+ "reverseWords": "Evita inversioni di parole comuni.",
+ "sequences": "Evita sequenze di caratteri comuni.",
+ "useWords": "Usa più parole, ma evita frasi comuni."
+ },
+ "warnings": {
+ "common": "Questa è una password comunemente usata.",
+ "commonNames": "Nomi e cognomi comuni sono facili da indovinare.",
+ "dates": "Le date sono facili da indovinare.",
+ "extendedRepeat": "Pattern di caratteri ripetuti come \"abcabcabc\" sono facili da indovinare.",
+ "keyPattern": "Pattern di tastiera corti sono facili da indovinare.",
+ "namesByThemselves": "Nomi o cognomi singoli sono facili da indovinare.",
+ "pwned": "La tua password è stata esposta da una violazione di dati su Internet.",
+ "recentYears": "Gli anni recenti sono facili da indovinare.",
+ "sequences": "Sequenze di caratteri comuni come \"abc\" sono facili da indovinare.",
+ "similarToCommon": "Questo è simile a una password comunemente usata.",
+ "simpleRepeat": "Caratteri ripetuti come \"aaa\" sono facili da indovinare.",
+ "straightRow": "File di tasti consecutivi sulla tastiera sono facili da indovinare.",
+ "topHundred": "Questa è una password frequentemente usata.",
+ "topTen": "Questa è una password molto usata.",
+ "userInputs": "Non dovrebbero esserci dati personali o relativi alla pagina.",
+ "wordByItself": "Le singole parole sono facili da indovinare."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Aggiungi account",
+ "action__manageAccount": "Gestisci account",
+ "action__signOut": "Disconnetti",
+ "action__signOutAll": "Disconnetti da tutti gli account"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Copiato!",
+ "actionLabel__copy": "Copia tutto",
+ "actionLabel__download": "Scarica .txt",
+ "actionLabel__print": "Stampa",
+ "infoText1": "I codici di backup saranno abilitati per questo account.",
+ "infoText2": "Mantieni segreti i codici di backup e conservali in modo sicuro. Puoi rigenerare i codici di backup se sospetti che siano stati compromessi.",
+ "subtitle__codelist": "Conservali in modo sicuro e mantienili segreti.",
+ "successMessage": "I codici di backup sono ora abilitati. Puoi utilizzarne uno per accedere al tuo account, nel caso in cui perdessi l'accesso al tuo dispositivo di autenticazione. Ogni codice può essere utilizzato una sola volta.",
+ "successSubtitle": "Puoi utilizzarne uno per accedere al tuo account, nel caso in cui perdessi l'accesso al tuo dispositivo di autenticazione.",
+ "title": "Aggiungi verifica del codice di backup",
+ "title__codelist": "Codici di backup"
+ },
+ "connectedAccountPage": {
+ "formHint": "Seleziona un provider per collegare il tuo account.",
+ "formHint__noAccounts": "Non ci sono provider di account esterni disponibili.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} sarà rimosso da questo account.",
+ "messageLine2": "Non potrai più utilizzare questo account collegato e le relative funzionalità non funzioneranno più.",
+ "successMessage": "{{connectedAccount}} è stato rimosso dal tuo account.",
+ "title": "Rimuovi account collegato"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "Il provider è stato aggiunto al tuo account",
+ "title": "Aggiungi account collegato"
+ },
+ "deletePage": {
+ "actionDescription": "Digita \"Elimina account\" di seguito per continuare.",
+ "confirm": "Elimina account",
+ "messageLine1": "Sei sicuro di voler eliminare il tuo account?",
+ "messageLine2": "Questa azione è permanente e irreversibile.",
+ "title": "Elimina account"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "Verrà inviata un'email contenente un codice di verifica a questo indirizzo email.",
+ "formSubtitle": "Inserisci il codice di verifica inviato a {{identifier}}",
+ "formTitle": "Codice di verifica",
+ "resendButton": "Non hai ricevuto un codice? Richiedi di nuovo",
+ "successMessage": "L'email {{identifier}} è stata aggiunta al tuo account."
+ },
+ "emailLink": {
+ "formHint": "Verrà inviata un'email contenente un link di verifica a questo indirizzo email.",
+ "formSubtitle": "Clicca sul link di verifica nell'email inviata a {{identifier}}",
+ "formTitle": "Link di verifica",
+ "resendButton": "Non hai ricevuto un link? Richiedi di nuovo",
+ "successMessage": "L'email {{identifier}} è stata aggiunta al tuo account."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} sarà rimosso da questo account.",
+ "messageLine2": "Non potrai più accedere utilizzando questo indirizzo email.",
+ "successMessage": "{{emailAddress}} è stato rimosso dal tuo account.",
+ "title": "Rimuovi indirizzo email"
+ },
+ "title": "Aggiungi indirizzo email",
+ "verifyTitle": "Verifica indirizzo email"
+ },
+ "formButtonPrimary__add": "Aggiungi",
+ "formButtonPrimary__continue": "Continua",
+ "formButtonPrimary__finish": "Fine",
+ "formButtonPrimary__remove": "Rimuovi",
+ "formButtonPrimary__save": "Salva",
+ "formButtonReset": "Annulla",
+ "mfaPage": {
+ "formHint": "Seleziona un metodo da aggiungere.",
+ "title": "Aggiungi verifica a due passaggi"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Usa numero esistente",
+ "primaryButton__addPhoneNumber": "Aggiungi numero di telefono",
+ "removeResource": {
+ "messageLine1": "{{identifier}} non riceverà più codici di verifica durante l'accesso.",
+ "messageLine2": "Il tuo account potrebbe non essere più sicuro. Sei sicuro di voler continuare?",
+ "successMessage": "La verifica a due passaggi tramite codice SMS è stata rimossa per {{mfaPhoneCode}}",
+ "title": "Rimuovi verifica a due passaggi"
+ },
+ "subtitle__availablePhoneNumbers": "Seleziona un numero di telefono esistente per registrarti alla verifica a due passaggi tramite codice SMS o aggiungine uno nuovo.",
+ "subtitle__unavailablePhoneNumbers": "Non ci sono numeri di telefono disponibili per registrarsi alla verifica a due passaggi tramite codice SMS, aggiungine uno nuovo.",
+ "successMessage1": "Durante l'accesso, dovrai inserire un codice di verifica inviato a questo numero di telefono come passaggio aggiuntivo.",
+ "successMessage2": "Salva questi codici di backup e conservali in un posto sicuro. Se perdi l'accesso al tuo dispositivo di autenticazione, puoi utilizzare i codici di backup per accedere.",
+ "successTitle": "Verifica tramite codice SMS abilitata",
+ "title": "Aggiungi verifica tramite codice SMS"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "Scansiona invece il codice QR",
+ "buttonUnableToScan__nonPrimary": "Non puoi scansionare il codice QR?",
+ "infoText__ableToScan": "Configura un nuovo metodo di accesso nella tua app di autenticazione e scannerizza il seguente codice QR per collegarlo al tuo account.",
+ "infoText__unableToScan": "Configura un nuovo metodo di accesso nella tua app di autenticazione e inserisci la chiave fornita di seguito.",
+ "inputLabel__unableToScan1": "Assicurati che le password basate sul tempo o monouso siano abilitate, quindi completa il collegamento del tuo account.",
+ "inputLabel__unableToScan2": "In alternativa, se il tuo autenticatore supporta gli URI TOTP, puoi anche copiare l'URI completo."
+ },
+ "removeResource": {
+ "messageLine1": "I codici di verifica da questo autenticatore non saranno più richiesti durante l'accesso.",
+ "messageLine2": "Il tuo account potrebbe non essere più sicuro. Sei sicuro di voler continuare?",
+ "successMessage": "La verifica a due passaggi tramite app di autenticazione è stata rimossa.",
+ "title": "Rimuovi verifica a due passaggi"
+ },
+ "successMessage": "La verifica a due passaggi è ora abilitata. Durante l'accesso, dovrai inserire un codice di verifica da questo autenticatore come passaggio aggiuntivo.",
+ "title": "Aggiungi app di autenticazione",
+ "verifySubtitle": "Inserisci il codice di verifica generato dal tuo autenticatore",
+ "verifyTitle": "Codice di verifica"
+ },
+ "mobileButton__menu": "Menu",
+ "navbar": {
+ "account": "Profilo",
+ "description": "Gestisci le informazioni del tuo account.",
+ "security": "Sicurezza",
+ "title": "Account"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} sarà rimosso da questo account.",
+ "title": "Rimuovi passkey"
+ },
+ "subtitle__rename": "Puoi cambiare il nome del passkey per trovarlo più facilmente.",
+ "title__rename": "Rinomina Passkey"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "Si consiglia di disconnettersi da tutti gli altri dispositivi che potrebbero aver utilizzato la tua vecchia password.",
+ "readonly": "Attualmente la tua password non può essere modificata perché puoi accedere solo tramite la connessione aziendale.",
+ "successMessage__set": "La tua password è stata impostata.",
+ "successMessage__signOutOfOtherSessions": "Tutti gli altri dispositivi sono stati disconnessi.",
+ "successMessage__update": "La tua password è stata aggiornata.",
+ "title__set": "Imposta password",
+ "title__update": "Aggiorna password"
+ },
+ "phoneNumberPage": {
+ "infoText": "Un messaggio di testo contenente un codice di verifica verrà inviato a questo numero di telefono. Potrebbero essere applicate tariffe per messaggi e dati.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} sarà rimosso da questo account.",
+ "messageLine2": "Non potrai più accedere utilizzando questo numero di telefono.",
+ "successMessage": "{{phoneNumber}} è stato rimosso dal tuo account.",
+ "title": "Rimuovi numero di telefono"
+ },
+ "successMessage": "{{identifier}} è stato aggiunto al tuo account.",
+ "title": "Aggiungi numero di telefono",
+ "verifySubtitle": "Inserisci il codice di verifica inviato a {{identifier}}",
+ "verifyTitle": "Verifica numero di telefono"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Dimensione consigliata 1:1, fino a 10MB.",
+ "imageFormDestructiveActionSubtitle": "Rimuovi",
+ "imageFormSubtitle": "Carica",
+ "imageFormTitle": "Immagine del profilo",
+ "readonly": "Le informazioni del tuo profilo sono state fornite dalla connessione aziendale e non possono essere modificate.",
+ "successMessage": "Il tuo profilo è stato aggiornato.",
+ "title": "Aggiorna profilo"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Disconnetti dal dispositivo",
+ "title": "Dispositivi attivi"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Riprova",
+ "actionLabel__reauthorize": "Autorizza ora",
+ "destructiveActionTitle": "Rimuovi",
+ "primaryButton": "Collega account",
+ "subtitle__reauthorize": "Gli ambiti richiesti sono stati aggiornati e potresti riscontrare funzionalità limitate. Si prega di ri-autorizzare questa applicazione per evitare problemi",
+ "title": "Account collegati"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Elimina account",
+ "title": "Elimina account"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "Rimuovi email",
+ "detailsAction__nonPrimary": "Imposta come primaria",
+ "detailsAction__primary": "Completa la verifica",
+ "detailsAction__unverified": "Verifica",
+ "primaryButton": "Aggiungi indirizzo email",
+ "title": "Indirizzi email"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Account aziendali"
+ },
+ "headerTitle__account": "Dettagli del profilo",
+ "headerTitle__security": "Sicurezza",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Rigenera",
+ "headerTitle": "Codici di backup",
+ "subtitle__regenerate": "Ottieni un nuovo set di codici di backup sicuri. I codici di backup precedenti saranno eliminati e non potranno essere utilizzati.",
+ "title__regenerate": "Rigenera codici di backup"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Imposta come predefinito",
+ "destructiveActionLabel": "Rimuovi"
+ },
+ "primaryButton": "Aggiungi verifica a due passaggi",
+ "title": "Verifica a due passaggi",
+ "totp": {
+ "destructiveActionTitle": "Rimuovi",
+ "headerTitle": "Applicazione autenticatore"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Rimuovi",
+ "menuAction__rename": "Rinomina",
+ "title": "Passkeys"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Imposta password",
+ "primaryButton__updatePassword": "Aggiorna password",
+ "title": "Password"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Rimuovi numero di telefono",
+ "detailsAction__nonPrimary": "Imposta come primario",
+ "detailsAction__primary": "Completa la verifica",
+ "detailsAction__unverified": "Verifica numero di telefono",
+ "primaryButton": "Aggiungi numero di telefono",
+ "title": "Numeri di telefono"
+ },
+ "profileSection": {
+ "primaryButton": "Aggiorna profilo",
+ "title": "Profilo"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Imposta username",
+ "primaryButton__updateUsername": "Aggiorna username",
+ "title": "Username"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Rimuovi portafoglio",
+ "primaryButton": "Portafogli Web3",
+ "title": "Portafogli Web3"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Il tuo nome utente è stato aggiornato.",
+ "title__set": "Imposta nome utente",
+ "title__update": "Aggiorna nome utente"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} sarà rimosso da questo account.",
+ "messageLine2": "Non potrai più accedere utilizzando questo portafoglio web3.",
+ "successMessage": "{{web3Wallet}} è stato rimosso dal tuo account.",
+ "title": "Rimuovi portafoglio web3"
+ },
+ "subtitle__availableWallets": "Seleziona un portafoglio web3 per connetterti al tuo account.",
+ "subtitle__unavailableWallets": "Non ci sono portafogli web3 disponibili.",
+ "successMessage": "Il portafoglio è stato aggiunto al tuo account.",
+ "title": "Aggiungi portafoglio web3"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/common.json b/DigitalHumanWeb/locales/it-IT/common.json
new file mode 100644
index 0000000..fe0f0c3
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "Informazioni",
+ "advanceSettings": "Impostazioni avanzate",
+ "alert": {
+ "cloud": {
+ "action": "Provalo gratuitamente",
+ "desc": "Abbiamo fornito a tutti gli utenti registrati {{credit}} crediti di calcolo gratuiti, senza la necessità di configurazioni complesse, pronto all'uso, supporta la cronologia delle conversazioni illimitata e la sincronizzazione cloud globale. Ci sono molte altre funzionalità avanzate che ti aspettano da scoprire.",
+ "descOnMobile": "Offriamo a tutti gli utenti registrati {{credit}} crediti di calcolo gratuiti, pronti all'uso senza configurazioni complesse.",
+ "title": "Benvenuto a {{name}}"
+ }
+ },
+ "appInitializing": "Applicazione in fase di avvio...",
+ "autoGenerate": "Generazione automatica",
+ "autoGenerateTooltip": "Completamento automatico basato su suggerimenti",
+ "autoGenerateTooltipDisabled": "Si prega di compilare il campo suggerimento per abilitare la funzione di completamento automatico",
+ "back": "Indietro",
+ "batchDelete": "Elimina in batch",
+ "blog": "Blog sui prodotti",
+ "cancel": "Annulla",
+ "changelog": "Registro modifiche",
+ "close": "Chiudi",
+ "contact": "Contattaci",
+ "copy": "Copia",
+ "copyFail": "Copia non riuscita",
+ "copySuccess": "Copia riuscita",
+ "dataStatistics": {
+ "messages": "Messaggi",
+ "sessions": "Sessioni",
+ "today": "Oggi",
+ "topics": "Argomenti"
+ },
+ "defaultAgent": "Assistente predefinito",
+ "defaultSession": "Sessione predefinita",
+ "delete": "Elimina",
+ "document": "Documento di utilizzo",
+ "download": "Scarica",
+ "duplicate": "Duplicato",
+ "edit": "Modifica",
+ "export": "Esporta configurazione",
+ "exportType": {
+ "agent": "Esporta impostazioni assistente",
+ "agentWithMessage": "Esporta assistente e messaggi",
+ "all": "Esporta impostazioni globali e tutti i dati degli assistenti",
+ "allAgent": "Esporta tutte le impostazioni degli assistenti",
+ "allAgentWithMessage": "Esporta tutti gli assistenti e i messaggi",
+ "globalSetting": "Esporta impostazioni globali"
+ },
+ "feedback": "Feedback e suggerimenti",
+ "follow": "Seguici su {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Condividi i tuoi preziosi suggerimenti",
+ "star": "Aggiungi una stella su GitHub"
+ },
+ "and": "e",
+ "feedback": {
+ "action": "Condividi feedback",
+ "desc": "Ogni tua idea e suggerimento è prezioso per noi, non vediamo l'ora di conoscere la tua opinione! Contattaci per fornire feedback sulle funzionalità del prodotto e sull'esperienza utente, aiutandoci a migliorare LobeChat.",
+ "title": "Condividi il tuo prezioso feedback su GitHub"
+ },
+ "later": "Più tardi",
+ "star": {
+ "action": "Aggiungi una stella",
+ "desc": "Se ami il nostro prodotto e desideri supportarci, potresti aggiungerci una stella su GitHub? Questo piccolo gesto è di grande significato per noi e ci motiva a continuare a offrirti un'esperienza di qualità.",
+ "title": "Aggiungi una stella su GitHub per supportarci"
+ },
+ "title": "Ti piace il nostro prodotto?"
+ },
+ "fullscreen": "Modalità a schermo intero",
+ "historyRange": "Intervallo cronologico",
+ "import": "Importa configurazione",
+ "importModal": {
+ "error": {
+ "desc": "Ci dispiace molto, si è verificato un errore durante il processo di importazione dei dati. Si prega di provare a importare nuovamente, o <1>invia un problema1>, saremo pronti ad aiutarti a risolvere il problema al più presto.",
+ "title": "Importazione dei dati fallita"
+ },
+ "finish": {
+ "onlySettings": "Impostazioni di sistema importate con successo",
+ "start": "Inizia utilizzo",
+ "subTitle": "Importazione dati completata in {{duration}} secondi. Dettagli dell'importazione:",
+ "title": "Importazione dati completata"
+ },
+ "loading": "Importazione dati in corso, attendere prego...",
+ "preparing": "Preparazione del modulo di importazione dei dati in corso...",
+ "result": {
+ "added": "Importazione riuscita",
+ "errors": "Errori di importazione",
+ "messages": "Messaggi",
+ "sessionGroups": "Gruppi di sessione",
+ "sessions": "Sessioni",
+ "skips": "Elementi saltati",
+ "topics": "Argomenti",
+ "type": "Tipo di dati"
+ },
+ "title": "Importa dati",
+ "uploading": {
+ "desc": "Il file attuale è troppo grande, sta venendo caricato con impegno...",
+ "restTime": "Tempo rimanente",
+ "speed": "Velocità di caricamento"
+ }
+ },
+ "information": "Comunità e informazioni",
+ "installPWA": "Installa l'applicazione del browser",
+ "lang": {
+ "ar": "Arabo",
+ "bg-BG": "bulgaro",
+ "bn": "Bengalese",
+ "cs-CZ": "Ceco",
+ "da-DK": "Danese",
+ "de-DE": "Tedesco",
+ "el-GR": "Greco",
+ "en": "Inglese",
+ "en-US": "Inglese",
+ "es-ES": "Spagnolo",
+ "fi-FI": "Finlandese",
+ "fr-FR": "Francese",
+ "hi-IN": "Hindi",
+ "hu-HU": "Ungherese",
+ "id-ID": "Indonesiano",
+ "it-IT": "Italiano",
+ "ja-JP": "Giapponese",
+ "ko-KR": "Coreano",
+ "nl-NL": "Olandese",
+ "no-NO": "Norvegese",
+ "pl-PL": "Polacco",
+ "pt-BR": "Portoghese",
+ "pt-PT": "Portoghese",
+ "ro-RO": "Rumeno",
+ "ru-RU": "Russo",
+ "sk-SK": "Slovacco",
+ "sr-RS": "Serbo",
+ "sv-SE": "Svedese",
+ "th-TH": "Tailandese",
+ "tr-TR": "Turco",
+ "uk-UA": "Ucraino",
+ "vi-VN": "Vietnamita",
+ "zh": "Cinese semplificato",
+ "zh-CN": "Cinese semplificato",
+ "zh-TW": "Cinese tradizionale"
+ },
+ "layoutInitializing": "Inizializzazione layout in corso...",
+ "legal": "Avviso legale",
+ "loading": "Caricamento in corso...",
+ "mail": {
+ "business": "Collaborazioni commerciali",
+ "support": "Supporto via email"
+ },
+ "oauth": "Accesso SSO",
+ "officialSite": "Sito ufficiale",
+ "ok": "OK",
+ "password": "Password",
+ "pin": "Fissa in alto",
+ "pinOff": "Annulla fissaggio",
+ "privacy": "Informativa sulla privacy",
+ "regenerate": "Rigenera",
+ "rename": "Rinomina",
+ "reset": "Ripristina",
+ "retry": "Riprova",
+ "send": "Invia",
+ "setting": "Impostazioni",
+ "share": "Condividi",
+ "stop": "Ferma",
+ "sync": {
+ "actions": {
+ "settings": "Impostazioni di sincronizzazione",
+ "sync": "Sincronizza ora"
+ },
+ "awareness": {
+ "current": "Dispositivo corrente"
+ },
+ "channel": "Canale",
+ "disabled": {
+ "actions": {
+ "enable": "Abilita la sincronizzazione cloud",
+ "settings": "Configura le impostazioni di sincronizzazione"
+ },
+ "desc": "I dati della sessione corrente sono memorizzati solo in questo browser. Se hai bisogno di sincronizzare i dati tra più dispositivi, configura e abilita la sincronizzazione cloud.",
+ "title": "Sincronizzazione dati disabilitata"
+ },
+ "enabled": {
+ "title": "Sincronizzazione dati"
+ },
+ "status": {
+ "connecting": "Connessione in corso",
+ "disabled": "Sincronizzazione disabilitata",
+ "ready": "Pronto",
+ "synced": "Sincronizzato",
+ "syncing": "Sincronizzazione in corso",
+ "unconnected": "Connessione non riuscita"
+ },
+ "title": "Stato di sincronizzazione",
+ "unconnected": {
+ "tip": "Connessione al server di segnalazione non riuscita. Impossibile stabilire un canale di comunicazione punto a punto. Controlla la rete e riprova."
+ }
+ },
+ "tab": {
+ "chat": "Chat",
+ "discover": "Scopri",
+ "files": "File",
+ "me": "io",
+ "setting": "Impostazioni"
+ },
+ "telemetry": {
+ "allow": "Consenti",
+ "deny": "Rifiuta",
+ "desc": "Speriamo di acquisire in modo anonimo le tue informazioni sull'uso per aiutarci a migliorare LobeChat e offrirti un'esperienza prodotto migliore. Puoi disattivarlo in qualsiasi momento in Impostazioni - Informazioni su.",
+ "learnMore": "Ulteriori informazioni",
+ "title": "Aiuta LobeChat a migliorare"
+ },
+ "temp": "Temporaneo",
+ "terms": "Termini di servizio",
+ "updateAgent": "Aggiorna informazioni agente",
+ "upgradeVersion": {
+ "action": "Aggiorna",
+ "hasNew": "Nuovo aggiornamento disponibile",
+ "newVersion": "Nuova versione disponibile: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Utente Anonimo",
+ "billing": "Gestione fatturazione",
+ "cloud": "Prova {{name}}",
+ "data": "Archiviazione dati",
+ "defaultNickname": "Utente Community",
+ "discord": "Supporto della community",
+ "docs": "Documentazione",
+ "email": "Supporto via email",
+ "feedback": "Feedback e suggerimenti",
+ "help": "Centro assistenza",
+ "moveGuide": "Il pulsante delle impostazioni è stato spostato qui",
+ "plans": "Piani di abbonamento",
+ "preview": "Anteprima",
+ "profile": "Gestione account",
+ "setting": "Impostazioni app",
+ "usages": "Statistiche di utilizzo"
+ },
+ "version": "Versione"
+}
diff --git a/DigitalHumanWeb/locales/it-IT/components.json b/DigitalHumanWeb/locales/it-IT/components.json
new file mode 100644
index 0000000..ef6e808
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Trascina i file qui, supporta il caricamento di più immagini.",
+ "dragFileDesc": "Trascina immagini e file qui, supporta il caricamento di più immagini e file.",
+ "dragFileTitle": "Carica file",
+ "dragTitle": "Carica immagini"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Aggiungi alla base di conoscenza",
+ "addToOtherKnowledgeBase": "Aggiungi a un'altra base di conoscenza",
+ "batchChunking": "Suddivisione in batch",
+ "chunking": "Suddivisione",
+ "chunkingTooltip": "Dividi il file in più blocchi di testo e vettorizzali, utilizzabili per la ricerca semantica e il dialogo sui file",
+ "confirmDelete": "Stai per eliminare questo file. Una volta eliminato, non sarà possibile recuperarlo. Ti preghiamo di confermare l'operazione.",
+ "confirmDeleteMultiFiles": "Stai per eliminare i {{count}} file selezionati. Una volta eliminati, non sarà possibile recuperarli. Ti preghiamo di confermare l'operazione.",
+ "confirmRemoveFromKnowledgeBase": "Stai per rimuovere i {{count}} file selezionati dalla base di conoscenza. I file rimarranno visibili in tutti i file. Ti preghiamo di confermare l'operazione.",
+ "copyUrl": "Copia link",
+ "copyUrlSuccess": "Indirizzo del file copiato con successo",
+ "createChunkingTask": "Preparazione in corso...",
+ "deleteSuccess": "File eliminato con successo",
+ "downloading": "Download del file in corso...",
+ "removeFromKnowledgeBase": "Rimuovi dalla base di conoscenza",
+ "removeFromKnowledgeBaseSuccess": "File rimosso con successo"
+ },
+ "bottom": "Hai raggiunto il fondo",
+ "config": {
+ "showFilesInKnowledgeBase": "Mostra contenuti nella base di conoscenza"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Carica file",
+ "folder": "Carica cartella",
+ "knowledgeBase": "Crea nuova base di conoscenza"
+ },
+ "or": "oppure",
+ "title": "Trascina qui file o cartelle"
+ },
+ "title": {
+ "createdAt": "Data di creazione",
+ "size": "Dimensione",
+ "title": "File"
+ },
+ "total": {
+ "fileCount": "Totale {{count}} elementi",
+ "selectedCount": "Selezionati {{count}} elementi"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "I blocchi di testo non sono stati completamente vettorizzati, il che comporterà l'impossibilità di utilizzare la funzione di ricerca semantica. Per migliorare la qualità della ricerca, si prega di vettorizzare i blocchi di testo.",
+ "error": "Errore di vettorizzazione",
+ "errorResult": "Vettorizzazione fallita, controlla e riprova. Motivo del fallimento:",
+ "processing": "I blocchi di testo sono in fase di vettorizzazione, ti preghiamo di attendere",
+ "success": "Attualmente tutti i blocchi di testo sono stati vettorizzati"
+ },
+ "embeddings": "Vettorizzazione",
+ "status": {
+ "error": "Suddivisione fallita",
+ "errorResult": "Suddivisione fallita, controlla e riprova. Motivo del fallimento:",
+ "processing": "In fase di suddivisione",
+ "processingTip": "Il server sta suddividendo i blocchi di testo, chiudere la pagina non influisce sul progresso della suddivisione"
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Indietro"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Modello personalizzato: di default supporta sia la chiamata di funzioni che il riconoscimento visivo. Verifica l'effettiva disponibilità di tali funzionalità.",
+ "file": "Questo modello supporta il caricamento e il riconoscimento di file.",
+ "functionCall": "Questo modello supporta la chiamata di funzioni.",
+ "tokens": "Questo modello supporta un massimo di {{tokens}} token per sessione.",
+ "vision": "Questo modello supporta il riconoscimento visivo."
+ },
+ "removed": "Il modello non è più nella lista, verrà rimosso automaticamente se deselezionato"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "Nessun modello attivo. Vai alle impostazioni per attivarne uno.",
+ "provider": "Provider"
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/discover.json b/DigitalHumanWeb/locales/it-IT/discover.json
new file mode 100644
index 0000000..ac7805a
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Aggiungi assistente",
+ "addAgentAndConverse": "Aggiungi assistente e inizia una conversazione",
+ "addAgentSuccess": "Aggiunta riuscita",
+ "conversation": {
+ "l1": "Ciao, sono **{{name}}**, puoi chiedermi qualsiasi cosa e farò del mio meglio per risponderti ~",
+ "l2": "Ecco una panoramica delle mie capacità: ",
+ "l3": "Iniziamo la conversazione!"
+ },
+ "description": "Introduzione all'assistente",
+ "detail": "Dettagli",
+ "list": "Elenco assistenti",
+ "more": "Di più",
+ "plugins": "Plugin integrati",
+ "recentSubmits": "Aggiornamenti recenti",
+ "suggestions": "Suggerimenti correlati",
+ "systemRole": "Impostazioni assistente",
+ "try": "Prova"
+ },
+ "back": "Torna alla scoperta",
+ "category": {
+ "assistant": {
+ "academic": "Accademico",
+ "all": "Tutti",
+ "career": "Carriera",
+ "copywriting": "Copywriting",
+ "design": "Design",
+ "education": "Educazione",
+ "emotions": "Emozioni",
+ "entertainment": "Intrattenimento",
+ "games": "Giochi",
+ "general": "Generale",
+ "life": "Vita",
+ "marketing": "Marketing",
+ "office": "Ufficio",
+ "programming": "Programmazione",
+ "translation": "Traduzione"
+ },
+ "plugin": {
+ "all": "Tutti",
+ "gaming-entertainment": "Gioco e intrattenimento",
+ "life-style": "Stile di vita",
+ "media-generate": "Generazione media",
+ "science-education": "Scienza e istruzione",
+ "social": "Social media",
+ "stocks-finance": "Azioni e finanza",
+ "tools": "Strumenti utili",
+ "web-search": "Ricerca web"
+ }
+ },
+ "cleanFilter": "Pulisci filtro",
+ "create": "Crea",
+ "createGuide": {
+ "func1": {
+ "desc1": "Accedi alla pagina delle impostazioni dell'assistente che desideri inviare tramite l'icona in alto a destra nella finestra di conversazione;",
+ "desc2": "Clicca sul pulsante per inviare al mercato degli assistenti in alto a destra.",
+ "tag": "Metodo uno",
+ "title": "Invia tramite LobeChat"
+ },
+ "func2": {
+ "button": "Vai al repository degli assistenti su Github",
+ "desc": "Se desideri aggiungere un assistente all'indice, utilizza agent-template.json o agent-template-full.json per creare un'entrata nella directory plugins, scrivi una breve descrizione e contrassegnala adeguatamente, quindi crea una richiesta di pull.",
+ "tag": "Metodo due",
+ "title": "Invia tramite Github"
+ }
+ },
+ "dislike": "Non mi piace",
+ "filter": "Filtra",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Tutti gli autori",
+ "followed": "Autori seguiti",
+ "title": "Intervallo autori"
+ },
+ "contentLength": "Lunghezza minima del contenuto",
+ "maxToken": {
+ "title": "Imposta lunghezza massima (Token)",
+ "unlimited": "Illimitato"
+ },
+ "other": {
+ "functionCall": "Supporta chiamate di funzione",
+ "title": "Altro",
+ "vision": "Supporta riconoscimento visivo",
+ "withKnowledge": "Con knowledge base",
+ "withTool": "Con plugin"
+ },
+ "pricing": "Prezzo del modello",
+ "timePeriod": {
+ "all": "Tutto il tempo",
+ "day": "Ultime 24 ore",
+ "month": "Ultimi 30 giorni",
+ "title": "Intervallo di tempo",
+ "week": "Ultimi 7 giorni",
+ "year": "Ultimo anno"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Assistenti consigliati",
+ "featuredModels": "Modelli consigliati",
+ "featuredProviders": "Fornitori di modelli consigliati",
+ "featuredTools": "Plugin consigliati",
+ "more": "Scopri di più"
+ },
+ "like": "Mi piace",
+ "models": {
+ "chat": "Inizia conversazione",
+ "contentLength": "Lunghezza massima del contenuto",
+ "free": "Gratuito",
+ "guide": "Guida alla configurazione",
+ "list": "Elenco modelli",
+ "more": "Di più",
+ "parameterList": {
+ "defaultValue": "Valore predefinito",
+ "docs": "Visualizza documentazione",
+ "frequency_penalty": {
+ "desc": "Questa impostazione regola la frequenza con cui il modello riutilizza vocaboli specifici già presenti nell'input. Valori più alti riducono la probabilità di ripetizione, mentre valori negativi producono l'effetto opposto. La penalità per il vocabolario non aumenta con il numero di apparizioni. Valori negativi incoraggiano il riutilizzo del vocabolario.",
+ "title": "Penalità di frequenza"
+ },
+ "max_tokens": {
+ "desc": "Questa impostazione definisce la lunghezza massima che il modello può generare in una singola risposta. Impostare un valore più alto consente al modello di generare risposte più lunghe, mentre un valore più basso limita la lunghezza della risposta, rendendola più concisa. Regolare questo valore in base ai diversi scenari di applicazione può aiutare a raggiungere la lunghezza e il livello di dettaglio desiderati nella risposta.",
+ "title": "Limite di risposta singola"
+ },
+ "presence_penalty": {
+ "desc": "Questa impostazione mira a controllare il riutilizzo del vocabolario in base alla frequenza con cui appare nell'input. Cerca di utilizzare meno vocaboli che appaiono frequentemente, in proporzione alla loro frequenza di apparizione. La penalità per il vocabolario aumenta con il numero di apparizioni. Valori negativi incoraggiano il riutilizzo del vocabolario.",
+ "title": "Freschezza del tema"
+ },
+ "range": "Intervallo",
+ "temperature": {
+ "desc": "Questa impostazione influisce sulla diversità delle risposte del modello. Valori più bassi portano a risposte più prevedibili e tipiche, mentre valori più alti incoraggiano risposte più varie e insolite. Quando il valore è impostato a 0, il modello fornisce sempre la stessa risposta per un dato input.",
+ "title": "Casualità"
+ },
+ "title": "Parametri del modello",
+ "top_p": {
+ "desc": "Questa impostazione limita le scelte del modello a una certa proporzione di vocaboli con la massima probabilità: seleziona solo i vocaboli di punta la cui probabilità cumulativa raggiunge P. Valori più bassi rendono le risposte del modello più prevedibili, mentre l'impostazione predefinita consente al modello di scegliere da tutto l'intervallo di vocaboli.",
+ "title": "Campionamento nucleare"
+ },
+ "type": "Tipo"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat supporta l'uso di chiavi API personalizzate per questo fornitore.",
+ "input": "Prezzo di input",
+ "inputTooltip": "Costo per milione di Token",
+ "latency": "Latenza",
+ "latencyTooltip": "Tempo medio di risposta del fornitore per il primo Token inviato",
+ "maxOutput": "Lunghezza massima di output",
+ "maxOutputTooltip": "Numero massimo di Token che questo endpoint può generare",
+ "officialTooltip": "Servizio ufficiale LobeHub",
+ "output": "Prezzo di output",
+ "outputTooltip": "Costo per milione di Token",
+ "streamCancellationTooltip": "Questo fornitore supporta la funzione di cancellazione dello streaming.",
+ "throughput": "Throughput",
+ "throughputTooltip": "Numero medio di Token trasmessi per secondo nelle richieste di streaming"
+ },
+ "suggestions": "Modelli correlati",
+ "supportedProviders": "Fornitori supportati da questo modello"
+ },
+ "plugins": {
+ "community": "Plugin della comunità",
+ "install": "Installa plugin",
+ "installed": "Installato",
+ "list": "Elenco plugin",
+ "meta": {
+ "description": "Descrizione",
+ "parameter": "Parametro",
+ "title": "Parametri dello strumento",
+ "type": "Tipo"
+ },
+ "more": "Di più",
+ "official": "Plugin ufficiale",
+ "recentSubmits": "Aggiornamenti recenti",
+ "suggestions": "Suggerimenti correlati"
+ },
+ "providers": {
+ "config": "Configurazione del fornitore",
+ "list": "Elenco fornitori di modelli",
+ "modelCount": "{{count}} modelli",
+ "modelSite": "Documentazione del modello",
+ "more": "Di più",
+ "officialSite": "Sito ufficiale",
+ "showAllModels": "Mostra tutti i modelli",
+ "suggestions": "Fornitori correlati",
+ "supportedModels": "Modelli supportati"
+ },
+ "search": {
+ "placeholder": "Cerca nome, descrizione o parole chiave...",
+ "result": "{{count}} risultati di ricerca su {{keyword}}",
+ "searching": "Ricerca in corso..."
+ },
+ "sort": {
+ "mostLiked": "Più apprezzati",
+ "mostUsed": "Più utilizzati",
+ "newest": "Dal più recente al più vecchio",
+ "oldest": "Dal più vecchio al più recente",
+ "recommended": "Consigliati"
+ },
+ "tab": {
+ "assistants": "Assistenti",
+ "home": "Home",
+ "models": "Modelli",
+ "plugins": "Plugin",
+ "providers": "Fornitori di modelli"
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/error.json b/DigitalHumanWeb/locales/it-IT/error.json
new file mode 100644
index 0000000..add5dbd
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Continua la sessione",
+ "desc": "{{greeting}}, Sono felice di poterti aiutare ancora. Continuiamo a parlare di quello di cui stavamo discutendo.",
+ "title": "Bentornato, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Torna alla homepage",
+ "desc": "Prova di nuovo più tardi, o torna al mondo conosciuto",
+ "retry": "Ricarica",
+ "title": "La pagina ha riscontrato un problema.."
+ },
+ "fetchError": "Errore di richiesta",
+ "fetchErrorDetail": "Dettagli dell'errore",
+ "notFound": {
+ "backHome": "Torna alla homepage",
+ "check": "Controlla se l'URL è corretto",
+ "desc": "Non siamo riusciti a trovare la pagina che stai cercando",
+ "title": "Hai raggiunto un territorio sconosciuto?"
+ },
+ "pluginSettings": {
+ "desc": "Completa la seguente configurazione per iniziare a utilizzare il plugin",
+ "title": "Configurazione del plugin {{name}}"
+ },
+ "response": {
+ "400": "Spiacenti, il server non comprende la tua richiesta. Verifica che i parametri della tua richiesta siano corretti",
+ "401": "Spiacenti, il server ha rifiutato la tua richiesta, probabilmente a causa di autorizzazioni insufficienti o di un'identificazione non valida",
+ "403": "Spiacenti, il server ha rifiutato la tua richiesta. Non hai accesso a questo contenuto",
+ "404": "Spiacenti, il server non trova la pagina o la risorsa richiesta. Verifica che l'URL sia corretto",
+ "405": "Spiacenti, il server non supporta il metodo di richiesta utilizzato. Verifica che il metodo di richiesta sia corretto",
+ "406": "Spiacenti, il server non è in grado di completare la richiesta in base alle caratteristiche del contenuto richiesto",
+ "407": "Spiacenti, è necessario autenticarsi come proxy prima di poter continuare la richiesta",
+ "408": "Spiacenti, il server ha superato il tempo di attesa per la richiesta, controlla la connessione di rete e riprova",
+ "409": "Spiacenti, la richiesta non può essere elaborata a causa di un conflitto, probabilmente a causa di uno stato delle risorse incompatibile con la richiesta",
+ "410": "Spiacenti, la risorsa richiesta è stata rimossa in modo permanente e non può essere trovata",
+ "411": "Spiacenti, il server non può elaborare una richiesta senza una lunghezza del contenuto valida",
+ "412": "Spiacenti, la tua richiesta non soddisfa le condizioni sul lato server e non può essere completata",
+ "413": "Spiacenti, la dimensione dei dati della tua richiesta è troppo grande per il server da gestire",
+ "414": "Spiacenti, l'URI della tua richiesta è troppo lungo per il server da gestire",
+ "415": "Spiacenti, il server non può gestire il formato dei media allegato alla richiesta",
+ "416": "Spiacenti, il server non può soddisfare l'intervallo richiesto",
+ "417": "Spiacenti, il server non può soddisfare le aspettative della richiesta",
+ "422": "Spiacenti, la tua richiesta è formattata correttamente, ma a causa di errori semantici non può essere elaborata",
+ "423": "Spiacenti, la risorsa richiesta è bloccata",
+ "424": "Spiacenti, a causa di un precedente fallimento della richiesta, la richiesta attuale non può essere completata",
+ "426": "Spiacenti, il server richiede un aggiornamento del tuo client a una versione di protocollo superiore",
+ "428": "Spiacenti, il server richiede una condizione preliminare e la tua richiesta deve includere le intestazioni delle condizioni corrette",
+ "429": "Spiacenti, ci sono troppe richieste in arrivo. Il server è un po' stanco. Riprova più tardi",
+ "431": "Spiacenti, le intestazioni della tua richiesta sono troppo grandi per il server da gestire",
+ "451": "Spiacenti, per motivi legali, il server rifiuta di fornire questa risorsa",
+ "500": "Spiacenti, il server sembra avere qualche difficoltà al momento e non può completare la tua richiesta. Riprova più tardi",
+ "502": "Spiacenti, il server sembra smarrito e non può fornire servizio al momento. Riprova più tardi",
+ "503": "Spiacenti, il server non può elaborare la tua richiesta al momento, probabilmente a causa di sovraccarico o manutenzione in corso. Riprova più tardi",
+ "504": "Spiacenti, il server non ha ricevuto risposta dal server upstream. Riprova più tardi",
+ "AgentRuntimeError": "Errore di esecuzione del modello linguistico Lobe, controlla le informazioni seguenti o riprova",
+ "FreePlanLimit": "Attualmente sei un utente gratuito e non puoi utilizzare questa funzione. Per favore, passa a un piano a pagamento per continuare.",
+ "InvalidAccessCode": "Password incorrect or empty, please enter the correct access password, or add a custom API Key",
+ "InvalidBedrockCredentials": "Autenticazione Bedrock non riuscita, controlla AccessKeyId/SecretAccessKey e riprova",
+ "InvalidClerkUser": "Spiacenti, al momento non hai effettuato l'accesso. Per favore, effettua l'accesso o registrati prima di continuare.",
+ "InvalidGithubToken": "Il token di accesso personale di Github non è corretto o è vuoto. Controlla il token di accesso personale di Github e riprova.",
+ "InvalidOllamaArgs": "Configurazione Ollama non valida, controllare la configurazione di Ollama e riprovare",
+ "InvalidProviderAPIKey": "{{provider}} Chiave API non valida o vuota, controlla la Chiave API di {{provider}} e riprova",
+ "LocationNotSupportError": "Spiacenti, la tua posizione attuale non supporta questo servizio modello, potrebbe essere a causa di restrizioni geografiche o servizi non attivati. Verifica se la posizione attuale supporta l'uso di questo servizio o prova a utilizzare un'altra posizione.",
+ "NoOpenAIAPIKey": "La chiave API OpenAI è vuota. Aggiungi una chiave API personalizzata OpenAI",
+ "OllamaBizError": "Errore di servizio Ollama, controllare le informazioni seguenti o riprovare",
+ "OllamaServiceUnavailable": "Servizio Ollama non disponibile: controllare che Ollama sia in esecuzione correttamente o che la configurazione di cross-origin di Ollama sia corretta",
+ "OpenAIBizError": "Errore di business di OpenAI. Si prega di controllare le informazioni seguenti o riprovare.",
+ "PluginApiNotFound": "Spiacenti, l'API specificata non esiste nel manifesto del plugin. Verifica che il metodo di richiesta corrisponda all'API del manifesto del plugin",
+ "PluginApiParamsError": "Spiacenti, la convalida dei parametri di input della richiesta del plugin non è riuscita. Verifica che i parametri di input corrispondano alle informazioni dell'API",
+ "PluginFailToTransformArguments": "Spiacenti, la trasformazione degli argomenti della chiamata al plugin non è riuscita. Si prega di provare a rigenerare il messaggio dell'assistente o riprovare dopo aver cambiato il modello AI di Tools Calling con capacità più avanzate.",
+ "PluginGatewayError": "Spiacenti, si è verificato un errore nel gateway del plugin. Verifica che la configurazione del gateway del plugin sia corretta",
+ "PluginManifestInvalid": "Spiacenti, la convalida del manifesto descrittivo del plugin non è riuscita. Verifica che il formato del manifesto descrittivo sia conforme alle specifiche",
+ "PluginManifestNotFound": "Spiacenti, il server non trova il manifesto descrittivo del plugin (manifest.json). Verifica che l'indirizzo del file descrittivo del plugin sia corretto",
+ "PluginMarketIndexInvalid": "Spiacenti, la convalida dell'indice del plugin non è riuscita. Verifica che il formato del file dell'indice sia conforme alle specifiche",
+ "PluginMarketIndexNotFound": "Spiacenti, il server non trova l'indice del plugin. Verifica che l'indirizzo dell'indice sia corretto",
+ "PluginMetaInvalid": "Spiacenti, la convalida dei metadati del plugin non è riuscita. Verifica che il formato dei metadati del plugin sia conforme alle specifiche",
+ "PluginMetaNotFound": "Spiacenti, il plugin non è stato trovato nell'indice. Verifica che le informazioni di configurazione del plugin siano presenti nell'indice",
+ "PluginOpenApiInitError": "Spiacenti, inizializzazione fallita del client OpenAPI. Verifica che le informazioni di configurazione di OpenAPI siano corrette",
+ "PluginServerError": "Errore nella risposta del server del plugin. Verifica il file descrittivo del plugin, la configurazione del plugin o l'implementazione del server",
+ "PluginSettingsInvalid": "Il plugin deve essere configurato correttamente prima di poter essere utilizzato. Verifica che la tua configurazione sia corretta",
+ "ProviderBizError": "Errore di business del fornitore {{provider}}. Si prega di controllare le informazioni seguenti o riprovare.",
+ "StreamChunkError": "Erro di analisi del blocco di messaggi della richiesta in streaming. Controlla se l'interfaccia API attuale è conforme agli standard o contatta il tuo fornitore di API per ulteriori informazioni.",
+ "SubscriptionPlanLimit": "Il tuo piano di abbonamento ha raggiunto il limite e non puoi utilizzare questa funzione. Per favore, passa a un piano superiore o acquista un pacchetto di risorse per continuare.",
+ "UnknownChatFetchError": "Ci scusiamo, si è verificato un errore di richiesta sconosciuto. Si prega di controllare le informazioni seguenti o riprovare."
+ },
+ "stt": {
+ "responseError": "Errore nella richiesta del servizio. Verifica la configurazione o riprova"
+ },
+ "tts": {
+ "responseError": "Errore nella richiesta del servizio. Verifica la configurazione o riprova"
+ },
+ "unlock": {
+ "addProxyUrl": "Aggiungi URL del proxy OpenAI (opzionale)",
+ "apiKey": {
+ "description": "Inserisci la tua Chiave API {{name}} per iniziare la sessione",
+ "title": "Usa la tua Chiave API personalizzata {{name}}"
+ },
+ "closeMessage": "Chiudi messaggio",
+ "confirm": "Conferma e riprova",
+ "oauth": {
+ "description": "L'amministratore ha abilitato l'autenticazione di accesso unificata. Fai clic sul pulsante sottostante per accedere e sbloccare l'applicazione.",
+ "success": "Accesso riuscito",
+ "title": "Accedi all'account",
+ "welcome": "Benvenuto!"
+ },
+ "password": {
+ "description": "L'amministratore ha attivato la crittografia dell'applicazione. Inserisci la password dell'applicazione per sbloccarla. La password va inserita solo una volta.",
+ "placeholder": "Inserisci la password",
+ "title": "Inserisci la password per sbloccare l'applicazione"
+ },
+ "tabs": {
+ "apiKey": "Chiave API personalizzata",
+ "password": "Password"
+ }
+ },
+ "upload": {
+ "desc": "Dettagli: {{detail}}",
+ "fileOnlySupportInServerMode": "L'attuale modalità di distribuzione non supporta il caricamento di file non immagine. Per caricare file in formato {{ext}}, si prega di passare alla distribuzione del database sul server o di utilizzare il servizio {{cloud}}.",
+ "networkError": "Si prega di verificare che la connessione di rete sia stabile e controllare se la configurazione CORS del servizio di archiviazione file è corretta.",
+ "title": "Caricamento del file fallito, controlla la connessione di rete o riprova più tardi",
+ "unknownError": "Motivo dell'errore: {{reason}}",
+ "uploadFailed": "Caricamento del file non riuscito."
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/file.json b/DigitalHumanWeb/locales/it-IT/file.json
new file mode 100644
index 0000000..f7a1e3d
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Gestisci i tuoi file e il tuo knowledge base",
+ "detail": {
+ "basic": {
+ "createdAt": "Data di creazione",
+ "filename": "Nome del file",
+ "size": "Dimensione del file",
+ "title": "Informazioni di base",
+ "type": "Formato",
+ "updatedAt": "Data di aggiornamento"
+ },
+ "data": {
+ "chunkCount": "Numero di blocchi",
+ "embedding": {
+ "default": "Non ancora vettorizzato",
+ "error": "Errore",
+ "pending": "In attesa di avvio",
+ "processing": "In elaborazione",
+ "success": "Completato"
+ },
+ "embeddingStatus": "Vettorizzazione"
+ }
+ },
+ "empty": "Nessun file/cartella caricato",
+ "header": {
+ "actions": {
+ "newFolder": "Nuova cartella",
+ "uploadFile": "Carica file",
+ "uploadFolder": "Carica cartella"
+ },
+ "uploadButton": "Carica"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Stai per eliminare questa knowledge base. I file al suo interno non verranno eliminati, ma spostati in 'Tutti i file'. Una volta eliminata, la knowledge base non potrà essere recuperata, procedi con cautela.",
+ "empty": "Clicca <1>+1> per iniziare a creare una knowledge base"
+ },
+ "new": "Nuova knowledge base",
+ "title": "Knowledge Base"
+ },
+ "networkError": "Impossibile ottenere la knowledge base, controlla la connessione di rete e riprova",
+ "notSupportGuide": {
+ "desc": "L'istanza attuale è in modalità database client e non supporta la gestione dei file. Passa a <1>modalità di distribuzione del database server1>, oppure utilizza direttamente <3>LobeChat Cloud3>",
+ "features": {
+ "allKind": {
+ "desc": "Supporta i formati di file più comuni, inclusi Word, PPT, Excel, PDF, TXT e altri formati di documenti, così come file di codice comuni come JS e Python",
+ "title": "Analisi di vari tipi di file"
+ },
+ "embeddings": {
+ "desc": "Utilizza modelli vettoriali ad alte prestazioni per vettorizzare i blocchi di testo, consentendo la ricerca semantica del contenuto dei file",
+ "title": "Semantizzazione vettoriale"
+ },
+ "repos": {
+ "desc": "Supporta la creazione di knowledge base e consente di aggiungere diversi tipi di file, costruendo la tua conoscenza di settore",
+ "title": "Knowledge Base"
+ }
+ },
+ "title": "La modalità di distribuzione attuale non supporta la gestione dei file"
+ },
+ "preview": {
+ "downloadFile": "Scarica file",
+ "unsupportedFileAndContact": "Questo formato di file non è attualmente supportato per la visualizzazione online. Se hai bisogno di una visualizzazione, ti preghiamo di <1>contattarci1>."
+ },
+ "searchFilePlaceholder": "Cerca file",
+ "tab": {
+ "all": "Tutti i file",
+ "audios": "Audio",
+ "documents": "Documenti",
+ "images": "Immagini",
+ "videos": "Video",
+ "websites": "Siti web"
+ },
+ "title": "File",
+ "uploadDock": {
+ "body": {
+ "collapse": "Riduci",
+ "item": {
+ "done": "Caricato",
+ "error": "Caricamento fallito, riprova",
+ "pending": "Pronto per il caricamento...",
+ "processing": "Elaborazione del file...",
+ "restTime": "Tempo rimanente {{time}}"
+ }
+ },
+ "totalCount": "Totale {{count}} elementi",
+ "uploadStatus": {
+ "error": "Errore di caricamento",
+ "pending": "In attesa di caricamento",
+ "processing": "Caricamento in corso",
+ "success": "Caricamento completato",
+ "uploading": "Caricamento in corso"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/knowledgeBase.json b/DigitalHumanWeb/locales/it-IT/knowledgeBase.json
new file mode 100644
index 0000000..d800424
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "File aggiunto con successo, <1>visualizza subito1>",
+ "confirm": "Aggiungi",
+ "id": {
+ "placeholder": "Seleziona il knowledge base da aggiungere",
+ "required": "Seleziona il knowledge base",
+ "title": "Knowledge base di destinazione"
+ },
+ "title": "Aggiungi al knowledge base",
+ "totalFiles": "Hai selezionato {{count}} file"
+ },
+ "createNew": {
+ "confirm": "Crea nuovo",
+ "description": {
+ "placeholder": "Descrizione del knowledge base (opzionale)"
+ },
+ "formTitle": "Informazioni di base",
+ "name": {
+ "placeholder": "Nome del knowledge base",
+ "required": "Per favore, inserisci il nome del knowledge base"
+ },
+ "title": "Crea knowledge base"
+ },
+ "tab": {
+ "evals": "Valutazioni",
+ "files": "Documenti",
+ "settings": "Impostazioni",
+ "testing": "Test di richiamo"
+ },
+ "title": "Knowledge base"
+}
diff --git a/DigitalHumanWeb/locales/it-IT/market.json b/DigitalHumanWeb/locales/it-IT/market.json
new file mode 100644
index 0000000..6f9add5
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Aggiungi assistente",
+ "addAgentAndConverse": "Aggiungi assistente e avvia la conversazione",
+ "addAgentSuccess": "Aggiunta riuscita",
+ "guide": {
+ "func1": {
+ "desc1": "Nella finestra di chat, accedi alle impostazioni nell'angolo in alto a destra per accedere alla pagina di configurazione dell'assistente che desideri inviare;",
+ "desc2": "Fai clic sul pulsante Invia al mercato degli assistenti nell'angolo in alto a destra.",
+ "tag": "Metodo uno",
+ "title": "Invia tramite LobeChat"
+ },
+ "func2": {
+ "button": "Vai al repository assistenti su Github",
+ "desc": "Se desideri aggiungere un assistente all'indice, utilizza agent-template.json o agent-template-full.json per creare una voce nella directory dei plugin, scrivi una breve descrizione e aggiungi i tag appropriati, quindi invia una richiesta di pull.",
+ "tag": "Metodo due",
+ "title": "Invia tramite Github"
+ }
+ },
+ "search": {
+ "placeholder": "Cerca nome, descrizione o parole chiave dell'assistente..."
+ },
+ "sidebar": {
+ "comment": "Commenti",
+ "prompt": "Suggerimenti",
+ "title": "Dettagli assistente"
+ },
+ "submitAgent": "Invia assistente",
+ "title": {
+ "allAgents": "Tutti gli assistenti",
+ "recentSubmits": "Aggiunte recenti"
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/metadata.json b/DigitalHumanWeb/locales/it-IT/metadata.json
new file mode 100644
index 0000000..31549cc
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} ti offre la migliore esperienza con ChatGPT, Claude, Gemini e OLLaMA WebUI",
+ "title": "{{appName}}: strumento di efficienza personale AI, per darti un cervello più intelligente"
+ },
+ "discover": {
+ "assistants": {
+ "description": "Creazione di contenuti, copywriting, domande e risposte, generazione di immagini, generazione di video, generazione vocale, agenti intelligenti, flussi di lavoro automatizzati, personalizza il tuo assistente AI / GPTs / OLLaMA",
+ "title": "Assistenti AI"
+ },
+ "description": "Creazione di contenuti, copywriting, domande e risposte, generazione di immagini, generazione di video, generazione vocale, agenti intelligenti, flussi di lavoro automatizzati, applicazioni AI personalizzate, personalizza il tuo spazio di lavoro per le applicazioni AI",
+ "models": {
+ "description": "Esplora i modelli AI più diffusi: OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "Modelli AI"
+ },
+ "plugins": {
+ "description": "Scopri la generazione di grafici, accademica, generazione di immagini, generazione di video, generazione vocale e flussi di lavoro automatizzati, integra capacità ricche di plugin per il tuo assistente.",
+ "title": "Plugin AI"
+ },
+ "providers": {
+ "description": "Esplora i principali fornitori di modelli: OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "Fornitori di servizi di modelli AI"
+ },
+ "search": "Cerca",
+ "title": "Scopri"
+ },
+ "plugins": {
+ "description": "Ricerca, generazione di grafici, accademico, generazione di immagini, generazione di video, generazione vocale, flussi di lavoro automatizzati, personalizza le capacità dei plugin ToolCall esclusivi di ChatGPT / Claude",
+ "title": "Mercato dei plugin"
+ },
+ "welcome": {
+ "description": "{{appName}} ti offre la migliore esperienza con ChatGPT, Claude, Gemini e OLLaMA WebUI",
+ "title": "Benvenuto in {{appName}}: strumento di efficienza personale AI, per darti un cervello più intelligente"
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/migration.json b/DigitalHumanWeb/locales/it-IT/migration.json
new file mode 100644
index 0000000..f7464c7
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Pulisci dati locali",
+ "downloadBackup": "Scarica backup dati",
+ "reUpgrade": "Riaggiorna",
+ "start": "Inizia",
+ "upgrade": "Aggiornamento"
+ },
+ "clear": {
+ "confirm": "Stai per cancellare i dati locali (le impostazioni globali non saranno influenzate), assicurati di aver scaricato il backup dei dati."
+ },
+ "description": "Nella nuova versione, il sistema di archiviazione dati di {{appName}} ha fatto un enorme balzo in avanti. Pertanto, dobbiamo aggiornare i dati della versione precedente per offrirti un'esperienza d'uso migliore.",
+ "features": {
+ "capability": {
+ "desc": "Basato sulla tecnologia IndexedDB, sufficiente per contenere tutti i messaggi delle tue conversazioni per tutta la vita",
+ "title": "Ampia capacità"
+ },
+ "performance": {
+ "desc": "Indicizzazione automatica di milioni di messaggi, con risposte alle query in millisecondi",
+ "title": "Alta performance"
+ },
+ "use": {
+ "desc": "Supporta la ricerca di titoli, descrizioni, etichette, contenuti dei messaggi e persino testi tradotti, migliorando notevolmente l'efficienza della ricerca quotidiana",
+ "title": "Più facile da usare"
+ }
+ },
+ "title": "Evoluzione dei dati di {{appName}}",
+ "upgrade": {
+ "error": {
+ "subTitle": "Ci scusiamo, si è verificato un errore durante il processo di aggiornamento del database. Ti preghiamo di provare le seguenti soluzioni: A. Cancella i dati locali e importa nuovamente i dati di backup; B. Clicca sul pulsante 'Riprova aggiornamento'.
Se l'errore persiste, ti preghiamo di <1>inviare un problema1>, ci attiveremo immediatamente per aiutarti.",
+ "title": "Aggiornamento del database fallito"
+ },
+ "success": {
+ "subTitle": "Il database di {{appName}} è stato aggiornato all'ultima versione, inizia subito a utilizzarlo!",
+ "title": "Aggiornamento del database riuscito"
+ }
+ },
+ "upgradeTip": "L'aggiornamento richiede circa 10-20 secondi, durante il processo di aggiornamento non chiudere {{appName}}."
+ },
+ "migrateError": {
+ "missVersion": "I dati importati non contengono il numero di versione, controlla il file e riprova",
+ "noMigration": "Non è stata trovata alcuna soluzione di migrazione corrispondente alla versione attuale, controlla il numero di versione e riprova. Se il problema persiste, invia un feedback"
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/modelProvider.json b/DigitalHumanWeb/locales/it-IT/modelProvider.json
new file mode 100644
index 0000000..c4a52d7
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Versione dell'API di Azure, nel formato YYYY-MM-DD, consulta [ultima versione](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Ottieni elenco",
+ "title": "Versione API Azure"
+ },
+ "empty": "Inserisci l'ID del modello per aggiungere il primo modello",
+ "endpoint": {
+ "desc": "Quando si controllano le risorse dal portale di Azure, questo valore si trova nella sezione 'Chiavi e endpoint'",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Indirizzo API Azure"
+ },
+ "modelListPlaceholder": "Seleziona o aggiungi il modello OpenAI che hai distribuito",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Quando si controllano le risorse dal portale di Azure, questo valore si trova nella sezione 'Chiavi e endpoint'. Puoi usare KEY1 o KEY2",
+ "placeholder": "Chiave API Azure",
+ "title": "Chiave API"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Inserisci l'ID chiave di accesso AWS",
+ "placeholder": "ID chiave di accesso AWS",
+ "title": "ID chiave di accesso AWS"
+ },
+ "checker": {
+ "desc": "Verifica se AccessKeyId / SecretAccessKey sono stati inseriti correttamente"
+ },
+ "region": {
+ "desc": "Inserisci la regione AWS",
+ "placeholder": "Regione AWS",
+ "title": "Regione AWS"
+ },
+ "secretAccessKey": {
+ "desc": "Inserisci la chiave di accesso segreta AWS",
+ "placeholder": "Chiave di accesso segreta AWS",
+ "title": "Chiave di accesso segreta AWS"
+ },
+ "sessionToken": {
+ "desc": "Se stai utilizzando AWS SSO/STS, inserisci il tuo AWS Session Token",
+ "placeholder": "AWS Session Token",
+ "title": "AWS Session Token (opzionale)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Regione del servizio personalizzata",
+ "customSessionToken": "Token di sessione personalizzato",
+ "description": "Inserisci la tua chiave di accesso AWS AccessKeyId / SecretAccessKey per avviare la sessione. L'applicazione non memorizzerà la tua configurazione di autenticazione",
+ "title": "Usa le informazioni di autenticazione Bedrock personalizzate"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Inserisci il tuo PAT di Github, clicca [qui](https://github.com/settings/tokens) per crearne uno",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Verifica se l'indirizzo del proxy è stato compilato correttamente",
+ "title": "Controllo della connettività"
+ },
+ "customModelName": {
+ "desc": "Aggiungi modelli personalizzati, separati da virgola (,)",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Nome del modello personalizzato"
+ },
+ "download": {
+ "desc": "Ollama sta scaricando questo modello, per favore non chiudere questa pagina. Il download verrà interrotto e riprenderà dal punto in cui si è interrotto in caso di riavvio.",
+ "remainingTime": "Tempo rimanente",
+ "speed": "Velocità di download",
+ "title": "Download del modello in corso {{model}}"
+ },
+ "endpoint": {
+ "desc": "Inserisci l'indirizzo del proxy dell'interfaccia Ollama. Lascia vuoto se non specificato localmente",
+ "title": "Indirizzo del proxy dell'interfaccia"
+ },
+ "setup": {
+ "cors": {
+ "description": "A causa delle restrizioni di sicurezza del browser, è necessario configurare il cross-origin resource sharing (CORS) per consentire l'utilizzo di Ollama.",
+ "linux": {
+ "env": "Nella sezione [Service], aggiungi `Environment` e inserisci la variabile di ambiente OLLAMA_ORIGINS:",
+ "reboot": "Dopo aver completato l'esecuzione, riavvia il servizio Ollama.",
+ "systemd": "Per modificare il servizio ollama, chiama systemd:"
+ },
+ "macos": "Apri l'applicazione 'Terminale', incolla il comando seguente e premi Invio per eseguirlo",
+ "reboot": "Riavvia il servizio Ollama una volta completata l'esecuzione",
+ "title": "Configura Ollama per consentire l'accesso cross-origin",
+ "windows": "Su Windows, fai clic su 'Pannello di controllo', accedi alle variabili di ambiente di sistema. Crea una nuova variabile di ambiente chiamata 'OLLAMA_ORIGINS' per il tuo account utente, con valore *, quindi fai clic su 'OK/Applica' per salvare le modifiche"
+ },
+ "install": {
+ "description": "Assicurati di aver avviato Ollama. Se non l'hai ancora scaricato, visita il sito ufficiale per <1>scaricarlo1>",
+ "docker": "Se preferisci utilizzare Docker, Ollama fornisce anche un'immagine Docker ufficiale che puoi scaricare tramite il seguente comando:",
+ "linux": {
+ "command": "Per installare, utilizza il seguente comando:",
+ "manual": "Oppure, puoi consultare la <1>Guida all'installazione manuale di Linux1> per installare manualmente"
+ },
+ "title": "Installa e avvia l'applicazione Ollama localmente",
+ "windowsTab": "Windows (Versione di anteprima)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Annulla download",
+ "confirm": "Download",
+ "description": "Inserisci l'etichetta del modello Ollama per continuare la sessione",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Inizio download...",
+ "title": "Scarica il modello Ollama specificato"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI ZeroOne"
+ },
+ "zhipu": {
+ "title": "Zhipu"
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/models.json b/DigitalHumanWeb/locales/it-IT/models.json
new file mode 100644
index 0000000..bc45405
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B, con un ricco campione di addestramento, offre prestazioni superiori nelle applicazioni di settore."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B supporta 16K Tokens, offrendo capacità di generazione linguistica efficienti e fluide."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro, come membro importante della serie di modelli AI di 360, soddisfa le diverse applicazioni del linguaggio naturale con un'efficace capacità di elaborazione del testo, supportando la comprensione di testi lunghi e conversazioni a più turni."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo offre potenti capacità di calcolo e dialogo, con un'eccellente comprensione semantica e efficienza di generazione, rappresentando una soluzione ideale per assistenti intelligenti per aziende e sviluppatori."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K enfatizza la sicurezza semantica e l'orientamento alla responsabilità, progettato specificamente per scenari applicativi con elevati requisiti di sicurezza dei contenuti, garantendo l'accuratezza e la robustezza dell'esperienza utente."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro è un modello avanzato di elaborazione del linguaggio naturale lanciato da 360, con eccellenti capacità di generazione e comprensione del testo, in particolare nel campo della generazione e creazione, capace di gestire compiti complessi di conversione linguistica e interpretazione di ruoli."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra è la versione più potente della serie di modelli Spark, migliorando la comprensione e la sintesi del contenuto testuale mentre aggiorna il collegamento alla ricerca online. È una soluzione completa per migliorare la produttività lavorativa e rispondere con precisione alle esigenze, rappresentando un prodotto intelligente all'avanguardia nel settore."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Utilizza tecnologie di ricerca avanzate per collegare completamente il grande modello con la conoscenza di settore e la conoscenza globale. Supporta il caricamento di vari documenti come PDF, Word e l'immissione di URL, con acquisizione di informazioni tempestiva e completa, e risultati di output accurati e professionali."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Ottimizzato per scenari aziendali ad alta frequenza, con un notevole miglioramento delle prestazioni e un ottimo rapporto qualità-prezzo. Rispetto al modello Baichuan2, la creazione di contenuti è migliorata del 20%, le domande di conoscenza del 17% e le capacità di interpretazione di ruoli del 40%. Le prestazioni complessive superano quelle di GPT3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "Dotato di una finestra di contesto ultra lunga di 128K, ottimizzato per scenari aziendali ad alta frequenza, con un notevole miglioramento delle prestazioni e un ottimo rapporto qualità-prezzo. Rispetto al modello Baichuan2, la creazione di contenuti è migliorata del 20%, le domande di conoscenza del 17% e le capacità di interpretazione di ruoli del 40%. Le prestazioni complessive superano quelle di GPT3.5."
+ },
+ "Baichuan4": {
+ "description": "Il modello ha la migliore capacità in Cina, superando i modelli mainstream esteri in compiti cinesi come enciclopedie, testi lunghi e creazione di contenuti. Ha anche capacità multimodali leader nel settore, con prestazioni eccellenti in vari benchmark di valutazione."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) è un modello innovativo, adatto per applicazioni in più settori e compiti complessi."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K è dotato di una grande capacità di elaborazione del contesto, con una comprensione e un ragionamento logico più potenti, supporta l'input di testo fino a 32K token, adatto per la lettura di documenti lunghi, domande e risposte su conoscenze private e altri scenari."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO è un modello altamente flessibile, progettato per offrire un'esperienza creativa eccezionale."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) è un modello di istruzioni ad alta precisione, adatto per calcoli complessi."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) offre output linguistici ottimizzati e possibilità di applicazione diversificate."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Aggiornamento del modello Phi-3-mini."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "Stesso modello Phi-3-medium, ma con una dimensione di contesto più grande per RAG o prompting a pochi colpi."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "Un modello con 14 miliardi di parametri, dimostra una qualità migliore rispetto a Phi-3-mini, con un focus su dati densi di ragionamento di alta qualità."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "Stesso modello Phi-3-mini, ma con una dimensione di contesto più grande per RAG o prompting a pochi colpi."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "Il membro più piccolo della famiglia Phi-3. Ottimizzato sia per qualità che per bassa latenza."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "Stesso modello Phi-3-small, ma con una dimensione di contesto più grande per RAG o prompting a pochi colpi."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "Un modello con 7 miliardi di parametri, dimostra una qualità migliore rispetto a Phi-3-mini, con un focus su dati densi di ragionamento di alta qualità."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K è dotato di capacità di elaborazione del contesto eccezionalmente grandi, in grado di gestire fino a 128K di informazioni contestuali, particolarmente adatto per contenuti lunghi che richiedono analisi complete e gestione di associazioni logiche a lungo termine, fornendo logica fluida e coerenza in comunicazioni testuali complesse e supporto per citazioni varie."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Come versione beta di Qwen2, Qwen1.5 utilizza dati su larga scala per realizzare funzionalità di dialogo più precise."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) offre risposte rapide e capacità di dialogo naturale, adatto per ambienti multilingue."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 è un modello di linguaggio universale avanzato, supportando vari tipi di istruzioni."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 è una nuova serie di modelli di linguaggio di grandi dimensioni, progettata per ottimizzare l'elaborazione di compiti istruzionali."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 è una nuova serie di modelli di linguaggio di grandi dimensioni, progettata per ottimizzare l'elaborazione di compiti istruzionali."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 è una nuova serie di modelli di linguaggio di grandi dimensioni, con capacità di comprensione e generazione superiori."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 è una nuova serie di modelli di linguaggio di grandi dimensioni, progettata per ottimizzare l'elaborazione di compiti istruzionali."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder si concentra sulla scrittura di codice."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math si concentra sulla risoluzione di problemi nel campo della matematica, fornendo risposte professionali a domande di alta difficoltà."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B è una versione open source, progettata per fornire un'esperienza di dialogo ottimizzata per applicazioni conversazionali."
+ },
+ "abab5.5-chat": {
+ "description": "Focalizzato su scenari di produttività, supporta l'elaborazione di compiti complessi e la generazione di testo efficiente, adatto per applicazioni professionali."
+ },
+ "abab5.5s-chat": {
+ "description": "Progettato per scenari di dialogo con personaggi cinesi, offre capacità di generazione di dialoghi di alta qualità, adatto per vari scenari applicativi."
+ },
+ "abab6.5g-chat": {
+ "description": "Progettato per dialoghi con personaggi multilingue, supporta la generazione di dialoghi di alta qualità in inglese e in molte altre lingue."
+ },
+ "abab6.5s-chat": {
+ "description": "Adatto per una vasta gamma di compiti di elaborazione del linguaggio naturale, inclusa la generazione di testo e i sistemi di dialogo."
+ },
+ "abab6.5t-chat": {
+ "description": "Ottimizzato per scenari di dialogo con personaggi cinesi, offre capacità di generazione di dialoghi fluida e conforme alle espressioni cinesi."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Il modello open source di chiamata di funzione di Fireworks offre capacità di esecuzione di istruzioni eccezionali e caratteristiche personalizzabili."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Firefunction-v2, l'ultima offerta di Fireworks, è un modello di chiamata di funzione ad alte prestazioni, sviluppato su Llama-3 e ottimizzato per scenari come chiamate di funzione, dialogo e seguimento di istruzioni."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b è un modello di linguaggio visivo in grado di ricevere input sia visivi che testuali, addestrato su dati di alta qualità, adatto per compiti multimodali."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Il modello di istruzioni Gemma 2 9B, basato sulla tecnologia Google precedente, è adatto per rispondere a domande, riassumere e generare testi in vari contesti."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Il modello di istruzioni Llama 3 70B è ottimizzato per dialoghi multilingue e comprensione del linguaggio naturale, superando le prestazioni della maggior parte dei modelli concorrenti."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Il modello di istruzioni Llama 3 70B (versione HF) è allineato con i risultati dell'implementazione ufficiale, adatto per compiti di seguimento di istruzioni di alta qualità."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Il modello di istruzioni Llama 3 8B è ottimizzato per dialoghi e compiti multilingue, offrendo prestazioni eccellenti e alta efficienza."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Il modello di istruzioni Llama 3 8B (versione HF) è coerente con i risultati dell'implementazione ufficiale, garantendo alta coerenza e compatibilità cross-platform."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Il modello di istruzioni Llama 3.1 405B ha parametri su scala estremamente grande, adatto per compiti complessi e seguimento di istruzioni in scenari ad alto carico."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Il modello di istruzioni Llama 3.1 70B offre capacità superiori di comprensione e generazione del linguaggio, ideale per compiti di dialogo e analisi."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Il modello di istruzioni Llama 3.1 8B è ottimizzato per dialoghi multilingue, in grado di superare la maggior parte dei modelli open e closed source su benchmark di settore comuni."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Il modello di istruzioni Mixtral MoE 8x22B, con parametri su larga scala e architettura multi-esperto, supporta in modo completo l'elaborazione efficiente di compiti complessi."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Il modello di istruzioni Mixtral MoE 8x7B, con architettura multi-esperto, offre un'elevata efficienza nel seguire e eseguire istruzioni."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Il modello di istruzioni Mixtral MoE 8x7B (versione HF) ha prestazioni coerenti con l'implementazione ufficiale, adatto per vari scenari di compiti efficienti."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "Il modello MythoMax L2 13B combina tecnologie di fusione innovative, specializzandosi in narrazione e interpretazione di ruoli."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Il modello di istruzioni Phi 3 Vision è un modello multimodale leggero, in grado di gestire informazioni visive e testuali complesse, con forti capacità di ragionamento."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "Il modello StarCoder 15.5B supporta compiti di programmazione avanzati, con capacità multilingue potenziate, adatto per la generazione e comprensione di codice complesso."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "Il modello StarCoder 7B è addestrato su oltre 80 linguaggi di programmazione, con eccellenti capacità di completamento del codice e comprensione del contesto."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Il modello Yi-Large offre capacità eccezionali di elaborazione multilingue, utilizzabile per vari compiti di generazione e comprensione del linguaggio."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "Un modello multilingue con 398 miliardi di parametri (94 miliardi attivi), offre una finestra di contesto lunga 256K, chiamata di funzione, output strutturato e generazione ancorata."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "Un modello multilingue con 52 miliardi di parametri (12 miliardi attivi), offre una finestra di contesto lunga 256K, chiamata di funzione, output strutturato e generazione ancorata."
+ },
+ "ai21-jamba-instruct": {
+ "description": "Un modello LLM basato su Mamba di grado di produzione per ottenere prestazioni, qualità e efficienza dei costi di prim'ordine."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet ha elevato gli standard del settore, superando i modelli concorrenti e Claude 3 Opus, dimostrando prestazioni eccezionali in una vasta gamma di valutazioni, mantenendo la velocità e i costi dei nostri modelli di livello medio."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku è il modello più veloce e compatto di Anthropic, offrendo una velocità di risposta quasi istantanea. Può rispondere rapidamente a query e richieste semplici. I clienti saranno in grado di costruire un'esperienza AI senza soluzione di continuità che imita l'interazione umana. Claude 3 Haiku può gestire immagini e restituire output testuali, con una finestra di contesto di 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus è il modello AI più potente di Anthropic, con prestazioni all'avanguardia in compiti altamente complessi. Può gestire prompt aperti e scenari mai visti prima, con un'eccellente fluidità e comprensione simile a quella umana. Claude 3 Opus mostra le possibilità all'avanguardia dell'AI generativa. Claude 3 Opus può gestire immagini e restituire output testuali, con una finestra di contesto di 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Claude 3 Sonnet di Anthropic raggiunge un equilibrio ideale tra intelligenza e velocità, particolarmente adatto per carichi di lavoro aziendali. Offre la massima utilità a un prezzo inferiore rispetto ai concorrenti ed è progettato per essere un pilastro affidabile e durevole, adatto per implementazioni AI su larga scala. Claude 3 Sonnet può gestire immagini e restituire output testuali, con una finestra di contesto di 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Un modello veloce, economico e comunque molto capace, in grado di gestire una serie di compiti, tra cui conversazioni quotidiane, analisi testuali, sintesi e domande e risposte su documenti."
+ },
+ "anthropic.claude-v2": {
+ "description": "Un modello di Anthropic che dimostra elevate capacità in una vasta gamma di compiti, dalla conversazione complessa alla generazione di contenuti creativi, fino al seguire istruzioni dettagliate."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "Versione aggiornata di Claude 2, con una finestra di contesto doppia e miglioramenti nella affidabilità, nel tasso di allucinazione e nell'accuratezza basata su prove nei contesti di documenti lunghi e RAG."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku è il modello più veloce e compatto di Anthropic, progettato per fornire risposte quasi istantanee. Ha prestazioni direzionali rapide e accurate."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus è il modello più potente di Anthropic per gestire compiti altamente complessi. Eccelle in prestazioni, intelligenza, fluidità e comprensione."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet offre capacità superiori rispetto a Opus e una velocità maggiore rispetto a Sonnet, mantenendo lo stesso prezzo di Sonnet. Sonnet è particolarmente abile in programmazione, scienza dei dati, elaborazione visiva e compiti di agenzia."
+ },
+ "aya": {
+ "description": "Aya 23 è un modello multilingue lanciato da Cohere, supporta 23 lingue, facilitando applicazioni linguistiche diversificate."
+ },
+ "aya:35b": {
+ "description": "Aya 23 è un modello multilingue lanciato da Cohere, supporta 23 lingue, facilitando applicazioni linguistiche diversificate."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3 è progettato per il gioco di ruolo e la compagnia emotiva, supporta una memoria multi-turno ultra-lunga e dialoghi personalizzati, con ampie applicazioni."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o è un modello dinamico, aggiornato in tempo reale per mantenere la versione più recente. Combina una potente comprensione e generazione del linguaggio, adatta a scenari di applicazione su larga scala, inclusi servizi clienti, educazione e supporto tecnico."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 offre progressi nelle capacità chiave per le aziende, inclusi contesti leader del settore fino a 200K token, riduzione significativa della frequenza di allucinazioni del modello, suggerimenti di sistema e una nuova funzionalità di test: chiamate di strumenti."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 offre progressi nelle capacità chiave per le aziende, inclusi contesti leader del settore fino a 200K token, riduzione significativa della frequenza di allucinazioni del modello, suggerimenti di sistema e una nuova funzionalità di test: chiamate di strumenti."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet offre capacità superiori a Opus e velocità più elevate rispetto a Sonnet, mantenendo lo stesso prezzo. Sonnet è particolarmente abile in programmazione, scienza dei dati, elaborazione visiva e compiti di agenti."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku è il modello più veloce e compatto di Anthropic, progettato per risposte quasi istantanee. Ha prestazioni di orientamento rapide e accurate."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus è il modello più potente di Anthropic per gestire compiti altamente complessi. Eccelle in prestazioni, intelligenza, fluidità e comprensione."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet offre un equilibrio ideale tra intelligenza e velocità per i carichi di lavoro aziendali. Fornisce la massima utilità a un prezzo inferiore, affidabile e adatto per distribuzioni su larga scala."
+ },
+ "claude-instant-1.2": {
+ "description": "Il modello di Anthropic è progettato per generazione di testi a bassa latenza e alta capacità, supportando la generazione di centinaia di pagine di testo."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 è un potente assistente di programmazione AI, supporta domande intelligenti e completamento del codice in vari linguaggi di programmazione, migliorando l'efficienza dello sviluppo."
+ },
+ "codegemma": {
+ "description": "CodeGemma è un modello linguistico leggero dedicato a vari compiti di programmazione, supporta iterazioni rapide e integrazione."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma è un modello linguistico leggero dedicato a vari compiti di programmazione, supporta iterazioni rapide e integrazione."
+ },
+ "codellama": {
+ "description": "Code Llama è un LLM focalizzato sulla generazione e discussione di codice, combinando un ampio supporto per i linguaggi di programmazione, adatto per ambienti di sviluppo."
+ },
+ "codellama:13b": {
+ "description": "Code Llama è un LLM focalizzato sulla generazione e discussione di codice, combinando un ampio supporto per i linguaggi di programmazione, adatto per ambienti di sviluppo."
+ },
+ "codellama:34b": {
+ "description": "Code Llama è un LLM focalizzato sulla generazione e discussione di codice, combinando un ampio supporto per i linguaggi di programmazione, adatto per ambienti di sviluppo."
+ },
+ "codellama:70b": {
+ "description": "Code Llama è un LLM focalizzato sulla generazione e discussione di codice, combinando un ampio supporto per i linguaggi di programmazione, adatto per ambienti di sviluppo."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 è un modello di linguaggio di grandi dimensioni addestrato su un ampio set di dati di codice, progettato per risolvere compiti di programmazione complessi."
+ },
+ "codestral": {
+ "description": "Codestral è il primo modello di codice di Mistral AI, offre un supporto eccezionale per i compiti di generazione di codice."
+ },
+ "codestral-latest": {
+ "description": "Codestral è un modello generativo all'avanguardia focalizzato sulla generazione di codice, ottimizzato per compiti di completamento e riempimento intermedio."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B è un modello progettato per seguire istruzioni, dialogo e programmazione."
+ },
+ "cohere-command-r": {
+ "description": "Command R è un modello generativo scalabile mirato a RAG e all'uso di strumenti per abilitare l'IA su scala aziendale."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ è un modello ottimizzato per RAG all'avanguardia progettato per affrontare carichi di lavoro di livello aziendale."
+ },
+ "command-r": {
+ "description": "Command R è un LLM ottimizzato per compiti di dialogo e contesti lunghi, particolarmente adatto per interazioni dinamiche e gestione della conoscenza."
+ },
+ "command-r-plus": {
+ "description": "Command R+ è un modello di linguaggio di grandi dimensioni ad alte prestazioni, progettato per scenari aziendali reali e applicazioni complesse."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct offre capacità di elaborazione di istruzioni altamente affidabili, supportando applicazioni in vari settori."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 combina le eccellenti caratteristiche delle versioni precedenti, migliorando le capacità generali e di codifica."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B è un modello avanzato addestrato per dialoghi ad alta complessità."
+ },
+ "deepseek-chat": {
+ "description": "Un nuovo modello open source che integra capacità generali e di codifica, mantenendo non solo le capacità conversazionali generali del modello Chat originale, ma anche la potente capacità di elaborazione del codice del modello Coder, allineandosi meglio alle preferenze umane. Inoltre, DeepSeek-V2.5 ha ottenuto notevoli miglioramenti in vari aspetti, come i compiti di scrittura e il rispetto delle istruzioni."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 è un modello di codice open source di esperti misti, eccelle nei compiti di codice, paragonabile a GPT4-Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 è un modello di codice open source di esperti misti, eccelle nei compiti di codice, paragonabile a GPT4-Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 è un modello di linguaggio Mixture-of-Experts efficiente, adatto per esigenze di elaborazione economica."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B è il modello di codice progettato di DeepSeek, offre potenti capacità di generazione di codice."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Un nuovo modello open source che integra capacità generali e di codice, mantenendo non solo le capacità di dialogo generali del modello Chat originale e la potente capacità di elaborazione del codice del modello Coder, ma allineandosi anche meglio alle preferenze umane. Inoltre, DeepSeek-V2.5 ha ottenuto notevoli miglioramenti in vari aspetti, come compiti di scrittura e seguire istruzioni."
+ },
+ "emohaa": {
+ "description": "Emohaa è un modello psicologico, con capacità di consulenza professionale, aiuta gli utenti a comprendere i problemi emotivi."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Tuning) offre prestazioni stabili e ottimizzabili, è la scelta ideale per soluzioni a compiti complessi."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Tuning) offre un'eccellente supporto multimodale, focalizzandosi sulla risoluzione efficace di compiti complessi."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro è il modello AI ad alte prestazioni di Google, progettato per l'espansione su una vasta gamma di compiti."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 è un modello multimodale efficiente, supporta l'espansione per applicazioni ampie."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002 è un modello multimodale altamente efficiente, che supporta un'ampia gamma di applicazioni."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 è progettato per gestire scenari di compiti su larga scala, offrendo una velocità di elaborazione senza pari."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924 è il modello sperimentale più recente, con miglioramenti significativi nelle prestazioni sia nei casi d'uso testuali che multimodali."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 offre capacità di elaborazione multimodale ottimizzate, adatte a vari scenari di compiti complessi."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash è il più recente modello AI multimodale di Google, dotato di capacità di elaborazione rapida, supporta input di testo, immagini e video, ed è adatto per un'ampia gamma di compiti di scalabilità efficiente."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 è una soluzione AI multimodale scalabile, supporta un'ampia gamma di compiti complessi."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002 è il modello più recente pronto per la produzione, che offre output di qualità superiore, con miglioramenti significativi in particolare in matematica, contesti lunghi e compiti visivi."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 offre eccellenti capacità di elaborazione multimodale, fornendo maggiore flessibilità per lo sviluppo delle applicazioni."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 combina le più recenti tecnologie ottimizzate, offrendo una capacità di elaborazione dei dati multimodali più efficiente."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro supporta fino a 2 milioni di token, è la scelta ideale per modelli multimodali di medie dimensioni, adatta a un supporto multifunzionale per compiti complessi."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B è adatto per l'elaborazione di compiti di piccole e medie dimensioni, combinando efficienza dei costi."
+ },
+ "gemma2": {
+ "description": "Gemma 2 è un modello efficiente lanciato da Google, coprendo una vasta gamma di scenari applicativi, da applicazioni di piccole dimensioni a elaborazioni di dati complesse."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B è un modello ottimizzato per l'integrazione di compiti specifici e strumenti."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 è un modello efficiente lanciato da Google, coprendo una vasta gamma di scenari applicativi, da applicazioni di piccole dimensioni a elaborazioni di dati complesse."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 è un modello efficiente lanciato da Google, coprendo una vasta gamma di scenari applicativi, da applicazioni di piccole dimensioni a elaborazioni di dati complesse."
+ },
+ "general": {
+ "description": "Spark Lite è un modello linguistico di grandi dimensioni leggero, con latenza estremamente bassa e capacità di elaborazione efficiente, completamente gratuito e aperto, supportando funzionalità di ricerca online in tempo reale. La sua caratteristica di risposta rapida lo rende eccellente per applicazioni di inferenza su dispositivi a bassa potenza e per il fine-tuning del modello, offrendo un ottimo rapporto costo-efficacia e un'esperienza intelligente, in particolare in scenari di domande e risposte, generazione di contenuti e ricerca."
+ },
+ "generalv3": {
+ "description": "Spark Pro è un modello linguistico di grandi dimensioni ad alte prestazioni, ottimizzato per settori professionali, focalizzandosi su matematica, programmazione, medicina, educazione e altro, supportando la ricerca online e plugin integrati per meteo, data e altro. Il modello ottimizzato mostra prestazioni eccellenti e alta efficienza in domande e risposte complesse, comprensione del linguaggio e creazione di testi di alto livello, rendendolo una scelta ideale per scenari di applicazione professionale."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max è la versione più completa, supportando la ricerca online e numerosi plugin integrati. Le sue capacità core completamente ottimizzate, insieme alla definizione dei ruoli di sistema e alla funzionalità di chiamata di funzioni, lo rendono estremamente eccellente e performante in vari scenari di applicazione complessi."
+ },
+ "glm-4": {
+ "description": "GLM-4 è la versione flagship rilasciata a gennaio 2024, attualmente sostituita da GLM-4-0520, più potente."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 è l'ultima versione del modello, progettata per compiti altamente complessi e diversificati, con prestazioni eccezionali."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air è una versione economica, con prestazioni simili a GLM-4, che offre velocità elevate a un prezzo accessibile."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX offre una versione efficiente di GLM-4-Air, con velocità di inferenza fino a 2,6 volte superiore."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools è un modello di agente multifunzionale, ottimizzato per supportare la pianificazione di istruzioni complesse e le chiamate agli strumenti, come la navigazione web, l'interpretazione del codice e la generazione di testo, adatto per l'esecuzione di più compiti."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash è l'ideale per compiti semplici, con la massima velocità e il prezzo più conveniente."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long supporta input di testo ultra-lunghi, adatto per compiti di memoria e gestione di documenti su larga scala."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus, come flagship ad alta intelligenza, ha potenti capacità di elaborazione di testi lunghi e compiti complessi, con prestazioni complessive migliorate."
+ },
+ "glm-4v": {
+ "description": "GLM-4V offre potenti capacità di comprensione e ragionamento visivo, supportando vari compiti visivi."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus ha la capacità di comprendere contenuti video e più immagini, adatto per compiti multimodali."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 offre capacità di elaborazione multimodale ottimizzate, adatte a vari scenari di compiti complessi."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 combina le più recenti tecnologie di ottimizzazione, offrendo capacità di elaborazione dei dati multimodali più efficienti."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 continua il concetto di design leggero ed efficiente."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 è una serie di modelli di testo open source leggeri di Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 è la serie di modelli di testo open source leggeri di Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) offre capacità di elaborazione di istruzioni di base, adatta per applicazioni leggere."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo, adatto a una varietà di compiti di generazione e comprensione del testo, attualmente punta a gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo, adatto a una varietà di compiti di generazione e comprensione del testo, attualmente punta a gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo, adatto a una varietà di compiti di generazione e comprensione del testo, attualmente punta a gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo, adatto a una varietà di compiti di generazione e comprensione del testo, attualmente punta a gpt-3.5-turbo-0125."
+ },
+ "gpt-4": {
+ "description": "GPT-4 offre una finestra di contesto più ampia, in grado di gestire input testuali più lunghi, adatta a scenari che richiedono un'integrazione ampia delle informazioni e analisi dei dati."
+ },
+ "gpt-4-0125-preview": {
+ "description": "L'ultimo modello GPT-4 Turbo ha funzionalità visive. Ora, le richieste visive possono essere effettuate utilizzando il formato JSON e le chiamate di funzione. GPT-4 Turbo è una versione potenziata che offre supporto economico per compiti multimodali. Trova un equilibrio tra accuratezza ed efficienza, adatta a scenari di applicazione che richiedono interazioni in tempo reale."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4 offre una finestra di contesto più ampia, in grado di gestire input testuali più lunghi, adatta a scenari che richiedono un'integrazione ampia delle informazioni e analisi dei dati."
+ },
+ "gpt-4-1106-preview": {
+ "description": "L'ultimo modello GPT-4 Turbo ha funzionalità visive. Ora, le richieste visive possono essere effettuate utilizzando il formato JSON e le chiamate di funzione. GPT-4 Turbo è una versione potenziata che offre supporto economico per compiti multimodali. Trova un equilibrio tra accuratezza ed efficienza, adatta a scenari di applicazione che richiedono interazioni in tempo reale."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "L'ultimo modello GPT-4 Turbo ha funzionalità visive. Ora, le richieste visive possono essere effettuate utilizzando il formato JSON e le chiamate di funzione. GPT-4 Turbo è una versione potenziata che offre supporto economico per compiti multimodali. Trova un equilibrio tra accuratezza ed efficienza, adatta a scenari di applicazione che richiedono interazioni in tempo reale."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4 offre una finestra di contesto più ampia, in grado di gestire input testuali più lunghi, adatta a scenari che richiedono un'integrazione ampia delle informazioni e analisi dei dati."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4 offre una finestra di contesto più ampia, in grado di gestire input testuali più lunghi, adatta a scenari che richiedono un'integrazione ampia delle informazioni e analisi dei dati."
+ },
+ "gpt-4-turbo": {
+ "description": "L'ultimo modello GPT-4 Turbo ha funzionalità visive. Ora, le richieste visive possono essere effettuate utilizzando il formato JSON e le chiamate di funzione. GPT-4 Turbo è una versione potenziata che offre supporto economico per compiti multimodali. Trova un equilibrio tra accuratezza ed efficienza, adatta a scenari di applicazione che richiedono interazioni in tempo reale."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "L'ultimo modello GPT-4 Turbo ha funzionalità visive. Ora, le richieste visive possono essere effettuate utilizzando il formato JSON e le chiamate di funzione. GPT-4 Turbo è una versione potenziata che offre supporto economico per compiti multimodali. Trova un equilibrio tra accuratezza ed efficienza, adatta a scenari di applicazione che richiedono interazioni in tempo reale."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "L'ultimo modello GPT-4 Turbo ha funzionalità visive. Ora, le richieste visive possono essere effettuate utilizzando il formato JSON e le chiamate di funzione. GPT-4 Turbo è una versione potenziata che offre supporto economico per compiti multimodali. Trova un equilibrio tra accuratezza ed efficienza, adatta a scenari di applicazione che richiedono interazioni in tempo reale."
+ },
+ "gpt-4-vision-preview": {
+ "description": "L'ultimo modello GPT-4 Turbo ha funzionalità visive. Ora, le richieste visive possono essere effettuate utilizzando il formato JSON e le chiamate di funzione. GPT-4 Turbo è una versione potenziata che offre supporto economico per compiti multimodali. Trova un equilibrio tra accuratezza ed efficienza, adatta a scenari di applicazione che richiedono interazioni in tempo reale."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o è un modello dinamico, aggiornato in tempo reale per mantenere la versione più recente. Combina una potente comprensione e generazione del linguaggio, adatta a scenari di applicazione su larga scala, inclusi servizi clienti, educazione e supporto tecnico."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o è un modello dinamico, aggiornato in tempo reale per mantenere la versione più recente. Combina una potente comprensione e generazione del linguaggio, adatta a scenari di applicazione su larga scala, inclusi servizi clienti, educazione e supporto tecnico."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o è un modello dinamico, aggiornato in tempo reale per mantenere la versione più recente. Combina una potente comprensione e generazione del linguaggio, adatta a scenari di applicazione su larga scala, inclusi servizi clienti, educazione e supporto tecnico."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini è il modello più recente lanciato da OpenAI dopo il GPT-4 Omni, supporta input visivi e testuali e produce output testuali. Come il loro modello di punta in formato ridotto, è molto più economico rispetto ad altri modelli all'avanguardia recenti e costa oltre il 60% in meno rispetto a GPT-3.5 Turbo. Mantiene un'intelligenza all'avanguardia, offrendo un rapporto qualità-prezzo significativo. GPT-4o mini ha ottenuto un punteggio dell'82% nel test MMLU e attualmente è classificato più in alto di GPT-4 per preferenze di chat."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B è un modello linguistico che combina creatività e intelligenza, unendo diversi modelli di punta."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "Il modello open source innovativo InternLM2.5, con un gran numero di parametri, migliora l'intelligenza del dialogo."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 offre soluzioni di dialogo intelligente in vari scenari."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Il modello Llama 3.1 70B Instruct, con 70B parametri, offre prestazioni eccezionali in generazione di testi di grandi dimensioni e compiti di istruzione."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B offre capacità di ragionamento AI più potenti, adatto per applicazioni complesse, supporta un'elaborazione computazionale elevata garantendo efficienza e precisione."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B è un modello ad alte prestazioni, offre capacità di generazione di testo rapida, particolarmente adatto per scenari applicativi che richiedono efficienza su larga scala e costi contenuti."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Il modello Llama 3.1 8B Instruct, con 8B parametri, supporta l'esecuzione efficiente di compiti di istruzione, offrendo capacità di generazione testuale di alta qualità."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Il modello Llama 3.1 Sonar Huge Online, con 405B parametri, supporta una lunghezza di contesto di circa 127.000 token, progettato per applicazioni di chat online complesse."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Il modello Llama 3.1 Sonar Large Chat, con 70B parametri, supporta una lunghezza di contesto di circa 127.000 token, adatto per compiti di chat offline complessi."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Il modello Llama 3.1 Sonar Large Online, con 70B parametri, supporta una lunghezza di contesto di circa 127.000 token, adatto per compiti di chat ad alta capacità e diversificati."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Il modello Llama 3.1 Sonar Small Chat, con 8B parametri, è progettato per chat offline, supportando una lunghezza di contesto di circa 127.000 token."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Il modello Llama 3.1 Sonar Small Online, con 8B parametri, supporta una lunghezza di contesto di circa 127.000 token, progettato per chat online, in grado di gestire interazioni testuali in modo efficiente."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B offre capacità di elaborazione della complessità senza pari, progettato su misura per progetti ad alta richiesta."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B offre prestazioni di ragionamento di alta qualità, adatto per esigenze applicative in vari scenari."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use offre potenti capacità di invocazione degli strumenti, supporta l'elaborazione efficiente di compiti complessi."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use è un modello ottimizzato per l'uso efficiente degli strumenti, supporta calcoli paralleli rapidi."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 è il modello leader lanciato da Meta, supporta fino a 405B parametri, applicabile a dialoghi complessi, traduzione multilingue e analisi dei dati."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 è il modello leader lanciato da Meta, supporta fino a 405B parametri, applicabile a dialoghi complessi, traduzione multilingue e analisi dei dati."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 è il modello leader lanciato da Meta, supporta fino a 405B parametri, applicabile a dialoghi complessi, traduzione multilingue e analisi dei dati."
+ },
+ "llava": {
+ "description": "LLaVA è un modello multimodale che combina un codificatore visivo e Vicuna, per una potente comprensione visiva e linguistica."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B offre capacità di elaborazione visiva integrate, generando output complessi attraverso input visivi."
+ },
+ "llava:13b": {
+ "description": "LLaVA è un modello multimodale che combina un codificatore visivo e Vicuna, per una potente comprensione visiva e linguistica."
+ },
+ "llava:34b": {
+ "description": "LLaVA è un modello multimodale che combina un codificatore visivo e Vicuna, per una potente comprensione visiva e linguistica."
+ },
+ "mathstral": {
+ "description": "MathΣtral è progettato per la ricerca scientifica e il ragionamento matematico, offre capacità di calcolo efficaci e interpretazione dei risultati."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Un potente modello con 70 miliardi di parametri che eccelle nel ragionamento, nella codifica e nelle ampie applicazioni linguistiche."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Un modello versatile con 8 miliardi di parametri ottimizzato per compiti di dialogo e generazione di testo."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "I modelli di testo solo ottimizzati per istruzioni Llama 3.1 sono progettati per casi d'uso di dialogo multilingue e superano molti dei modelli di chat open source e chiusi disponibili su benchmark industriali comuni."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "I modelli di testo solo ottimizzati per istruzioni Llama 3.1 sono progettati per casi d'uso di dialogo multilingue e superano molti dei modelli di chat open source e chiusi disponibili su benchmark industriali comuni."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "I modelli di testo solo ottimizzati per istruzioni Llama 3.1 sono progettati per casi d'uso di dialogo multilingue e superano molti dei modelli di chat open source e chiusi disponibili su benchmark industriali comuni."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) offre eccellenti capacità di elaborazione linguistica e un'interazione di alta qualità."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) è un potente modello di chat, in grado di gestire esigenze di dialogo complesse."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) offre supporto multilingue, coprendo una vasta gamma di conoscenze di dominio."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite è adatto per ambienti che richiedono alta efficienza e bassa latenza."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo offre capacità superiori di comprensione e generazione del linguaggio, adatto per i compiti computazionali più impegnativi."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite è adatto per ambienti a risorse limitate, offrendo prestazioni bilanciate eccellenti."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo è un modello di linguaggio ad alte prestazioni, supportando una vasta gamma di scenari applicativi."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B è un potente modello pre-addestrato e ottimizzato per istruzioni."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "Il modello Llama 3.1 Turbo 405B offre un supporto di contesto di capacità estremamente grande per l'elaborazione di big data, eccellendo nelle applicazioni di intelligenza artificiale su larga scala."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B offre supporto per dialoghi multilingue ad alta efficienza."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Il modello Llama 3.1 70B è stato ottimizzato per applicazioni ad alto carico, quantizzato a FP8 per fornire una maggiore efficienza computazionale e accuratezza, garantendo prestazioni eccezionali in scenari complessi."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 offre supporto multilingue ed è uno dei modelli generativi leader nel settore."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Il modello Llama 3.1 8B utilizza la quantizzazione FP8, supportando fino a 131.072 token di contesto, ed è un leader tra i modelli open source, adatto per compiti complessi, superando molti benchmark di settore."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct è ottimizzato per scenari di dialogo di alta qualità, con prestazioni eccellenti in varie valutazioni umane."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct è ottimizzato per scenari di dialogo di alta qualità, con prestazioni superiori a molti modelli chiusi."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct è l'ultima versione rilasciata da Meta, ottimizzata per generare dialoghi di alta qualità, superando molti modelli chiusi di punta."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct è progettato per dialoghi di alta qualità, con prestazioni eccezionali nelle valutazioni umane, particolarmente adatto per scenari ad alta interazione."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct è l'ultima versione rilasciata da Meta, ottimizzata per scenari di dialogo di alta qualità, superando molti modelli chiusi di punta."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 offre supporto multilingue ed è uno dei modelli generativi leader nel settore."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct è il modello più grande e potente della serie Llama 3.1 Instruct, un modello avanzato per la generazione di dati e il ragionamento conversazionale, utilizzabile anche come base per un pre-addestramento o un fine-tuning specializzato in determinati settori. I modelli di linguaggio di grandi dimensioni (LLMs) multilingue forniti da Llama 3.1 sono un insieme di modelli generativi pre-addestrati e ottimizzati per le istruzioni, che includono dimensioni di 8B, 70B e 405B (input/output di testo). I modelli di testo ottimizzati per le istruzioni di Llama 3.1 (8B, 70B, 405B) sono stati progettati per casi d'uso conversazionali multilingue e hanno superato molti modelli di chat open source disponibili in benchmark di settore comuni. Llama 3.1 è progettato per usi commerciali e di ricerca in diverse lingue. I modelli di testo ottimizzati per le istruzioni sono adatti a chat simili a assistenti, mentre i modelli pre-addestrati possono adattarsi a vari compiti di generazione di linguaggio naturale. I modelli Llama 3.1 supportano anche l'uso della loro output per migliorare altri modelli, inclusa la generazione di dati sintetici e il raffinamento. Llama 3.1 è un modello di linguaggio autoregressivo basato su un'architettura di trasformatore ottimizzata. Le versioni ottimizzate utilizzano il fine-tuning supervisionato (SFT) e l'apprendimento per rinforzo con feedback umano (RLHF) per allinearsi alle preferenze umane in termini di utilità e sicurezza."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 70B Instruct è una versione aggiornata, con una lunghezza di contesto estesa a 128K, multilinguismo e capacità di ragionamento migliorate. I modelli di linguaggio di grandi dimensioni (LLMs) forniti da Llama 3.1 sono un insieme di modelli generativi pre-addestrati e regolati per istruzioni, che includono dimensioni di 8B, 70B e 405B (input/output testuali). I modelli di testo regolati per istruzioni di Llama 3.1 (8B, 70B, 405B) sono ottimizzati per casi d'uso di conversazione multilingue e superano molti modelli di chat open source disponibili nei test di benchmark di settore comuni. Llama 3.1 è progettato per usi commerciali e di ricerca in più lingue. I modelli di testo regolati per istruzioni sono adatti per chat simili a quelle di un assistente, mentre i modelli pre-addestrati possono adattarsi a vari compiti di generazione di linguaggio naturale. I modelli Llama 3.1 supportano anche l'uso della loro output per migliorare altri modelli, inclusa la generazione di dati sintetici e il raffinamento. Llama 3.1 è un modello di linguaggio autoregressivo basato su un'architettura di trasformatore ottimizzata. Le versioni regolate utilizzano il fine-tuning supervisionato (SFT) e l'apprendimento per rinforzo con feedback umano (RLHF) per allinearsi alle preferenze umane per utilità e sicurezza."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 8B Instruct è una versione aggiornata, con una lunghezza di contesto estesa a 128K, multilinguismo e capacità di ragionamento migliorate. I modelli di linguaggio di grandi dimensioni (LLMs) forniti da Llama 3.1 sono un insieme di modelli generativi pre-addestrati e regolati per istruzioni, che includono dimensioni di 8B, 70B e 405B (input/output testuali). I modelli di testo regolati per istruzioni di Llama 3.1 (8B, 70B, 405B) sono ottimizzati per casi d'uso di conversazione multilingue e superano molti modelli di chat open source disponibili nei test di benchmark di settore comuni. Llama 3.1 è progettato per usi commerciali e di ricerca in più lingue. I modelli di testo regolati per istruzioni sono adatti per chat simili a quelle di un assistente, mentre i modelli pre-addestrati possono adattarsi a vari compiti di generazione di linguaggio naturale. I modelli Llama 3.1 supportano anche l'uso della loro output per migliorare altri modelli, inclusa la generazione di dati sintetici e il raffinamento. Llama 3.1 è un modello di linguaggio autoregressivo basato su un'architettura di trasformatore ottimizzata. Le versioni regolate utilizzano il fine-tuning supervisionato (SFT) e l'apprendimento per rinforzo con feedback umano (RLHF) per allinearsi alle preferenze umane per utilità e sicurezza."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 è un modello di linguaggio di grandi dimensioni (LLM) open source progettato per sviluppatori, ricercatori e aziende, per aiutarli a costruire, sperimentare e scalare responsabilmente le loro idee di AI generativa. Come parte di un sistema di base per l'innovazione della comunità globale, è particolarmente adatto per la creazione di contenuti, AI conversazionale, comprensione del linguaggio, ricerca e applicazioni aziendali."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 è un modello di linguaggio di grandi dimensioni (LLM) open source progettato per sviluppatori, ricercatori e aziende, per aiutarli a costruire, sperimentare e scalare responsabilmente le loro idee di AI generativa. Come parte di un sistema di base per l'innovazione della comunità globale, è particolarmente adatto per dispositivi a bassa potenza e risorse limitate, oltre a garantire tempi di addestramento più rapidi."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B è il modello leggero e veloce più recente di Microsoft AI, con prestazioni vicine a quelle dei modelli leader open source esistenti."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B è il modello Wizard più avanzato di Microsoft AI, mostrando prestazioni estremamente competitive."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V è la nuova generazione di modelli multimodali lanciata da OpenBMB, dotata di eccellenti capacità di riconoscimento OCR e comprensione multimodale, supportando una vasta gamma di scenari applicativi."
+ },
+ "mistral": {
+ "description": "Mistral è un modello da 7B lanciato da Mistral AI, adatto per esigenze di elaborazione linguistica variabili."
+ },
+ "mistral-large": {
+ "description": "Mixtral Large è il modello di punta di Mistral, combinando capacità di generazione di codice, matematica e ragionamento, supporta una finestra di contesto di 128k."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) è un modello di linguaggio avanzato (LLM) con capacità di ragionamento, conoscenza e codifica all'avanguardia."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large è il modello di punta, specializzato in compiti multilingue, ragionamento complesso e generazione di codice, è la scelta ideale per applicazioni di alta gamma."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo è un modello da 12B lanciato in collaborazione tra Mistral AI e NVIDIA, offre prestazioni eccellenti."
+ },
+ "mistral-small": {
+ "description": "Mistral Small può essere utilizzato in qualsiasi compito basato su linguaggio che richiede alta efficienza e bassa latenza."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small è un'opzione economica, veloce e affidabile, adatta per casi d'uso come traduzione, sintesi e analisi del sentiment."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct è noto per le sue alte prestazioni, adatto per vari compiti linguistici."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B è un modello fine-tuned su richiesta, fornendo risposte ottimizzate per i compiti."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 offre capacità computazionali efficienti e comprensione del linguaggio naturale, adatta per una vasta gamma di applicazioni."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) è un super modello di linguaggio, supportando esigenze di elaborazione estremamente elevate."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B è un modello di esperti misti pre-addestrato, utilizzato per compiti testuali generali."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct è un modello standard di settore ad alte prestazioni, ottimizzato per velocità e supporto di contesti lunghi."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo è un modello con 7.3B parametri, supporta più lingue e offre prestazioni elevate nella programmazione."
+ },
+ "mixtral": {
+ "description": "Mixtral è il modello di esperti di Mistral AI, con pesi open source, offre supporto per la generazione di codice e la comprensione del linguaggio."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B offre capacità di calcolo parallelo ad alta tolleranza, adatto per compiti complessi."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral è il modello di esperti di Mistral AI, con pesi open source, offre supporto per la generazione di codice e la comprensione del linguaggio."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K è un modello con capacità di elaborazione di contesti ultra lunghi, adatto per generare testi molto lunghi, soddisfacendo le esigenze di compiti complessi, in grado di gestire contenuti fino a 128.000 token, particolarmente adatto per applicazioni di ricerca, accademiche e generazione di documenti di grandi dimensioni."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K offre capacità di elaborazione di contesti di lunghezza media, in grado di gestire 32.768 token, particolarmente adatto per generare vari documenti lunghi e dialoghi complessi, utilizzato in creazione di contenuti, generazione di report e sistemi di dialogo."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K è progettato per generare compiti di testo brevi, con prestazioni di elaborazione efficienti, in grado di gestire 8.192 token, particolarmente adatto per dialoghi brevi, appunti e generazione rapida di contenuti."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B è una versione aggiornata di Nous Hermes 2, contenente i più recenti dataset sviluppati internamente."
+ },
+ "o1-mini": {
+ "description": "o1-mini è un modello di inferenza rapido ed economico progettato per applicazioni di programmazione, matematica e scienza. Questo modello ha un contesto di 128K e una data di cutoff della conoscenza di ottobre 2023."
+ },
+ "o1-preview": {
+ "description": "o1 è il nuovo modello di inferenza di OpenAI, adatto a compiti complessi che richiedono una vasta conoscenza generale. Questo modello ha un contesto di 128K e una data di cutoff della conoscenza di ottobre 2023."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba è un modello linguistico Mamba 2 focalizzato sulla generazione di codice, offre un forte supporto per compiti avanzati di codifica e ragionamento."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B è un modello compatto ma ad alte prestazioni, specializzato nell'elaborazione batch e in compiti semplici, come la classificazione e la generazione di testo, con buone capacità di ragionamento."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo è un modello da 12B sviluppato in collaborazione con Nvidia, offre prestazioni di ragionamento e codifica eccezionali, facile da integrare e sostituire."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B è un modello di esperti più grande, focalizzato su compiti complessi, offre eccellenti capacità di ragionamento e una maggiore capacità di elaborazione."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B è un modello di esperti sparsi, che utilizza più parametri per aumentare la velocità di ragionamento, adatto per gestire compiti di generazione di linguaggio e codice multilingue."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o è un modello dinamico, aggiornato in tempo reale per mantenere la versione più recente. Combina potenti capacità di comprensione e generazione del linguaggio, adatto a scenari applicativi su larga scala, inclusi servizi clienti, educazione e supporto tecnico."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini è il modello più recente di OpenAI, lanciato dopo GPT-4 Omni, che supporta input visivi e testuali e produce output testuali. Come il loro modello di piccole dimensioni più avanzato, è molto più economico rispetto ad altri modelli all'avanguardia recenti e costa oltre il 60% in meno rispetto a GPT-3.5 Turbo. Mantiene un'intelligenza all'avanguardia, offrendo un notevole rapporto qualità-prezzo. GPT-4o mini ha ottenuto un punteggio dell'82% nel test MMLU e attualmente è classificato più in alto di GPT-4 per preferenze di chat."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini è un modello di inferenza rapido ed economico progettato per applicazioni di programmazione, matematica e scienza. Questo modello ha un contesto di 128K e una data di cutoff della conoscenza di ottobre 2023."
+ },
+ "openai/o1-preview": {
+ "description": "o1 è il nuovo modello di inferenza di OpenAI, adatto a compiti complessi che richiedono una vasta conoscenza generale. Questo modello ha un contesto di 128K e una data di cutoff della conoscenza di ottobre 2023."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B è una libreria di modelli linguistici open source, ottimizzata tramite la strategia di 'C-RLFT (fine-tuning di rinforzo condizionato)'."
+ },
+ "openrouter/auto": {
+ "description": "In base alla lunghezza del contesto, al tema e alla complessità, la tua richiesta verrà inviata a Llama 3 70B Instruct, Claude 3.5 Sonnet (auto-regolato) o GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 è un modello open source leggero lanciato da Microsoft, adatto per integrazioni efficienti e ragionamento su larga scala."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 è un modello open source leggero lanciato da Microsoft, adatto per integrazioni efficienti e ragionamento su larga scala."
+ },
+ "pixtral-12b-2409": {
+ "description": "Il modello Pixtral dimostra potenti capacità in compiti di comprensione di grafici e immagini, domande e risposte su documenti, ragionamento multimodale e rispetto delle istruzioni, in grado di elaborare immagini a risoluzione naturale e proporzioni, e di gestire un numero arbitrario di immagini in una finestra di contesto lunga fino a 128K token."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "Modello di codice Tongyi Qwen."
+ },
+ "qwen-long": {
+ "description": "Qwen è un modello di linguaggio su larga scala che supporta contesti di testo lunghi e funzionalità di dialogo basate su documenti lunghi e multipli."
+ },
+ "qwen-math-plus-latest": {
+ "description": "Il modello matematico Tongyi Qwen è progettato specificamente per la risoluzione di problemi matematici."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "Il modello matematico Tongyi Qwen è progettato specificamente per la risoluzione di problemi matematici."
+ },
+ "qwen-max-latest": {
+ "description": "Modello linguistico su larga scala Tongyi Qwen con miliardi di parametri, supporta input in diverse lingue tra cui cinese e inglese, attualmente il modello API dietro la versione del prodotto Tongyi Qwen 2.5."
+ },
+ "qwen-plus-latest": {
+ "description": "Versione potenziata del modello linguistico su larga scala Tongyi Qwen, supporta input in diverse lingue tra cui cinese e inglese."
+ },
+ "qwen-turbo-latest": {
+ "description": "Il modello linguistico su larga scala Tongyi Qwen, supporta input in diverse lingue tra cui cinese e inglese."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Qwen VL supporta modalità di interazione flessibili, inclusi modelli di domande e risposte multipli e creativi."
+ },
+ "qwen-vl-max": {
+ "description": "Qwen è un modello di linguaggio visivo su larga scala. Rispetto alla versione potenziata, migliora ulteriormente le capacità di ragionamento visivo e di aderenza alle istruzioni, offrendo un livello superiore di percezione e cognizione visiva."
+ },
+ "qwen-vl-plus": {
+ "description": "Qwen è una versione potenziata del modello di linguaggio visivo su larga scala. Migliora notevolmente le capacità di riconoscimento dei dettagli e di riconoscimento del testo, supportando immagini con risoluzione superiore a un milione di pixel e proporzioni di qualsiasi dimensione."
+ },
+ "qwen-vl-v1": {
+ "description": "Inizializzato con il modello di linguaggio Qwen-7B, aggiunge un modello di immagine, con una risoluzione di input dell'immagine di 448."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 è una nuova serie di modelli di linguaggio di grandi dimensioni, con capacità di comprensione e generazione più forti."
+ },
+ "qwen2": {
+ "description": "Qwen2 è la nuova generazione di modelli di linguaggio su larga scala di Alibaba, supporta prestazioni eccellenti per esigenze applicative diversificate."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "Modello da 14B di Tongyi Qwen 2.5, open source."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "Modello da 32B di Tongyi Qwen 2.5, open source."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "Modello da 72B di Tongyi Qwen 2.5, open source."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "Modello da 7B di Tongyi Qwen 2.5, open source."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "Versione open source del modello di codice Tongyi Qwen."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "Versione open source del modello di codice Tongyi Qwen."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Il modello Qwen-Math ha potenti capacità di risoluzione di problemi matematici."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Il modello Qwen-Math ha potenti capacità di risoluzione di problemi matematici."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Il modello Qwen-Math ha potenti capacità di risoluzione di problemi matematici."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 è la nuova generazione di modelli di linguaggio su larga scala di Alibaba, supporta prestazioni eccellenti per esigenze applicative diversificate."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 è la nuova generazione di modelli di linguaggio su larga scala di Alibaba, supporta prestazioni eccellenti per esigenze applicative diversificate."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 è la nuova generazione di modelli di linguaggio su larga scala di Alibaba, supporta prestazioni eccellenti per esigenze applicative diversificate."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini è un LLM compatto, con prestazioni superiori a GPT-3.5, dotato di forti capacità multilingue, supportando inglese e coreano, offrendo soluzioni efficienti e compatte."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) espande le capacità di Solar Mini, focalizzandosi sul giapponese, mantenendo al contempo prestazioni elevate e un uso efficiente in inglese e coreano."
+ },
+ "solar-pro": {
+ "description": "Solar Pro è un LLM altamente intelligente lanciato da Upstage, focalizzato sulla capacità di seguire istruzioni su singolo GPU, con un punteggio IFEval superiore a 80. Attualmente supporta l'inglese, con una versione ufficiale prevista per novembre 2024, che espanderà il supporto linguistico e la lunghezza del contesto."
+ },
+ "step-1-128k": {
+ "description": "Equilibrio tra prestazioni e costi, adatto per scenari generali."
+ },
+ "step-1-256k": {
+ "description": "Capacità di elaborazione di contesto ultra lungo, particolarmente adatto per l'analisi di documenti lunghi."
+ },
+ "step-1-32k": {
+ "description": "Supporta dialoghi di lunghezza media, adatto per vari scenari applicativi."
+ },
+ "step-1-8k": {
+ "description": "Modello di piccole dimensioni, adatto per compiti leggeri."
+ },
+ "step-1-flash": {
+ "description": "Modello ad alta velocità, adatto per dialoghi in tempo reale."
+ },
+ "step-1v-32k": {
+ "description": "Supporta input visivi, migliorando l'esperienza di interazione multimodale."
+ },
+ "step-1v-8k": {
+ "description": "Modello visivo di piccole dimensioni, adatto per compiti di base di testo e immagine."
+ },
+ "step-2-16k": {
+ "description": "Supporta interazioni di contesto su larga scala, adatto per scenari di dialogo complessi."
+ },
+ "taichu_llm": {
+ "description": "Il modello linguistico Taichu di Zīdōng ha una straordinaria capacità di comprensione del linguaggio e abilità in creazione di testi, domande di conoscenza, programmazione, calcoli matematici, ragionamento logico, analisi del sentimento e sintesi di testi. Combina in modo innovativo il pre-addestramento su grandi dati con una ricca conoscenza multi-sorgente, affinando continuamente la tecnologia degli algoritmi e assorbendo costantemente nuove conoscenze da dati testuali massivi, migliorando continuamente le prestazioni del modello. Fornisce agli utenti informazioni e servizi più convenienti e un'esperienza più intelligente."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V integra capacità di comprensione delle immagini, trasferimento di conoscenze e attribuzione logica, eccellendo nel campo delle domande e risposte basate su testo e immagini."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) offre capacità di calcolo potenziate attraverso strategie e architetture di modelli efficienti."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) è adatto per compiti di istruzione dettagliati, offrendo eccellenti capacità di elaborazione linguistica."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 è un modello di linguaggio fornito da Microsoft AI, particolarmente efficace in dialoghi complessi, multilingue, ragionamento e assistenti intelligenti."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 è un modello di linguaggio fornito da Microsoft AI, particolarmente efficace in dialoghi complessi, multilingue, ragionamento e assistenti intelligenti."
+ },
+ "yi-large": {
+ "description": "Un nuovo modello con centinaia di miliardi di parametri, offre capacità eccezionali di risposta e generazione di testi."
+ },
+ "yi-large-fc": {
+ "description": "Basato sul modello yi-large, supporta e potenzia le capacità di chiamata di strumenti, adatto per vari scenari aziendali che richiedono la costruzione di agenti o flussi di lavoro."
+ },
+ "yi-large-preview": {
+ "description": "Versione iniziale, si consiglia di utilizzare yi-large (nuova versione)."
+ },
+ "yi-large-rag": {
+ "description": "Servizio avanzato basato sul modello yi-large, combina tecnologie di recupero e generazione per fornire risposte precise, con servizi di ricerca in tempo reale su tutto il web."
+ },
+ "yi-large-turbo": {
+ "description": "Eccellente rapporto qualità-prezzo e prestazioni superiori. Ottimizzazione ad alta precisione in base a prestazioni, velocità di inferenza e costi."
+ },
+ "yi-medium": {
+ "description": "Modello di dimensioni medie aggiornato e ottimizzato, con capacità bilanciate e un buon rapporto qualità-prezzo. Ottimizzazione profonda delle capacità di seguire istruzioni."
+ },
+ "yi-medium-200k": {
+ "description": "Finestra di contesto ultra lunga di 200K, offre capacità di comprensione e generazione di testi lunghi."
+ },
+ "yi-spark": {
+ "description": "Piccolo e potente, modello leggero e veloce. Offre capacità potenziate di calcolo matematico e scrittura di codice."
+ },
+ "yi-vision": {
+ "description": "Modello per compiti visivi complessi, offre elevate prestazioni nella comprensione e analisi delle immagini."
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/plugin.json b/DigitalHumanWeb/locales/it-IT/plugin.json
new file mode 100644
index 0000000..01b1861
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Argomenti di chiamata",
+ "function_call": "Chiamata di funzione",
+ "off": "Disattivato",
+ "on": "Visualizza informazioni sulla chiamata del plugin",
+ "payload": "carico del plugin",
+ "response": "Risposta",
+ "tool_call": "richiesta di chiamata dello strumento"
+ },
+ "detailModal": {
+ "info": {
+ "description": "Descrizione API",
+ "name": "Nome API"
+ },
+ "tabs": {
+ "info": "Abilità del plugin",
+ "manifest": "File di installazione",
+ "settings": "Impostazioni"
+ },
+ "title": "Dettagli del plugin"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Stai per eliminare questo plugin locale. Una volta eliminato, non sarà possibile recuperarlo. Vuoi eliminare questo plugin?",
+ "customParams": {
+ "useProxy": {
+ "label": "Installa tramite proxy (se si verificano errori di accesso cross-origin, prova ad abilitare questa opzione e reinstallare)"
+ }
+ },
+ "deleteSuccess": "Plugin eliminato con successo",
+ "manifest": {
+ "identifier": {
+ "desc": "Identificatore univoco del plugin",
+ "label": "Identificatore"
+ },
+ "mode": {
+ "local": "Configurazione visuale",
+ "local-tooltip": "Configurazione visuale non supportata al momento",
+ "url": "Collegamento online"
+ },
+ "name": {
+ "desc": "Titolo del plugin",
+ "label": "Titolo",
+ "placeholder": "Motore di ricerca"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "Autore del plugin",
+ "label": "Autore"
+ },
+ "avatar": {
+ "desc": "Icona del plugin, puoi usare Emoji o un URL",
+ "label": "Icona"
+ },
+ "description": {
+ "desc": "Descrizione del plugin",
+ "label": "Descrizione",
+ "placeholder": "Ottieni informazioni dai motori di ricerca"
+ },
+ "formFieldRequired": "Questo campo è obbligatorio",
+ "homepage": {
+ "desc": "Homepage del plugin",
+ "label": "Homepage"
+ },
+ "identifier": {
+ "desc": "Identificatore univoco del plugin, verrà riconosciuto automaticamente dal manifesto",
+ "errorDuplicate": "Identificatore duplicato rispetto a un plugin esistente. Modifica l'identificatore",
+ "label": "Identificatore",
+ "pattenErrorMessage": "Puoi inserire solo caratteri alfanumerici, - e _"
+ },
+ "manifest": {
+ "desc": "{{appName}} installerà il plugin tramite questo link",
+ "label": "URL del file di descrizione del plugin (Manifest)",
+ "preview": "Anteprima Manifest",
+ "refresh": "Aggiorna"
+ },
+ "title": {
+ "desc": "Titolo del plugin",
+ "label": "Titolo",
+ "placeholder": "Motore di ricerca"
+ }
+ },
+ "metaConfig": "Configurazione metadati del plugin",
+ "modalDesc": "Dopo aver aggiunto un plugin personalizzato, potrà essere utilizzato per la convalida dello sviluppo del plugin o direttamente nelle conversazioni. Per lo sviluppo del plugin, consulta il <1>documento di sviluppo↗>",
+ "openai": {
+ "importUrl": "Importa da URL",
+ "schema": "Schema"
+ },
+ "preview": {
+ "card": "Anteprima dell'aspetto del plugin",
+ "desc": "Anteprima della descrizione del plugin",
+ "title": "Anteprima del nome del plugin"
+ },
+ "save": "Installa plugin",
+ "saveSuccess": "Impostazioni del plugin salvate con successo",
+ "tabs": {
+ "manifest": "Elenco delle funzionalità (Manifest)",
+ "meta": "Metadati del plugin"
+ },
+ "title": {
+ "create": "Aggiungi plugin personalizzato",
+ "edit": "Modifica plugin personalizzato"
+ },
+ "type": {
+ "lobe": "Plugin LobeChat",
+ "openai": "Plugin OpenAI"
+ },
+ "update": "Aggiorna",
+ "updateSuccess": "Impostazioni del plugin aggiornate con successo"
+ },
+ "error": {
+ "fetchError": "Errore nel recupero del collegamento al manifesto. Assicurati che il collegamento sia valido e che sia consentito l'accesso cross-origin.",
+ "installError": "Installazione del plugin {{name}} fallita",
+ "manifestInvalid": "Il manifesto non è conforme allo standard. Risultato della convalida: \n\n {{error}}",
+ "noManifest": "Il file di descrizione non esiste",
+ "openAPIInvalid": "Analisi dell'OpenAPI fallita. Errore: \n\n {{error}}",
+ "reinstallError": "Ricaricamento del plugin {{name}} fallito",
+ "urlError": "Il collegamento non restituisce contenuti nel formato JSON. Assicurati che il collegamento sia valido"
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Deprecato",
+ "local.config": "Configurazione",
+ "local.title": "Personalizzato"
+ }
+ },
+ "loading": {
+ "content": "Caricamento del plugin in corso...",
+ "plugin": "Esecuzione del plugin in corso..."
+ },
+ "pluginList": "Elenco dei plugin",
+ "setting": "Impostazioni del plugin",
+ "settings": {
+ "indexUrl": {
+ "title": "Indice di mercato",
+ "tooltip": "Modifica non supportata online. Imposta tramite variabili d'ambiente durante il deploy"
+ },
+ "modalDesc": "Dopo aver configurato l'indirizzo del mercato dei plugin, è possibile utilizzare un mercato dei plugin personalizzato",
+ "title": "Impostazioni del mercato dei plugin"
+ },
+ "showInPortal": "Si prega di visualizzare i dettagli nell'area di lavoro",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Stai per disinstallare questo plugin. La disinstallazione cancellerà la configurazione del plugin. Conferma l'operazione",
+ "detail": "Dettagli",
+ "install": "Installa",
+ "manifest": "Modifica file di installazione",
+ "settings": "Impostazioni",
+ "uninstall": "Disinstalla"
+ },
+ "communityPlugin": "Plugin della community",
+ "customPlugin": "Personalizzato",
+ "empty": "Nessun plugin installato al momento",
+ "installAllPlugins": "Installa tutti",
+ "networkError": "Impossibile recuperare il negozio dei plugin. Controlla la connessione di rete e riprova",
+ "placeholder": "Cerca per nome, descrizione o parola chiave del plugin...",
+ "releasedAt": "Pubblicato il {{createdAt}}",
+ "tabs": {
+ "all": "Tutti",
+ "installed": "Installati"
+ },
+ "title": "Negozio dei plugin"
+ },
+ "unknownPlugin": "Plugin sconosciuto"
+}
diff --git a/DigitalHumanWeb/locales/it-IT/portal.json b/DigitalHumanWeb/locales/it-IT/portal.json
new file mode 100644
index 0000000..151d67f
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Artefatti",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Blocco",
+ "file": "File"
+ }
+ },
+ "Plugins": "Plugin",
+ "actions": {
+ "genAiMessage": "Genera messaggio AI",
+ "summary": "Sommario",
+ "summaryTooltip": "Sommario del contenuto attuale"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Codice",
+ "preview": "Anteprima"
+ },
+ "svg": {
+ "copyAsImage": "Copia come immagine",
+ "copyFail": "Copia fallita, motivo dell'errore: {{error}}",
+ "copySuccess": "Immagine copiata con successo",
+ "download": {
+ "png": "Scarica come PNG",
+ "svg": "Scarica come SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "La lista degli Artefatti attuale è vuota, si prega di utilizzare i plugin necessari durante la sessione e poi controllare di nuovo",
+ "emptyKnowledgeList": "L'elenco delle conoscenze attuale è vuoto. Si prega di attivare il database delle conoscenze durante la conversazione per visualizzarlo.",
+ "files": "File",
+ "messageDetail": "Dettagli del messaggio",
+ "title": "Finestra di espansione"
+}
diff --git a/DigitalHumanWeb/locales/it-IT/providers.json b/DigitalHumanWeb/locales/it-IT/providers.json
new file mode 100644
index 0000000..f965aba
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI è una piattaforma di modelli e servizi AI lanciata da 360 Company, che offre vari modelli avanzati di elaborazione del linguaggio naturale, tra cui 360GPT2 Pro, 360GPT Pro, 360GPT Turbo e 360GPT Turbo Responsibility 8K. Questi modelli combinano parametri su larga scala e capacità multimodali, trovando ampio utilizzo in generazione di testo, comprensione semantica, sistemi di dialogo e generazione di codice. Con strategie di prezzo flessibili, 360 AI soddisfa le esigenze diversificate degli utenti, supportando l'integrazione degli sviluppatori e promuovendo l'innovazione e lo sviluppo delle applicazioni intelligenti."
+ },
+ "anthropic": {
+ "description": "Anthropic è un'azienda focalizzata sulla ricerca e sviluppo dell'intelligenza artificiale, che offre una serie di modelli linguistici avanzati, come Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus e Claude 3 Haiku. Questi modelli raggiungono un equilibrio ideale tra intelligenza, velocità e costi, adatti a una varietà di scenari applicativi, dalle operazioni aziendali a risposte rapide. Claude 3.5 Sonnet, il loro modello più recente, ha mostrato prestazioni eccezionali in diverse valutazioni, mantenendo un alto rapporto qualità-prezzo."
+ },
+ "azure": {
+ "description": "Azure offre una varietà di modelli AI avanzati, tra cui GPT-3.5 e l'ultima serie GPT-4, supportando diversi tipi di dati e compiti complessi, con un impegno per soluzioni AI sicure, affidabili e sostenibili."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligence è un'azienda focalizzata sulla ricerca e sviluppo di modelli di intelligenza artificiale di grandi dimensioni, i cui modelli eccellono in compiti in cinese come enciclopedie di conoscenza, elaborazione di testi lunghi e creazione di contenuti, superando i modelli mainstream esteri. Baichuan Intelligence ha anche capacità multimodali leader nel settore, mostrando prestazioni eccezionali in diverse valutazioni autorevoli. I suoi modelli includono Baichuan 4, Baichuan 3 Turbo e Baichuan 3 Turbo 128k, ottimizzati per diversi scenari applicativi, offrendo soluzioni ad alto rapporto qualità-prezzo."
+ },
+ "bedrock": {
+ "description": "Bedrock è un servizio offerto da Amazon AWS, focalizzato sulla fornitura di modelli linguistici e visivi AI avanzati per le aziende. La sua famiglia di modelli include la serie Claude di Anthropic, la serie Llama 3.1 di Meta e altro, coprendo una varietà di opzioni da leggere a ad alte prestazioni, supportando generazione di testo, dialogo, elaborazione di immagini e altro, adatta a diverse applicazioni aziendali di varie dimensioni e necessità."
+ },
+ "deepseek": {
+ "description": "DeepSeek è un'azienda focalizzata sulla ricerca e applicazione della tecnologia AI, il cui ultimo modello DeepSeek-V2.5 combina capacità di dialogo generico e elaborazione del codice, realizzando miglioramenti significativi nell'allineamento delle preferenze umane, nei compiti di scrittura e nel rispetto delle istruzioni."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI è un fornitore leader di servizi di modelli linguistici avanzati, focalizzato su chiamate funzionali e elaborazione multimodale. Il suo ultimo modello Firefunction V2, basato su Llama-3, è ottimizzato per chiamate di funzione, dialogo e rispetto delle istruzioni. Il modello di linguaggio visivo FireLLaVA-13B supporta input misti di immagini e testo. Altri modelli notevoli includono la serie Llama e la serie Mixtral, offrendo supporto efficiente per il rispetto e la generazione di istruzioni multilingue."
+ },
+ "github": {
+ "description": "Con i modelli di GitHub, gli sviluppatori possono diventare ingegneri AI e costruire con i modelli AI leader del settore."
+ },
+ "google": {
+ "description": "La serie Gemini di Google è il suo modello AI più avanzato e versatile, sviluppato da Google DeepMind, progettato per il multimodale, supportando la comprensione e l'elaborazione senza soluzione di continuità di testo, codice, immagini, audio e video. Adatto a una varietà di ambienti, dai data center ai dispositivi mobili, ha notevolmente migliorato l'efficienza e l'ampiezza delle applicazioni dei modelli AI."
+ },
+ "groq": {
+ "description": "Il motore di inferenza LPU di Groq ha mostrato prestazioni eccezionali nei recenti benchmark indipendenti sui modelli di linguaggio di grandi dimensioni (LLM), ridefinendo gli standard delle soluzioni AI con la sua incredibile velocità ed efficienza. Groq rappresenta una velocità di inferenza istantanea, mostrando buone prestazioni nelle implementazioni basate su cloud."
+ },
+ "minimax": {
+ "description": "MiniMax è un'azienda di tecnologia dell'intelligenza artificiale generale fondata nel 2021, dedicata alla co-creazione di intelligenza con gli utenti. MiniMax ha sviluppato modelli generali di diverse modalità, tra cui un modello di testo MoE con trilioni di parametri, un modello vocale e un modello visivo. Ha anche lanciato applicazioni come Conch AI."
+ },
+ "mistral": {
+ "description": "Mistral offre modelli avanzati generali, professionali e di ricerca, ampiamente utilizzati in ragionamenti complessi, compiti multilingue, generazione di codice e altro, consentendo agli utenti di integrare funzionalità personalizzate tramite interfacce di chiamata funzionale."
+ },
+ "moonshot": {
+ "description": "Moonshot è una piattaforma open source lanciata da Beijing Dark Side Technology Co., Ltd., che offre vari modelli di elaborazione del linguaggio naturale, con ampie applicazioni, inclusi ma non limitati a creazione di contenuti, ricerca accademica, raccomandazioni intelligenti, diagnosi mediche e altro, supportando l'elaborazione di testi lunghi e compiti di generazione complessi."
+ },
+ "novita": {
+ "description": "Novita AI è una piattaforma che offre API per vari modelli di linguaggio di grandi dimensioni e generazione di immagini AI, flessibile, affidabile e conveniente. Supporta i più recenti modelli open source come Llama3 e Mistral, fornendo soluzioni API complete, user-friendly e scalabili per lo sviluppo di applicazioni AI, adatte alla rapida crescita delle startup AI."
+ },
+ "ollama": {
+ "description": "I modelli forniti da Ollama coprono ampiamente aree come generazione di codice, operazioni matematiche, elaborazione multilingue e interazioni conversazionali, supportando esigenze diversificate per implementazioni aziendali e localizzate."
+ },
+ "openai": {
+ "description": "OpenAI è un'agenzia di ricerca sull'intelligenza artificiale leader a livello globale, i cui modelli come la serie GPT hanno spinto in avanti il campo dell'elaborazione del linguaggio naturale. OpenAI si impegna a trasformare diversi settori attraverso soluzioni AI innovative ed efficienti. I loro prodotti offrono prestazioni e costi notevoli, trovando ampio utilizzo nella ricerca, nel commercio e nelle applicazioni innovative."
+ },
+ "openrouter": {
+ "description": "OpenRouter è una piattaforma di servizio che offre interfacce per vari modelli di grandi dimensioni all'avanguardia, supportando OpenAI, Anthropic, LLaMA e altro, adatta a esigenze di sviluppo e applicazione diversificate. Gli utenti possono scegliere in modo flessibile il modello e il prezzo ottimali in base alle proprie necessità, migliorando l'esperienza AI."
+ },
+ "perplexity": {
+ "description": "Perplexity è un fornitore leader di modelli di generazione di dialogo, offrendo vari modelli avanzati Llama 3.1, supportando applicazioni online e offline, particolarmente adatti per compiti complessi di elaborazione del linguaggio naturale."
+ },
+ "qwen": {
+ "description": "Qwen è un modello di linguaggio di grande scala sviluppato autonomamente da Alibaba Cloud, con potenti capacità di comprensione e generazione del linguaggio naturale. Può rispondere a varie domande, creare contenuti testuali, esprimere opinioni e scrivere codice, svolgendo un ruolo in vari settori."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow si impegna ad accelerare l'AGI per il bene dell'umanità, migliorando l'efficienza dell'AI su larga scala attraverso stack GenAI facili da usare e a basso costo."
+ },
+ "spark": {
+ "description": "Il modello Spark di iFlytek offre potenti capacità AI in vari settori e lingue, utilizzando tecnologie avanzate di elaborazione del linguaggio naturale per costruire applicazioni innovative adatte a scenari verticali come hardware intelligente, assistenza sanitaria intelligente e finanza intelligente."
+ },
+ "stepfun": {
+ "description": "Il modello StepFun offre capacità multimodali e di ragionamento complesso leader nel settore, supportando la comprensione di testi molto lunghi e potenti funzionalità di motore di ricerca autonomo."
+ },
+ "taichu": {
+ "description": "L'Istituto di Automazione dell'Accademia Cinese delle Scienze e l'Istituto di Ricerca sull'Intelligenza Artificiale di Wuhan hanno lanciato una nuova generazione di modelli di grandi dimensioni multimodali, supportando domande e risposte a più turni, creazione di testi, generazione di immagini, comprensione 3D, analisi dei segnali e altre attività di domanda e risposta complete, con capacità cognitive, di comprensione e di creazione più forti, offrendo un'esperienza interattiva completamente nuova."
+ },
+ "togetherai": {
+ "description": "Together AI si impegna a raggiungere prestazioni leader attraverso modelli AI innovativi, offrendo ampie capacità di personalizzazione, inclusi supporto per scalabilità rapida e processi di distribuzione intuitivi, per soddisfare le varie esigenze aziendali."
+ },
+ "upstage": {
+    "description": "Upstage si concentra sullo sviluppo di modelli AI per varie esigenze commerciali, inclusi Solar LLM e document AI, con l'obiettivo di realizzare un'intelligenza artificiale generale (AGI) per il lavoro. Crea semplici agenti di dialogo tramite Chat API e supporta chiamate funzionali, traduzioni, embedding e applicazioni specifiche del settore."
+ },
+ "zeroone": {
+ "description": "01.AI si concentra sulla tecnologia AI dell'era 2.0, promuovendo attivamente l'innovazione e l'applicazione di \"uomo + intelligenza artificiale\", utilizzando modelli potenti e tecnologie AI avanzate per migliorare la produttività umana e realizzare l'abilitazione tecnologica."
+ },
+ "zhipu": {
+ "description": "Zhipu AI offre una piattaforma aperta per modelli multimodali e linguistici, supportando una vasta gamma di scenari applicativi AI, inclusi elaborazione del testo, comprensione delle immagini e assistenza alla programmazione."
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/ragEval.json b/DigitalHumanWeb/locales/it-IT/ragEval.json
new file mode 100644
index 0000000..91284f2
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Nuovo",
+ "description": {
+ "placeholder": "Descrizione del dataset (opzionale)"
+ },
+ "name": {
+ "placeholder": "Nome del dataset",
+ "required": "Si prega di inserire il nome del dataset"
+ },
+ "title": "Aggiungi dataset"
+ },
+ "dataset": {
+ "addNewButton": "Crea dataset",
+ "emptyGuide": "Il dataset attuale è vuoto, si prega di crearne uno.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Importa dati"
+ },
+ "columns": {
+ "actions": "Operazioni",
+ "ideal": {
+ "title": "Risposta ideale"
+ },
+ "question": {
+ "title": "Domanda"
+ },
+ "referenceFiles": {
+ "title": "File di riferimento"
+ }
+ },
+ "notSelected": "Si prega di selezionare un dataset a sinistra",
+ "title": "Dettagli del dataset"
+ },
+ "title": "Dataset"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Nuovo",
+ "datasetId": {
+ "placeholder": "Seleziona il tuo dataset di valutazione",
+ "required": "Si prega di selezionare un dataset di valutazione"
+ },
+ "description": {
+ "placeholder": "Descrizione del compito di valutazione (opzionale)"
+ },
+ "name": {
+ "placeholder": "Nome del compito di valutazione",
+ "required": "Si prega di inserire il nome del compito di valutazione"
+ },
+ "title": "Aggiungi compito di valutazione"
+ },
+ "addNewButton": "Crea valutazione",
+ "emptyGuide": "Attualmente non ci sono compiti di valutazione, inizia a crearne uno.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Controlla stato",
+ "confirmDelete": "Sei sicuro di voler eliminare questa valutazione?",
+ "confirmRun": "Sei sicuro di voler avviare l'esecuzione? L'esecuzione avverrà in modo asincrono in background, chiudere la pagina non influenzerà l'esecuzione del compito asincrono.",
+ "downloadRecords": "Scarica valutazione",
+ "retry": "Riprova",
+ "run": "Esegui",
+ "title": "Operazioni"
+ },
+ "datasetId": {
+ "title": "Dataset"
+ },
+ "name": {
+ "title": "Nome del compito di valutazione"
+ },
+ "records": {
+ "title": "Numero di registrazioni di valutazione"
+ },
+ "referenceFiles": {
+ "title": "File di riferimento"
+ },
+ "status": {
+ "error": "Errore durante l'esecuzione",
+ "pending": "In attesa di esecuzione",
+ "processing": "In esecuzione",
+ "success": "Esecuzione riuscita",
+ "title": "Stato"
+ }
+ },
+ "title": "Elenco dei compiti di valutazione"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/setting.json b/DigitalHumanWeb/locales/it-IT/setting.json
new file mode 100644
index 0000000..03dfa64
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "Informazioni"
+ },
+ "agentTab": {
+ "chat": "Preferenze di chat",
+ "meta": "Informazioni assistente",
+ "modal": "Impostazioni modello",
+ "plugin": "Impostazioni plugin",
+ "prompt": "Impostazioni ruolo",
+ "tts": "Servizio vocale"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "Selezionando di inviare i dati di telemetria, puoi aiutarci a migliorare l'esperienza complessiva degli utenti di {{appName}}",
+ "title": "Invio dati anonimi"
+ },
+ "title": "Analisi"
+ },
+ "danger": {
+ "clear": {
+ "action": "Cancella subito",
+ "confirm": "Confermi di cancellare tutti i dati della chat?",
+ "desc": "Cancellerà tutti i dati della sessione, inclusi gli assistenti, i file, i messaggi, i plugin, ecc.",
+ "success": "Tutti i messaggi della sessione sono stati cancellati",
+ "title": "Cancella tutti i messaggi della sessione"
+ },
+ "reset": {
+ "action": "Ripristina subito",
+ "confirm": "Confermi di ripristinare tutte le impostazioni?",
+ "currentVersion": "Versione corrente",
+ "desc": "Ripristina tutte le impostazioni ai valori predefiniti",
+ "success": "Tutte le impostazioni sono state ripristinate con successo",
+ "title": "Ripristina tutte le impostazioni"
+ }
+ },
+ "header": {
+ "desc": "Preferenze e impostazioni del modello.",
+ "global": "Impostazioni globali",
+ "session": "Impostazioni della sessione",
+ "sessionDesc": "Impostazioni del personaggio e preferenze di sessione.",
+ "sessionWithName": "Impostazioni della sessione · {{name}}",
+ "title": "Impostazioni"
+ },
+ "llm": {
+    "aesGcm": "La tua chiave e l'indirizzo dell'agente saranno crittografati utilizzando l'algoritmo di crittografia <1>AES-GCM</1>",
+ "apiKey": {
+ "desc": "Inserisci la tua chiave API {{name}}",
+ "placeholder": "Chiave API {{name}}",
+ "title": "Chiave API"
+ },
+ "checker": {
+ "button": "Verifica",
+ "desc": "Verifica se la chiave API e l'indirizzo del proxy sono stati inseriti correttamente",
+ "pass": "Verifica superata",
+ "title": "Verifica di connettività"
+ },
+ "customModelCards": {
+ "addNew": "Crea e aggiungi il modello {{id}}",
+ "config": "Configura il modello",
+ "confirmDelete": "Stai per eliminare questo modello personalizzato, l'eliminazione non potrà essere annullata, procedere con cautela.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Campo effettivo richiesto in Azure OpenAI",
+ "placeholder": "Inserisci il nome del deployment del modello in Azure",
+ "title": "Nome del deployment del modello"
+ },
+ "displayName": {
+ "placeholder": "Inserisci il nome di visualizzazione del modello, ad esempio ChatGPT, GPT-4, ecc.",
+ "title": "Nome di visualizzazione del modello"
+ },
+ "files": {
+ "extra": "L'attuale implementazione del caricamento dei file è solo una soluzione temporanea, limitata a tentativi personali. Ti preghiamo di attendere implementazioni complete per la capacità di caricamento dei file.",
+ "title": "Supporto per il caricamento dei file"
+ },
+ "functionCall": {
+ "extra": "Questa configurazione attiverà solo la capacità di chiamata di funzioni all'interno dell'app, se la chiamata di funzioni è supportata dipende interamente dal modello stesso, ti invitiamo a testare l'usabilità della chiamata di funzioni di questo modello.",
+ "title": "Supporto per la chiamata di funzione"
+ },
+ "id": {
+ "extra": "Sarà visualizzato come etichetta del modello",
+ "placeholder": "Inserisci l'ID del modello, ad esempio gpt-4-turbo-preview o claude-2.1",
+ "title": "ID del modello"
+ },
+ "modalTitle": "Configurazione del modello personalizzato",
+ "tokens": {
+ "title": "Numero massimo di token",
+ "unlimited": "illimitato"
+ },
+ "vision": {
+ "extra": "Questa configurazione attiverà solo la capacità di caricamento delle immagini nell'app, se il riconoscimento è supportato dipende interamente dal modello stesso, ti invitiamo a testare l'usabilità del riconoscimento visivo di questo modello.",
+ "title": "Supporto per il riconoscimento visivo"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "Il modo di richiesta del client consente di avviare direttamente una richiesta di sessione dal browser, migliorando i tempi di risposta",
+ "title": "Utilizzo del modo di richiesta del client"
+ },
+ "fetcher": {
+ "fetch": "Ottenere l'elenco dei modelli",
+ "fetching": "Recupero dell'elenco dei modelli in corso...",
+ "latestTime": "Ultimo aggiornamento: {{time}}",
+ "noLatestTime": "Nessun elenco disponibile al momento"
+ },
+ "helpDoc": "Guida alla configurazione",
+ "modelList": {
+ "desc": "Seleziona i modelli da visualizzare durante la sessione, i modelli selezionati verranno mostrati nell'elenco dei modelli",
+ "placeholder": "Seleziona un modello dall'elenco",
+ "title": "Elenco dei modelli",
+ "total": "Totale modelli disponibili: {{count}}"
+ },
+ "proxyUrl": {
+ "desc": "Deve includere http(s):// oltre all'indirizzo predefinito",
+ "title": "Indirizzo del proxy API"
+ },
+    "waitingForMore": "Altri modelli sono in fase di <1>pianificazione per l'integrazione</1>, resta sintonizzato"
+ },
+ "plugin": {
+ "addTooltip": "Aggiungi plugin personalizzato",
+ "clearDeprecated": "Rimuovi plugin non validi",
+    "empty": "Nessun plugin installato al momento, visita il <1>negozio dei plugin</1> per esplorare",
+ "installStatus": {
+ "deprecated": "Disinstallato"
+ },
+ "settings": {
+ "hint": "Si prega di compilare le seguenti configurazioni in base alla descrizione",
+ "title": "Configurazione del plugin {{id}}",
+ "tooltip": "Configurazione del plugin"
+ },
+ "store": "Negozio dei plugin"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "backgroundColor": {
+ "title": "Colore di sfondo"
+ },
+ "description": {
+ "placeholder": "Inserisci la descrizione dell'assistente",
+ "title": "Descrizione dell'assistente"
+ },
+ "name": {
+ "placeholder": "Inserisci il nome dell'assistente",
+ "title": "Nome"
+ },
+ "prompt": {
+ "placeholder": "Inserisci la parola di prompt del ruolo",
+ "title": "Impostazione del ruolo"
+ },
+ "tag": {
+ "placeholder": "Inserisci un'etichetta",
+ "title": "Etichetta"
+ },
+ "title": "Informazioni sull'assistente"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Una volta superato questo numero di messaggi, verrà creato automaticamente un argomento",
+ "title": "Soglia dei messaggi"
+ },
+ "chatStyleType": {
+ "title": "Stile della finestra di chat",
+ "type": {
+ "chat": "Modalità conversazione",
+ "docs": "Modalità documenti"
+ }
+ },
+ "compressThreshold": {
+ "desc": "Quando la lunghezza dei messaggi storici non compressi supera questo valore, verrà eseguita la compressione",
+ "title": "Soglia di compressione della lunghezza dei messaggi storici"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Se creare automaticamente un argomento durante la conversazione, valido solo per le conversazioni temporanee",
+ "title": "Abilita la creazione automatica di argomenti"
+ },
+ "enableCompressThreshold": {
+ "title": "Abilita la soglia di compressione della lunghezza dei messaggi storici"
+ },
+ "enableHistoryCount": {
+ "alias": "Illimitato",
+ "limited": "Include solo {{number}} messaggi di conversazione",
+ "setlimited": "Imposta il numero di messaggi storici da utilizzare",
+ "title": "Limita il numero di messaggi storici",
+ "unlimited": "Numero illimitato di messaggi storici"
+ },
+ "historyCount": {
+ "desc": "Numero di messaggi inclusi in ogni richiesta (inclusi gli ultimi messaggi scritti, ogni domanda e risposta conta come 1)",
+ "title": "Numero di messaggi inclusi"
+ },
+ "inputTemplate": {
+ "desc": "Il template verrà popolato con l'ultimo messaggio dell'utente",
+ "placeholder": "Il modello di input {{text}} verrà sostituito con le informazioni in tempo reale",
+ "title": "Pre-elaborazione dell'input dell'utente"
+ },
+ "title": "Impostazioni della chat"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Abilita limite di risposta singola"
+ },
+ "frequencyPenalty": {
+ "desc": "Più alto è il valore, più probabile è la riduzione delle parole ripetute",
+ "title": "Penalità di frequenza"
+ },
+ "maxTokens": {
+ "desc": "Numero massimo di token utilizzati per interazione singola",
+ "title": "Limite di risposta singola"
+ },
+ "model": {
+ "desc": "Modello {{provider}}",
+ "title": "Modello"
+ },
+ "presencePenalty": {
+ "desc": "Più alto è il valore, più probabile è l'estensione a nuovi argomenti",
+ "title": "Freschezza dell'argomento"
+ },
+ "temperature": {
+ "desc": "Più alto è il valore, più casuale è la risposta",
+ "title": "Casualità",
+ "titleWithValue": "Casualità {{value}}"
+ },
+ "title": "Impostazioni del modello",
+ "topP": {
+      "desc": "Simile alla casualità, ma da non modificare insieme ad essa",
+ "title": "Campionamento principale"
+ }
+ },
+ "settingPlugin": {
+ "title": "Elenco dei plugin"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "L'amministratore ha abilitato l'accesso crittografato",
+ "placeholder": "Inserisci la password di accesso",
+ "title": "Password di accesso"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Accesso effettuato",
+ "title": "Informazioni account"
+ },
+ "signin": {
+ "action": "Accedi",
+ "desc": "Accedi tramite SSO per sbloccare l'applicazione",
+ "title": "Accedi all'account"
+ },
+ "signout": {
+ "action": "Esci",
+ "confirm": "Confermi l'uscita?",
+ "success": "Disconnessione avvenuta con successo"
+ }
+ },
+ "title": "Impostazioni di sistema"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "Modello di riconoscimento vocale OpenAI",
+ "title": "OpenAI",
+ "ttsModel": "Modello di sintesi vocale OpenAI"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Se disabilitato, mostra solo le voci della lingua corrente",
+ "title": "Mostra tutte le voci locali"
+ },
+ "stt": "Impostazioni di riconoscimento vocale",
+ "sttAutoStop": {
+ "desc": "Se disabilitato, il riconoscimento vocale non si interromperà automaticamente e richiederà un clic manuale per terminare",
+ "title": "Arresto automatico del riconoscimento vocale"
+ },
+ "sttLocale": {
+ "desc": "Lingua di input vocale, migliora la precisione del riconoscimento vocale",
+ "title": "Lingua del riconoscimento vocale"
+ },
+ "sttService": {
+ "desc": "Il servizio di riconoscimento vocale, dove 'browser' è il servizio nativo del browser",
+ "title": "Servizio di riconoscimento vocale"
+ },
+ "title": "Servizio vocale",
+ "tts": "Impostazioni di sintesi vocale",
+ "ttsService": {
+ "desc": "Se si utilizza il servizio di sintesi vocale OpenAI, assicurarsi che il servizio del modello OpenAI sia attivo",
+ "title": "Servizio di sintesi vocale"
+ },
+ "voice": {
+ "desc": "Scegli una voce per l'assistente attuale, i servizi TTS supportano voci diverse",
+ "preview": "Anteprima della voce",
+ "title": "Voce di sintesi vocale"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "fontSize": {
+ "desc": "Dimensione del carattere per i messaggi",
+ "marks": {
+ "normal": "Normale"
+ },
+ "title": "Dimensione del carattere"
+ },
+ "lang": {
+ "autoMode": "Segui il sistema",
+ "title": "Lingua"
+ },
+ "neutralColor": {
+ "desc": "Personalizzazione delle sfumature di grigio per diverse preferenze di colore",
+ "title": "Colore neutro"
+ },
+ "primaryColor": {
+ "desc": "Colore del tema personalizzato",
+ "title": "Colore principale"
+ },
+ "themeMode": {
+ "auto": "Automatico",
+ "dark": "Scuro",
+ "light": "Chiaro",
+ "title": "Tema"
+ },
+ "title": "Impostazioni del tema"
+ },
+ "submitAgentModal": {
+ "button": "Invia assistente",
+ "identifier": "Identificatore assistente",
+ "metaMiss": "Si prega di completare le informazioni dell'assistente prima di inviare, è necessario includere nome, descrizione e tag",
+ "placeholder": "Inserisci l'identificatore dell'assistente, deve essere univoco, ad esempio sviluppo-web",
+ "tooltips": "Condividi sul mercato degli assistenti"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Aggiungi un nome per identificare il dispositivo",
+ "placeholder": "Inserisci il nome del dispositivo",
+ "title": "Nome del dispositivo"
+ },
+ "title": "Informazioni sul dispositivo",
+ "unknownBrowser": "Browser sconosciuto",
+ "unknownOS": "Sistema operativo sconosciuto"
+ },
+ "warning": {
+      "tip": "Dopo un lungo periodo di test della comunità, la sincronizzazione WebRTC potrebbe non essere in grado di soddisfare in modo stabile le esigenze generali di sincronizzazione dei dati. Si prega di <1>configurare un server di segnalazione</1> prima di utilizzarlo."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC utilizzerà questo nome per creare un canale di sincronizzazione, assicurati che il nome del canale sia univoco",
+ "placeholder": "Inserisci il nome del canale di sincronizzazione",
+ "shuffle": "Genera casualmente",
+ "title": "Nome del canale di sincronizzazione"
+ },
+ "channelPassword": {
+ "desc": "Aggiungi una password per garantire la privacy del canale, solo con la password corretta i dispositivi potranno unirsi al canale",
+ "placeholder": "Inserisci la password del canale di sincronizzazione",
+ "title": "Password del canale di sincronizzazione"
+ },
+ "desc": "Comunicazione dati in tempo reale punto a punto, entrambi i dispositivi devono essere online per sincronizzarsi",
+ "enabled": {
+ "invalid": "Si prega di inserire l'indirizzo del server di segnalazione e il nome del canale di sincronizzazione prima di abilitare.",
+ "title": "Abilita la sincronizzazione"
+ },
+ "signaling": {
+ "desc": "WebRTC utilizzerà questo indirizzo per la sincronizzazione",
+ "placeholder": "Inserisci l'indirizzo del server di segnalazione",
+ "title": "Server di segnalazione"
+ },
+ "title": "Sincronizzazione WebRTC"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Modello di generazione metadati assistente",
+ "modelDesc": "Modello specificato per generare nome, descrizione, avatar e etichetta dell'assistente",
+ "title": "Genera automaticamente informazioni sull'assistente"
+ },
+ "queryRewrite": {
+ "label": "Modello di riscrittura delle domande",
+ "modelDesc": "Modello specificato per ottimizzare le domande degli utenti",
+ "title": "Banca dati"
+ },
+ "title": "Assistente di sistema",
+ "topic": {
+ "label": "Modello di denominazione degli argomenti",
+ "modelDesc": "Modello designato per la ridenominazione automatica degli argomenti",
+ "title": "Ridenominazione automatica degli argomenti"
+ },
+ "translation": {
+ "label": "Modello di traduzione",
+ "modelDesc": "Modello specificato per la traduzione",
+ "title": "Impostazioni dell'assistente di traduzione"
+ }
+ },
+ "tab": {
+ "about": "Informazioni",
+ "agent": "Assistente predefinito",
+ "common": "Impostazioni comuni",
+    "experiment": "Sperimentale",
+ "llm": "Modello linguistico",
+    "sync": "Sincronizzazione cloud",
+ "system-agent": "Assistente di sistema",
+ "tts": "Servizio vocale"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Predefiniti"
+ },
+ "disabled": "Il modello attuale non supporta le chiamate di funzione e non è possibile utilizzare il plugin",
+ "plugins": {
+ "enabled": "Abilitato {{num}}",
+ "groupName": "Plugin",
+ "noEnabled": "Nessun plugin abilitato al momento",
+ "store": "Negozio dei plugin"
+ },
+ "title": "Strumenti aggiuntivi"
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/tool.json b/DigitalHumanWeb/locales/it-IT/tool.json
new file mode 100644
index 0000000..3ec2f38
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Auto-generato",
+    "downloading": "Il link dell'immagine generata da DALL·E3 è valido solo per 1 ora, download dell'immagine in locale in corso...",
+ "generate": "Genera",
+ "generating": "Generazione in corso...",
+ "images": "Immagini:",
+ "prompt": "parola chiave"
+ }
+}
diff --git a/DigitalHumanWeb/locales/it-IT/welcome.json b/DigitalHumanWeb/locales/it-IT/welcome.json
new file mode 100644
index 0000000..4f7a727
--- /dev/null
+++ b/DigitalHumanWeb/locales/it-IT/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Importa configurazione",
+ "market": "Esplora il mercato",
+ "start": "Inizia subito"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Cambia gruppo",
+ "title": "Nuova raccomandazione assistente:"
+ },
+ "defaultMessage": "Sono il tuo assistente intelligente personale {{appName}}. Come posso aiutarti adesso?\nSe hai bisogno di un assistente più professionale o personalizzato, puoi cliccare su `+` per creare un assistente personalizzato.",
+ "defaultMessageWithoutCreate": "Sono il tuo assistente intelligente personale {{appName}}. Come posso aiutarti adesso?",
+ "qa": {
+ "q01": "Che cos'è LobeHub?",
+ "q02": "Che cos'è {{appName}}?",
+ "q03": "{{appName}} ha supporto della comunità?",
+ "q04": "Quali funzionalità supporta {{appName}}?",
+ "q05": "Come si installa e utilizza {{appName}}?",
+ "q06": "Qual è il prezzo di {{appName}}?",
+ "q07": "{{appName}} è gratuito?",
+ "q08": "Esiste una versione cloud?",
+ "q09": "Supporta modelli di linguaggio locali?",
+ "q10": "Supporta il riconoscimento e la generazione di immagini?",
+ "q11": "Supporta la sintesi vocale e il riconoscimento vocale?",
+ "q12": "Supporta un sistema di plugin?",
+ "q13": "C'è un mercato per ottenere GPT?",
+ "q14": "Supporta diversi fornitori di servizi AI?",
+ "q15": "Cosa devo fare se riscontro problemi durante l'uso?"
+ },
+ "questions": {
+ "moreBtn": "Scopri di più",
+ "title": "Domande frequenti:"
+ },
+ "welcome": {
+ "afternoon": "Buon pomeriggio",
+ "morning": "Buongiorno",
+ "night": "Buonasera",
+ "noon": "Buon pranzo"
+ }
+ },
+ "header": "Benvenuti",
+ "pickAgent": "Oppure scegli tra i seguenti modelli di assistente",
+ "skip": "Salta creazione",
+ "slogan": {
+ "desc1": "Attiva il cluster cerebrale e accendi la scintilla del pensiero. Il tuo assistente intelligente è sempre qui.",
+ "desc2": "Crea il tuo primo assistente, cominciamo!",
+ "title": "Dai a te stesso un cervello più intelligente"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/auth.json b/DigitalHumanWeb/locales/ja-JP/auth.json
new file mode 100644
index 0000000..0caf556
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "ログイン",
+ "loginOrSignup": "ログイン / 登録",
+ "profile": "プロフィール",
+ "security": "セキュリティ",
+ "signout": "ログアウト",
+ "signup": "サインアップ"
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/chat.json b/DigitalHumanWeb/locales/ja-JP/chat.json
new file mode 100644
index 0000000..fafb0c2
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "モデル"
+ },
+ "agentDefaultMessage": "こんにちは、私は **{{name}}** です。すぐに私と会話を始めることもできますし、[アシスタント設定]({{url}}) に行って私の情報を充実させることもできます。",
+ "agentDefaultMessageWithSystemRole": "こんにちは、私は **{{name}}** です、{{systemRole}}、さあ、チャットを始めましょう!",
+ "agentDefaultMessageWithoutEdit": "こんにちは、私は**{{name}}**です。会話しましょう!",
+ "agents": "エージェント",
+ "artifact": {
+ "generating": "生成中",
+ "thinking": "思考中",
+ "thought": "思考過程",
+ "unknownTitle": "未命名の作品"
+ },
+ "backToBottom": "現在に戻る",
+ "chatList": {
+ "longMessageDetail": "詳細を見る"
+ },
+ "clearCurrentMessages": "現在の会話をクリア",
+ "confirmClearCurrentMessages": "現在の会話をクリアします。クリアした後は元に戻すことはできません。操作を確認してください。",
+ "confirmRemoveSessionItemAlert": "このエージェントを削除します。削除した後は元に戻すことはできません。操作を確認してください。",
+ "confirmRemoveSessionSuccess": "セッションが正常に削除されました",
+ "defaultAgent": "デフォルトエージェント",
+ "defaultList": "デフォルトリスト",
+ "defaultSession": "デフォルトセッション",
+ "duplicateSession": {
+ "loading": "複製中...",
+ "success": "複製に成功しました",
+ "title": "{{title}} のコピー"
+ },
+ "duplicateTitle": "{{title}} のコピー",
+ "emptyAgent": "エージェントがいません",
+ "historyRange": "履歴範囲",
+ "inbox": {
+ "desc": "脳のクラスターを起動し、創造性を引き出しましょう。あなたのスマートアシスタントは、あなたとすべてのことについてここでコミュニケーションします。",
+ "title": "気軽におしゃべり"
+ },
+ "input": {
+ "addAi": "AIメッセージを追加",
+ "addUser": "ユーザーメッセージを追加",
+ "more": "もっと",
+ "send": "送信",
+ "sendWithCmdEnter": "{{meta}} + Enter キーで送信",
+ "sendWithEnter": "Enter キーで送信",
+ "stop": "停止",
+ "warp": "改行"
+ },
+ "knowledgeBase": {
+ "all": "すべてのコンテンツ",
+ "allFiles": "すべてのファイル",
+ "allKnowledgeBases": "すべての知識ベース",
+ "disabled": "現在のデプロイモードでは知識ベースの対話はサポートされていません。使用するには、サーバー側のデータベースデプロイに切り替えるか、{{cloud}} サービスを利用してください。",
+ "library": {
+ "action": {
+ "add": "追加",
+ "detail": "詳細",
+ "remove": "削除"
+ },
+ "title": "ファイル/知識ベース"
+ },
+ "relativeFilesOrKnowledgeBases": "関連ファイル/知識ベース",
+ "title": "知識ベース",
+ "uploadGuide": "アップロードしたファイルは「知識ベース」で確認できますよ",
+ "viewMore": "さらに表示"
+ },
+ "messageAction": {
+ "delAndRegenerate": "削除して再生成",
+ "regenerate": "再生成"
+ },
+ "newAgent": "新しいエージェント",
+ "pin": "ピン留め",
+ "pinOff": "ピン留め解除",
+ "rag": {
+ "referenceChunks": "参照チャンク",
+ "userQuery": {
+ "actions": {
+ "delete": "クエリを削除",
+ "regenerate": "クエリを再生成"
+ }
+ }
+ },
+ "regenerate": "再生成",
+ "roleAndArchive": "役割とアーカイブ",
+ "searchAgentPlaceholder": "検索アシスタント...",
+ "sendPlaceholder": "チャット内容を入力してください...",
+ "sessionGroup": {
+ "config": "グループ設定",
+ "confirmRemoveGroupAlert": "このグループを削除します。削除後、このグループのアシスタントはデフォルトリストに移動されます。操作を確認してください。",
+ "createAgentSuccess": "エージェントの作成に成功しました",
+ "createGroup": "新しいグループを作成",
+ "createSuccess": "作成が成功しました",
+ "creatingAgent": "エージェントの作成中...",
+ "inputPlaceholder": "グループ名を入力してください...",
+ "moveGroup": "グループに移動",
+ "newGroup": "新しいグループ",
+ "rename": "グループ名を変更",
+ "renameSuccess": "名前の変更が成功しました",
+ "sortSuccess": "並び替えに成功しました",
+ "sorting": "グループの並び替えを更新中...",
+ "tooLong": "グループ名は1〜20文字で入力してください"
+ },
+ "shareModal": {
+ "download": "スクリーンショットをダウンロード",
+ "imageType": "画像形式",
+ "screenshot": "スクリーンショット",
+ "settings": "エクスポート設定",
+ "shareToShareGPT": "ShareGPT 共有リンクを生成",
+ "withBackground": "背景画像を含む",
+ "withFooter": "フッターを含む",
+ "withPluginInfo": "プラグイン情報を含む",
+ "withSystemRole": "エージェントの役割を含む"
+ },
+ "stt": {
+ "action": "音声入力",
+ "loading": "認識中...",
+ "prettifying": "整形中..."
+ },
+ "temp": "一時的",
+ "tokenDetails": {
+ "chats": "チャットメッセージ",
+ "rest": "残り利用可能",
+ "systemRole": "システムロール設定",
+ "title": "コンテキストの詳細",
+ "tools": "ツール設定",
+ "total": "合計利用可能",
+ "used": "合計使用"
+ },
+ "tokenTag": {
+ "overload": "制限を超えています",
+ "remained": "残り",
+ "used": "使用済み"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "自動リネーム",
+ "duplicate": "コピーを作成",
+ "export": "トピックをエクスポート"
+ },
+ "checkOpenNewTopic": "新しいトピックを開始しますか?",
+ "checkSaveCurrentMessages": "現在の会話をトピックとして保存しますか?",
+ "confirmRemoveAll": "すべてのトピックを削除します。削除した後は元に戻すことはできません。注意して操作してください。",
+ "confirmRemoveTopic": "このトピックを削除します。削除した後は元に戻すことはできません。注意して操作してください。",
+ "confirmRemoveUnstarred": "スターをつけていないトピックを削除します。削除した後は元に戻すことはできません。注意して操作してください。",
+ "defaultTitle": "デフォルトトピック",
+ "duplicateLoading": "トピックを複製中...",
+ "duplicateSuccess": "トピックの複製に成功しました",
+ "guide": {
+ "desc": "左側のボタンをクリックして、現在の会話を保存し、新しい会話を開始できます",
+ "title": "トピックリスト"
+ },
+ "openNewTopic": "新しいトピックを開く",
+ "removeAll": "すべてのトピックを削除",
+ "removeUnstarred": "スターをつけていないトピックを削除",
+ "saveCurrentMessages": "現在の会話をトピックとして保存",
+ "searchPlaceholder": "トピックを検索...",
+ "title": "トピックリスト"
+ },
+ "translate": {
+ "action": "翻訳",
+ "clear": "翻訳を削除"
+ },
+ "tts": {
+ "action": "音声読み上げ",
+ "clear": "音声を削除"
+ },
+ "updateAgent": "エージェント情報を更新",
+ "upload": {
+ "action": {
+ "fileUpload": "ファイルをアップロード",
+ "folderUpload": "フォルダをアップロード",
+ "imageDisabled": "現在のモデルは視覚認識をサポートしていません。モデルを切り替えてから使用してください。",
+ "imageUpload": "画像をアップロード",
+ "tooltip": "アップロード"
+ },
+ "clientMode": {
+ "actionFiletip": "ファイルをアップロード",
+ "actionTooltip": "アップロード",
+ "disabled": "現在のモデルは視覚認識とファイル分析をサポートしていません。モデルを切り替えてから使用してください。"
+ },
+ "preview": {
+ "prepareTasks": "ブロックの準備中...",
+ "status": {
+ "pending": "アップロードの準備中...",
+ "processing": "ファイル処理中..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/clerk.json b/DigitalHumanWeb/locales/ja-JP/clerk.json
new file mode 100644
index 0000000..949e6f3
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "戻る",
+ "badge__default": "デフォルト",
+ "badge__otherImpersonatorDevice": "他のなりすましデバイス",
+ "badge__primary": "プライマリ",
+ "badge__requiresAction": "アクションが必要",
+ "badge__thisDevice": "このデバイス",
+ "badge__unverified": "未確認",
+ "badge__userDevice": "ユーザーデバイス",
+ "badge__you": "あなた",
+ "createOrganization": {
+ "formButtonSubmit": "組織を作成",
+ "invitePage": {
+ "formButtonReset": "スキップ"
+ },
+ "title": "組織を作成"
+ },
+ "dates": {
+ "lastDay": "昨日 {{ date | timeString('ja-JP') }}",
+ "next6Days": "{{ date | weekday('ja-JP','long') }} {{ date | timeString('ja-JP') }}",
+ "nextDay": "明日 {{ date | timeString('ja-JP') }}",
+ "numeric": "{{ date | numeric('ja-JP') }}",
+ "previous6Days": "先週の{{ date | weekday('ja-JP','long') }} {{ date | timeString('ja-JP') }}",
+ "sameDay": "今日 {{ date | timeString('ja-JP') }}"
+ },
+ "dividerText": "または",
+ "footerActionLink__useAnotherMethod": "他の方法を使用",
+ "footerPageLink__help": "ヘルプ",
+ "footerPageLink__privacy": "プライバシー",
+ "footerPageLink__terms": "利用規約",
+ "formButtonPrimary": "続行",
+ "formButtonPrimary__verify": "確認",
+ "formFieldAction__forgotPassword": "パスワードを忘れましたか?",
+ "formFieldError__matchingPasswords": "パスワードが一致しています。",
+ "formFieldError__notMatchingPasswords": "パスワードが一致しません。",
+ "formFieldError__verificationLinkExpired": "確認リンクの有効期限が切れています。新しいリンクをリクエストしてください。",
+ "formFieldHintText__optional": "任意",
+ "formFieldHintText__slug": "スラッグはユニークで人間が読めるIDです。URLでよく使用されます。",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "アカウントを削除",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "example@email.com, example2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "my-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "このドメインの自動招待を有効にする",
+ "formFieldLabel__backupCode": "バックアップコード",
+ "formFieldLabel__confirmDeletion": "確認",
+ "formFieldLabel__confirmPassword": "パスワードを確認",
+ "formFieldLabel__currentPassword": "現在のパスワード",
+ "formFieldLabel__emailAddress": "メールアドレス",
+ "formFieldLabel__emailAddress_username": "メールアドレスまたはユーザー名",
+ "formFieldLabel__emailAddresses": "メールアドレス",
+ "formFieldLabel__firstName": "名",
+ "formFieldLabel__lastName": "姓",
+ "formFieldLabel__newPassword": "新しいパスワード",
+ "formFieldLabel__organizationDomain": "ドメイン",
+ "formFieldLabel__organizationDomainDeletePending": "保留中の招待と提案を削除",
+ "formFieldLabel__organizationDomainEmailAddress": "検証メールアドレス",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "このドメインの下でコードを受信してこのドメインを確認するためのメールアドレスを入力してください。",
+ "formFieldLabel__organizationName": "名前",
+ "formFieldLabel__organizationSlug": "スラッグ",
+ "formFieldLabel__passkeyName": "パスキーの名前",
+ "formFieldLabel__password": "パスワード",
+ "formFieldLabel__phoneNumber": "電話番号",
+ "formFieldLabel__role": "役割",
+ "formFieldLabel__signOutOfOtherSessions": "他のデバイスからサインアウト",
+ "formFieldLabel__username": "ユーザー名",
+ "impersonationFab": {
+ "action__signOut": "サインアウト",
+ "title": "{{identifier}} としてサインインしています"
+ },
+ "locale": "ja-JP",
+ "maintenanceMode": "現在、メンテナンス中ですが、心配しないでください。数分以内に完了するはずです。",
+ "membershipRole__admin": "管理者",
+ "membershipRole__basicMember": "メンバー",
+ "membershipRole__guestMember": "ゲスト",
+ "organizationList": {
+ "action__createOrganization": "組織を作成",
+ "action__invitationAccept": "参加する",
+ "action__suggestionsAccept": "参加リクエスト",
+ "createOrganization": "組織を作成",
+ "invitationAcceptedLabel": "参加済み",
+ "subtitle": "{{applicationName}} を続行するため",
+ "suggestionsAcceptedLabel": "承認待ち",
+ "title": "アカウントを選択",
+ "titleWithoutPersonal": "組織を選択"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "自動招待",
+ "badge__automaticSuggestion": "自動提案",
+ "badge__manualInvitation": "自動登録なし",
+ "badge__unverified": "未確認",
+ "createDomainPage": {
+ "subtitle": "ドメインを追加して確認してください。このドメインのメールアドレスを持つユーザーは、組織に自動的に参加するか、参加をリクエストできます。",
+ "title": "ドメインを追加"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "招待状を送信できませんでした。次のメールアドレスには既に保留中の招待状があります:{{email_addresses}}。",
+ "formButtonPrimary__continue": "招待状を送信",
+ "selectDropdown__role": "役割を選択",
+ "subtitle": "1つ以上のメールアドレスを入力または貼り付けてください。スペースやカンマで区切ってください。",
+ "successMessage": "招待状が正常に送信されました",
+ "title": "新しいメンバーを招待"
+ },
+ "membersPage": {
+ "action__invite": "招待",
+ "activeMembersTab": {
+ "menuAction__remove": "メンバーを削除",
+ "tableHeader__actions": "アクション",
+ "tableHeader__joined": "参加日",
+ "tableHeader__role": "役割",
+ "tableHeader__user": "ユーザー"
+ },
+ "detailsTitle__emptyRow": "表示するメンバーはありません",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "組織とメールドメインを接続してユーザーを招待します。一致するメールドメインでサインアップしたユーザーはいつでも組織に参加できます。",
+ "headerTitle": "自動招待",
+ "primaryButton": "確認済みドメインを管理"
+ },
+ "table__emptyRow": "表示する招待状はありません"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "招待を取り消す",
+ "tableHeader__invited": "招待中"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "一致するメールドメインでサインアップしたユーザーは、組織に参加をリクエストする提案を見ることができます。",
+ "headerTitle": "自動提案",
+ "primaryButton": "確認済みドメインを管理"
+ },
+ "menuAction__approve": "承認",
+ "menuAction__reject": "拒否",
+ "tableHeader__requested": "リクエスト済み",
+ "table__emptyRow": "表示するリクエストはありません"
+ },
+ "start": {
+ "headerTitle__invitations": "招待状",
+ "headerTitle__members": "メンバー",
+ "headerTitle__requests": "リクエスト"
+ }
+ },
+ "navbar": {
+ "description": "組織を管理します",
+ "general": "一般",
+ "members": "メンバー",
+ "title": "組織"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "{{organizationName}} を入力して続行してください。",
+ "messageLine1": "この組織を削除してもよろしいですか?",
+ "messageLine2": "この操作は取り消せません。",
+ "successMessage": "組織を削除しました",
+ "title": "組織を削除"
+ },
+ "leaveOrganization": {
+ "actionDescription": "{{organizationName}} を入力して続行してください。",
+ "messageLine1": "この組織を退出してもよろしいですか?",
+ "messageLine2": "この操作は取り消せません。",
+ "successMessage": "組織を退出しました",
+ "title": "組織を退出"
+ },
+ "title": "危険"
+ },
+ "domainSection": {
+ "menuAction__manage": "管理",
+ "menuAction__remove": "削除",
+ "menuAction__verify": "確認",
+ "primaryButton": "ドメインを追加",
+ "subtitle": "確認済みメールドメインに基づいてユーザーが組織に自動的に参加するか、参加をリクエストできるようにします。",
+ "title": "確認済みドメイン"
+ },
+ "successMessage": "組織が更新されました",
+ "title": "プロフィールを更新"
+ },
+ "removeDomainPage": {
+ "messageLine1": "{{domain}} のメールドメインが削除されます。",
+ "messageLine2": "この後、ユーザーは組織に自動的に参加できなくなります。",
+ "successMessage": "{{domain}} が削除されました",
+ "title": "ドメインを削除"
+ },
+ "start": {
+ "headerTitle__general": "一般",
+ "headerTitle__members": "メンバー",
+ "profileSection": {
+ "primaryButton": "プロフィールを更新",
+ "title": "組織プロフィール",
+ "uploadAction__title": "ロゴをアップロード"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "このドメインを削除すると招待中のユーザーに影響します",
+ "removeDomainActionLabel__remove": "ドメインを削除",
+ "removeDomainSubtitle": "このドメインを確認済みドメインから削除します",
+ "removeDomainTitle": "ドメインを削除"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "ユーザーはサインアップ時に組織に自動的に招待され、いつでも参加できます。",
+ "automaticInvitationOption__label": "自動招待",
+ "automaticSuggestionOption__description": "ユーザーは参加をリクエストする提案を受け取りますが、管理者の承認が必要です。",
+ "automaticSuggestionOption__label": "自動提案",
+ "calloutInfoLabel": "登録モードの変更は新規ユーザーにのみ影響します",
+ "calloutInvitationCountLabel": "ユーザーに送信された保留中の招待状:{{count}}",
+ "calloutSuggestionCountLabel": "ユーザーに送信された保留中の提案:{{count}}",
+ "manualInvitationOption__description": "ユーザーは組織に手動でのみ招待できます。",
+ "manualInvitationOption__label": "自動登録なし",
+ "subtitle": "このドメインからのユーザーが組織に参加する方法を選択してください"
+ },
+ "start": {
+ "headerTitle__danger": "危険",
+ "headerTitle__enrollment": "登録オプション"
+ },
+ "subtitle": "ドメイン {{domain}} が確認済みです。登録モードを選択して続行してください",
+ "title": "{{domain}} を更新"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "メールアドレスに送信された確認コードを入力してください",
+ "formTitle": "確認コード",
+ "resendButton": "コードを受信していませんか? 再送信",
+ "subtitle": "{{domainName}} のドメインをメールで確認する必要があります",
+ "subtitleVerificationCodeScreen": "確認コードが {{emailAddress}} に送信されました。コードを入力して続行してください",
+ "title": "ドメインを確認"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "組織を作成",
+ "action__invitationAccept": "参加する",
+ "action__manageOrganization": "管理",
+ "action__suggestionsAccept": "参加リクエスト",
+ "notSelected": "選択されている組織はありません",
+ "personalWorkspace": "個人アカウント",
+ "suggestionsAcceptedLabel": "承認待ち"
+ },
+ "paginationButton__next": "次へ",
+ "paginationButton__previous": "前へ",
+ "paginationRowText__displaying": "表示中",
+ "paginationRowText__of": "/",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "アカウントを追加",
+ "action__signOutAll": "すべてのアカウントからサインアウト",
+ "subtitle": "続行するアカウントを選択してください。",
+ "title": "アカウントを選択"
+ },
+ "alternativeMethods": {
+ "actionLink": "ヘルプを受ける",
+ "actionText": "これらのいずれも持っていない場合",
+ "blockButton__backupCode": "バックアップコードを使用",
+ "blockButton__emailCode": "{{identifier}} にメールコードを送信",
+ "blockButton__emailLink": "{{identifier}} にメールリンクを送信",
+ "blockButton__passkey": "パスキーを使用してサインイン",
+ "blockButton__password": "パスワードでサインイン",
+ "blockButton__phoneCode": "{{identifier}} にSMSコードを送信",
+ "blockButton__totp": "認証アプリを使用",
+ "getHelp": {
+ "blockButton__emailSupport": "メールサポート",
+ "content": "アカウントにサインインできない場合は、お問い合わせいただければ、できるだけ早くアクセスを回復するために協力します。",
+ "title": "ヘルプを受ける"
+ },
+ "subtitle": "問題が発生していますか?これらの方法のいずれかを使用してサインインできます。",
+ "title": "別の方法を使用"
+ },
+ "backupCodeMfa": {
+ "subtitle": "バックアップコードは、2段階認証を設定する際に受け取ったものです。",
+ "title": "バックアップコードを入力"
+ },
+ "emailCode": {
+ "formTitle": "確認コード",
+ "resendButton": "コードが届かない場合は再送信",
+ "subtitle": "{{applicationName}} へ続行するために",
+ "title": "メールを確認"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "続行するには元のタブに戻ってください。",
+ "title": "この検証リンクは期限切れです"
+ },
+ "failed": {
+ "subtitle": "続行するには元のタブに戻ってください。",
+ "title": "この検証リンクは無効です"
+ },
+ "formSubtitle": "メールに送信された検証リンクを使用",
+ "formTitle": "検証リンク",
+ "loading": {
+ "subtitle": "すぐにリダイレクトされます",
+ "title": "サインイン中..."
+ },
+ "resendButton": "リンクが届かない場合は再送信",
+ "subtitle": "{{applicationName}} へ続行するために",
+ "title": "メールを確認",
+ "unusedTab": {
+ "title": "このタブを閉じても構いません"
+ },
+ "verified": {
+ "subtitle": "すぐにリダイレクトされます",
+ "title": "正常にサインインしました"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "続行するには元のタブに戻ってください",
+ "subtitleNewTab": "続行するには新しく開いたタブに戻ってください",
+ "titleNewTab": "他のタブでサインイン済み"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "パスワードをリセットするコード",
+ "resendButton": "コードが届かない場合は再送信",
+ "subtitle": "パスワードをリセットするため",
+ "subtitle_email": "まず、メールアドレスに送信されたコードを入力してください",
+ "subtitle_phone": "まず、電話に送信されたコードを入力してください",
+ "title": "パスワードをリセット"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "パスワードをリセット",
+ "label__alternativeMethods": "または、他の方法でサインイン",
+ "title": "パスワードを忘れましたか?"
+ },
+ "noAvailableMethods": {
+ "message": "サインインを続行できません。利用可能な認証要素がありません。",
+ "subtitle": "エラーが発生しました",
+ "title": "サインインできません"
+ },
+ "passkey": {
+ "subtitle": "パスキーを使用することで、あなたであることが確認されます。デバイスが指紋、顔認証、または画面ロックを要求する場合があります。",
+ "title": "パスキーを使用"
+ },
+ "password": {
+ "actionLink": "他の方法を使用",
+ "subtitle": "アカウントに関連付けられたパスワードを入力してください",
+ "title": "パスワードを入力"
+ },
+ "passwordPwned": {
+ "title": "パスワードが危険にさらされています"
+ },
+ "phoneCode": {
+ "formTitle": "確認コード",
+ "resendButton": "コードが届かない場合は再送信",
+ "subtitle": "{{applicationName}} へ続行するために",
+ "title": "携帯電話を確認"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "確認コード",
+ "resendButton": "コードが届かない場合は再送信",
+ "subtitle": "続行するには、携帯電話に送信された確認コードを入力してください",
+ "title": "携帯電話を確認"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "パスワードをリセット",
+ "requiredMessage": "セキュリティ上の理由から、パスワードをリセットする必要があります。",
+ "successMessage": "パスワードが正常に変更されました。サインイン中です、しばらくお待ちください。",
+ "title": "新しいパスワードを設定"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "パスワードをリセットする前に、あなたの身元を確認する必要があります。"
+ },
+ "start": {
+ "actionLink": "サインアップ",
+ "actionLink__use_email": "メールを使用",
+ "actionLink__use_email_username": "メールまたはユーザー名を使用",
+ "actionLink__use_passkey": "代わりにパスキーを使用",
+ "actionLink__use_phone": "電話を使用",
+ "actionLink__use_username": "ユーザー名を使用",
+ "actionText": "アカウントを持っていませんか?",
+ "subtitle": "お帰りなさい!続行するにはサインインしてください",
+ "title": "{{applicationName}} にサインイン"
+ },
+ "totpMfa": {
+ "formTitle": "確認コード",
+ "subtitle": "続行するには、認証アプリで生成された確認コードを入力してください",
+ "title": "2段階認証"
+ }
+ },
+ "signInEnterPasswordTitle": "パスワードを入力",
+ "signUp": {
+ "continue": {
+ "actionLink": "サインイン",
+ "actionText": "すでにアカウントをお持ちですか?",
+ "subtitle": "続行するために残りの詳細を入力してください",
+ "title": "残りのフィールドを入力"
+ },
+ "emailCode": {
+ "formSubtitle": "メールアドレスに送信された検証コードを入力",
+ "formTitle": "検証コード",
+ "resendButton": "コードが届かない場合は再送信",
+ "subtitle": "メールアドレスに送信された検証コードを入力してください",
+ "title": "メールを検証"
+ },
+ "emailLink": {
+ "formSubtitle": "メールアドレスに送信された検証リンクを使用",
+ "formTitle": "検証リンク",
+ "loading": {
+ "title": "サインアップ中..."
+ },
+ "resendButton": "リンクが届かない場合は再送信",
+ "subtitle": "{{applicationName}} へ続行するために",
+ "title": "メールを検証",
+ "verified": {
+ "title": "正常にサインアップしました"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "続行するには新しく開いたタブに戻ってください",
+ "subtitleNewTab": "続行するには前のタブに戻ってください",
+ "title": "メールが正常に検証されました"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "電話番号に送信された検証コードを入力",
+ "formTitle": "検証コード",
+ "resendButton": "コードが届かない場合は再送信",
+ "subtitle": "電話番号に送信された検証コードを入力してください",
+ "title": "電話を検証"
+ },
+ "start": {
+ "actionLink": "サインイン",
+ "actionText": "すでにアカウントをお持ちですか?",
+ "subtitle": "ようこそ!始めるには詳細を入力してください",
+ "title": "アカウントを作成"
+ }
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}} で続行",
+ "unstable__errors": {
+ "captcha_invalid": "セキュリティ検証に失敗したため、サインアップできませんでした。もう一度試すにはページを更新するか、サポートにお問い合わせください。",
+ "captcha_unavailable": "ボット検証に失敗したため、サインアップできませんでした。もう一度試すにはページを更新するか、サポートにお問い合わせください。",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "このメールアドレスは既に使用されています。別のものをお試しください。",
+ "form_identifier_exists__phone_number": "この電話番号は既に使用されています。別のものをお試しください。",
+ "form_identifier_exists__username": "このユーザー名は既に使用されています。別のものをお試しください。",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "有効なメールアドレスを入力してください。",
+ "form_param_format_invalid__phone_number": "有効な国際フォーマットの電話番号を入力してください。",
+ "form_param_max_length_exceeded__first_name": "名前は256文字を超えることはできません。",
+ "form_param_max_length_exceeded__last_name": "姓は256文字を超えることはできません。",
+ "form_param_max_length_exceeded__name": "名前は256文字を超えることはできません。",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "パスワードが強力ではありません。",
+ "form_password_pwned": "このパスワードは侵害の一部として見つかったため使用できません。別のパスワードをお試しください。",
+ "form_password_pwned__sign_in": "このパスワードは侵害の一部として見つかったため使用できません。パスワードをリセットしてください。",
+ "form_password_size_in_bytes_exceeded": "パスワードが許容されるバイト数を超えています。短くするか、一部の特殊文字を削除してください。",
+ "form_password_validation_failed": "パスワードが間違っています。",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "最後の識別情報を削除することはできません。",
+ "not_allowed_access": "",
+ "passkey_already_exists": "このデバイスにはすでにパスキーが登録されています。",
+ "passkey_not_supported": "このデバイスではパスキーはサポートされていません。",
+ "passkey_pa_not_supported": "登録にはプラットフォーム認証子が必要ですが、デバイスがサポートしていません。",
+ "passkey_registration_cancelled": "パスキーの登録がキャンセルされたか、タイムアウトしました。",
+ "passkey_retrieval_cancelled": "パスキーの確認がキャンセルされたか、タイムアウトしました。",
+ "passwordComplexity": {
+ "maximumLength": "{{length}}文字未満",
+ "minimumLength": "{{length}}文字以上",
+ "requireLowercase": "小文字の文字",
+ "requireNumbers": "数字",
+ "requireSpecialCharacter": "特殊文字",
+ "requireUppercase": "大文字の文字",
+ "sentencePrefix": "パスワードには次の要素が含まれる必要があります"
+ },
+ "phone_number_exists": "この電話番号は既に使用されています。別のものをお試しください。",
+ "zxcvbn": {
+ "couldBeStronger": "パスワードは機能しますが、もっと強力にすることができます。もっと文字を追加してみてください。",
+ "goodPassword": "パスワードはすべての必要条件を満たしています。",
+ "notEnough": "パスワードが強力ではありません。",
+ "suggestions": {
+ "allUppercase": "すべての文字を大文字にする",
+ "anotherWord": "一般的でない単語を追加する",
+ "associatedYears": "自分に関連する年を避ける",
+ "capitalization": "最初の文字以外も大文字にする",
+ "dates": "自分に関連する日付を避ける",
+ "l33t": "予測可能な文字の置換(例:'a' を '@' に)を避ける",
+ "longerKeyboardPattern": "長いキーボードパターンを使用し、入力方向を複数回変更する",
+ "noNeed": "記号、数字、大文字を使用せずに強力なパスワードを作成できます",
+ "pwned": "他の場所でこのパスワードを使用している場合は変更してください",
+ "recentYears": "最近の年を避ける",
+ "repeated": "繰り返された単語や文字を避ける",
+ "reverseWords": "一般的な単語の逆スペルを避ける",
+ "sequences": "一般的な文字のシーケンスを避ける",
+ "useWords": "複数の単語を使用するが、一般的なフレーズは避ける"
+ },
+ "warnings": {
+ "common": "これは一般的に使用されるパスワードです",
+ "commonNames": "一般的な名前や姓は推測されやすいです",
+ "dates": "日付は推測されやすいです",
+ "extendedRepeat": "「abcabcabc」のような繰り返し文字パターンは推測されやすいです",
+ "keyPattern": "短いキーボードパターンは推測されやすいです",
+ "namesByThemselves": "単独の名前や姓は推測されやすいです",
+ "pwned": "インターネット上のデータ侵害でパスワードが公開されました",
+ "recentYears": "最近の年は推測されやすいです",
+ "sequences": "「abc」のような一般的な文字のシーケンスは推測されやすいです",
+ "similarToCommon": "これは一般的に使用されるパスワードに類似しています",
+ "simpleRepeat": "「aaa」のような繰り返し文字は推測されやすいです",
+ "straightRow": "キーボードの直線的な行は推測されやすいです",
+ "topHundred": "これは頻繁に使用されるパスワードです",
+ "topTen": "これは非常に使用されるパスワードです",
+ "userInputs": "個人情報やページ関連のデータを含めないでください",
+ "wordByItself": "単語単体は推測されやすいです"
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "アカウントを追加",
+ "action__manageAccount": "アカウントを管理",
+ "action__signOut": "サインアウト",
+ "action__signOutAll": "すべてのアカウントからサインアウト"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "コピーしました!",
+ "actionLabel__copy": "すべてコピー",
+ "actionLabel__download": ".txt ダウンロード",
+ "actionLabel__print": "印刷",
+ "infoText1": "このアカウントのバックアップコードが有効になります。",
+ "infoText2": "バックアップコードは秘密に保ち、安全に保管してください。疑わしい場合はバックアップコードを再生成できます。",
+ "subtitle__codelist": "安全に保管し、秘密にしてください。",
+ "successMessage": "バックアップコードが有効になりました。認証デバイスへのアクセスが失われた場合、これらのいずれかを使用してアカウントにサインインできます。各コードは1回しか使用できません。",
+ "successSubtitle": "認証デバイスへのアクセスが失われた場合、これらのいずれかを使用してアカウントにサインインできます。",
+ "title": "バックアップコードの検証を追加",
+ "title__codelist": "バックアップコード"
+ },
+ "connectedAccountPage": {
+ "formHint": "アカウントを接続するプロバイダを選択してください。",
+ "formHint__noAccounts": "利用可能な外部アカウントプロバイダはありません。",
+ "removeResource": {
+ "messageLine1": "{{identifier}} はこのアカウントから削除されます。",
+ "messageLine2": "この接続されたアカウントを使用したり、依存する機能を使用したりすることはできなくなります。",
+ "successMessage": "{{connectedAccount}} がアカウントから削除されました。",
+ "title": "接続されたアカウントを削除"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "プロバイダがアカウントに追加されました",
+ "title": "接続されたアカウントを追加"
+ },
+ "deletePage": {
+ "actionDescription": "\"アカウントを削除\" と入力して続行してください。",
+ "confirm": "アカウントを削除",
+ "messageLine1": "アカウントを削除してもよろしいですか?",
+ "messageLine2": "この操作は永久的で取り消しできません。",
+ "title": "アカウントを削除"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "このメールアドレスに送信された確認コードを含むメールが送信されます。",
+ "formSubtitle": "{{identifier}} に送信された確認コードを入力してください。",
+ "formTitle": "確認コード",
+ "resendButton": "コードを受信していませんか? 再送信",
+ "successMessage": "メール {{identifier}} がアカウントに追加されました。"
+ },
+ "emailLink": {
+ "formHint": "このメールアドレスに送信された確認リンクを含むメールが送信されます。",
+ "formSubtitle": "{{identifier}} に送信されたメール内の確認リンクをクリックしてください。",
+ "formTitle": "確認リンク",
+ "resendButton": "リンクを受信していませんか? 再送信",
+ "successMessage": "メール {{identifier}} がアカウントに追加されました。"
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} はこのアカウントから削除されます。",
+ "messageLine2": "このメールアドレスを使用してサインインすることはできなくなります。",
+ "successMessage": "{{emailAddress}} がアカウントから削除されました。",
+ "title": "メールアドレスを削除"
+ },
+ "title": "メールアドレスを追加",
+ "verifyTitle": "メールアドレスを確認"
+ },
+ "formButtonPrimary__add": "追加",
+ "formButtonPrimary__continue": "続行",
+ "formButtonPrimary__finish": "完了",
+ "formButtonPrimary__remove": "削除",
+ "formButtonPrimary__save": "保存",
+ "formButtonReset": "キャンセル",
+ "mfaPage": {
+ "formHint": "追加する方法を選択してください。",
+ "title": "2段階認証を追加"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "既存の番号を使用",
+ "primaryButton__addPhoneNumber": "電話番号を追加",
+ "removeResource": {
+ "messageLine1": "サインイン時にこの番号からの確認コードが不要になります。",
+ "messageLine2": "アカウントのセキュリティが低下する可能性があります。続行しますか?",
+ "successMessage": "{{mfaPhoneCode}} のSMSコード2段階認証が削除されました。",
+ "title": "2段階認証を削除"
+ },
+ "subtitle__availablePhoneNumbers": "SMSコード2段階認証に登録する既存の電話番号を選択するか、新しい電話番号を追加してください。",
+ "subtitle__unavailablePhoneNumbers": "SMSコード2段階認証に登録する利用可能な電話番号はありません。新しい電話番号を追加してください。",
+ "successMessage1": "サインイン時に、この電話番号に送信された確認コードを追加の手順として入力する必要があります。",
+ "successMessage2": "これらのバックアップコードを保存し、安全な場所に保管してください。認証デバイスへのアクセスが失われた場合、バックアップコードを使用できます。",
+ "successTitle": "SMSコード検証が有効になりました",
+ "title": "SMSコード検証を追加"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "代わりにQRコードをスキャン",
+ "buttonUnableToScan__nonPrimary": "QRコードをスキャンできませんか?",
+ "infoText__ableToScan": "認証アプリで新しいサインイン方法を設定し、以下のQRコードをスキャンしてアカウントにリンクしてください。",
+ "infoText__unableToScan": "認証アプリで新しいサインイン方法を設定し、以下のキーを入力してください。",
+ "inputLabel__unableToScan1": "タイムベースまたはワンタイムパスワードが有効になっていることを確認し、アカウントのリンクを完了してください。",
+ "inputLabel__unableToScan2": "代替として、認証アプリがTOTP URIをサポートしている場合は、フルURIをコピーすることもできます。"
+ },
+ "removeResource": {
+ "messageLine1": "この認証アプリからの確認コードは、サインイン時に不要になります。",
+ "messageLine2": "アカウントのセキュリティが低下する可能性があります。続行しますか?",
+ "successMessage": "認証アプリによる2段階認証が削除されました。",
+ "title": "2段階認証を削除"
+ },
+ "successMessage": "2段階認証が有効になりました。サインイン時に、この認証アプリからの確認コードを追加の手順として入力する必要があります。",
+ "title": "認証アプリを追加",
+ "verifySubtitle": "認証アプリで生成された確認コードを入力してください",
+ "verifyTitle": "確認コード"
+ },
+ "mobileButton__menu": "メニュー",
+ "navbar": {
+ "account": "プロフィール",
+ "description": "アカウント情報を管理します。",
+ "security": "セキュリティ",
+ "title": "アカウント"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} がこのアカウントから削除されます。",
+ "title": "パスキーを削除"
+ },
+ "subtitle__rename": "パスキー名を変更して見つけやすくすることができます。",
+ "title__rename": "パスキーの名前を変更"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "古いパスワードを使用した可能性のあるすべての他のデバイスからサインアウトすることをお勧めします。",
+ "readonly": "現在、パスワードを編集できません。企業接続を介してのみサインインできます。",
+ "successMessage__set": "パスワードが設定されました。",
+ "successMessage__signOutOfOtherSessions": "すべての他のデバイスからサインアウトされました。",
+ "successMessage__update": "パスワードが更新されました。",
+ "title__set": "パスワードを設定",
+ "title__update": "パスワードを更新"
+ },
+ "phoneNumberPage": {
+ "infoText": "検証コードが含まれたテキストメッセージがこの電話番号に送信されます。メッセージ料金が発生する場合があります。",
+ "removeResource": {
+ "messageLine1": "{{identifier}} がこのアカウントから削除されます。",
+ "messageLine2": "この電話番号を使用してサインインすることはできなくなります。",
+ "successMessage": "{{phoneNumber}} がアカウントから削除されました。",
+ "title": "電話番号を削除"
+ },
+ "successMessage": "{{identifier}} がアカウントに追加されました。",
+ "title": "電話番号を追加",
+ "verifySubtitle": "{{identifier}} に送信された検証コードを入力してください。",
+ "verifyTitle": "電話番号を検証"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "推奨サイズ 1:1、最大10MB。",
+ "imageFormDestructiveActionSubtitle": "削除",
+ "imageFormSubtitle": "アップロード",
+ "imageFormTitle": "プロフィール画像",
+ "readonly": "プロフィール情報は企業接続によって提供され、編集できません。",
+ "successMessage": "プロフィールが更新されました。",
+ "title": "プロフィールを更新"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "デバイスからサインアウト",
+ "title": "アクティブデバイス"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "再試行",
+ "actionLabel__reauthorize": "今すぐ認証",
+ "destructiveActionTitle": "削除",
+ "primaryButton": "アカウントを接続",
+ "subtitle__reauthorize": "必要なスコープが更新されており、機能が制限されている可能性があります。問題を回避するために、このアプリケーションを再認証してください。",
+ "title": "接続されたアカウント"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "アカウントを削除",
+ "title": "アカウントを削除"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "メールを削除",
+ "detailsAction__nonPrimary": "プライマリに設定",
+ "detailsAction__primary": "検証を完了",
+ "detailsAction__unverified": "検証",
+ "primaryButton": "メールアドレスを追加",
+ "title": "メールアドレス"
+ },
+ "enterpriseAccountsSection": {
+ "title": "エンタープライズアカウント"
+ },
+ "headerTitle__account": "プロフィール詳細",
+ "headerTitle__security": "セキュリティ",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "再生成",
+ "headerTitle": "バックアップコード",
+ "subtitle__regenerate": "安全なバックアップコードの新しいセットを取得します。以前のバックアップコードは削除され、使用できなくなります。",
+ "title__regenerate": "バックアップコードを再生成"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "デフォルトに設定",
+ "destructiveActionLabel": "削除"
+ },
+ "primaryButton": "2段階認証を追加",
+ "title": "2段階認証",
+ "totp": {
+ "destructiveActionTitle": "削除",
+ "headerTitle": "認証アプリケーション"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "削除",
+ "menuAction__rename": "名前を変更",
+ "title": "パスキー"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "パスワードを設定",
+ "primaryButton__updatePassword": "パスワードを更新",
+ "title": "パスワード"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "電話番号を削除",
+ "detailsAction__nonPrimary": "プライマリに設定",
+ "detailsAction__primary": "検証を完了",
+ "detailsAction__unverified": "電話番号を検証",
+ "primaryButton": "電話番号を追加",
+ "title": "電話番号"
+ },
+ "profileSection": {
+ "primaryButton": "プロフィールを更新",
+ "title": "プロフィール"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "ユーザー名を設定",
+ "primaryButton__updateUsername": "ユーザー名を更新",
+ "title": "ユーザー名"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "ウォレットを削除",
+ "primaryButton": "Web3ウォレット",
+ "title": "Web3ウォレット"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "ユーザー名が更新されました。",
+ "title__set": "ユーザー名を設定",
+ "title__update": "ユーザー名を更新"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} がこのアカウントから削除されます。",
+ "messageLine2": "このWeb3ウォレットを使用してサインインすることはできなくなります。",
+ "successMessage": "{{web3Wallet}} がアカウントから削除されました。",
+ "title": "Web3ウォレットを削除"
+ },
+ "subtitle__availableWallets": "アカウントに接続するWeb3ウォレットを選択してください。",
+ "subtitle__unavailableWallets": "利用可能なWeb3ウォレットはありません。",
+ "successMessage": "ウォレットがアカウントに追加されました。",
+ "title": "Web3ウォレットを追加"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/common.json b/DigitalHumanWeb/locales/ja-JP/common.json
new file mode 100644
index 0000000..119b4df
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "概要",
+ "advanceSettings": "高度な設定",
+ "alert": {
+ "cloud": {
+ "action": "無料体験",
+ "desc": "我々はすべての登録ユーザーに {{credit}} 無料の計算ポイントを提供しています。複雑な設定は不要で、すぐに使用できます。無制限の会話履歴とグローバルクラウド同期をサポートし、さらに多くの高度な機能があなたを待っています。",
+ "descOnMobile": "すべての登録ユーザーに {{credit}} の無料コンピューティングポイントを提供しています。複雑な設定は不要で、すぐにご利用いただけます。",
+ "title": "{{name}} を体験してみてください"
+ }
+ },
+ "appInitializing": "アプリケーションを初期化しています...",
+ "autoGenerate": "自動生成",
+ "autoGenerateTooltip": "ヒントに基づいてエージェントの説明を自動生成します",
+ "autoGenerateTooltipDisabled": "ツールチップを入力してから自動生成機能を使用してください",
+ "back": "戻る",
+ "batchDelete": "バッチ削除",
+ "blog": "製品ブログ",
+ "cancel": "キャンセル",
+ "changelog": "変更履歴",
+ "close": "閉じる",
+ "contact": "お問い合わせ",
+ "copy": "コピー",
+ "copyFail": "コピーに失敗しました",
+ "copySuccess": "コピーが成功しました",
+ "dataStatistics": {
+ "messages": "メッセージ",
+ "sessions": "セッション",
+ "today": "今日の追加",
+ "topics": "トピック"
+ },
+ "defaultAgent": "デフォルトエージェント",
+ "defaultSession": "デフォルトセッション",
+ "delete": "削除",
+ "document": "ドキュメント",
+ "download": "ダウンロード",
+ "duplicate": "コピーを作成する",
+ "edit": "編集",
+ "export": "エクスポート",
+ "exportType": {
+ "agent": "エージェント設定のエクスポート",
+ "agentWithMessage": "エージェントとメッセージのエクスポート",
+ "all": "グローバル設定とすべてのエージェントデータのエクスポート",
+ "allAgent": "すべてのエージェント設定のエクスポート",
+ "allAgentWithMessage": "すべてのエージェントとメッセージのエクスポート",
+ "globalSetting": "グローバル設定のエクスポート"
+ },
+ "feedback": "フィードバック",
+ "follow": " {{name}} で私たちをフォローする",
+ "footer": {
+ "action": {
+ "feedback": "貴重なフィードバックを共有する",
+ "star": "GitHub でスターを付ける"
+ },
+ "and": "および",
+ "feedback": {
+ "action": "フィードバックを共有",
+ "desc": "あなたのすべてのアイデアと提案は私たちにとって非常に貴重です。私たちはあなたの意見を知りたくてたまりません!製品の機能や使用体験に関するフィードバックを提供していただければ幸いです。LobeChat をより良くするためのお手伝いをしていただけると嬉しいです。",
+ "title": "GitHub で貴重なフィードバックを共有"
+ },
+ "later": "後で",
+ "star": {
+ "action": "スターを付ける",
+ "desc": "当社の製品が気に入っていただけた場合、GitHub でスターを付けていただけますか?この小さな行動が大きな意味を持ち、私たちが継続的に特性体験を提供する励みとなります。",
+ "title": "GitHub で私たちにスターを付ける"
+ },
+ "title": "当社の製品がお気に入りですか?"
+ },
+ "fullscreen": "フルスクリーンモード",
+ "historyRange": "履歴範囲",
+ "import": "インポート",
+ "importModal": {
+ "error": {
+ "desc": "データのインポート中にエラーが発生しました。再度インポートを試すか、<1>問題を報告1>してください。問題を迅速に解決いたします。",
+ "title": "データのインポートに失敗しました"
+ },
+ "finish": {
+ "onlySettings": "システム設定のインポートが完了しました",
+ "start": "開始",
+ "subTitle": "データのインポートが完了しました。所要時間:{{duration}}秒。詳細は以下の通りです:",
+ "title": "データのインポートが完了しました"
+ },
+ "loading": "データをインポート中です。しばらくお待ちください...",
+ "preparing": "データのインポートモジュールを準備中...",
+ "result": {
+ "added": "インポートが成功しました",
+ "errors": "インポートエラー",
+ "messages": "メッセージ",
+ "sessionGroups": "セッショングループ",
+ "sessions": "セッション",
+ "skips": "重複スキップ",
+ "topics": "トピック",
+ "type": "データタイプ"
+ },
+ "title": "データのインポート",
+ "uploading": {
+ "desc": "現在のファイルは大きすぎて、アップロード中です...",
+ "restTime": "残り時間",
+ "speed": "アップロード速度"
+ }
+ },
+ "information": "コミュニティと情報",
+ "installPWA": "PWAをインストール",
+ "lang": {
+ "ar": "アラビア語",
+ "bg-BG": "ブルガリア語",
+ "bn": "ベンガル語",
+ "cs-CZ": "チェコ語",
+ "da-DK": "デンマーク語",
+ "de-DE": "ドイツ語",
+ "el-GR": "ギリシャ語",
+ "en": "英語",
+ "en-US": "英語",
+ "es-ES": "スペイン語",
+ "fi-FI": "フィンランド語",
+ "fr-FR": "フランス語",
+ "hi-IN": "ヒンディー語",
+ "hu-HU": "ハンガリー語",
+ "id-ID": "インドネシア語",
+ "it-IT": "イタリア語",
+ "ja-JP": "日本語",
+ "ko-KR": "韓国語",
+ "nl-NL": "オランダ語",
+ "no-NO": "ノルウェー語",
+ "pl-PL": "ポーランド語",
+ "pt-BR": "ポルトガル語",
+ "pt-PT": "ポルトガル語",
+ "ro-RO": "ルーマニア語",
+ "ru-RU": "ロシア語",
+ "sk-SK": "スロバキア語",
+ "sr-RS": "セルビア語",
+ "sv-SE": "スウェーデン語",
+ "th-TH": "タイ語",
+ "tr-TR": "トルコ語",
+ "uk-UA": "ウクライナ語",
+ "vi-VN": "ベトナム語",
+ "zh": "簡体字中国語",
+ "zh-CN": "簡体字中国語",
+ "zh-TW": "繁体字中国語"
+ },
+ "layoutInitializing": "レイアウトを初期化中...",
+ "legal": "法的声明",
+ "loading": "読み込み中...",
+ "mail": {
+ "business": "ビジネス提携",
+ "support": "メールサポート"
+ },
+ "oauth": "SSO ログイン",
+ "officialSite": "公式サイト",
+ "ok": "OK",
+ "password": "パスワード",
+ "pin": "ピン留め",
+ "pinOff": "ピン留め解除",
+ "privacy": "プライバシーポリシー",
+ "regenerate": "再生成",
+ "rename": "名前を変更",
+ "reset": "リセット",
+ "retry": "再試行",
+ "send": "送信",
+ "setting": "設定",
+ "share": "共有",
+ "stop": "停止",
+ "sync": {
+ "actions": {
+ "settings": "同期設定",
+ "sync": "即時同期"
+ },
+ "awareness": {
+ "current": "現在のデバイス"
+ },
+ "channel": "チャンネル",
+ "disabled": {
+ "actions": {
+ "enable": "クラウド同期を有効にする",
+ "settings": "同期設定"
+ },
+ "desc": "現在のセッションデータはこのブラウザにのみ保存されます。複数のデバイス間でデータを同期するには、クラウド同期を設定して有効にしてください。",
+ "title": "データ同期が無効です"
+ },
+ "enabled": {
+ "title": "データ同期"
+ },
+ "status": {
+ "connecting": "接続中",
+ "disabled": "同期が無効です",
+ "ready": "接続済み",
+ "synced": "同期済み",
+ "syncing": "同期中",
+ "unconnected": "接続失敗"
+ },
+ "title": "同期状態",
+ "unconnected": {
+ "tip": "シグナリングサーバーに接続できません。ピア間通信チャンネルを確立できません。ネットワークを確認して再試行してください。"
+ }
+ },
+ "tab": {
+ "chat": "チャット",
+ "discover": "発見",
+ "files": "ファイル",
+ "me": "私",
+ "setting": "設定"
+ },
+ "telemetry": {
+ "allow": "許可する",
+ "deny": "拒否する",
+ "desc": "匿名の使用情報を収集し、LobeChat の改善やより良い製品体験を提供するためにご協力いただけると嬉しいです。いつでも「設定」-「情報」から無効にできます。",
+ "learnMore": "詳細を見る",
+ "title": "LobeChat の改善にご協力ください"
+ },
+ "temp": "一時的",
+ "terms": "利用規約",
+ "updateAgent": "エージェント情報を更新",
+ "upgradeVersion": {
+ "action": "アップグレード",
+ "hasNew": "利用可能な更新があります",
+ "newVersion": "新しいバージョンが利用可能です:{{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "匿名ユーザー",
+ "billing": "請求管理",
+ "cloud": "{{name}} を体験",
+ "data": "データストレージ",
+ "defaultNickname": "コミュニティユーザー",
+ "discord": "コミュニティサポート",
+ "docs": "使用文書",
+ "email": "メールサポート",
+ "feedback": "フィードバックと提案",
+ "help": "ヘルプセンター",
+ "moveGuide": "設定ボタンがこちらに移動しました",
+ "plans": "サブスクリプションプラン",
+ "preview": "プレビュー",
+ "profile": "アカウント管理",
+ "setting": "アプリ設定",
+ "usages": "利用量統計"
+ },
+ "version": "バージョン"
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/components.json b/DigitalHumanWeb/locales/ja-JP/components.json
new file mode 100644
index 0000000..d438757
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "ここにファイルをドラッグ&ドロップしてください。複数の画像のアップロードがサポートされています。",
+ "dragFileDesc": "ここに画像やファイルをドラッグ&ドロップしてください。複数の画像やファイルのアップロードがサポートされています。",
+ "dragFileTitle": "ファイルをアップロード",
+ "dragTitle": "画像をアップロード"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "知識ベースに追加",
+ "addToOtherKnowledgeBase": "他の知識ベースに追加",
+ "batchChunking": "バッチ分割",
+ "chunking": "分割",
+ "chunkingTooltip": "ファイルを複数のテキストブロックに分割し、ベクトル化した後、意味検索やファイル対話に使用できます",
+ "confirmDelete": "このファイルを削除しようとしています。削除後は復元できませんので、操作を確認してください",
+ "confirmDeleteMultiFiles": "選択した {{count}} 個のファイルを削除しようとしています。削除後は復元できませんので、操作を確認してください",
+ "confirmRemoveFromKnowledgeBase": "選択した {{count}} 個のファイルを知識ベースから削除しようとしています。削除後もファイルはすべてのファイルで表示できますので、操作を確認してください",
+ "copyUrl": "リンクをコピー",
+ "copyUrlSuccess": "ファイルのアドレスがコピーされました",
+ "createChunkingTask": "準備中...",
+ "deleteSuccess": "ファイルが正常に削除されました",
+ "downloading": "ファイルをダウンロード中...",
+ "removeFromKnowledgeBase": "知識ベースから削除",
+ "removeFromKnowledgeBaseSuccess": "ファイルが正常に削除されました"
+ },
+ "bottom": "これ以上ありません",
+ "config": {
+ "showFilesInKnowledgeBase": "知識ベースの内容を表示"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "ファイルをアップロード",
+ "folder": "フォルダーをアップロード",
+ "knowledgeBase": "新しい知識ベースを作成"
+ },
+ "or": "または",
+ "title": "ここにファイルまたはフォルダーをドラッグしてください"
+ },
+ "title": {
+ "createdAt": "作成日時",
+ "size": "サイズ",
+ "title": "ファイル"
+ },
+ "total": {
+ "fileCount": "合計 {{count}} 件",
+ "selectedCount": "選択済み {{count}} 件"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "テキストブロックはまだ完全にベクトル化されていません。これにより意味検索機能が使用できなくなります。検索品質を向上させるために、テキストブロックをベクトル化してください",
+ "error": "ベクトル化に失敗しました",
+ "errorResult": "ベクトル化に失敗しました。再試行する前に確認してください。失敗の理由:",
+ "processing": "テキストブロックをベクトル化中です。しばらくお待ちください",
+ "success": "現在のテキストブロックはすべてベクトル化されています"
+ },
+ "embeddings": "ベクトル化",
+ "status": {
+ "error": "分割に失敗しました",
+ "errorResult": "分割に失敗しました。再試行する前に確認してください。失敗の理由:",
+ "processing": "分割中",
+ "processingTip": "サーバーがテキストブロックを分割しています。ページを閉じても分割の進行には影響しません"
+ }
+ }
+ },
+ "GoBack": {
+ "back": "戻る"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "カスタムモデル、デフォルトでは関数呼び出しとビジョン認識の両方をサポートしています。上記機能の有効性を確認してください。",
+ "file": "このモデルはファイルのアップロードと認識をサポートしています。",
+ "functionCall": "このモデルは関数呼び出し(Function Call)をサポートしています。",
+ "tokens": "このモデルは1つのセッションあたり最大{{tokens}}トークンをサポートしています。",
+ "vision": "このモデルはビジョン認識をサポートしています。"
+ },
+ "removed": "選択されたモデルはリストから削除されました。選択を解除すると自動的に削除されます。"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "有効なモデルがありません。設定に移動して有効にしてください。",
+ "provider": "プロバイダー"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/discover.json b/DigitalHumanWeb/locales/ja-JP/discover.json
new file mode 100644
index 0000000..c7bb66f
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "アシスタントを追加",
+ "addAgentAndConverse": "アシスタントを追加して会話する",
+ "addAgentSuccess": "追加成功",
+ "conversation": {
+ "l1": "こんにちは、私は **{{name}}** です。何でも質問してください、できる限りお答えします ~",
+ "l2": "以下は私の能力の紹介です: ",
+ "l3": "さあ、会話を始めましょう!"
+ },
+ "description": "アシスタントの紹介",
+ "detail": "詳細",
+ "list": "アシスタントリスト",
+ "more": "もっと見る",
+ "plugins": "プラグインを統合",
+ "recentSubmits": "最近の更新",
+ "suggestions": "関連情報",
+ "systemRole": "アシスタント設定",
+ "try": "試してみる"
+ },
+ "back": "戻る",
+ "category": {
+ "assistant": {
+ "academic": "学術",
+ "all": "すべて",
+ "career": "キャリア",
+ "copywriting": "コピーライティング",
+ "design": "デザイン",
+ "education": "教育",
+ "emotions": "感情",
+ "entertainment": "エンターテイメント",
+ "games": "ゲーム",
+ "general": "一般",
+ "life": "生活",
+ "marketing": "マーケティング",
+ "office": "オフィス",
+ "programming": "プログラミング",
+ "translation": "翻訳"
+ },
+ "plugin": {
+ "all": "すべて",
+ "gaming-entertainment": "ゲーム・エンターテイメント",
+ "life-style": "ライフスタイル",
+ "media-generate": "メディア生成",
+ "science-education": "科学・教育",
+ "social": "ソーシャルメディア",
+ "stocks-finance": "株式・金融",
+ "tools": "実用ツール",
+ "web-search": "ウェブ検索"
+ }
+ },
+ "cleanFilter": "フィルターをクリア",
+ "create": "作成",
+ "createGuide": {
+ "func1": {
+ "desc1": "会話ウィンドウの右上隅の設定から、アシスタントを提出したい設定ページに入ります;",
+ "desc2": "右上隅のアシスタントマーケットに提出ボタンをクリックします。",
+ "tag": "方法1",
+ "title": "LobeChatを通じて提出"
+ },
+ "func2": {
+ "button": "Githubアシスタントリポジトリに移動",
+ "desc": "アシスタントをインデックスに追加したい場合は、pluginsディレクトリにagent-template.jsonまたはagent-template-full.jsonを使用してエントリを作成し、簡単な説明を書いて適切にタグ付けし、プルリクエストを作成してください。",
+ "tag": "方法2",
+ "title": "Githubを通じて提出"
+ }
+ },
+ "dislike": "嫌い",
+ "filter": "フィルター",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "すべての著者",
+ "followed": "フォローしている著者",
+ "title": "著者範囲"
+ },
+ "contentLength": "最小コンテキスト長",
+ "maxToken": {
+ "title": "最大長さを設定 (トークン)",
+ "unlimited": "無制限"
+ },
+ "other": {
+ "functionCall": "関数呼び出しをサポート",
+ "title": "その他",
+ "vision": "視覚認識をサポート",
+ "withKnowledge": "知識ベース付き",
+ "withTool": "プラグイン付き"
+ },
+ "pricing": "モデル価格",
+ "timePeriod": {
+ "all": "すべての時間",
+ "day": "過去24時間",
+ "month": "過去30日",
+ "title": "時間範囲",
+ "week": "過去7日",
+ "year": "過去1年"
+ }
+ },
+ "home": {
+ "featuredAssistants": "おすすめアシスタント",
+ "featuredModels": "おすすめモデル",
+ "featuredProviders": "おすすめモデルサービスプロバイダー",
+ "featuredTools": "おすすめプラグイン",
+ "more": "もっと発見する"
+ },
+ "like": "好き",
+ "models": {
+ "chat": "会話を始める",
+ "contentLength": "最大コンテキスト長",
+ "free": "無料",
+ "guide": "設定ガイド",
+ "list": "モデルリスト",
+ "more": "もっと見る",
+ "parameterList": {
+ "defaultValue": "デフォルト値",
+ "docs": "ドキュメントを見る",
+ "frequency_penalty": {
+ "desc": "この設定は、モデルが入力中に既に出現した特定の語彙の使用頻度を調整します。高い値はそのような繰り返しの可能性を低下させ、負の値は逆の効果を生み出します。語彙の罰則は出現回数に応じて増加しません。負の値は語彙の繰り返し使用を奨励します。",
+ "title": "頻度ペナルティ"
+ },
+ "max_tokens": {
+ "desc": "この設定は、モデルが一度の応答で生成できる最大の長さを定義します。値を高く設定すると、モデルはより長い応答を生成でき、値を低く設定すると、応答の長さが制限され、より簡潔になります。異なるアプリケーションシーンに応じて、この値を適切に調整することで、期待される応答の長さと詳細度を達成するのに役立ちます。",
+ "title": "一度の応答制限"
+ },
+ "presence_penalty": {
+ "desc": "この設定は、語彙が入力中に出現する頻度に基づいて語彙の繰り返し使用を制御することを目的としています。入力中に多く出現する語彙の使用を減らそうとし、その使用頻度は出現頻度に比例します。語彙の罰則は出現回数に応じて増加します。負の値は語彙の繰り返し使用を奨励します。",
+ "title": "トピックの新鮮さ"
+ },
+ "range": "範囲",
+ "temperature": {
+ "desc": "この設定は、モデルの応答の多様性に影響を与えます。低い値はより予測可能で典型的な応答をもたらし、高い値はより多様で珍しい応答を奨励します。値が0に設定されると、モデルは与えられた入力に対して常に同じ応答を返します。",
+ "title": "ランダム性"
+ },
+ "title": "モデルパラメータ",
+ "top_p": {
+ "desc": "この設定は、モデルの選択を可能性の高い一定の割合の語彙に制限します:累積確率がPに達するトップ語彙のみを選択します。低い値はモデルの応答をより予測可能にし、デフォルト設定はモデルが全範囲の語彙から選択できることを許可します。",
+ "title": "核サンプリング"
+ },
+ "type": "タイプ"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChatは、このプロバイダーに対してカスタムAPIキーを使用することをサポートしています。",
+ "input": "入力価格",
+ "inputTooltip": "百万トークンあたりのコスト",
+ "latency": "レイテンシ",
+ "latencyTooltip": "プロバイダーが最初のトークンを送信する平均応答時間",
+ "maxOutput": "最大出力長",
+ "maxOutputTooltip": "このエンドポイントが生成できる最大トークン数",
+ "officialTooltip": "LobeHub公式サービス",
+ "output": "出力価格",
+ "outputTooltip": "百万トークンあたりのコスト",
+ "streamCancellationTooltip": "このプロバイダーはストリームキャンセル機能をサポートしています。",
+ "throughput": "スループット",
+ "throughputTooltip": "ストリームリクエストが毎秒転送する平均トークン数"
+ },
+ "suggestions": "関連モデル",
+ "supportedProviders": "このモデルをサポートするプロバイダー"
+ },
+ "plugins": {
+ "community": "コミュニティプラグイン",
+ "install": "プラグインをインストール",
+ "installed": "インストール済み",
+ "list": "プラグインリスト",
+ "meta": {
+ "description": "説明",
+ "parameter": "パラメータ",
+ "title": "ツールパラメータ",
+ "type": "タイプ"
+ },
+ "more": "もっと見る",
+ "official": "公式プラグイン",
+ "recentSubmits": "最近の更新",
+ "suggestions": "関連する提案"
+ },
+ "providers": {
+ "config": "プロバイダーの設定",
+ "list": "モデルサービスプロバイダーリスト",
+ "modelCount": "{{count}} 個のモデル",
+ "modelSite": "モデルドキュメント",
+ "more": "もっと見る",
+ "officialSite": "公式サイト",
+ "showAllModels": "すべてのモデルを表示",
+ "suggestions": "関連プロバイダー",
+ "supportedModels": "サポートされているモデル"
+ },
+ "search": {
+ "placeholder": "名前、紹介、またはキーワードを検索...",
+ "result": "{{keyword}}に関する{{count}}件の検索結果",
+ "searching": "検索中..."
+ },
+ "sort": {
+ "mostLiked": "最も好まれた",
+ "mostUsed": "最も使用された",
+ "newest": "新しい順",
+ "oldest": "古い順",
+ "recommended": "おすすめ"
+ },
+ "tab": {
+ "assistants": "アシスタント",
+ "home": "ホーム",
+ "models": "モデル",
+ "plugins": "プラグイン",
+ "providers": "モデルプロバイダー"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/error.json b/DigitalHumanWeb/locales/ja-JP/error.json
new file mode 100644
index 0000000..f2b1627
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "続ける",
+ "desc": "{{greeting}}、続けてサービスを提供できて嬉しいです。前回の話題に戻りましょう",
+ "title": "おかえりなさい、{{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "ホームに戻る",
+ "desc": "後で試してみるか、既知の世界に戻る",
+ "retry": "再読み込み",
+ "title": "ページに問題が発生しました.."
+ },
+ "fetchError": "リクエストが失敗しました",
+ "fetchErrorDetail": "エラーの詳細",
+ "notFound": {
+ "backHome": "ホームに戻る",
+ "check": "URLが正しいかどうかを確認してください",
+ "desc": "お探しのページが見つかりませんでした",
+ "title": "未知の領域に入りましたか?"
+ },
+ "pluginSettings": {
+ "desc": "以下の設定を完了すると、プラグインを使用することができます",
+ "title": "{{name}} プラグイン設定"
+ },
+ "response": {
+ "400": "申し訳ありませんが、サーバーはリクエストを理解できません。リクエストパラメータが正しいかどうか確認してください",
+ "401": "申し訳ありませんが、サーバーはリクエストを拒否しました。権限が不足しているか、有効な認証情報が提供されていない可能性があります",
+ "403": "申し訳ありませんが、サーバーはリクエストを拒否しました。このコンテンツにアクセスする権限がありません",
+ "404": "申し訳ありませんが、サーバーはリクエストしたページやリソースを見つけることができません。URLが正しいかどうか確認してください",
+ "405": "申し訳ありませんが、サーバーは使用されたリクエストメソッドをサポートしていません。リクエストメソッドが正しいかどうか確認してください",
+ "406": "申し訳ありませんが、サーバーはリクエストされたコンテンツの特性に基づいて要求を完了できませんでした",
+ "407": "申し訳ありませんが、このリクエストを続行するにはプロキシ認証が必要です",
+ "408": "申し訳ありませんが、サーバーはリクエストの待機中にタイムアウトしました。ネットワーク接続を確認してからもう一度お試しください",
+ "409": "申し訳ありませんが、リクエストが競合して処理できません。これはリソースの状態とリクエストが互換性がないためかもしれません",
+ "410": "申し訳ありませんが、リクエストされたリソースは永久に削除されました。見つかりません",
+ "411": "申し訳ありませんが、サーバーは有効なコンテンツ長さを含まないリクエストを処理できません",
+ "412": "申し訳ありませんが、サーバー側の条件を満たさないため、リクエストを完了できません",
+ "413": "申し訳ありませんが、リクエストデータが大きすぎてサーバーが処理できません",
+ "414": "申し訳ありませんが、リクエストのURIが長すぎてサーバーが処理できません",
+ "415": "申し訳ありませんが、サーバーはリクエストに添付されたメディア形式を処理できません",
+ "416": "申し訳ありませんが、サーバーはリクエストされた範囲を満たすことができません",
+ "417": "申し訳ありませんが、サーバーはあなたの期待に応えることができません",
+ "422": "申し訳ありませんが、リクエストの形式は正しいですが、意味エラーが含まれているため応答できません",
+ "423": "申し訳ありませんが、リクエストされたリソースはロックされています",
+ "424": "申し訳ありませんが、以前のリクエストの失敗により、現在のリクエストを完了できません",
+ "426": "申し訳ありませんが、サーバーはクライアントをより高いプロトコルバージョンにアップグレードするよう要求しています",
+ "428": "申し訳ありませんが、サーバーは事前条件を要求し、リクエストに正しい条件ヘッダーを含めるよう要求しています",
+ "429": "申し訳ありませんが、リクエストが多すぎてサーバーが少し疲れています。しばらくしてからもう一度お試しください",
+ "431": "申し訳ありませんが、リクエストヘッダーフィールドが大きすぎてサーバーが処理できません",
+ "451": "申し訳ありませんが、法的な理由により、サーバーはこのリソースの提供を拒否しています",
+ "500": "申し訳ありませんが、サーバーに一時的な問題が発生し、リクエストを完了できません。しばらくしてから再試行してください",
+ "502": "申し訳ありませんが、サーバーは一時的にサービスを提供できません。しばらくしてから再試行してください",
+ "503": "申し訳ありませんが、サーバーは現在、リクエストを処理できません。オーバーロードまたはメンテナンス中の可能性があります。しばらくしてから再試行してください",
+ "504": "申し訳ありませんが、サーバーは上位サーバーからの応答を待っていません。しばらくしてから再試行してください",
+ "AgentRuntimeError": "Lobe言語モデルの実行時にエラーが発生しました。以下の情報に基づいてトラブルシューティングを行うか、再試行してください。",
+ "FreePlanLimit": "現在は無料ユーザーですので、この機能を使用することはできません。有料プランにアップグレードして継続してください。",
+ "InvalidAccessCode": "パスワードが正しくないか空です。正しいアクセスパスワードを入力するか、カスタムAPIキーを追加してください",
+ "InvalidBedrockCredentials": "Bedrockの認証に失敗しました。AccessKeyId/SecretAccessKeyを確認してから再試行してください。",
+ "InvalidClerkUser": "申し訳ありませんが、現在ログインしていません。続行するにはログインまたはアカウント登録を行ってください",
+ "InvalidGithubToken": "Githubのパーソナルアクセストークンが無効または空です。Githubのパーソナルアクセストークンを確認してから、再試行してください。",
+ "InvalidOllamaArgs": "Ollamaの設定が正しくありません。Ollamaの設定を確認してからもう一度お試しください",
+ "InvalidProviderAPIKey": "{{provider}} APIキーが正しくないか空です。{{provider}} APIキーを確認して再試行してください。",
+ "LocationNotSupportError": "申し訳ありませんが、お住まいの地域ではこのモデルサービスをサポートしていません。地域制限またはサービスが利用できない可能性があります。現在の位置がこのサービスをサポートしているかどうかを確認するか、他の位置情報を使用してみてください。",
+ "NoOpenAIAPIKey": "OpenAI APIキーが空です。カスタムOpenAI APIキーを追加してください。",
+ "OllamaBizError": "Ollamaサービスのリクエストでエラーが発生しました。以下の情報に基づいてトラブルシューティングを行うか、再度お試しください",
+ "OllamaServiceUnavailable": "Ollamaサービスが利用できません。Ollamaが正常に動作しているか、またはOllamaのクロスオリジン設定が正しく行われているかを確認してください",
+ "OpenAIBizError": "リクエスト OpenAI サービスでエラーが発生しました。以下の情報を確認して再試行してください。",
+ "PluginApiNotFound": "申し訳ありませんが、プラグインのマニフェストに指定されたAPIが見つかりませんでした。リクエストメソッドとプラグインのマニフェストのAPIが一致しているかどうかを確認してください",
+ "PluginApiParamsError": "申し訳ありませんが、プラグインのリクエストパラメータの検証に失敗しました。パラメータとAPIの説明が一致しているかどうか確認してください",
+ "PluginFailToTransformArguments": "申し訳ありませんが、プラグインの引数変換に失敗しました。助手メッセージを再生成するか、より強力な Tools Calling 機能を持つAIモデルに切り替えて再試行してください",
+ "PluginGatewayError": "申し訳ありませんが、プラグインゲートウェイでエラーが発生しました。プラグインゲートウェイの設定を確認してください。",
+ "PluginManifestInvalid": "申し訳ありませんが、このプラグインのマニフェストの検証に失敗しました。マニフェストの形式が正しいかどうかを確認してください",
+ "PluginManifestNotFound": "申し訳ありませんが、サーバーでプラグインのマニフェストファイル (manifest.json) が見つかりませんでした。プラグインのマニフェストファイルのアドレスが正しいかどうかを確認してください",
+ "PluginMarketIndexInvalid": "申し訳ありませんが、プラグインのインデックスの検証に失敗しました。インデックスファイルの形式が正しいかどうかを確認してください",
+ "PluginMarketIndexNotFound": "申し訳ありませんが、プラグインのインデックスが見つかりませんでした。インデックスのアドレスが正しいかどうか確認してください",
+ "PluginMetaInvalid": "申し訳ありませんが、プラグインのメタ情報の検証に失敗しました。プラグインのメタ情報の形式が正しいかどうか確認してください",
+ "PluginMetaNotFound": "申し訳ありませんが、インデックスでプラグインが見つかりませんでした。プラグインの設定情報をインデックスで確認してください",
+ "PluginOpenApiInitError": "申し訳ありませんが、OpenAPIクライアントの初期化に失敗しました。OpenAPIの設定情報を確認してください。",
+ "PluginServerError": "プラグインサーバーのリクエストエラーが発生しました。以下のエラーメッセージを参考に、プラグインのマニフェストファイル、設定、サーバー実装を確認してください",
+ "PluginSettingsInvalid": "このプラグインを使用するには、正しい設定が必要です。設定が正しいかどうか確認してください",
+ "ProviderBizError": "リクエスト {{provider}} サービスでエラーが発生しました。以下の情報を確認して再試行してください。",
+ "StreamChunkError": "ストリーミングリクエストのメッセージブロック解析エラーです。現在のAPIインターフェースが標準仕様に準拠しているか確認するか、APIプロバイダーにお問い合わせください。",
+ "SubscriptionPlanLimit": "ご契約のクォータが使い切られましたので、この機能を使用することはできません。より高いプランにアップグレードするか、リソースパッケージを購入して継続してください。",
+ "UnknownChatFetchError": "申し訳ありませんが、未知のリクエストエラーが発生しました。以下の情報をもとに確認するか、再試行してください。"
+ },
+ "stt": {
+ "responseError": "サービスリクエストが失敗しました。設定を確認するか、もう一度お試しください"
+ },
+ "tts": {
+ "responseError": "サービスリクエストが失敗しました。設定を確認するか、もう一度お試しください"
+ },
+ "unlock": {
+ "addProxyUrl": "OpenAI 代理アドレスを追加する(オプション)",
+ "apiKey": {
+ "description": "{{name}} APIキーを入力してセッションを開始します。",
+ "title": "カスタム{{name}} APIキーを使用"
+ },
+ "closeMessage": "ヒントを閉じる",
+ "confirm": "確認して再試行",
+ "oauth": {
+ "description": "管理者が統一ログイン認証を有効にしました。下のボタンをクリックしてログインすると、アプリがロック解除されます。",
+ "success": "ログインに成功しました",
+ "title": "アカウントにログイン",
+ "welcome": "ようこそ!"
+ },
+ "password": {
+ "description": "管理者によってアプリが暗号化されました。アプリをロック解除するには、アプリのパスワードを入力してください。パスワードは1回だけ入力すればよいです",
+ "placeholder": "パスワードを入力してください",
+ "title": "パスワードを入力してアプリをロック解除"
+ },
+ "tabs": {
+ "apiKey": "カスタムAPIキー",
+ "password": "パスワード"
+ }
+ },
+ "upload": {
+ "desc": "詳細: {{detail}}",
+ "fileOnlySupportInServerMode": "現在のデプロイモードでは、画像以外のファイルのアップロードはサポートされていません。{{ext}} 形式のファイルをアップロードするには、サーバーデータベースデプロイに切り替えるか、{{cloud}} サービスを使用してください。",
+ "networkError": "ネットワークが正常であることを確認し、ファイルストレージサービスのクロスオリジン設定が正しいかどうかを確認してください。",
+ "title": "ファイルのアップロードに失敗しました。ネットワーク接続を確認するか、後でもう一度お試しください",
+ "unknownError": "エラーの原因: {{reason}}",
+ "uploadFailed": "ファイルのアップロードに失敗しました。"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/file.json b/DigitalHumanWeb/locales/ja-JP/file.json
new file mode 100644
index 0000000..bc3aeb6
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "ファイルと知識ベースを管理する",
+ "detail": {
+ "basic": {
+ "createdAt": "作成日時",
+ "filename": "ファイル名",
+ "size": "ファイルサイズ",
+ "title": "基本情報",
+ "type": "フォーマット",
+ "updatedAt": "更新日時"
+ },
+ "data": {
+ "chunkCount": "チャンク数",
+ "embedding": {
+ "default": "ベクトル化されていません",
+ "error": "失敗",
+ "pending": "開始待ち",
+ "processing": "処理中",
+ "success": "完了"
+ },
+ "embeddingStatus": "ベクトル化"
+ }
+ },
+ "empty": "アップロードされたファイル/フォルダーはありません",
+ "header": {
+ "actions": {
+ "newFolder": "新しいフォルダーを作成",
+ "uploadFile": "ファイルをアップロード",
+ "uploadFolder": "フォルダーをアップロード"
+ },
+ "uploadButton": "アップロード"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "この知識ベースを削除しようとしています。ファイルは削除されず、すべてのファイルに移動されます。知識ベースを削除すると復元できませんので、慎重に操作してください。",
+ "empty": "クリックして <1>+1> 知識ベースを作成を開始"
+ },
+ "new": "新しい知識ベース",
+ "title": "知識ベース"
+ },
+ "networkError": "知識ベースの取得に失敗しました。ネットワーク接続を確認してから再試行してください",
+ "notSupportGuide": {
+ "desc": "現在のデプロイメントインスタンスはクライアントデータベースモードであり、ファイル管理機能を使用できません。<1>サーバーデータベースデプロイメントモード1>に切り替えるか、直接 <3>LobeChat Cloud3> を使用してください。",
+ "features": {
+ "allKind": {
+ "desc": "Word、PPT、Excel、PDF、TXTなどの一般的な文書形式や、JS、Pythonなどの主流のコードファイルを含む、主流のファイルタイプをサポートしています。",
+ "title": "多様なファイルタイプの解析"
+ },
+ "embeddings": {
+ "desc": "高性能ベクトルモデルを使用して、テキストチャンクをベクトル化し、ファイル内容の意味的検索を実現します。",
+ "title": "ベクトルの意味化"
+ },
+ "repos": {
+ "desc": "知識ベースの作成をサポートし、さまざまなタイプのファイルを追加して、あなたの専門知識を構築できます。",
+ "title": "知識ベース"
+ }
+ },
+ "title": "現在のデプロイメントモードはファイル管理をサポートしていません"
+ },
+ "preview": {
+ "downloadFile": "ファイルをダウンロード",
+ "unsupportedFileAndContact": "このファイル形式はオンラインプレビューをサポートしていません。プレビューのリクエストがある場合は、ぜひ<1>ご連絡ください1>。"
+ },
+ "searchFilePlaceholder": "ファイルを検索",
+ "tab": {
+ "all": "すべてのファイル",
+ "audios": "音声",
+ "documents": "文書",
+ "images": "画像",
+ "videos": "動画",
+ "websites": "ウェブサイト"
+ },
+ "title": "ファイル",
+ "uploadDock": {
+ "body": {
+ "collapse": "折りたたむ",
+ "item": {
+ "done": "アップロード完了",
+ "error": "アップロード失敗、再試行してください",
+ "pending": "アップロード準備中...",
+ "processing": "ファイル処理中...",
+ "restTime": "残り {{time}}"
+ }
+ },
+ "totalCount": "合計 {{count}} 件",
+ "uploadStatus": {
+ "error": "アップロードエラー",
+ "pending": "アップロード待機中",
+ "processing": "アップロード中",
+ "success": "アップロード完了",
+ "uploading": "アップロード中"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/knowledgeBase.json b/DigitalHumanWeb/locales/ja-JP/knowledgeBase.json
new file mode 100644
index 0000000..bc61c84
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "ファイルが正常に追加されました。<1>今すぐ確認1>",
+ "confirm": "追加",
+ "id": {
+ "placeholder": "追加する知識ベースを選択してください",
+ "required": "知識ベースを選択してください",
+ "title": "ターゲット知識ベース"
+ },
+ "title": "知識ベースに追加",
+ "totalFiles": "選択されたファイルは {{count}} 件です"
+ },
+ "createNew": {
+ "confirm": "新規作成",
+ "description": {
+ "placeholder": "知識ベースの説明(任意)"
+ },
+ "formTitle": "基本情報",
+ "name": {
+ "placeholder": "知識ベースの名前",
+ "required": "知識ベースの名前を入力してください"
+ },
+ "title": "新しい知識ベースを作成"
+ },
+ "tab": {
+ "evals": "評価",
+ "files": "ドキュメント",
+ "settings": "設定",
+ "testing": "リコールテスト"
+ },
+ "title": "知識ベース"
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/market.json b/DigitalHumanWeb/locales/ja-JP/market.json
new file mode 100644
index 0000000..c24d417
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "エージェントを追加する",
+ "addAgentAndConverse": "エージェントを追加して会話する",
+ "addAgentSuccess": "追加に成功しました",
+ "guide": {
+ "func1": {
+ "desc1": "セッションウィンドウで右上隅の設定にアクセスして、アシスタントの設定ページに移動します。",
+ "desc2": "右上隅の「アシスタントマーケットに送信」ボタンをクリックします。",
+ "tag": "方法1",
+ "title": "LobeChatを使用して送信する"
+ },
+ "func2": {
+ "button": "GitHubのアシスタントリポジトリに移動する",
+ "desc": "アシスタントをインデックスに追加したい場合は、agent-template.jsonまたはagent-template-full.jsonを使用して、pluginsディレクトリにエントリを作成し、簡単な説明と適切なタグを付けてプルリクエストを作成します。",
+ "tag": "方法2",
+ "title": "GitHubを使用して送信する"
+ }
+ },
+ "search": {
+ "placeholder": "エージェントの名前、説明、またはキーワードを検索..."
+ },
+ "sidebar": {
+ "comment": "コメント",
+ "prompt": "プロンプト",
+ "title": "エージェントの詳細"
+ },
+ "submitAgent": "エージェントを提出する",
+ "title": {
+ "allAgents": "すべてのエージェント",
+ "recentSubmits": "最近の追加"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/metadata.json b/DigitalHumanWeb/locales/ja-JP/metadata.json
new file mode 100644
index 0000000..fbc4bcf
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}}が提供する最高のChatGPT、Claude、Gemini、OLLaMA WebUIの体験",
+ "title": "{{appName}}:個人AI効率ツール、より賢い脳を手に入れよう"
+ },
+ "discover": {
+ "assistants": {
+ "description": "コンテンツ作成、コピーライティング、Q&A、画像生成、動画生成、音声生成、インテリジェントエージェント、自動化ワークフロー、あなた専用のAI / GPTs / OLLaMAインテリジェントアシスタントをカスタマイズ",
+ "title": "AIアシスタント"
+ },
+ "description": "コンテンツ作成、コピーライティング、Q&A、画像生成、動画生成、音声生成、インテリジェントエージェント、自動化ワークフロー、カスタムAIアプリケーション、あなた専用のAIアプリケーションワークスペースをカスタマイズ",
+ "models": {
+ "description": "主流のAIモデルを探索 OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "AIモデル"
+ },
+ "plugins": {
+ "description": "グラフ生成、学術、画像生成、動画生成、音声生成、自動化ワークフローの検索を行い、あなたのアシスタントに豊富なプラグイン機能を統合します。",
+ "title": "AIプラグイン"
+ },
+ "providers": {
+ "description": "主流のモデルプロバイダーを探索 OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "AIモデルサービスプロバイダー"
+ },
+ "search": "検索",
+ "title": "発見"
+ },
+ "plugins": {
+ "description": "検索、グラフ生成、学術、画像生成、動画生成、音声生成、自動化ワークフロー、ChatGPT / Claude専用のToolCallプラグイン機能をカスタマイズ",
+ "title": "プラグインマーケット"
+ },
+ "welcome": {
+ "description": "{{appName}}が提供する最高のChatGPT、Claude、Gemini、OLLaMA WebUIの体験",
+ "title": "ようこそ{{appName}}へ:個人AI効率ツール、より賢い脳を手に入れよう"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/migration.json b/DigitalHumanWeb/locales/ja-JP/migration.json
new file mode 100644
index 0000000..c9b7b1c
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "ローカルデータをクリア",
+ "downloadBackup": "データバックアップをダウンロード",
+ "reUpgrade": "再度アップグレード",
+ "start": "開始する",
+ "upgrade": "ワンクリックアップグレード"
+ },
+ "clear": {
+ "confirm": "ローカルデータをクリアします(グローバル設定には影響しません)。データバックアップを既にダウンロードしたことを確認してください。"
+ },
+ "description": "{{appName}}のデータストレージは新バージョンで大きな飛躍を遂げました。これにより、旧バージョンのデータをアップグレードし、より良い使用体験を提供します。",
+ "features": {
+ "capability": {
+ "desc": "IndexedDB技術に基づき、一生分のセッションメッセージを保存できます。",
+ "title": "大容量"
+ },
+ "performance": {
+ "desc": "百万件のメッセージが自動的にインデックスされ、検索クエリはミリ秒単位で応答します。",
+ "title": "高性能"
+ },
+ "use": {
+ "desc": "タイトル、説明、タグ、メッセージ内容、さらには翻訳テキストの検索をサポートし、日常的な検索効率が大幅に向上しました。",
+ "title": "使いやすさ"
+ }
+ },
+ "title": "{{appName}} データ進化",
+ "upgrade": {
+ "error": {
+ "subTitle": "申し訳ありませんが、データベースのアップグレード中に異常が発生しました。以下の方法をお試しください:A. ローカルデータをクリアしてから、バックアップデータを再インポートする;B. 「再アップグレード」ボタンをクリックする。
それでもエラーが発生する場合は、<1>問題を報告1>してください。すぐに調査いたします。",
+ "title": "データベースのアップグレードに失敗しました"
+ },
+ "success": {
+ "subTitle": "{{appName}}のデータベースは最新バージョンにアップグレードされました。さっそく体験を始めましょう。",
+ "title": "データベースのアップグレードに成功しました"
+ }
+ },
+ "upgradeTip": "アップグレードには約10〜20秒かかります。アップグレード中は{{appName}}を閉じないでください。"
+ },
+ "migrateError": {
+ "missVersion": "インポートデータにバージョン番号がありません。ファイルを確認してからもう一度お試しください",
+ "noMigration": "現在のバージョンに対応するマイグレーションソリューションが見つかりませんでした。バージョン番号を確認してから再試行してください。問題が解決しない場合は、問題を報告してください"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/modelProvider.json b/DigitalHumanWeb/locales/ja-JP/modelProvider.json
new file mode 100644
index 0000000..6ca08bc
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Azure の API バージョン、YYYY-MM-DD 形式に従う、[最新バージョン](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions)を参照",
+ "fetch": "リストを取得",
+ "title": "Azure API Version"
+ },
+ "empty": "モデル ID を入力して最初のモデルを追加してください",
+ "endpoint": {
+ "desc": "Azure ポータルでリソースを確認する際に、「キーとエンドポイント」セクションでこの値を見つけることができます",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Azure API アドレス"
+ },
+ "modelListPlaceholder": "展開したい OpenAI モデルを選択または追加してください",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Azure ポータルでリソースを確認する際に、「キーとエンドポイント」セクションでこの値を見つけることができます。KEY1 または KEY2 を使用できます",
+ "placeholder": "Azure API Key",
+ "title": "API Key"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "AWS Access Key Id を入力してください",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "AccessKeyId / SecretAccessKey が正しく入力されているかをテストします"
+ },
+ "region": {
+ "desc": "AWS リージョンを入力してください",
+ "placeholder": "AWS リージョン",
+ "title": "AWS リージョン"
+ },
+ "secretAccessKey": {
+ "desc": "AWS Secret Access Key を入力してください",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "AWS SSO/STSを使用している場合は、AWSセッショントークンを入力してください。",
+ "placeholder": "AWSセッショントークン",
+ "title": "AWSセッショントークン(オプション)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "カスタムサービスリージョン",
+ "customSessionToken": "カスタムセッショントークン",
+ "description": "AWS AccessKeyId / SecretAccessKey を入力するとセッションを開始できます。アプリは認証情報を記録しません",
+ "title": "使用カスタム Bedrock 認証情報"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "あなたのGithub PATを入力してください。[こちら](https://github.com/settings/tokens)をクリックして作成します",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "プロキシアドレスが正しく入力されているかをテストします",
+ "title": "連結性チェック"
+ },
+ "customModelName": {
+ "desc": "カスタムモデルを追加します。複数のモデルはカンマ(,)で区切ります",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "カスタムモデル名"
+ },
+ "download": {
+ "desc": "Ollama is currently downloading the model. Please try not to close this page. The download will resume from where it left off if interrupted.",
+ "remainingTime": "Remaining Time",
+ "speed": "Download Speed",
+ "title": "Downloading model {{model}}"
+ },
+ "endpoint": {
+ "desc": "Ollamaプロキシインターフェースアドレスを入力してください。ローカルで追加の指定がない場合は空白のままにしてください",
+ "title": "プロキシインターフェースアドレス"
+ },
+ "setup": {
+ "cors": {
+ "description": "ブラウザのセキュリティ制限により、Ollama を正常に使用するにはクロスオリジンリクエストを許可する必要があります。",
+ "linux": {
+ "env": "[Service] セクションに `Environment` を追加し、OLLAMA_ORIGINS 環境変数を設定してください:",
+ "reboot": "systemd を再読み込みして Ollama を再起動します。",
+ "systemd": "systemd を呼び出して ollama サービスを編集します:"
+ },
+ "macos": "「ターミナル」アプリを開き、以下のコマンドを貼り付けて実行し、Enter キーを押してください",
+ "reboot": "Ollama サービスを再起動するには、実行後に再起動してください",
+ "title": "Ollama の CORS アクセスを許可する設定",
+ "windows": "Windows 上では、「コントロールパネル」をクリックしてシステム環境変数を編集します。ユーザーアカウントに「OLLAMA_ORIGINS」という名前の環境変数を作成し、値を * に設定し、「OK/適用」をクリックして保存します"
+ },
+ "install": {
+ "description": "Ollamaを有効にしていることを確認してください。Ollamaをまだダウンロードしていない場合は、公式サイト<1>からダウンロード1>してください。",
+ "docker": "もしDockerを使用することを好む場合、Ollamaは公式Dockerイメージも提供しています。以下のコマンドを使用して取得できます:",
+ "linux": {
+ "command": "以下のコマンドを使用してインストール:",
+ "manual": "または、<1>Linuxマニュアルインストールガイド1>を参照して手動でインストールすることもできます"
+ },
+ "title": "ローカルでOllamaアプリをインストールして起動する",
+ "windowsTab": "Windows(プレビュー版)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Cancel Download",
+ "confirm": "Download",
+ "description": "Enter your Ollama model tag to continue the session",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Starting download...",
+ "title": "Download specified Ollama model"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI 零一万物"
+ },
+ "zhipu": {
+ "title": "智谱"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/models.json b/DigitalHumanWeb/locales/ja-JP/models.json
new file mode 100644
index 0000000..c1a5c3f
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34Bは豊富な訓練サンプルを用いて業界アプリケーションで優れたパフォーマンスを提供します。"
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9Bは16Kトークンをサポートし、高効率でスムーズな言語生成能力を提供します。"
+ },
+ "360gpt-pro": {
+ "description": "360GPT Proは360 AIモデルシリーズの重要なメンバーであり、高効率なテキスト処理能力を持ち、多様な自然言語アプリケーションシーンに対応し、長文理解や多輪対話などの機能をサポートします。"
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turboは強力な計算と対話能力を提供し、優れた意味理解と生成効率を備え、企業や開発者にとって理想的なインテリジェントアシスタントソリューションです。"
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8Kは意味の安全性と責任指向を強調し、コンテンツの安全性に高い要求を持つアプリケーションシーンのために設計されており、ユーザー体験の正確性と堅牢性を確保します。"
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Proは360社が発表した高級自然言語処理モデルで、卓越したテキスト生成と理解能力を備え、特に生成と創作の分野で優れたパフォーマンスを発揮し、複雑な言語変換や役割演技タスクを処理できます。"
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultraは星火大モデルシリーズの中で最も強力なバージョンで、ネットワーク検索のリンクをアップグレードし、テキストコンテンツの理解と要約能力を向上させています。これは、オフィスの生産性を向上させ、要求に正確に応えるための全方位のソリューションであり、業界をリードするインテリジェントな製品です。"
+ },
+ "Baichuan2-Turbo": {
+ "description": "検索強化技術を採用し、大モデルと分野知識、全網知識の全面的なリンクを実現しています。PDF、Wordなどのさまざまな文書のアップロードやURL入力をサポートし、情報取得が迅速かつ包括的で、出力結果は正確かつ専門的です。"
+ },
+ "Baichuan3-Turbo": {
+ "description": "企業の高頻度シーンに最適化され、効果が大幅に向上し、高コストパフォーマンスを実現しています。Baichuan2モデルに対して、コンテンツ生成が20%、知識問答が17%、役割演技能力が40%向上しています。全体的な効果はGPT3.5よりも優れています。"
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "128Kの超長コンテキストウィンドウを備え、企業の高頻度シーンに最適化され、効果が大幅に向上し、高コストパフォーマンスを実現しています。Baichuan2モデルに対して、コンテンツ生成が20%、知識問答が17%、役割演技能力が40%向上しています。全体的な効果はGPT3.5よりも優れています。"
+ },
+ "Baichuan4": {
+ "description": "モデル能力は国内でトップであり、知識百科、長文、生成創作などの中国語タスクで海外の主流モデルを超えています。また、業界をリードするマルチモーダル能力を備え、複数の権威ある評価基準で優れたパフォーマンスを示しています。"
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B)は、革新的なモデルであり、多分野のアプリケーションや複雑なタスクに適しています。"
+ },
+ "Max-32k": {
+ "description": "Spark Max 32Kは、大規模なコンテキスト処理能力を備え、より強力なコンテキスト理解と論理推論能力を持ち、32Kトークンのテキスト入力をサポートします。長文書の読解やプライベートな知識に基づく質問応答などのシーンに適しています。"
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPOは非常に柔軟なマルチモデル統合で、卓越した創造的体験を提供することを目的としています。"
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B)は、高精度の指示モデルであり、複雑な計算に適しています。"
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B)は、最適化された言語出力と多様なアプリケーションの可能性を提供します。"
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Phi-3-miniモデルのリフレッシュ版です。"
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "同じPhi-3-mediumモデルですが、RAGまたは少数ショットプロンプティング用により大きなコンテキストサイズを持っています。"
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "14Bパラメータのモデルで、Phi-3-miniよりも高品質で、質の高い推論密度のデータに焦点を当てています。"
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "同じPhi-3-miniモデルですが、RAGまたは少数ショットプロンプティング用により大きなコンテキストサイズを持っています。"
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "Phi-3ファミリーの最小メンバー。品質と低遅延の両方に最適化されています。"
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "同じPhi-3-smallモデルですが、RAGまたは少数ショットプロンプティング用により大きなコンテキストサイズを持っています。"
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "7Bパラメータのモデルで、Phi-3-miniよりも高品質で、質の高い推論密度のデータに焦点を当てています。"
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128Kは特大のコンテキスト処理能力を備え、最大128Kのコンテキスト情報を処理でき、特に全体分析や長期的な論理関連処理が必要な長文コンテンツに適しており、複雑なテキストコミュニケーションにおいて流暢で一貫した論理と多様な引用サポートを提供します。"
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Qwen2のテスト版として、Qwen1.5は大規模データを使用してより正確な対話機能を実現しました。"
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B)は、迅速な応答と自然な対話能力を提供し、多言語環境に適しています。"
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2は、先進的な汎用言語モデルであり、さまざまな指示タイプをサポートします。"
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5は、新しい大型言語モデルシリーズで、指示型タスクの処理を最適化することを目的としています。"
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5は、新しい大型言語モデルシリーズで、指示型タスクの処理を最適化することを目的としています。"
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5は、新しい大型言語モデルシリーズで、より強力な理解と生成能力を持っています。"
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5は、新しい大型言語モデルシリーズで、指示型タスクの処理を最適化することを目的としています。"
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coderは、コード作成に特化しています。"
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Mathは、数学分野の問題解決に特化しており、高難度の問題に対して専門的な解答を提供します。"
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9Bはオープンソース版で、会話アプリケーションに最適化された対話体験を提供します。"
+ },
+ "abab5.5-chat": {
+ "description": "生産性シーン向けであり、複雑なタスク処理と効率的なテキスト生成をサポートし、専門分野のアプリケーションに適しています。"
+ },
+ "abab5.5s-chat": {
+ "description": "中国語のキャラクター対話シーンに特化しており、高品質な中国語対話生成能力を提供し、さまざまなアプリケーションシーンに適しています。"
+ },
+ "abab6.5g-chat": {
+ "description": "多言語のキャラクター対話に特化しており、英語および他の多くの言語の高品質な対話生成をサポートします。"
+ },
+ "abab6.5s-chat": {
+ "description": "テキスト生成、対話システムなど、幅広い自然言語処理タスクに適しています。"
+ },
+ "abab6.5t-chat": {
+ "description": "中国語のキャラクター対話シーンに最適化されており、流暢で中国語の表現習慣に合った対話生成能力を提供します。"
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Fireworksのオープンソース関数呼び出しモデルは、卓越した指示実行能力とオープンでカスタマイズ可能な特性を提供します。"
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Fireworks社の最新のFirefunction-v2は、Llama-3を基に開発された高性能な関数呼び出しモデルであり、多くの最適化を経て、特に関数呼び出し、対話、指示のフォローなどのシナリオに適しています。"
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13bは、画像とテキストの入力を同時に受け取ることができる視覚言語モデルであり、高品質なデータで訓練されており、多モーダルタスクに適しています。"
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Gemma 2 9B指示モデルは、以前のGoogle技術に基づいており、質問応答、要約、推論などのさまざまなテキスト生成タスクに適しています。"
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Llama 3 70B指示モデルは、多言語対話と自然言語理解に最適化されており、ほとんどの競合モデルを上回る性能を持っています。"
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Llama 3 70B指示モデル(HFバージョン)は、公式実装結果と一致し、高品質な指示フォロータスクに適しています。"
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Llama 3 8B指示モデルは、対話や多言語タスクに最適化されており、卓越した効率を発揮します。"
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Llama 3 8B指示モデル(HFバージョン)は、公式実装結果と一致し、高い一貫性とクロスプラットフォーム互換性を持っています。"
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Llama 3.1 405B指示モデルは、超大規模なパラメータを持ち、複雑なタスクや高負荷シナリオでの指示フォローに適しています。"
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Llama 3.1 70B指示モデルは、卓越した自然言語理解と生成能力を提供し、対話や分析タスクに理想的な選択肢です。"
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Llama 3.1 8B指示モデルは、多言語対話の最適化のために設計されており、一般的な業界ベンチマークを超える性能を発揮します。"
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Mixtral MoE 8x22B指示モデルは、大規模なパラメータと多専門家アーキテクチャを持ち、複雑なタスクの高効率処理を全方位でサポートします。"
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Mixtral MoE 8x7B指示モデルは、多専門家アーキテクチャを提供し、高効率の指示フォローと実行をサポートします。"
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Mixtral MoE 8x7B指示モデル(HFバージョン)は、公式実装と一致し、さまざまな高効率タスクシナリオに適しています。"
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "MythoMax L2 13Bモデルは、新しい統合技術を組み合わせており、物語やキャラクターの役割に優れています。"
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Phi 3 Vision指示モデルは、軽量の多モーダルモデルであり、複雑な視覚とテキスト情報を処理でき、強力な推論能力を持っています。"
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "StarCoder 15.5Bモデルは、高度なプログラミングタスクをサポートし、多言語能力を強化し、複雑なコード生成と理解に適しています。"
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "StarCoder 7Bモデルは、80以上のプログラミング言語に特化して訓練されており、優れたプログラミング補完能力と文脈理解を持っています。"
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Yi-Largeモデルは、卓越した多言語処理能力を持ち、さまざまな言語生成と理解タスクに使用できます。"
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "398Bパラメータ(94Bアクティブ)の多言語モデルで、256Kの長いコンテキストウィンドウ、関数呼び出し、構造化出力、基盤生成を提供します。"
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "52Bパラメータ(12Bアクティブ)の多言語モデルで、256Kの長いコンテキストウィンドウ、関数呼び出し、構造化出力、基盤生成を提供します。"
+ },
+ "ai21-jamba-instruct": {
+ "description": "最高のパフォーマンス、品質、コスト効率を実現するための生産グレードのMambaベースのLLMモデルです。"
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnetは業界標準を向上させ、競合モデルやClaude 3 Opusを超える性能を持ち、広範な評価で優れたパフォーマンスを示し、私たちの中程度のモデルの速度とコストを兼ね備えています。"
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 HaikuはAnthropicの最も速く、最もコンパクトなモデルで、ほぼ瞬時の応答速度を提供します。簡単なクエリやリクエストに迅速に回答できます。顧客は人間のインタラクションを模倣するシームレスなAI体験を構築できるようになります。Claude 3 Haikuは画像を処理し、テキスト出力を返すことができ、200Kのコンテキストウィンドウを持っています。"
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 OpusはAnthropicの最も強力なAIモデルで、高度に複雑なタスクにおいて最先端の性能を持っています。オープンエンドのプロンプトや未見のシナリオを処理でき、優れた流暢さと人間の理解能力を持っています。Claude 3 Opusは生成AIの可能性の最前線を示しています。Claude 3 Opusは画像を処理し、テキスト出力を返すことができ、200Kのコンテキストウィンドウを持っています。"
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "AnthropicのClaude 3 Sonnetは、知能と速度の理想的なバランスを実現しており、特に企業のワークロードに適しています。競合他社よりも低価格で最大の効用を提供し、信頼性が高く耐久性のある主力機として設計されており、スケール化されたAIデプロイメントに適しています。Claude 3 Sonnetは画像を処理し、テキスト出力を返すことができ、200Kのコンテキストウィンドウを持っています。"
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "日常の対話、テキスト分析、要約、文書質問応答などの一連のタスクを処理できる、迅速で経済的かつ非常に能力のあるモデルです。"
+ },
+ "anthropic.claude-v2": {
+ "description": "Anthropicは、複雑な対話や創造的なコンテンツ生成から詳細な指示の遵守に至るまで、幅広いタスクで高い能力を発揮するモデルです。"
+ },
+ "anthropic.claude-v2:1": {
+ "description": "Claude 2の更新版で、コンテキストウィンドウが2倍になり、長文書やRAGコンテキストにおける信頼性、幻覚率、証拠に基づく正確性が改善されています。"
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 HaikuはAnthropicの最も迅速でコンパクトなモデルで、ほぼ瞬時の応答を実現することを目的としています。迅速かつ正確な指向性能を備えています。"
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opusは、Anthropicが高度に複雑なタスクを処理するために開発した最も強力なモデルです。性能、知能、流暢さ、理解力において卓越したパフォーマンスを発揮します。"
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 SonnetはOpusを超える能力を提供し、Sonnetよりも速い速度を持ちながら、Sonnetと同じ価格を維持します。Sonnetは特にプログラミング、データサイエンス、視覚処理、代理タスクに優れています。"
+ },
+ "aya": {
+ "description": "Aya 23は、Cohereが提供する多言語モデルであり、23の言語をサポートし、多様な言語アプリケーションを便利にします。"
+ },
+ "aya:35b": {
+ "description": "Aya 23は、Cohereが提供する多言語モデルであり、23の言語をサポートし、多様な言語アプリケーションを便利にします。"
+ },
+ "charglm-3": {
+ "description": "CharGLM-3はキャラクター演技と感情的な伴侶のために設計されており、超長期の多段階記憶と個別化された対話をサポートし、幅広い用途に適しています。"
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4oは、リアルタイムで更新される動的モデルで、常に最新のバージョンを維持します。強力な言語理解と生成能力を組み合わせており、顧客サービス、教育、技術サポートなどの大規模なアプリケーションシナリオに適しています。"
+ },
+ "claude-2.0": {
+ "description": "Claude 2は、業界をリードする200Kトークンのコンテキスト、モデルの幻覚の発生率を大幅に低下させる、システムプロンプト、および新しいテスト機能:ツール呼び出しを含む、企業にとって重要な能力の進歩を提供します。"
+ },
+ "claude-2.1": {
+ "description": "Claude 2は、業界をリードする200Kトークンのコンテキスト、モデルの幻覚の発生率を大幅に低下させる、システムプロンプト、および新しいテスト機能:ツール呼び出しを含む、企業にとって重要な能力の進歩を提供します。"
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnetは、Opusを超える能力とSonnetよりも速い速度を提供し、Sonnetと同じ価格を維持します。Sonnetは特にプログラミング、データサイエンス、視覚処理、エージェントタスクに優れています。"
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haikuは、Anthropicの最も速く、最もコンパクトなモデルであり、ほぼ瞬時の応答を実現することを目的としています。迅速かつ正確な指向性能を持っています。"
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opusは、Anthropicが高度に複雑なタスクを処理するために開発した最も強力なモデルです。性能、知性、流暢さ、理解力において卓越したパフォーマンスを発揮します。"
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnetは、企業のワークロードに理想的なバランスを提供し、より低価格で最大の効用を提供し、信頼性が高く、大規模な展開に適しています。"
+ },
+ "claude-instant-1.2": {
+ "description": "Anthropicのモデルは、低遅延、高スループットのテキスト生成に使用され、数百ページのテキストを生成することをサポートします。"
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4は強力なAIプログラミングアシスタントで、さまざまなプログラミング言語のインテリジェントな質問応答とコード補完をサポートし、開発効率を向上させます。"
+ },
+ "codegemma": {
+ "description": "CodeGemmaは、さまざまなプログラミングタスクに特化した軽量言語モデルであり、迅速な反復と統合をサポートします。"
+ },
+ "codegemma:2b": {
+ "description": "CodeGemmaは、さまざまなプログラミングタスクに特化した軽量言語モデルであり、迅速な反復と統合をサポートします。"
+ },
+ "codellama": {
+ "description": "Code Llamaは、コード生成と議論に特化したLLMであり、広範なプログラミング言語のサポートを組み合わせて、開発者環境に適しています。"
+ },
+ "codellama:13b": {
+ "description": "Code Llamaは、コード生成と議論に特化したLLMであり、広範なプログラミング言語のサポートを組み合わせて、開発者環境に適しています。"
+ },
+ "codellama:34b": {
+ "description": "Code Llamaは、コード生成と議論に特化したLLMであり、広範なプログラミング言語のサポートを組み合わせて、開発者環境に適しています。"
+ },
+ "codellama:70b": {
+ "description": "Code Llamaは、コード生成と議論に特化したLLMであり、広範なプログラミング言語のサポートを組み合わせて、開発者環境に適しています。"
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5は、大量のコードデータでトレーニングされた大規模言語モデルであり、複雑なプログラミングタスクを解決するために特化しています。"
+ },
+ "codestral": {
+ "description": "Codestralは、Mistral AIの初のコードモデルであり、コード生成タスクに優れたサポートを提供します。"
+ },
+ "codestral-latest": {
+ "description": "Codestralは、コード生成に特化した最先端の生成モデルであり、中間埋め込みやコード補完タスクを最適化しています。"
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22Bは指示遵守、対話、プログラミングのために設計されたモデルです。"
+ },
+ "cohere-command-r": {
+ "description": "Command Rは、RAGとツール使用をターゲットにしたスケーラブルな生成モデルで、企業向けの生産規模のAIを実現します。"
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+は、企業グレードのワークロードに対応するために設計された最先端のRAG最適化モデルです。"
+ },
+ "command-r": {
+ "description": "Command Rは、対話と長いコンテキストタスクに最適化されたLLMであり、特に動的なインタラクションと知識管理に適しています。"
+ },
+ "command-r-plus": {
+ "description": "Command R+は、リアルな企業シーンと複雑なアプリケーションのために設計された高性能な大規模言語モデルです。"
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instructは、高い信頼性の指示処理能力を提供し、多業界アプリケーションをサポートします。"
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5は以前のバージョンの優れた特徴を集約し、汎用性とコーディング能力を強化しました。"
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67Bは、高い複雑性の対話のために訓練された先進的なモデルです。"
+ },
+ "deepseek-chat": {
+ "description": "一般的な対話能力と強力なコード処理能力を兼ね備えた新しいオープンソースモデルであり、元のChatモデルの対話能力とCoderモデルのコード処理能力を保持しつつ、人間の好みにより良く整合しています。さらに、DeepSeek-V2.5は、執筆タスクや指示に従う能力など、さまざまな面で大幅な向上を実現しました。"
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2は、オープンソースの混合エキスパートコードモデルであり、コードタスクにおいて優れた性能を発揮し、GPT4-Turboに匹敵します。"
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2は、オープンソースの混合エキスパートコードモデルであり、コードタスクにおいて優れた性能を発揮し、GPT4-Turboに匹敵します。"
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2は、高効率なMixture-of-Experts言語モデルであり、経済的な処理ニーズに適しています。"
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236Bは、DeepSeekの設計コードモデルであり、強力なコード生成能力を提供します。"
+ },
+ "deepseek/deepseek-chat": {
+ "description": "汎用性とコード能力を融合させた新しいオープンソースモデルで、元のChatモデルの汎用対話能力とCoderモデルの強力なコード処理能力を保持しつつ、人間の好みにより良く整合しています。さらに、DeepSeek-V2.5は執筆タスク、指示の遵守などの多くの面で大幅な向上を実現しました。"
+ },
+ "emohaa": {
+ "description": "Emohaaは心理モデルで、専門的な相談能力を持ち、ユーザーが感情問題を理解するのを助けます。"
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001(チューニング)は、安定した調整可能な性能を提供し、複雑なタスクのソリューションに理想的な選択肢です。"
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002(チューニング)は、優れたマルチモーダルサポートを提供し、複雑なタスクの効果的な解決に焦点を当てています。"
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Proは、Googleの高性能AIモデルであり、幅広いタスクの拡張に特化しています。"
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001は、効率的なマルチモーダルモデルであり、幅広いアプリケーションの拡張をサポートします。"
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002は効率的なマルチモーダルモデルで、幅広いアプリケーションの拡張をサポートしています。"
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827は、大規模なタスクシナリオの処理のために設計されており、比類のない処理速度を提供します。"
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924は最新の実験モデルで、テキストおよびマルチモーダルのユースケースにおいて顕著な性能向上を実現しています。"
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827は、最適化されたマルチモーダル処理能力を提供し、さまざまな複雑なタスクシナリオに適用できます。"
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flashは、Googleの最新のマルチモーダルAIモデルであり、高速処理能力を備え、テキスト、画像、動画の入力をサポートし、さまざまなタスクの効率的な拡張に適しています。"
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001は、拡張可能なマルチモーダルAIソリューションであり、幅広い複雑なタスクをサポートします。"
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002は最新の生産準備モデルで、特に数学、長いコンテキスト、視覚タスクにおいて質の高い出力を提供し、顕著な向上を見せています。"
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801は、優れたマルチモーダル処理能力を提供し、アプリケーション開発における柔軟性を高めます。"
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827は、最新の最適化技術を組み合わせて、より効率的なマルチモーダルデータ処理能力を提供します。"
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Proは、最大200万トークンをサポートする中型マルチモーダルモデルの理想的な選択肢であり、複雑なタスクに対する多面的なサポートを提供します。"
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7Bは、中小規模のタスク処理に適しており、コスト効果を兼ね備えています。"
+ },
+ "gemma2": {
+ "description": "Gemma 2は、Googleが提供する高効率モデルであり、小型アプリケーションから複雑なデータ処理まで、さまざまなアプリケーションシーンをカバーしています。"
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9Bは、特定のタスクとツール統合のために最適化されたモデルです。"
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2は、Googleが提供する高効率モデルであり、小型アプリケーションから複雑なデータ処理まで、さまざまなアプリケーションシーンをカバーしています。"
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2は、Googleが提供する高効率モデルであり、小型アプリケーションから複雑なデータ処理まで、さまざまなアプリケーションシーンをカバーしています。"
+ },
+ "general": {
+ "description": "Spark Liteは軽量な大言語モデルで、極めて低い遅延と高効率な処理能力を備え、完全に無料でオープンに提供され、リアルタイムのオンライン検索機能をサポートします。その迅速な応答特性により、低算力デバイスでの推論アプリケーションやモデル微調整において優れたパフォーマンスを発揮し、ユーザーに優れたコスト効果とインテリジェントな体験を提供し、特に知識問答、コンテンツ生成、検索シーンでのパフォーマンスが優れています。"
+ },
+ "generalv3": {
+ "description": "Spark Proは専門分野に最適化された高性能な大言語モデルで、数学、プログラミング、医療、教育などの複数の分野に特化し、ネットワーク検索や内蔵の天気、日付などのプラグインをサポートします。最適化されたモデルは、複雑な知識問答、言語理解、高度なテキスト創作において優れたパフォーマンスと高効率を示し、専門的なアプリケーションシーンに最適な選択肢です。"
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Maxは機能が最も充実したバージョンで、ネットワーク検索や多くの内蔵プラグインをサポートします。全面的に最適化されたコア能力、システムロール設定、関数呼び出し機能により、さまざまな複雑なアプリケーションシーンでのパフォーマンスが非常に優れています。"
+ },
+ "glm-4": {
+ "description": "GLM-4は2024年1月にリリースされた旧フラッグシップバージョンで、現在はより強力なGLM-4-0520に取って代わられています。"
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520は最新のモデルバージョンで、高度に複雑で多様なタスクのために設計され、優れたパフォーマンスを発揮します。"
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Airはコストパフォーマンスが高いバージョンで、GLM-4に近い性能を提供し、高速かつ手頃な価格です。"
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirXはGLM-4-Airの効率的なバージョンで、推論速度はその2.6倍に達します。"
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllToolsは、複雑な指示計画とツール呼び出しをサポートするために最適化された多機能エージェントモデルで、ネットサーフィン、コード解釈、テキスト生成などの多タスク実行に適しています。"
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flashはシンプルなタスクを処理するのに理想的な選択肢で、最も速く、最も手頃な価格です。"
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Longは超長文入力をサポートし、記憶型タスクや大規模文書処理に適しています。"
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plusは高い知能を持つフラッグシップモデルで、長文や複雑なタスクを処理する能力が強化され、全体的なパフォーマンスが向上しています。"
+ },
+ "glm-4v": {
+ "description": "GLM-4Vは強力な画像理解と推論能力を提供し、さまざまな視覚タスクをサポートします。"
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plusは動画コンテンツや複数の画像を理解する能力を持ち、マルチモーダルタスクに適しています。"
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827は、最適化されたマルチモーダル処理能力を提供し、さまざまな複雑なタスクシーンに適用可能です。"
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827は、最新の最適化技術を組み合わせて、より効率的なマルチモーダルデータ処理能力を提供します。"
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2は、軽量化と高効率のデザイン理念を継承しています。"
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2は、Googleの軽量オープンソーステキストモデルシリーズです。"
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2はGoogleの軽量化されたオープンソーステキストモデルシリーズです。"
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B)は、基本的な指示処理能力を提供し、軽量アプリケーションに適しています。"
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turboは、さまざまなテキスト生成と理解タスクに適しており、現在はgpt-3.5-turbo-0125を指しています。"
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turboは、さまざまなテキスト生成と理解タスクに適しており、現在はgpt-3.5-turbo-0125を指しています。"
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turboは、さまざまなテキスト生成と理解タスクに適しており、現在はgpt-3.5-turbo-0125を指しています。"
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turboは、さまざまなテキスト生成と理解タスクに適しており、現在はgpt-3.5-turbo-0125を指しています。"
+ },
+ "gpt-4": {
+ "description": "GPT-4は、より大きなコンテキストウィンドウを提供し、より長いテキスト入力を処理できるため、広範な情報統合やデータ分析が必要なシナリオに適しています。"
+ },
+ "gpt-4-0125-preview": {
+ "description": "最新のGPT-4 Turboモデルは視覚機能を備えています。現在、視覚リクエストはJSON形式と関数呼び出しを使用して行うことができます。GPT-4 Turboは、マルチモーダルタスクに対してコスト効率の高いサポートを提供する強化版です。正確性と効率のバランスを取り、リアルタイムのインタラクションが必要なアプリケーションシナリオに適しています。"
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4は、より大きなコンテキストウィンドウを提供し、より長いテキスト入力を処理できるため、広範な情報統合やデータ分析が必要なシナリオに適しています。"
+ },
+ "gpt-4-1106-preview": {
+ "description": "最新のGPT-4 Turboモデルは視覚機能を備えています。現在、視覚リクエストはJSON形式と関数呼び出しを使用して行うことができます。GPT-4 Turboは、マルチモーダルタスクに対してコスト効率の高いサポートを提供する強化版です。正確性と効率のバランスを取り、リアルタイムのインタラクションが必要なアプリケーションシナリオに適しています。"
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "最新のGPT-4 Turboモデルは視覚機能を備えています。現在、視覚リクエストはJSON形式と関数呼び出しを使用して行うことができます。GPT-4 Turboは、マルチモーダルタスクに対してコスト効率の高いサポートを提供する強化版です。正確性と効率のバランスを取り、リアルタイムのインタラクションが必要なアプリケーションシナリオに適しています。"
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4は、より大きなコンテキストウィンドウを提供し、より長いテキスト入力を処理できるため、広範な情報統合やデータ分析が必要なシナリオに適しています。"
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4は、より大きなコンテキストウィンドウを提供し、より長いテキスト入力を処理できるため、広範な情報統合やデータ分析が必要なシナリオに適しています。"
+ },
+ "gpt-4-turbo": {
+ "description": "最新のGPT-4 Turboモデルは視覚機能を備えています。現在、視覚リクエストはJSON形式と関数呼び出しを使用して行うことができます。GPT-4 Turboは、マルチモーダルタスクに対してコスト効率の高いサポートを提供する強化版です。正確性と効率のバランスを取り、リアルタイムのインタラクションが必要なアプリケーションシナリオに適しています。"
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "最新のGPT-4 Turboモデルは視覚機能を備えています。現在、視覚リクエストはJSON形式と関数呼び出しを使用して行うことができます。GPT-4 Turboは、マルチモーダルタスクに対してコスト効率の高いサポートを提供する強化版です。正確性と効率のバランスを取り、リアルタイムのインタラクションが必要なアプリケーションシナリオに適しています。"
+ },
+ "gpt-4-turbo-preview": {
+ "description": "最新のGPT-4 Turboモデルは視覚機能を備えています。現在、視覚リクエストはJSON形式と関数呼び出しを使用して行うことができます。GPT-4 Turboは、マルチモーダルタスクに対してコスト効率の高いサポートを提供する強化版です。正確性と効率のバランスを取り、リアルタイムのインタラクションが必要なアプリケーションシナリオに適しています。"
+ },
+ "gpt-4-vision-preview": {
+ "description": "最新のGPT-4 Turboモデルは視覚機能を備えています。現在、視覚リクエストはJSON形式と関数呼び出しを使用して行うことができます。GPT-4 Turboは、マルチモーダルタスクに対してコスト効率の高いサポートを提供する強化版です。正確性と効率のバランスを取り、リアルタイムのインタラクションが必要なアプリケーションシナリオに適しています。"
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4oは、リアルタイムで更新される動的モデルで、常に最新のバージョンを維持します。強力な言語理解と生成能力を組み合わせており、顧客サービス、教育、技術サポートなどの大規模なアプリケーションシナリオに適しています。"
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4oは、リアルタイムで更新される動的モデルで、常に最新のバージョンを維持します。強力な言語理解と生成能力を組み合わせており、顧客サービス、教育、技術サポートなどの大規模なアプリケーションシナリオに適しています。"
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4oは、リアルタイムで更新される動的モデルで、常に最新のバージョンを維持します。強力な言語理解と生成能力を組み合わせており、顧客サービス、教育、技術サポートなどの大規模なアプリケーションシナリオに適しています。"
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o miniは、OpenAIがGPT-4 Omniの後に発表した最新のモデルで、画像とテキストの入力をサポートし、テキストを出力します。最先端の小型モデルとして、最近の他の先進モデルよりもはるかに安価で、GPT-3.5 Turboよりも60%以上安価です。最先端の知能を維持しつつ、コストパフォーマンスが大幅に向上しています。GPT-4o miniはMMLUテストで82%のスコアを獲得し、現在チャットの好みではGPT-4よりも高い評価を得ています。"
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13Bは複数のトップモデルを統合した創造性と知性を兼ね備えた言語モデルです。"
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "革新的なオープンソースモデルInternLM2.5は、大規模なパラメータを通じて対話のインテリジェンスを向上させました。"
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5は多様なシーンでのインテリジェントな対話ソリューションを提供します。"
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instructモデルは、70Bパラメータを持ち、大規模なテキスト生成と指示タスクで卓越した性能を提供します。"
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70Bは、より強力なAI推論能力を提供し、複雑なアプリケーションに適しており、非常に多くの計算処理をサポートし、高効率と精度を保証します。"
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8Bは、高効率モデルであり、迅速なテキスト生成能力を提供し、大規模な効率とコスト効果が求められるアプリケーションシナリオに非常に適しています。"
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instructモデルは、8Bパラメータを持ち、画面指示タスクの高効率な実行をサポートし、優れたテキスト生成能力を提供します。"
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Llama 3.1 Sonar Huge Onlineモデルは、405Bパラメータを持ち、約127,000トークンのコンテキスト長をサポートし、複雑なオンラインチャットアプリケーション用に設計されています。"
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Llama 3.1 Sonar Large Chatモデルは、70Bパラメータを持ち、約127,000トークンのコンテキスト長をサポートし、複雑なオフラインチャットタスクに適しています。"
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Llama 3.1 Sonar Large Onlineモデルは、70Bパラメータを持ち、約127,000トークンのコンテキスト長をサポートし、高容量で多様なチャットタスクに適しています。"
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Llama 3.1 Sonar Small Chatモデルは、8Bパラメータを持ち、オフラインチャット用に設計されており、約127,000トークンのコンテキスト長をサポートします。"
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Llama 3.1 Sonar Small Onlineモデルは、8Bパラメータを持ち、約127,000トークンのコンテキスト長をサポートし、オンラインチャット用に設計されており、さまざまなテキストインタラクションを効率的に処理できます。"
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70Bは、比類のない複雑性処理能力を提供し、高要求プロジェクトに特化しています。"
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8Bは、優れた推論性能を提供し、多様なシーンのアプリケーションニーズに適しています。"
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Useは、強力なツール呼び出し能力を提供し、複雑なタスクの効率的な処理をサポートします。"
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Useは、高効率なツール使用に最適化されたモデルであり、迅速な並列計算をサポートします。"
+ },
+ "llama3.1": {
+ "description": "Llama 3.1は、Metaが提供する先進的なモデルであり、最大405Bのパラメータをサポートし、複雑な対話、多言語翻訳、データ分析の分野で応用できます。"
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1は、Metaが提供する先進的なモデルであり、最大405Bのパラメータをサポートし、複雑な対話、多言語翻訳、データ分析の分野で応用できます。"
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1は、Metaが提供する先進的なモデルであり、最大405Bのパラメータをサポートし、複雑な対話、多言語翻訳、データ分析の分野で応用できます。"
+ },
+ "llava": {
+ "description": "LLaVAは、視覚エンコーダーとVicunaを組み合わせたマルチモーダルモデルであり、強力な視覚と言語理解を提供します。"
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7Bは、視覚処理能力を融合させ、視覚情報入力を通じて複雑な出力を生成します。"
+ },
+ "llava:13b": {
+ "description": "LLaVAは、視覚エンコーダーとVicunaを組み合わせたマルチモーダルモデルであり、強力な視覚と言語理解を提供します。"
+ },
+ "llava:34b": {
+ "description": "LLaVAは、視覚エンコーダーとVicunaを組み合わせたマルチモーダルモデルであり、強力な視覚と言語理解を提供します。"
+ },
+ "mathstral": {
+ "description": "MathΣtralは、科学研究と数学推論のために設計されており、効果的な計算能力と結果の解釈を提供します。"
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "推論、コーディング、広範な言語アプリケーションに優れた70億パラメータの強力なモデルです。"
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "対話とテキスト生成タスクに最適化された多用途の80億パラメータモデルです。"
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Llama 3.1の指示調整されたテキスト専用モデルは、多言語対話のユースケースに最適化されており、一般的な業界ベンチマークで多くのオープンソースおよびクローズドチャットモデルを上回ります。"
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Llama 3.1の指示調整されたテキスト専用モデルは、多言語対話のユースケースに最適化されており、一般的な業界ベンチマークで多くのオープンソースおよびクローズドチャットモデルを上回ります。"
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Llama 3.1の指示調整されたテキスト専用モデルは、多言語対話のユースケースに最適化されており、一般的な業界ベンチマークで多くのオープンソースおよびクローズドチャットモデルを上回ります。"
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B)は、優れた言語処理能力と素晴らしいインタラクション体験を提供します。"
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B)は、強力なチャットモデルであり、複雑な対話ニーズをサポートします。"
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B)は、多言語サポートを提供し、豊富な分野知識をカバーしています。"
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Liteは、高効率と低遅延が求められる環境に適しています。"
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turboは、卓越した言語理解と生成能力を提供し、最も厳しい計算タスクに適しています。"
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Liteは、リソースが制限された環境に適しており、優れたバランス性能を提供します。"
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turboは、高効率の大規模言語モデルであり、幅広いアプリケーションシナリオをサポートします。"
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405Bは事前学習と指示調整の強力なモデルです。"
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "405BのLlama 3.1 Turboモデルは、大規模データ処理のために超大容量のコンテキストサポートを提供し、超大規模な人工知能アプリケーションで優れたパフォーマンスを発揮します。"
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70Bは多言語の高効率な対話サポートを提供します。"
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Llama 3.1 70Bモデルは微調整されており、高負荷アプリケーションに適しており、FP8に量子化されてより効率的な計算能力と精度を提供し、複雑なシナリオでの卓越したパフォーマンスを保証します。"
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1は多言語サポートを提供し、業界をリードする生成モデルの一つです。"
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Llama 3.1 8BモデルはFP8量子化を採用し、最大131,072のコンテキストトークンをサポートし、オープンソースモデルの中で際立っており、複雑なタスクに適しており、多くの業界ベンチマークを上回る性能を発揮します。"
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instructは高品質な対話シーンに最適化されており、さまざまな人間の評価において優れたパフォーマンスを示します。"
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instructは高品質な対話シーンに最適化されており、多くのクローズドソースモデルよりも優れた性能を持っています。"
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B InstructはMetaが最新にリリースしたバージョンで、高品質な対話生成に最適化されており、多くのリーダーのクローズドソースモデルを超えています。"
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instructは高品質な対話のために設計されており、人間の評価において優れたパフォーマンスを示し、高いインタラクションシーンに特に適しています。"
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B InstructはMetaが発表した最新バージョンで、高品質な対話シーンに最適化されており、多くの先進的なクローズドソースモデルを上回る性能を発揮します。"
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1は多言語サポートを提供し、業界をリードする生成モデルの一つです。"
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instructは、Llama 3.1 Instructモデルの中で最大かつ最も強力なモデルであり、高度に進化した対話推論および合成データ生成モデルです。また、特定の分野での専門的な継続的な事前トレーニングや微調整の基盤としても使用できます。Llama 3.1が提供する多言語大規模言語モデル(LLMs)は、8B、70B、405Bのサイズ(テキスト入力/出力)を含む、事前トレーニングされた指示調整された生成モデルのセットです。Llama 3.1の指示調整されたテキストモデル(8B、70B、405B)は、多言語対話のユースケースに最適化されており、一般的な業界ベンチマークテストで多くの利用可能なオープンソースチャットモデルを上回っています。Llama 3.1は、さまざまな言語の商業および研究用途に使用されることを目的としています。指示調整されたテキストモデルは、アシスタントのようなチャットに適しており、事前トレーニングモデルはさまざまな自然言語生成タスクに適応できます。Llama 3.1モデルは、他のモデルを改善するためにその出力を利用することもサポートしており、合成データ生成や洗練にも対応しています。Llama 3.1は、最適化されたトランスフォーマーアーキテクチャを使用した自己回帰型言語モデルです。調整されたバージョンは、監視付き微調整(SFT)と人間のフィードバックを伴う強化学習(RLHF)を使用して、人間の助けや安全性に対する好みに適合させています。"
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 70B Instructの更新版で、拡張された128Kのコンテキスト長、多言語性、改善された推論能力を含んでいます。Llama 3.1が提供する多言語大型言語モデル(LLMs)は、8B、70B、405Bのサイズ(テキスト入力/出力)を含む一連の事前トレーニングされた、指示調整された生成モデルです。Llama 3.1の指示調整されたテキストモデル(8B、70B、405B)は、多言語対話用のユースケースに最適化されており、一般的な業界ベンチマークテストで多くの利用可能なオープンソースチャットモデルを超えています。Llama 3.1は多言語の商業および研究用途に使用されることを目的としています。指示調整されたテキストモデルはアシスタントのようなチャットに適しており、事前トレーニングモデルはさまざまな自然言語生成タスクに適応できます。Llama 3.1モデルは、他のモデルを改善するためにその出力を利用することもサポートしており、合成データ生成や精製を含みます。Llama 3.1は最適化されたトランスフォーマーアーキテクチャを使用した自己回帰型言語モデルです。調整版は、監視付き微調整(SFT)と人間のフィードバックを伴う強化学習(RLHF)を使用して、人間の助けや安全性に対する好みに適合させています。"
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 8B Instructの更新版で、拡張された128Kのコンテキスト長、多言語性、改善された推論能力を含んでいます。Llama 3.1が提供する多言語大型言語モデル(LLMs)は、8B、70B、405Bのサイズ(テキスト入力/出力)を含む一連の事前トレーニングされた、指示調整された生成モデルです。Llama 3.1の指示調整されたテキストモデル(8B、70B、405B)は、多言語対話用のユースケースに最適化されており、一般的な業界ベンチマークテストで多くの利用可能なオープンソースチャットモデルを超えています。Llama 3.1は多言語の商業および研究用途に使用されることを目的としています。指示調整されたテキストモデルはアシスタントのようなチャットに適しており、事前トレーニングモデルはさまざまな自然言語生成タスクに適応できます。Llama 3.1モデルは、他のモデルを改善するためにその出力を利用することもサポートしており、合成データ生成や精製を含みます。Llama 3.1は最適化されたトランスフォーマーアーキテクチャを使用した自己回帰型言語モデルです。調整版は、監視付き微調整(SFT)と人間のフィードバックを伴う強化学習(RLHF)を使用して、人間の助けや安全性に対する好みに適合させています。"
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3は、開発者、研究者、企業向けのオープンな大規模言語モデル(LLM)であり、生成AIのアイデアを構築、実験、責任を持って拡張するのを支援することを目的としています。世界的なコミュニティの革新の基盤システムの一部として、コンテンツ作成、対話AI、言語理解、研究開発、企業アプリケーションに非常に適しています。"
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3は、開発者、研究者、企業向けのオープンな大規模言語モデル(LLM)であり、生成AIのアイデアを構築、実験、責任を持って拡張するのを支援することを目的としています。世界的なコミュニティの革新の基盤システムの一部として、計算能力とリソースが限られたエッジデバイスや、より迅速なトレーニング時間に非常に適しています。"
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7BはMicrosoft AIの最新の高速軽量モデルで、既存のオープンソースリーダーモデルの10倍に近い性能を持っています。"
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22Bは、Microsoftの最先端AI Wizardモデルであり、非常に競争力のあるパフォーマンスを示しています。"
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-VはOpenBMBが発表した次世代のマルチモーダル大モデルで、優れたOCR認識能力とマルチモーダル理解能力を備え、幅広いアプリケーションシーンをサポートします。"
+ },
+ "mistral": {
+ "description": "Mistralは、Mistral AIがリリースした7Bモデルであり、多様な言語処理ニーズに適しています。"
+ },
+ "mistral-large": {
+ "description": "Mixtral Largeは、Mistralのフラッグシップモデルであり、コード生成、数学、推論の能力を組み合わせ、128kのコンテキストウィンドウをサポートします。"
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407)は、最先端の推論、知識、コーディング能力を持つ高度な大規模言語モデル(LLM)です。"
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Largeは、フラッグシップの大モデルであり、多言語タスク、複雑な推論、コード生成に優れ、高端アプリケーションに理想的な選択肢です。"
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemoは、Mistral AIとNVIDIAが共同で開発した高効率の12Bモデルです。"
+ },
+ "mistral-small": {
+ "description": "Mistral Smallは、高効率と低遅延を必要とする言語ベースのタスクで使用できます。"
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Smallは、コスト効率が高く、迅速かつ信頼性の高い選択肢で、翻訳、要約、感情分析などのユースケースに適しています。"
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instructは、高性能で知られ、多言語タスクに適しています。"
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7Bは、オンデマンドのファインチューニングモデルであり、タスクに最適化された解答を提供します。"
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3は、高効率の計算能力と自然言語理解を提供し、幅広いアプリケーションに適しています。"
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B)は、超大規模な言語モデルであり、非常に高い処理要求をサポートします。"
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7Bは、一般的なテキストタスクに使用される事前訓練されたスパースミックス専門家モデルです。"
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instructは速度最適化と長いコンテキストサポートを兼ね備えた高性能な業界標準モデルです。"
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemoは多言語サポートと高性能プログラミングを備えた7.3Bパラメータモデルです。"
+ },
+ "mixtral": {
+ "description": "Mixtralは、Mistral AIのエキスパートモデルであり、オープンソースの重みを持ち、コード生成と言語理解のサポートを提供します。"
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7Bは、高い耐障害性を持つ並列計算能力を提供し、複雑なタスクに適しています。"
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtralは、Mistral AIのエキスパートモデルであり、オープンソースの重みを持ち、コード生成と言語理解のサポートを提供します。"
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128Kは、超長いコンテキスト処理能力を持つモデルであり、超長文の生成に適しており、複雑な生成タスクのニーズを満たし、最大128,000トークンの内容を処理でき、研究、学術、大型文書生成などのアプリケーションシーンに非常に適しています。"
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32Kは、中程度の長さのコンテキスト処理能力を提供し、32,768トークンを処理でき、さまざまな長文や複雑な対話の生成に特に適しており、コンテンツ作成、報告書生成、対話システムなどの分野で使用されます。"
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8Kは、短文生成タスクのために設計されており、高効率な処理性能を持ち、8,192トークンを処理でき、短い対話、速記、迅速なコンテンツ生成に非常に適しています。"
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8BはNous Hermes 2のアップグレード版で、最新の内部開発データセットを含んでいます。"
+ },
+ "o1-mini": {
+ "description": "o1-miniは、プログラミング、数学、科学のアプリケーションシーンに特化して設計された迅速で経済的な推論モデルです。このモデルは128Kのコンテキストを持ち、2023年10月の知識のカットオフがあります。"
+ },
+ "o1-preview": {
+ "description": "o1はOpenAIの新しい推論モデルで、広範な一般知識を必要とする複雑なタスクに適しています。このモデルは128Kのコンテキストを持ち、2023年10月の知識のカットオフがあります。"
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mambaは、コード生成に特化したMamba 2言語モデルであり、高度なコードおよび推論タスクを強力にサポートします。"
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7Bは、コンパクトでありながら高性能なモデルであり、分類やテキスト生成などのバッチ処理や簡単なタスクに優れた推論能力を持っています。"
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemoは、Nvidiaと共同開発された12Bモデルであり、優れた推論およびコーディング性能を提供し、統合と置き換えが容易です。"
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22Bは、より大きなエキスパートモデルであり、複雑なタスクに特化し、優れた推論能力とより高いスループットを提供します。"
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7Bは、スパースエキスパートモデルであり、複数のパラメータを利用して推論速度を向上させ、多言語およびコード生成タスクの処理に適しています。"
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4oは動的モデルで、リアルタイムで最新バージョンを維持します。強力な言語理解と生成能力を組み合わせており、顧客サービス、教育、技術サポートなどの大規模なアプリケーションシーンに適しています。"
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o miniはOpenAIがGPT-4 Omniの後に発表した最新モデルで、画像とテキストの入力をサポートし、テキストを出力します。彼らの最先端の小型モデルとして、最近の他の最前線モデルよりもはるかに安価で、GPT-3.5 Turboよりも60%以上安価です。最先端の知能を維持しつつ、顕著なコストパフォーマンスを誇ります。GPT-4o miniはMMLUテストで82%のスコアを獲得し、現在チャットの好みでGPT-4よりも高い評価を得ています。"
+ },
+ "openai/o1-mini": {
+ "description": "o1-miniは、プログラミング、数学、科学のアプリケーションシーンに特化して設計された迅速で経済的な推論モデルです。このモデルは128Kのコンテキストを持ち、2023年10月の知識のカットオフがあります。"
+ },
+ "openai/o1-preview": {
+ "description": "o1はOpenAIの新しい推論モデルで、広範な一般知識を必要とする複雑なタスクに適しています。このモデルは128Kのコンテキストを持ち、2023年10月の知識のカットオフがあります。"
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7Bは「C-RLFT(条件強化学習微調整)」戦略で微調整されたオープンソース言語モデルライブラリです。"
+ },
+ "openrouter/auto": {
+ "description": "コンテキストの長さ、テーマ、複雑さに応じて、あなたのリクエストはLlama 3 70B Instruct、Claude 3.5 Sonnet(自己調整)、またはGPT-4oに送信されます。"
+ },
+ "phi3": {
+ "description": "Phi-3は、Microsoftが提供する軽量オープンモデルであり、高効率な統合と大規模な知識推論に適しています。"
+ },
+ "phi3:14b": {
+ "description": "Phi-3は、Microsoftが提供する軽量オープンモデルであり、高効率な統合と大規模な知識推論に適しています。"
+ },
+ "pixtral-12b-2409": {
+ "description": "Pixtralモデルは、グラフと画像理解、文書質問応答、多モーダル推論、指示遵守などのタスクで強力な能力を発揮し、自然な解像度とアスペクト比で画像を取り込み、最大128Kトークンの長いコンテキストウィンドウで任意の数の画像を処理できます。"
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "通義千問のコードモデルです。"
+ },
+ "qwen-long": {
+ "description": "通義千問超大規模言語モデルで、長文コンテキストや長文書、複数文書に基づく対話機能をサポートしています。"
+ },
+ "qwen-math-plus-latest": {
+ "description": "通義千問の数学モデルは、数学の問題解決に特化した言語モデルです。"
+ },
+ "qwen-math-turbo-latest": {
+ "description": "通義千問の数学モデルは、数学の問題解決に特化した言語モデルです。"
+ },
+ "qwen-max-latest": {
+ "description": "通義千問の千億レベルの超大規模言語モデルで、中国語、英語などの異なる言語入力をサポートし、現在の通義千問2.5製品バージョンの背後にあるAPIモデルです。"
+ },
+ "qwen-plus-latest": {
+ "description": "通義千問の超大規模言語モデルの強化版で、中国語、英語などの異なる言語入力をサポートしています。"
+ },
+ "qwen-turbo-latest": {
+ "description": "通義千問の超大規模言語モデルで、中国語、英語などの異なる言語入力をサポートしています。"
+ },
+ "qwen-vl-chat-v1": {
+ "description": "通義千問VLは、複数の画像、多段階の質問応答、創作などの柔軟なインタラクション方式をサポートするモデルです。"
+ },
+ "qwen-vl-max": {
+ "description": "通義千問超大規模視覚言語モデル。強化版に比べて、視覚推論能力と指示遵守能力をさらに向上させ、高い視覚認識と認知レベルを提供します。"
+ },
+ "qwen-vl-plus": {
+ "description": "通義千問大規模視覚言語モデルの強化版。詳細認識能力と文字認識能力を大幅に向上させ、超百万ピクセル解像度と任意のアスペクト比の画像をサポートします。"
+ },
+ "qwen-vl-v1": {
+ "description": "Qwen-7B言語モデルを初期化し、画像モデルを追加した、画像入力解像度448の事前トレーニングモデルです。"
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2は全く新しい大型言語モデルシリーズで、より強力な理解と生成能力を備えています。"
+ },
+ "qwen2": {
+ "description": "Qwen2は、Alibabaの新世代大規模言語モデルであり、優れた性能で多様なアプリケーションニーズをサポートします。"
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "通義千問2.5の対外オープンソースの14B規模のモデルです。"
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "通義千問2.5の対外オープンソースの32B規模のモデルです。"
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "通義千問2.5の対外オープンソースの72B規模のモデルです。"
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "通義千問2.5の対外オープンソースの7B規模のモデルです。"
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "通義千問のコードモデルのオープンソース版です。"
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "通義千問のコードモデルのオープンソース版です。"
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Qwen-Mathモデルは、強力な数学の問題解決能力を持っています。"
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Qwen-Mathモデルは、強力な数学の問題解決能力を持っています。"
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Qwen-Mathモデルは、強力な数学の問題解決能力を持っています。"
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2は、Alibabaの新世代大規模言語モデルであり、優れた性能で多様なアプリケーションニーズをサポートします。"
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2は、Alibabaの新世代大規模言語モデルであり、優れた性能で多様なアプリケーションニーズをサポートします。"
+ },
+ "qwen2:72b": {
+ "description": "Qwen2は、Alibabaの新世代大規模言語モデルであり、優れた性能で多様なアプリケーションニーズをサポートします。"
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar MiniはコンパクトなLLMで、GPT-3.5を上回る性能を持ち、強力な多言語能力を備え、英語と韓国語をサポートし、高効率でコンパクトなソリューションを提供します。"
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja)はSolar Miniの能力を拡張し、日本語に特化しつつ、英語と韓国語の使用においても高効率で卓越した性能を維持します。"
+ },
+ "solar-pro": {
+ "description": "Solar ProはUpstageが発表した高インテリジェンスLLMで、単一GPUの指示追従能力に特化しており、IFEvalスコアは80以上です。現在は英語をサポートしており、正式版は2024年11月にリリース予定で、言語サポートとコンテキスト長を拡張します。"
+ },
+ "step-1-128k": {
+ "description": "性能とコストのバランスを取り、一般的なシナリオに適しています。"
+ },
+ "step-1-256k": {
+ "description": "超長コンテキスト処理能力を持ち、特に長文書分析に適しています。"
+ },
+ "step-1-32k": {
+ "description": "中程度の長さの対話をサポートし、さまざまなアプリケーションシナリオに適しています。"
+ },
+ "step-1-8k": {
+ "description": "小型モデルであり、軽量なタスクに適しています。"
+ },
+ "step-1-flash": {
+ "description": "高速モデルであり、リアルタイムの対話に適しています。"
+ },
+ "step-1v-32k": {
+ "description": "視覚入力をサポートし、多モーダルインタラクション体験を強化します。"
+ },
+ "step-1v-8k": {
+ "description": "小型ビジュアルモデルで、基本的なテキストと画像のタスクに適しています。"
+ },
+ "step-2-16k": {
+ "description": "大規模なコンテキストインタラクションをサポートし、複雑な対話シナリオに適しています。"
+ },
+ "taichu_llm": {
+ "description": "紫東太初言語大モデルは、強力な言語理解能力とテキスト創作、知識問答、コードプログラミング、数学計算、論理推論、感情分析、テキスト要約などの能力を備えています。革新的に大データの事前学習と多源の豊富な知識を組み合わせ、アルゴリズム技術を継続的に磨き、膨大なテキストデータから語彙、構造、文法、意味などの新しい知識を吸収し、モデルの効果を進化させています。ユーザーにより便利な情報とサービス、よりインテリジェントな体験を提供します。"
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0Vは画像理解、知識移転、論理的帰納などの能力を融合させており、テキストと画像の質問応答分野で優れたパフォーマンスを発揮しています。"
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B)は、高効率の戦略とモデルアーキテクチャを通じて、強化された計算能力を提供します。"
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B)は、精密な指示タスクに適しており、優れた言語処理能力を提供します。"
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2は、Microsoft AIが提供する言語モデルであり、複雑な対話、多言語、推論、インテリジェントアシスタントの分野で特に優れた性能を発揮します。"
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2は、Microsoft AIが提供する言語モデルであり、複雑な対話、多言語、推論、インテリジェントアシスタントの分野で特に優れた性能を発揮します。"
+ },
+ "yi-large": {
+ "description": "新しい千億パラメータモデルであり、超強力な質問応答およびテキスト生成能力を提供します。"
+ },
+ "yi-large-fc": {
+ "description": "yi-largeモデルを基に、ツール呼び出しの能力をサポートし強化し、エージェントやワークフローを構築する必要があるさまざまなビジネスシナリオに適しています。"
+ },
+ "yi-large-preview": {
+ "description": "初期バージョンであり、yi-large(新バージョン)の使用を推奨します。"
+ },
+ "yi-large-rag": {
+ "description": "yi-largeの超強力モデルに基づく高次サービスであり、検索と生成技術を組み合わせて正確な回答を提供し、リアルタイムで全網検索情報サービスを提供します。"
+ },
+ "yi-large-turbo": {
+ "description": "超高コストパフォーマンス、卓越した性能。性能と推論速度、コストに基づいて、高精度のバランス調整を行います。"
+ },
+ "yi-medium": {
+ "description": "中型サイズモデルのアップグレード微調整であり、能力が均衡しており、コストパフォーマンスが高いです。指示遵守能力を深く最適化しています。"
+ },
+ "yi-medium-200k": {
+ "description": "200Kの超長コンテキストウィンドウを持ち、長文の深い理解と生成能力を提供します。"
+ },
+ "yi-spark": {
+ "description": "小型で強力な、軽量で高速なモデルです。強化された数学演算とコード作成能力を提供します。"
+ },
+ "yi-vision": {
+ "description": "複雑な視覚タスクモデルであり、高性能な画像理解と分析能力を提供します。"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/plugin.json b/DigitalHumanWeb/locales/ja-JP/plugin.json
new file mode 100644
index 0000000..1edb807
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "引数",
+ "function_call": "関数呼び出し",
+ "off": "デバッグをオフにする",
+ "on": "プラグイン呼び出し情報を表示する",
+ "payload": "ペイロード",
+ "response": "レスポンス",
+ "tool_call": "ツール呼び出し"
+ },
+ "detailModal": {
+ "info": {
+ "description": "API 説明",
+ "name": "API 名"
+ },
+ "tabs": {
+ "info": "プラグイン機能",
+ "manifest": "インストールファイル",
+ "settings": "設定"
+ },
+ "title": "プラグインの詳細"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "このローカルプラグインを削除しますか?削除後は元に戻せません。",
+ "customParams": {
+ "useProxy": {
+ "label": "プロキシを使用する(クロスドメインエラーが発生した場合、このオプションを有効にして再インストールしてください)"
+ }
+ },
+ "deleteSuccess": "プラグインが正常に削除されました",
+ "manifest": {
+ "identifier": {
+ "desc": "プラグインの一意の識別子",
+ "label": "識別子"
+ },
+ "mode": {
+ "local": "ビジュアル設定",
+ "local-tooltip": "ビジュアル設定は一時的にサポートされていません",
+ "url": "オンラインリンク"
+ },
+ "name": {
+ "desc": "プラグインのタイトル",
+ "label": "タイトル",
+ "placeholder": "検索エンジン"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "プラグインの作者",
+ "label": "作者"
+ },
+ "avatar": {
+ "desc": "プラグインのアイコン、絵文字やURLを使用できます",
+ "label": "アイコン"
+ },
+ "description": {
+ "desc": "プラグインの説明",
+ "label": "説明",
+ "placeholder": "検索エンジンで情報を取得します"
+ },
+ "formFieldRequired": "このフィールドは必須です",
+ "homepage": {
+ "desc": "プラグインのホームページ",
+ "label": "ホームページ"
+ },
+ "identifier": {
+ "desc": "プラグインの一意の識別子、マニフェストから自動的に識別されます",
+ "errorDuplicate": "識別子が既存のプラグインと重複しています。識別子を変更してください",
+ "label": "識別子",
+ "pattenErrorMessage": "英数字、-、_ のみ入力できます"
+ },
+ "manifest": {
+ "desc": "{{appName}}はこのリンクを通じてプラグインをインストールします",
+ "label": "プラグイン記述ファイル (Manifest) URL",
+ "preview": "マニフェストのプレビュー",
+ "refresh": "更新"
+ },
+ "title": {
+ "desc": "プラグインのタイトル",
+ "label": "タイトル",
+ "placeholder": "検索エンジン"
+ }
+ },
+ "metaConfig": "プラグインのメタ情報の設定",
+ "modalDesc": "カスタムプラグインを追加すると、プラグインの開発検証に使用したり、セッション中に直接使用したりできます。プラグインの開発については、<1>開発ドキュメント↗>を参照してください",
+ "openai": {
+ "importUrl": "URLリンクからインポート",
+ "schema": "スキーマ"
+ },
+ "preview": {
+ "card": "プラグインのプレビュー表示",
+ "desc": "プラグインの説明のプレビュー",
+ "title": "プラグイン名のプレビュー"
+ },
+ "save": "プラグインを保存",
+ "saveSuccess": "プラグインの設定が正常に保存されました",
+ "tabs": {
+ "manifest": "機能のマニフェスト",
+ "meta": "プラグインのメタ情報"
+ },
+ "title": {
+ "create": "カスタムプラグインを追加",
+ "edit": "カスタムプラグインを編集"
+ },
+ "type": {
+ "lobe": "LobeChatプラグイン",
+ "openai": "OpenAIプラグイン"
+ },
+ "update": "更新",
+ "updateSuccess": "プラグインの設定が正常に更新されました"
+ },
+ "error": {
+ "fetchError": "このmanifestリンクのリクエストに失敗しました。リンクが有効であることを確認し、リンクがクロスドメインアクセスを許可しているかを確認してください",
+ "installError": "プラグイン {{name}} のインストールに失敗しました",
+ "manifestInvalid": "manifestが仕様に準拠していません。検証結果: \n\n {{error}}",
+ "noManifest": "マニフェストが存在しません",
+ "openAPIInvalid": "OpenAPIの解析に失敗しました。エラー: \n\n {{error}}",
+ "reinstallError": "プラグイン{{name}}の再インストールに失敗しました",
+ "urlError": "このリンクはJSON形式のコンテンツを返していません。有効なリンクであることを確認してください"
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "削除済み",
+ "local.config": "設定",
+ "local.title": "カスタム"
+ }
+ },
+ "loading": {
+ "content": "プラグインを呼び出しています...",
+ "plugin": "プラグインの実行中..."
+ },
+ "pluginList": "プラグインリスト",
+ "setting": "プラグインの設定",
+ "settings": {
+ "indexUrl": {
+ "title": "マーケットインデックス",
+ "tooltip": "オンライン編集は現在サポートされていません。デプロイ時の環境変数を使用して設定してください"
+ },
+ "modalDesc": "プラグインマーケットのアドレスを設定すると、カスタムのプラグインマーケットを使用できます",
+ "title": "プラグインマーケットの設定"
+ },
+ "showInPortal": "詳細はワークスペースで表示してください",
+ "store": {
+ "actions": {
+ "confirmUninstall": "このプラグインをアンインストールします。アンインストール後、プラグインの設定がクリアされます。操作を確認してください。",
+ "detail": "詳細",
+ "install": "インストール",
+ "manifest": "インストールファイルを編集",
+ "settings": "設定",
+ "uninstall": "アンインストール"
+ },
+ "communityPlugin": "コミュニティプラグイン",
+ "customPlugin": "カスタムプラグイン",
+ "empty": "インストールされたプラグインはありません",
+ "installAllPlugins": "すべてのプラグインをインストール",
+ "networkError": "プラグインストアの取得に失敗しました。ネットワーク接続を確認してから再試行してください",
+ "placeholder": "プラグイン名、説明、またはキーワードで検索...",
+ "releasedAt": "{{createdAt}} にリリース",
+ "tabs": {
+ "all": "すべて",
+ "installed": "インストール済み"
+ },
+ "title": "プラグインストア"
+ },
+ "unknownPlugin": "未知のプラグイン"
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/portal.json b/DigitalHumanWeb/locales/ja-JP/portal.json
new file mode 100644
index 0000000..ba3b4b3
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "アーティファクト",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "チャンク",
+ "file": "ファイル"
+ }
+ },
+ "Plugins": "プラグイン",
+ "actions": {
+ "genAiMessage": "AIメッセージを生成",
+ "summary": "サマリー",
+ "summaryTooltip": "現在の内容を要約"
+ },
+ "artifacts": {
+ "display": {
+ "code": "コード",
+ "preview": "プレビュー"
+ },
+ "svg": {
+ "copyAsImage": "画像としてコピー",
+ "copyFail": "コピーに失敗しました。エラーの理由: {{error}}",
+ "copySuccess": "画像のコピーに成功しました",
+ "download": {
+ "png": "PNGとしてダウンロード",
+ "svg": "SVGとしてダウンロード"
+ }
+ }
+ },
+ "emptyArtifactList": "現在、アーティファクトリストは空です。プラグインを使用してセッション中に追加してください。",
+ "emptyKnowledgeList": "現在の知識リストは空です。会話中に必要に応じて知識ベースを開いてからご覧ください。",
+ "files": "ファイル",
+ "messageDetail": "メッセージの詳細",
+ "title": "拡張ウィンドウ"
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/providers.json b/DigitalHumanWeb/locales/ja-JP/providers.json
new file mode 100644
index 0000000..b0b5343
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AIは、360社が提供するAIモデルとサービスプラットフォームであり、360GPT2 Pro、360GPT Pro、360GPT Turbo、360GPT Turbo Responsibility 8Kなど、さまざまな先進的な自然言語処理モデルを提供しています。これらのモデルは、大規模なパラメータと多モーダル能力を組み合わせており、テキスト生成、意味理解、対話システム、コード生成などの分野で広く使用されています。柔軟な価格戦略を通じて、360 AIは多様なユーザーのニーズに応え、開発者の統合をサポートし、スマートアプリケーションの革新と発展を促進します。"
+ },
+ "anthropic": {
+ "description": "Anthropicは、人工知能の研究と開発に特化した企業であり、Claude 3.5 Sonnet、Claude 3 Sonnet、Claude 3 Opus、Claude 3 Haikuなどの先進的な言語モデルを提供しています。これらのモデルは、知性、速度、コストの理想的なバランスを実現しており、企業向けのワークロードから迅速な応答が求められるさまざまなアプリケーションシーンに適しています。Claude 3.5 Sonnetは最新のモデルであり、複数の評価で優れたパフォーマンスを示し、高いコストパフォーマンスを維持しています。"
+ },
+ "azure": {
+ "description": "Azureは、GPT-3.5や最新のGPT-4シリーズを含む多様な先進AIモデルを提供し、さまざまなデータタイプや複雑なタスクをサポートし、安全で信頼性が高く持続可能なAIソリューションに取り組んでいます。"
+ },
+ "baichuan": {
+ "description": "百川智能は、人工知能大モデルの研究開発に特化した企業であり、そのモデルは国内の知識百科、長文処理、生成創作などの中国語タスクで卓越したパフォーマンスを示し、海外の主流モデルを超えています。百川智能は、業界をリードする多モーダル能力を持ち、複数の権威ある評価で優れたパフォーマンスを示しています。そのモデルには、Baichuan 4、Baichuan 3 Turbo、Baichuan 3 Turbo 128kなどが含まれ、異なるアプリケーションシーンに最適化され、高コストパフォーマンスのソリューションを提供しています。"
+ },
+ "bedrock": {
+ "description": "Bedrockは、Amazon AWSが提供するサービスで、企業に先進的なAI言語モデルと視覚モデルを提供することに特化しています。そのモデルファミリーには、AnthropicのClaudeシリーズやMetaのLlama 3.1シリーズなどが含まれ、軽量から高性能までのさまざまな選択肢を提供し、テキスト生成、対話、画像処理などの多様なタスクをサポートし、異なる規模とニーズの企業アプリケーションに適しています。"
+ },
+ "deepseek": {
+ "description": "DeepSeekは、人工知能技術の研究と応用に特化した企業であり、最新のモデルDeepSeek-V2.5は、汎用対話とコード処理能力を融合させ、人間の好みの整合、ライティングタスク、指示の遵守などの面で顕著な向上を実現しています。"
+ },
+ "fireworksai": {
+ "description": "Fireworks AIは、先進的な言語モデルサービスのリーダーであり、機能呼び出しと多モーダル処理に特化しています。最新のモデルFirefunction V2はLlama-3に基づいており、関数呼び出し、対話、指示の遵守に最適化されています。視覚言語モデルFireLLaVA-13Bは、画像とテキストの混合入力をサポートしています。他の注目すべきモデルには、LlamaシリーズやMixtralシリーズがあり、高効率の多言語指示遵守と生成サポートを提供しています。"
+ },
+ "github": {
+ "description": "GitHubモデルを使用することで、開発者はAIエンジニアになり、業界をリードするAIモデルを使って構築できます。"
+ },
+ "google": {
+ "description": "GoogleのGeminiシリーズは、Google DeepMindによって開発された最先端で汎用的なAIモデルであり、多モーダル設計に特化しており、テキスト、コード、画像、音声、動画のシームレスな理解と処理をサポートします。データセンターからモバイルデバイスまでのさまざまな環境に適しており、AIモデルの効率と適用範囲を大幅に向上させています。"
+ },
+ "groq": {
+ "description": "GroqのLPU推論エンジンは、最新の独立した大規模言語モデル(LLM)ベンチマークテストで卓越したパフォーマンスを示し、その驚異的な速度と効率でAIソリューションの基準を再定義しています。Groqは、即時推論速度の代表であり、クラウドベースの展開で良好なパフォーマンスを発揮しています。"
+ },
+ "minimax": {
+ "description": "MiniMaxは2021年に設立された汎用人工知能テクノロジー企業であり、ユーザーと共に知能を共創することに取り組んでいます。MiniMaxは、さまざまなモードの汎用大モデルを独自に開発しており、トリリオンパラメータのMoEテキスト大モデル、音声大モデル、画像大モデルを含んでいます。また、海螺AIなどのアプリケーションも展開しています。"
+ },
+ "mistral": {
+ "description": "Mistralは、先進的な汎用、専門、研究型モデルを提供し、複雑な推論、多言語タスク、コード生成などの分野で広く使用されています。機能呼び出しインターフェースを通じて、ユーザーはカスタム機能を統合し、特定のアプリケーションを実現できます。"
+ },
+ "moonshot": {
+ "description": "Moonshotは、北京月之暗面科技有限公司が提供するオープンプラットフォームであり、さまざまな自然言語処理モデルを提供し、コンテンツ創作、学術研究、スマート推薦、医療診断などの広範な応用分野を持ち、長文処理や複雑な生成タスクをサポートしています。"
+ },
+ "novita": {
+ "description": "Novita AIは、さまざまな大規模言語モデルとAI画像生成のAPIサービスを提供するプラットフォームであり、柔軟で信頼性が高く、コスト効率に優れています。Llama3、Mistralなどの最新のオープンソースモデルをサポートし、生成的AIアプリケーションの開発に向けた包括的でユーザーフレンドリーかつ自動スケーリングのAPIソリューションを提供し、AIスタートアップの急成長を支援します。"
+ },
+ "ollama": {
+ "description": "Ollamaが提供するモデルは、コード生成、数学演算、多言語処理、対話インタラクションなどの分野を広くカバーし、企業向けおよびローカライズされた展開の多様なニーズに対応しています。"
+ },
+ "openai": {
+ "description": "OpenAIは、世界をリードする人工知能研究機関であり、GPTシリーズなどのモデルを開発し、自然言語処理の最前線を推進しています。OpenAIは、革新と効率的なAIソリューションを通じて、さまざまな業界を変革することに取り組んでいます。彼らの製品は、顕著な性能と経済性を持ち、研究、ビジネス、革新アプリケーションで広く使用されています。"
+ },
+ "openrouter": {
+ "description": "OpenRouterは、OpenAI、Anthropic、LLaMAなどのさまざまな最先端の大規模モデルインターフェースを提供するサービスプラットフォームであり、多様な開発と応用のニーズに適しています。ユーザーは、自身のニーズに応じて最適なモデルと価格を柔軟に選択し、AI体験の向上を支援します。"
+ },
+ "perplexity": {
+ "description": "Perplexityは、先進的な対話生成モデルの提供者であり、さまざまなLlama 3.1モデルを提供し、オンラインおよびオフラインアプリケーションをサポートし、特に複雑な自然言語処理タスクに適しています。"
+ },
+ "qwen": {
+ "description": "通義千問は、アリババクラウドが独自に開発した超大規模言語モデルであり、強力な自然言語理解と生成能力を持っています。さまざまな質問に答えたり、文章を創作したり、意見を表現したり、コードを執筆したりすることができ、さまざまな分野で活躍しています。"
+ },
+ "siliconcloud": {
+ "description": "SiliconFlowは、AGIを加速させ、人類に利益をもたらすことを目指し、使いやすくコスト効率の高いGenAIスタックを通じて大規模AIの効率を向上させることに取り組んでいます。"
+ },
+ "spark": {
+ "description": "科大訊飛星火大モデルは、多分野、多言語の強力なAI能力を提供し、先進的な自然言語処理技術を利用して、スマートハードウェア、スマート医療、スマート金融などのさまざまな垂直シーンに適した革新的なアプリケーションを構築します。"
+ },
+ "stepfun": {
+ "description": "階級星辰大モデルは、業界をリードする多モーダルおよび複雑な推論能力を備え、超長文の理解と強力な自律的検索エンジン機能をサポートしています。"
+ },
+ "taichu": {
+ "description": "中科院自動化研究所と武漢人工知能研究院が新世代の多モーダル大モデルを発表し、多輪問答、テキスト創作、画像生成、3D理解、信号分析などの包括的な問答タスクをサポートし、より強力な認知、理解、創作能力を持ち、新しいインタラクティブな体験を提供します。"
+ },
+ "togetherai": {
+ "description": "Together AIは、革新的なAIモデルを通じて先進的な性能を実現することに取り組んでおり、迅速なスケーリングサポートや直感的な展開プロセスを含む広範なカスタマイズ能力を提供し、企業のさまざまなニーズに応えています。"
+ },
+ "upstage": {
+ "description": "Upstageは、さまざまなビジネスニーズに応じたAIモデルの開発に特化しており、Solar LLMや文書AIを含み、人造一般知能(AGI)の実現を目指しています。Chat APIを通じてシンプルな対話エージェントを作成し、機能呼び出し、翻訳、埋め込み、特定分野のアプリケーションをサポートします。"
+ },
+ "zeroone": {
+ "description": "01.AIは、AI 2.0時代の人工知能技術に特化し、「人+人工知能」の革新と応用を推進し、超強力なモデルと先進的なAI技術を用いて人類の生産性を向上させ、技術の力を実現します。"
+ },
+ "zhipu": {
+ "description": "智谱AIは、多モーダルおよび言語モデルのオープンプラットフォームを提供し、テキスト処理、画像理解、プログラミング支援など、幅広いAIアプリケーションシーンをサポートしています。"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/ragEval.json b/DigitalHumanWeb/locales/ja-JP/ragEval.json
new file mode 100644
index 0000000..2e1af5e
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "新規作成",
+ "description": {
+ "placeholder": "データセットの概要(任意)"
+ },
+ "name": {
+ "placeholder": "データセット名",
+ "required": "データセット名を入力してください"
+ },
+ "title": "データセットの追加"
+ },
+ "dataset": {
+ "addNewButton": "データセットを作成",
+ "emptyGuide": "現在、データセットは空です。データセットを作成してください。",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "データをインポート"
+ },
+ "columns": {
+ "actions": "操作",
+ "ideal": {
+ "title": "期待される回答"
+ },
+ "question": {
+ "title": "質問"
+ },
+ "referenceFiles": {
+ "title": "参考ファイル"
+ }
+ },
+ "notSelected": "左側からデータセットを選択してください",
+ "title": "データセットの詳細"
+ },
+ "title": "データセット"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "新規作成",
+ "datasetId": {
+ "placeholder": "評価データセットを選択してください",
+ "required": "評価データセットを選択してください"
+ },
+ "description": {
+ "placeholder": "評価タスクの概要(任意)"
+ },
+ "name": {
+ "placeholder": "評価タスク名",
+ "required": "評価タスク名を入力してください"
+ },
+ "title": "評価タスクの追加"
+ },
+ "addNewButton": "評価を作成",
+ "emptyGuide": "現在、評価タスクは空です。評価を作成を開始してください。",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "状態を確認",
+ "confirmDelete": "この評価を削除しますか?",
+ "confirmRun": "実行を開始しますか?実行を開始すると、バックグラウンドで非同期に評価タスクが実行されます。ページを閉じても非同期タスクの実行には影響しません。",
+ "downloadRecords": "評価をダウンロード",
+ "retry": "再試行",
+ "run": "実行",
+ "title": "操作"
+ },
+ "datasetId": {
+ "title": "データセット"
+ },
+ "name": {
+ "title": "評価タスク名"
+ },
+ "records": {
+ "title": "評価記録数"
+ },
+ "referenceFiles": {
+ "title": "参考ファイル"
+ },
+ "status": {
+ "error": "実行エラー",
+ "pending": "実行待ち",
+ "processing": "実行中",
+ "success": "実行成功",
+ "title": "状態"
+ }
+ },
+ "title": "評価タスク一覧"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/setting.json b/DigitalHumanWeb/locales/ja-JP/setting.json
new file mode 100644
index 0000000..de7cf53
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "について"
+ },
+ "agentTab": {
+ "chat": "チャット設定",
+ "meta": "メタ情報",
+ "modal": "モーダル設定",
+ "plugin": "プラグイン設定",
+ "prompt": "プロンプト設定",
+ "tts": "音声サービス"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "{{appName}} の全体的なユーザー体験を改善するために、テレメトリーデータの送信を選択することで、あなたは私たちを助けることができます。",
+ "title": "匿名使用データの送信"
+ },
+ "title": "データ分析"
+ },
+ "danger": {
+ "clear": {
+ "action": "すぐにクリア",
+ "confirm": "すべてのチャットデータを削除しますか?",
+ "desc": "アシスタント、ファイル、メッセージ、プラグインなど、すべてのセッションデータが削除されます",
+ "success": "すべてのセッションメッセージが削除されました",
+ "title": "すべてのセッションメッセージをクリア"
+ },
+ "reset": {
+ "action": "すぐにリセット",
+ "confirm": "すべての設定をリセットしますか?",
+ "currentVersion": "現在のバージョン",
+ "desc": "すべての設定をデフォルト値にリセットします",
+ "success": "すべての設定がリセットされました",
+ "title": "すべての設定をリセット"
+ }
+ },
+ "header": {
+ "desc": "設定優先順位和模型設置。",
+ "global": "グローバル設定",
+ "session": "セッション設定",
+ "sessionDesc": "キャラクター設定とセッションの好み。",
+ "sessionWithName": "セッション設定 · {{name}}",
+ "title": "設定"
+ },
+ "llm": {
+ "aesGcm": "キーとプロキシアドレスなどは <1>AES-GCM1> 暗号化アルゴリズムを使用して暗号化されます",
+ "apiKey": {
+ "desc": "あなたの {{name}} API キーを入力してください",
+ "placeholder": "{{name}} API キー",
+ "title": "API キー"
+ },
+ "checker": {
+ "button": "チェック",
+ "desc": "APIキーとプロキシアドレスが正しく記入されているかをテスト",
+ "pass": "チェック合格",
+ "title": "接続性チェック"
+ },
+ "customModelCards": {
+ "addNew": "{{id}} モデルを作成して追加する",
+ "config": "モデルの設定",
+ "confirmDelete": "このカスタムモデルを削除しようとしています。削除した後は元に戻すことはできませんので、慎重に操作してください。",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Azure OpenAI で実際にリクエストされるフィールド",
+ "placeholder": "Azure でのモデル展開名を入力してください",
+ "title": "モデル展開名"
+ },
+ "displayName": {
+ "placeholder": "ChatGPT、GPT-4 などのモデルの表示名を入力してください",
+ "title": "モデル表示名"
+ },
+ "files": {
+ "extra": "現在のファイルアップロード機能は一時的なハック手法であり、自己責任での試行に限られています。完全なファイルアップロード機能は今後の実装をお待ちください。",
+ "title": "ファイルアップロードのサポート"
+ },
+ "functionCall": {
+ "extra": "この設定はアプリ内の関数呼び出し機能のみを有効にします。関数呼び出しのサポートはモデル自体に依存するため、モデルの関数呼び出し機能の有効性を自分でテストしてください。",
+ "title": "関数呼び出しのサポート"
+ },
+ "id": {
+ "extra": "モデルのラベルとして表示されます",
+ "placeholder": "gpt-4-turbo-preview または claude-2.1 などのモデルIDを入力してください",
+ "title": "モデルID"
+ },
+ "modalTitle": "カスタムモデルの設定",
+ "tokens": {
+ "title": "最大トークン数",
+ "unlimited": "無制限"
+ },
+ "vision": {
+ "extra": "この設定はアプリ内の画像アップロード機能のみを有効にします。認識のサポートはモデル自体に依存するため、モデルの視覚認識機能の有効性を自分でテストしてください。",
+ "title": "視覚認識のサポート"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "ブラウザから直接セッションリクエストを開始するクライアントサイドリクエストモードは、応答速度を向上させることができます。",
+ "title": "クライアントサイドリクエストモードの使用"
+ },
+ "fetcher": {
+ "fetch": "モデルリストを取得する",
+ "fetching": "モデルリストを取得中...",
+ "latestTime": "最終更新時間:{{time}}",
+ "noLatestTime": "リストを取得していません"
+ },
+ "helpDoc": "設定ガイド",
+ "modelList": {
+ "desc": "セッションで表示するモデルを選択します。選択したモデルはモデルリストに表示されます",
+ "placeholder": "モデルをリストから選択してください",
+ "title": "モデルリスト",
+ "total": "合計 {{count}} 個のモデルが利用可能です"
+ },
+ "proxyUrl": {
+ "desc": "デフォルトのアドレスに加えて、http(s)://を含める必要があります",
+ "title": "APIプロキシアドレス"
+ },
+ "waitingForMore": "さらに多くのモデルが <1>計画されています1>。お楽しみに"
+ },
+ "plugin": {
+ "addTooltip": "カスタムプラグイン",
+ "clearDeprecated": "無効なプラグインをクリア",
+ "empty": "インストールされたプラグインはありません。 <1>プラグインストア1> で探索してください",
+ "installStatus": {
+ "deprecated": "アンインストール済み"
+ },
+ "settings": {
+ "hint": "説明に従って以下の設定を入力してください",
+ "title": "{{id}} プラグイン設定",
+ "tooltip": "プラグイン設定"
+ },
+ "store": "プラグインストア"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "アバター"
+ },
+ "backgroundColor": {
+ "title": "背景色"
+ },
+ "description": {
+ "placeholder": "アシスタントの説明を入力してください",
+ "title": "説明"
+ },
+ "name": {
+ "placeholder": "アシスタントの名前を入力してください",
+ "title": "名前"
+ },
+ "prompt": {
+ "placeholder": "役割のプロンプトワードを入力してください",
+ "title": "役割の設定"
+ },
+ "tag": {
+ "placeholder": "タグを入力してください",
+ "title": "タグ"
+ },
+ "title": "アシスタント情報"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "現在のメッセージ数がこの値を超えると、トピックが自動的に作成されます",
+ "title": "メッセージ閾値"
+ },
+ "chatStyleType": {
+ "title": "チャットウィンドウのスタイル",
+ "type": {
+ "chat": "チャットモード",
+ "docs": "ドキュメントモード"
+ }
+ },
+ "compressThreshold": {
+ "desc": "圧縮されていない過去のメッセージがこの値を超えると、圧縮されます",
+ "title": "過去メッセージの長さの圧縮閾値"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "会話中に自動的にトピックを作成するかどうか。一時的なトピックのみ有効です",
+ "title": "自動的にトピックを作成する"
+ },
+ "enableCompressThreshold": {
+ "title": "過去メッセージの長さの圧縮閾値を有効にする"
+ },
+ "enableHistoryCount": {
+ "alias": "制限なし",
+ "limited": "{{number}}件の会話メッセージのみ含む",
+ "setlimited": "使用履歴メッセージ数",
+ "title": "過去メッセージ数を制限する",
+ "unlimited": "過去メッセージ数を制限しない"
+ },
+ "historyCount": {
+ "desc": "リクエストごとに含まれる過去メッセージの数",
+ "title": "過去メッセージ数"
+ },
+ "inputTemplate": {
+ "desc": "ユーザーの最新メッセージがこのテンプレートに埋め込まれます",
+ "placeholder": "入力テンプレート {{text}} はリアルタイムの入力情報に置き換えられます",
+ "title": "ユーザー入力のプリプロセス"
+ },
+ "title": "チャット設定"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "単一応答制限を有効にする"
+ },
+ "frequencyPenalty": {
+ "desc": "値が大きいほど、単語の繰り返しを減らす可能性が高くなります",
+ "title": "頻度ペナルティ"
+ },
+ "maxTokens": {
+ "desc": "1 回の対話で使用される最大トークン数",
+ "title": "単一応答制限"
+ },
+ "model": {
+ "desc": "{{provider}}モデル",
+ "title": "モデル"
+ },
+ "presencePenalty": {
+ "desc": "値が大きいほど、新しいトピックに拡張する可能性が高くなります",
+ "title": "トピックの新鮮度"
+ },
+ "temperature": {
+ "desc": "値が大きいほど、応答がよりランダムになります",
+ "title": "ランダム性",
+ "titleWithValue": "ランダム性 {{value}}"
+ },
+ "title": "モデル設定",
+ "topP": {
+ "desc": "ランダム性と同様ですが、ランダム性と一緒に変更しないでください",
+ "title": "トップ P サンプリング"
+ }
+ },
+ "settingPlugin": {
+ "title": "プラグインリスト"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "管理者が暗号化アクセスを有効にしています",
+ "placeholder": "アクセスコードを入力してください",
+ "title": "アクセスコード"
+ },
+ "oauth": {
+ "info": {
+ "desc": "ログイン済み",
+ "title": "アカウント情報"
+ },
+ "signin": {
+ "action": "ログイン",
+ "desc": "SSO ログインを使用してアプリをロック解除",
+ "title": "アカウントにログイン"
+ },
+ "signout": {
+ "action": "ログアウト",
+ "confirm": "ログアウトしますか?",
+ "success": "ログアウトに成功しました"
+ }
+ },
+ "title": "システム設定"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "OpenAI 音声認識モデル",
+ "title": "OpenAI",
+ "ttsModel": "OpenAI 音声合成モデル"
+ },
+ "showAllLocaleVoice": {
+ "desc": "無効にすると、現在の言語の音声のみが表示されます",
+ "title": "すべての言語の音声を表示"
+ },
+ "stt": "音声認識設定",
+ "sttAutoStop": {
+ "desc": "無効にすると、音声認識は自動的に停止せず、手動で停止する必要があります",
+ "title": "音声認識の自動停止"
+ },
+ "sttLocale": {
+ "desc": "音声入力の言語、このオプションを選択すると音声認識の精度が向上します",
+ "title": "音声認識言語"
+ },
+ "sttService": {
+ "desc": "ブラウザはネイティブの音声認識サービスです",
+ "title": "音声認識サービス"
+ },
+ "title": "音声サービス",
+ "tts": "音声合成設定",
+ "ttsService": {
+ "desc": "OpenAI 音声合成サービスを使用する場合、OpenAI モデルサービスが有効になっている必要があります",
+ "title": "音声合成サービス"
+ },
+ "voice": {
+ "desc": "現在のアシスタントに適した音声を選択します。異なる TTS サービスは異なる音声をサポートしています",
+ "preview": "音声を試聴",
+ "title": "音声合成音声"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "アバター"
+ },
+ "fontSize": {
+ "desc": "チャットのフォントサイズ",
+ "marks": {
+ "normal": "標準"
+ },
+ "title": "フォントサイズ"
+ },
+ "lang": {
+ "autoMode": "システムに従う",
+ "title": "言語"
+ },
+ "neutralColor": {
+ "desc": "異なる色調のグレースケールのカスタマイズ",
+ "title": "中立色"
+ },
+ "primaryColor": {
+ "desc": "カスタマイズテーマカラー",
+ "title": "テーマカラー"
+ },
+ "themeMode": {
+ "auto": "自動",
+ "dark": "ダーク",
+ "light": "ライト",
+ "title": "テーマ"
+ },
+ "title": "テーマ設定"
+ },
+ "submitAgentModal": {
+ "button": "エージェントを提出",
+ "identifier": "エージェント識別子",
+ "metaMiss": "エージェント情報を入力してから提出してください。名前、説明、タグが必要です。",
+ "placeholder": "エージェントの識別子を入力してください。一意である必要があります。例:web-development",
+ "tooltips": "エージェントマーケットに共有"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "識別のために名前を追加",
+ "placeholder": "デバイス名を入力",
+ "title": "デバイス名"
+ },
+ "title": "デバイス情報",
+ "unknownBrowser": "不明なブラウザ",
+ "unknownOS": "不明なOS"
+ },
+ "warning": {
+ "tip": "コミュニティの長期にわたる公開テストの結果、WebRTC 同期は一般的なデータ同期要求を安定して満たすことができない可能性があります。 <1>シグナリングサーバーをデプロイ1> してからご使用ください。"
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC はこの名前で同期チャネルを作成し、チャネル名が一意であることを確認します",
+ "placeholder": "同期チャネル名を入力してください",
+ "shuffle": "ランダム生成",
+ "title": "同期チャネル名"
+ },
+ "channelPassword": {
+ "desc": "パスワードを追加してチャネルをプライベートに保ち、パスワードが正しい場合のみデバイスがチャネルに参加できます",
+ "placeholder": "同期チャネルのパスワードを入力してください",
+ "title": "同期チャネルのパスワード"
+ },
+ "desc": "リアルタイムでピアツーピアのデータ通信を行い、デバイスが同時にオンラインである必要があります",
+ "enabled": {
+ "invalid": "シグナリングサーバーと同期チャネル名を入力してから有効にしてください",
+ "title": "同期を有効にする"
+ },
+ "signaling": {
+ "desc": "WebRTC はこのアドレスを使用して同期します",
+ "placeholder": "シグナリングサーバーのアドレスを入力してください",
+ "title": "シグナリングサーバー"
+ },
+ "title": "WebRTC 同期"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "アシスタントメタデータ生成モデル",
+ "modelDesc": "アシスタントの名前、説明、アバター、ラベルを生成するために指定されたモデル",
+ "title": "アシスタント情報の自動生成"
+ },
+ "queryRewrite": {
+ "label": "質問リライトモデル",
+ "modelDesc": "ユーザーの質問を最適化するために指定されたモデル",
+ "title": "知識ベース"
+ },
+ "title": "システムアシスタント",
+ "topic": {
+ "label": "トピックネーミングモデル",
+ "modelDesc": "トピックの自動リネームに使用されるモデルを指定します",
+ "title": "トピックの自動リネーム"
+ },
+ "translation": {
+ "label": "翻訳モデル",
+ "modelDesc": "翻訳に使用するモデルを指定します",
+ "title": "翻訳アシスタントの設定"
+ }
+ },
+ "tab": {
+ "about": "について",
+ "agent": "デフォルトエージェント",
+ "common": "一般設定",
+ "experiment": "実験",
+ "llm": "言語モデル",
+ "sync": "クラウド同期",
+ "system-agent": "システムアシスタント",
+ "tts": "音声サービス"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "組み込み"
+ },
+ "disabled": "現在のモデルは関数呼び出しをサポートしていません。プラグインを使用できません",
+ "plugins": {
+ "enabled": "{{num}} が有効",
+ "groupName": "プラグイン",
+ "noEnabled": "有効なプラグインはありません",
+ "store": "プラグインストア"
+ },
+ "title": "拡張ツール"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/tool.json b/DigitalHumanWeb/locales/ja-JP/tool.json
new file mode 100644
index 0000000..15f3716
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "自動生成",
+ "downloading": "DallE3 で生成された画像リンクは有効期間が1時間しかありません。画像をローカルにキャッシュしています...",
+ "generate": "生成する",
+ "generating": "生成中...",
+ "images": "画像:",
+ "prompt": "プロンプト"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ja-JP/welcome.json b/DigitalHumanWeb/locales/ja-JP/welcome.json
new file mode 100644
index 0000000..43c6d37
--- /dev/null
+++ b/DigitalHumanWeb/locales/ja-JP/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "設定をインポート",
+ "market": "市場を見る",
+ "start": "すぐに開始"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "別のグループ",
+ "title": "新しいアシスタントのおすすめ:"
+ },
+ "defaultMessage": "私はあなたのプライベートインテリジェントアシスタント {{appName}} です。今、何をお手伝いできますか?\nより専門的またはカスタマイズされたアシスタントが必要な場合は、`+` をクリックしてカスタムアシスタントを作成してください。",
+ "defaultMessageWithoutCreate": "私はあなたのプライベートインテリジェントアシスタント {{appName}} です。今、何をお手伝いできますか?",
+ "qa": {
+ "q01": "LobeHub とは何ですか?",
+ "q02": "{{appName}} とは何ですか?",
+ "q03": "{{appName}} にはコミュニティサポートがありますか?",
+ "q04": "{{appName}} はどのような機能をサポートしていますか?",
+ "q05": "{{appName}} はどのように展開して使用しますか?",
+ "q06": "{{appName}} の価格はどのようになっていますか?",
+ "q07": "{{appName}} は無料ですか?",
+ "q08": "クラウドサービス版はありますか?",
+ "q09": "ローカル言語モデルはサポートされていますか?",
+ "q10": "画像認識と生成はサポートされていますか?",
+ "q11": "音声合成と音声認識はサポートされていますか?",
+ "q12": "プラグインシステムはサポートされていますか?",
+ "q13": "GPTを取得するための独自のマーケットプレイスはありますか?",
+ "q14": "複数のAIサービスプロバイダーをサポートしていますか?",
+ "q15": "使用中に問題が発生した場合はどうすればよいですか?"
+ },
+ "questions": {
+ "moreBtn": "さらに詳しく",
+ "title": "よくある質問:"
+ },
+ "welcome": {
+ "afternoon": "こんにちは",
+ "morning": "おはようございます",
+ "night": "こんばんは",
+ "noon": "こんにちは"
+ }
+ },
+ "header": "ようこそ",
+ "pickAgent": "または以下のエージェントテンプレートから選択してください",
+ "skip": "作成をスキップ",
+ "slogan": {
+ "desc1": "脳のクラスターを開始し、創造性を引き出しましょう。あなたのスマートエージェントは常にそこにあります。",
+ "desc2": "最初のエージェントを作成して、始めましょう〜",
+ "title": "より賢い脳を自分に与える"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/auth.json b/DigitalHumanWeb/locales/ko-KR/auth.json
new file mode 100644
index 0000000..78e9446
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "로그인",
+ "loginOrSignup": "로그인 / 가입",
+ "profile": "프로필",
+ "security": "보안",
+ "signout": "로그아웃",
+ "signup": "가입"
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/chat.json b/DigitalHumanWeb/locales/ko-KR/chat.json
new file mode 100644
index 0000000..09cb0b0
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "모델"
+ },
+ "agentDefaultMessage": "안녕하세요, 저는 **{{name}}**입니다. 지금 바로 저와 대화를 시작하시거나 [도우미 설정]({{url}})으로 가셔서 제 정보를 완성하실 수 있습니다.",
+ "agentDefaultMessageWithSystemRole": "안녕하세요, 저는 **{{name}}**입니다. {{systemRole}}입니다. 대화를 시작해 봅시다!",
+ "agentDefaultMessageWithoutEdit": "안녕하세요, 저는 **{{name}}**입니다. 대화를 시작해보세요!",
+ "agents": "도우미",
+ "artifact": {
+ "generating": "생성 중",
+ "thinking": "생각 중",
+ "thought": "사고 과정",
+ "unknownTitle": "제목 없음"
+ },
+ "backToBottom": "하단으로 이동",
+ "chatList": {
+ "longMessageDetail": "자세히 보기"
+ },
+ "clearCurrentMessages": "현재 대화 지우기",
+ "confirmClearCurrentMessages": "현재 대화를 지우시면 되돌릴 수 없습니다. 작업을 확인하시겠습니까?",
+ "confirmRemoveSessionItemAlert": "이 도우미를 삭제하시면 되돌릴 수 없습니다. 작업을 확인하시겠습니까?",
+ "confirmRemoveSessionSuccess": "도우미가 성공적으로 삭제되었습니다",
+ "defaultAgent": "기본 도우미",
+ "defaultList": "기본 목록",
+ "defaultSession": "기본 도우미",
+ "duplicateSession": {
+ "loading": "복사 중...",
+ "success": "복사 성공",
+ "title": "{{title}} 복사본"
+ },
+ "duplicateTitle": "{{title}} 복사본",
+ "emptyAgent": "도우미가 없습니다",
+ "historyRange": "대화 기록 범위",
+ "inbox": {
+ "desc": "뇌 클러스터를 활성화하여 창의적인 아이디어를 끌어내는 인공지능 비서입니다. 여기서 모든 것에 대해 대화합니다.",
+ "title": "무작위 대화"
+ },
+ "input": {
+ "addAi": "AI 메시지 추가",
+ "addUser": "사용자 메시지 추가",
+ "more": "더 많은",
+ "send": "전송",
+ "sendWithCmdEnter": "{{meta}} + Enter 키로 전송",
+ "sendWithEnter": "Enter 키로 전송",
+ "stop": "중지",
+ "warp": "줄바꿈"
+ },
+ "knowledgeBase": {
+ "all": "모든 내용",
+ "allFiles": "모든 파일",
+ "allKnowledgeBases": "모든 지식 베이스",
+ "disabled": "현재 배포 모드에서는 지식 기반 대화가 지원되지 않습니다. 사용하려면 서버 데이터베이스 배포로 전환하거나 {{cloud}} 서비스를 이용해 주십시오.",
+ "library": {
+ "action": {
+ "add": "추가",
+ "detail": "상세",
+ "remove": "제거"
+ },
+ "title": "파일/지식 베이스"
+ },
+ "relativeFilesOrKnowledgeBases": "관련 파일/지식 베이스",
+ "title": "지식 베이스",
+ "uploadGuide": "업로드한 파일은 '지식 베이스'에서 확인할 수 있습니다.",
+ "viewMore": "더 보기"
+ },
+ "messageAction": {
+ "delAndRegenerate": "삭제하고 다시 생성",
+ "regenerate": "다시 생성"
+ },
+ "newAgent": "새 도우미",
+ "pin": "고정",
+ "pinOff": "고정 해제",
+ "rag": {
+ "referenceChunks": "참조 조각",
+ "userQuery": {
+ "actions": {
+ "delete": "쿼리 삭제",
+ "regenerate": "쿼리 재생성"
+ }
+ }
+ },
+ "regenerate": "재생성",
+ "roleAndArchive": "역할 및 아카이브",
+ "searchAgentPlaceholder": "검색 도우미...",
+ "sendPlaceholder": "채팅 내용 입력...",
+ "sessionGroup": {
+ "config": "그룹 설정",
+ "confirmRemoveGroupAlert": "이 그룹을 삭제하려고 합니다. 삭제 후 이 그룹의 도우미는 기본 목록으로 이동됩니다. 작업을 확인하십시오.",
+ "createAgentSuccess": "에이전트 생성 성공",
+ "createGroup": "새 그룹 추가",
+ "createSuccess": "생성 성공",
+ "creatingAgent": "에이전트 생성 중...",
+ "inputPlaceholder": "그룹 이름을 입력하세요...",
+ "moveGroup": "그룹으로 이동",
+ "newGroup": "새 그룹",
+ "rename": "그룹 이름 변경",
+ "renameSuccess": "이름 변경 성공",
+ "sortSuccess": "다시 정렬 성공",
+ "sorting": "그룹 정렬 업데이트 중...",
+ "tooLong": "그룹 이름은 1-20자여야 합니다"
+ },
+ "shareModal": {
+ "download": "스크린샷 다운로드",
+ "imageType": "이미지 형식",
+ "screenshot": "스크린샷",
+ "settings": "내보내기 설정",
+ "shareToShareGPT": "ShareGPT 공유 링크 생성",
+ "withBackground": "배경 이미지 포함",
+ "withFooter": "푸터 포함",
+ "withPluginInfo": "플러그인 정보 포함",
+ "withSystemRole": "도우미 역할 포함"
+ },
+ "stt": {
+ "action": "음성 입력",
+ "loading": "인식 중...",
+ "prettifying": "정제 중..."
+ },
+ "temp": "임시",
+ "tokenDetails": {
+ "chats": "채팅 메시지",
+ "rest": "남은 사용량",
+ "systemRole": "시스템 역할",
+ "title": "컨텍스트 세부 정보",
+ "tools": "도구 설정",
+ "total": "총 사용량",
+ "used": "총 사용"
+ },
+ "tokenTag": {
+ "overload": "한도 초과",
+ "remained": "남음",
+ "used": "사용됨"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "자동으로 이름 바꾸기",
+ "duplicate": "복사본 만들기",
+ "export": "주제 내보내기"
+ },
+ "checkOpenNewTopic": "새 주제를 열까요?",
+ "checkSaveCurrentMessages": "현재 대화를 주제로 저장하시겠습니까?",
+ "confirmRemoveAll": "모든 주제를 삭제하시면 되돌릴 수 없습니다. 신중하게 작업하시겠습니까?",
+ "confirmRemoveTopic": "이 주제를 삭제하시면 되돌릴 수 없습니다. 신중하게 작업하시겠습니까?",
+ "confirmRemoveUnstarred": "별표가 없는 주제를 삭제하시면 되돌릴 수 없습니다. 신중하게 작업하시겠습니까?",
+ "defaultTitle": "기본 주제",
+ "duplicateLoading": "주제 복사 중...",
+ "duplicateSuccess": "주제 복사 성공",
+ "guide": {
+ "desc": "현재 대화를 히스토리 토픽으로 저장하고 새 대화를 시작하려면 왼쪽 버튼을 클릭하세요.",
+ "title": "토픽 목록"
+ },
+ "openNewTopic": "새 주제 열기",
+ "removeAll": "모든 주제 삭제",
+ "removeUnstarred": "별표가 없는 주제 삭제",
+ "saveCurrentMessages": "현재 대화를 주제로 저장",
+ "searchPlaceholder": "주제 검색...",
+ "title": "주제 목록"
+ },
+ "translate": {
+ "action": "번역",
+ "clear": "번역 삭제"
+ },
+ "tts": {
+ "action": "음성 읽기",
+ "clear": "음성 삭제"
+ },
+ "updateAgent": "도우미 정보 업데이트",
+ "upload": {
+ "action": {
+ "fileUpload": "파일 업로드",
+ "folderUpload": "폴더 업로드",
+ "imageDisabled": "현재 모델은 시각 인식을 지원하지 않습니다. 모델을 변경한 후 사용하세요.",
+ "imageUpload": "이미지 업로드",
+ "tooltip": "업로드"
+ },
+ "clientMode": {
+ "actionFiletip": "파일 업로드",
+ "actionTooltip": "업로드",
+ "disabled": "현재 모델은 시각 인식 및 파일 분석을 지원하지 않습니다. 모델을 변경한 후 사용하세요."
+ },
+ "preview": {
+ "prepareTasks": "청크 준비 중...",
+ "status": {
+ "pending": "업로드 준비 중...",
+ "processing": "파일 처리 중..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/clerk.json b/DigitalHumanWeb/locales/ko-KR/clerk.json
new file mode 100644
index 0000000..e52f576
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "뒤로",
+ "badge__default": "기본",
+  "badge__otherImpersonatorDevice": "다른 가장 장치",
+ "badge__primary": "기본",
+ "badge__requiresAction": "조치 필요",
+ "badge__thisDevice": "이 장치",
+ "badge__unverified": "미인증",
+ "badge__userDevice": "사용자 장치",
+ "badge__you": "당신",
+ "createOrganization": {
+ "formButtonSubmit": "조직 생성",
+ "invitePage": {
+ "formButtonReset": "건너뛰기"
+ },
+ "title": "조직 생성"
+ },
+ "dates": {
+ "lastDay": "어제 {{ date | timeString('en-US') }}",
+ "next6Days": "{{ date | weekday('en-US','long') }} {{ date | timeString('en-US') }}",
+ "nextDay": "내일 {{ date | timeString('en-US') }}",
+ "numeric": "{{ date | numeric('en-US') }}",
+ "previous6Days": "지난 {{ date | weekday('en-US','long') }} {{ date | timeString('en-US') }}",
+ "sameDay": "오늘 {{ date | timeString('en-US') }}"
+ },
+ "dividerText": "또는",
+ "footerActionLink__useAnotherMethod": "다른 방법 사용",
+ "footerPageLink__help": "도움말",
+ "footerPageLink__privacy": "개인정보 처리방침",
+ "footerPageLink__terms": "약관",
+ "formButtonPrimary": "계속",
+ "formButtonPrimary__verify": "확인",
+ "formFieldAction__forgotPassword": "비밀번호를 잊으셨나요?",
+ "formFieldError__matchingPasswords": "비밀번호가 일치합니다.",
+ "formFieldError__notMatchingPasswords": "비밀번호가 일치하지 않습니다.",
+ "formFieldError__verificationLinkExpired": "확인 링크가 만료되었습니다. 새 링크를 요청하세요.",
+ "formFieldHintText__optional": "선택 사항",
+  "formFieldHintText__slug": "Slug는 사람이 읽을 수 있는 고유 ID이며, 주소(URL)에서 자주 사용됩니다.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "계정 삭제",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "example@email.com, example2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "my-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "이 도메인에 대한 자동 초대 활성화",
+ "formFieldLabel__backupCode": "백업 코드",
+ "formFieldLabel__confirmDeletion": "확인",
+ "formFieldLabel__confirmPassword": "비밀번호 확인",
+ "formFieldLabel__currentPassword": "현재 비밀번호",
+ "formFieldLabel__emailAddress": "이메일 주소",
+ "formFieldLabel__emailAddress_username": "이메일 주소 또는 사용자 이름",
+ "formFieldLabel__emailAddresses": "이메일 주소",
+ "formFieldLabel__firstName": "이름",
+ "formFieldLabel__lastName": "성",
+ "formFieldLabel__newPassword": "새 비밀번호",
+ "formFieldLabel__organizationDomain": "도메인",
+ "formFieldLabel__organizationDomainDeletePending": "보류 중인 초대 및 제안 삭제",
+ "formFieldLabel__organizationDomainEmailAddress": "확인 이메일 주소",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "이 도메인에서 코드를 받고 확인하려면 이메일 주소를 입력하세요.",
+ "formFieldLabel__organizationName": "이름",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "패스키 이름",
+ "formFieldLabel__password": "비밀번호",
+ "formFieldLabel__phoneNumber": "전화번호",
+ "formFieldLabel__role": "역할",
+ "formFieldLabel__signOutOfOtherSessions": "다른 장치에서 로그아웃",
+ "formFieldLabel__username": "사용자 이름",
+ "impersonationFab": {
+ "action__signOut": "로그아웃",
+ "title": "{{identifier}}로 로그인함"
+ },
+ "locale": "ko-KR",
+ "maintenanceMode": "현재 유지 보수 중입니다. 걱정하지 마세요. 몇 분 안에 완료될 것입니다.",
+ "membershipRole__admin": "관리자",
+ "membershipRole__basicMember": "회원",
+ "membershipRole__guestMember": "손님",
+ "organizationList": {
+ "action__createOrganization": "조직 생성",
+ "action__invitationAccept": "가입",
+ "action__suggestionsAccept": "가입 요청",
+ "createOrganization": "조직 생성",
+ "invitationAcceptedLabel": "가입됨",
+ "subtitle": "{{applicationName}}을(를) 계속하려면",
+ "suggestionsAcceptedLabel": "승인 대기 중",
+ "title": "계정 선택",
+ "titleWithoutPersonal": "조직 선택"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "자동 초대",
+ "badge__automaticSuggestion": "자동 제안",
+ "badge__manualInvitation": "자동 가입 없음",
+ "badge__unverified": "미인증",
+ "createDomainPage": {
+ "subtitle": "도메인을 추가하여 인증합니다. 이 도메인의 이메일 주소를 가진 사용자는 조직에 자동으로 가입하거나 가입 요청을 할 수 있습니다.",
+ "title": "도메인 추가"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "초대장을 보낼 수 없습니다. 다음 이메일 주소에 대기 중인 초대장이 이미 있습니다: {{email_addresses}}.",
+ "formButtonPrimary__continue": "초대 보내기",
+ "selectDropdown__role": "역할 선택",
+ "subtitle": "이메일 주소를 하나 이상 입력하거나 붙여넣어주세요. 띄어쓰기나 쉼표로 구분됩니다.",
+ "successMessage": "초대장이 성공적으로 전송되었습니다.",
+ "title": "새 멤버 초대"
+ },
+ "membersPage": {
+ "action__invite": "초대",
+ "activeMembersTab": {
+ "menuAction__remove": "멤버 제거",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "가입일",
+ "tableHeader__role": "역할",
+ "tableHeader__user": "사용자"
+ },
+ "detailsTitle__emptyRow": "표시할 멤버가 없습니다.",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "이메일 도메인을 연결하여 사용자를 초대합니다. 일치하는 이메일 도메인으로 가입한 사용자는 언제든지 조직에 가입할 수 있습니다.",
+ "headerTitle": "자동 초대",
+ "primaryButton": "인증된 도메인 관리"
+ },
+ "table__emptyRow": "표시할 초대가 없습니다."
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "초대 철회",
+ "tableHeader__invited": "초대됨"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "일치하는 이메일 도메인으로 가입하는 사용자는 조직 가입 요청을 보낼 수 있습니다.",
+ "headerTitle": "자동 제안",
+ "primaryButton": "인증된 도메인 관리"
+ },
+ "menuAction__approve": "승인",
+ "menuAction__reject": "거부",
+ "tableHeader__requested": "요청된 액세스",
+ "table__emptyRow": "표시할 요청이 없습니다."
+ },
+ "start": {
+ "headerTitle__invitations": "초대",
+ "headerTitle__members": "멤버",
+ "headerTitle__requests": "요청"
+ }
+ },
+ "navbar": {
+ "description": "조직을 관리합니다.",
+ "general": "일반",
+ "members": "멤버",
+ "title": "조직"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "\"{{organizationName}}\"을(를) 입력하여 계속합니다.",
+ "messageLine1": "이 조직을 삭제하시겠습니까?",
+ "messageLine2": "이 작업은 영구적이며 되돌릴 수 없습니다.",
+ "successMessage": "조직이 삭제되었습니다.",
+ "title": "조직 삭제"
+ },
+ "leaveOrganization": {
+ "actionDescription": "\"{{organizationName}}\"을(를) 입력하여 계속합니다.",
+ "messageLine1": "이 조직을 나가시겠습니까? 이 조직 및 해당 응용프로그램에 대한 액세스 권한이 상실됩니다.",
+ "messageLine2": "이 작업은 영구적이며 되돌릴 수 없습니다.",
+ "successMessage": "조직을 나갔습니다.",
+ "title": "조직 나가기"
+ },
+ "title": "위험"
+ },
+ "domainSection": {
+ "menuAction__manage": "관리",
+ "menuAction__remove": "삭제",
+ "menuAction__verify": "인증",
+ "primaryButton": "도메인 추가",
+ "subtitle": "인증된 이메일 도메인을 기반으로 조직에 자동으로 가입하거나 가입 요청할 수 있습니다.",
+ "title": "인증된 도메인"
+ },
+ "successMessage": "조직이 업데이트되었습니다.",
+ "title": "프로필 업데이트"
+ },
+ "removeDomainPage": {
+ "messageLine1": "{{domain}} 이메일 도메인이 제거됩니다.",
+ "messageLine2": "이후에 사용자는 이 조직에 자동으로 가입할 수 없습니다.",
+ "successMessage": "{{domain}}이(가) 제거되었습니다.",
+ "title": "도메인 제거"
+ },
+ "start": {
+ "headerTitle__general": "일반",
+ "headerTitle__members": "멤버",
+ "profileSection": {
+ "primaryButton": "프로필 업데이트",
+ "title": "조직 프로필",
+ "uploadAction__title": "로고 업로드"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "이 도메인을 제거하면 초대된 사용자에게 영향을 줍니다.",
+ "removeDomainActionLabel__remove": "도메인 제거",
+ "removeDomainSubtitle": "이 도메인을 인증된 도메인에서 제거합니다.",
+ "removeDomainTitle": "도메인 제거"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "사용자는 가입 시 자동으로 조직에 초대되어 언제든지 가입할 수 있습니다.",
+ "automaticInvitationOption__label": "자동 초대",
+ "automaticSuggestionOption__description": "사용자는 가입 요청을 받지만 관리자의 승인이 필요합니다.",
+ "automaticSuggestionOption__label": "자동 제안",
+ "calloutInfoLabel": "가입 모드 변경은 새 사용자에만 영향을 줍니다.",
+ "calloutInvitationCountLabel": "사용자에게 보낸 보류 중인 초대장: {{count}}",
+ "calloutSuggestionCountLabel": "사용자에게 보낸 보류 중인 제안: {{count}}",
+ "manualInvitationOption__description": "사용자는 수동으로만 조직에 초대할 수 있습니다.",
+ "manualInvitationOption__label": "자동 가입 없음",
+        "subtitle": "이 도메인의 사용자가 조직에 가입하는 방법을 선택하세요."
+ },
+ "start": {
+ "headerTitle__danger": "위험",
+ "headerTitle__enrollment": "가입 옵션"
+ },
+ "subtitle": "{{domain}} 도메인이 이제 인증되었습니다. 가입 모드를 선택하여 계속하세요.",
+ "title": "{{domain}} 업데이트"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "이메일 주소로 전송된 인증 코드를 입력하세요.",
+ "formTitle": "인증 코드",
+ "resendButton": "코드 받지 못했나요? 재전송",
+ "subtitle": "{{domainName}} 도메인은 이메일을 통해 인증되어야 합니다.",
+ "subtitleVerificationCodeScreen": "{{emailAddress}}로 인증 코드가 전송되었습니다. 계속하려면 코드를 입력하세요.",
+ "title": "도메인 인증"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "조직 생성",
+ "action__invitationAccept": "가입",
+ "action__manageOrganization": "관리",
+ "action__suggestionsAccept": "가입 요청",
+ "notSelected": "선택된 조직 없음",
+ "personalWorkspace": "개인 계정",
+ "suggestionsAcceptedLabel": "승인 대기 중"
+ },
+ "paginationButton__next": "다음",
+ "paginationButton__previous": "이전",
+ "paginationRowText__displaying": "표시 중",
+ "paginationRowText__of": "중",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "계정 추가",
+ "action__signOutAll": "모든 계정 로그아웃",
+ "subtitle": "계속하려는 계정을 선택하세요.",
+ "title": "계정 선택"
+ },
+ "alternativeMethods": {
+ "actionLink": "도움 받기",
+ "actionText": "이러한 방법 중 하나를 사용하시겠어요?",
+ "blockButton__backupCode": "백업 코드 사용",
+ "blockButton__emailCode": "{{identifier}}에게 이메일 코드 보내기",
+ "blockButton__emailLink": "{{identifier}}에게 이메일 링크 보내기",
+ "blockButton__passkey": "패스키로 로그인",
+ "blockButton__password": "비밀번호로 로그인",
+ "blockButton__phoneCode": "{{identifier}}에게 SMS 코드 보내기",
+ "blockButton__totp": "인증 앱 사용",
+ "getHelp": {
+ "blockButton__emailSupport": "이메일 지원",
+ "content": "계정에 로그인하는 데 어려움을 겪고 계신다면, 이메일을 보내주시면 최대한 빨리 접속을 복구하는 데 도움을 드리겠습니다.",
+ "title": "도움 받기"
+ },
+ "subtitle": "문제가 발생했나요? 이 방법 중 하나를 사용하여 로그인할 수 있습니다.",
+ "title": "다른 방법 사용"
+ },
+ "backupCodeMfa": {
+ "subtitle": "백업 코드는 이중 인증 설정 시 받은 코드입니다.",
+ "title": "백업 코드 입력"
+ },
+ "emailCode": {
+ "formTitle": "인증 코드",
+ "resendButton": "코드를 받지 못했나요? 다시 보내기",
+ "subtitle": "{{applicationName}}으로 계속하려면",
+ "title": "이메일 확인"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "계속하려면 원래 탭으로 돌아가세요.",
+ "title": "이 확인 링크가 만료되었습니다."
+ },
+ "failed": {
+ "subtitle": "계속하려면 원래 탭으로 돌아가세요.",
+ "title": "이 확인 링크가 유효하지 않습니다."
+ },
+ "formSubtitle": "이메일로 보낸 확인 링크 사용",
+ "formTitle": "확인 링크",
+ "loading": {
+ "subtitle": "곧 리디렉션됩니다.",
+ "title": "로그인 중..."
+ },
+ "resendButton": "링크를 받지 못했나요? 다시 보내기",
+ "subtitle": "{{applicationName}}으로 계속하려면",
+ "title": "이메일 확인",
+ "unusedTab": {
+ "title": "이 탭을 닫아도 됩니다."
+ },
+ "verified": {
+ "subtitle": "곧 리디렉션됩니다.",
+ "title": "로그인 성공"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "계속하려면 원래 탭으로 돌아가세요.",
+ "subtitleNewTab": "계속하려면 새로 열린 탭으로 이동하세요.",
+ "titleNewTab": "다른 탭에서 로그인 완료"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "비밀번호 재설정 코드",
+ "resendButton": "코드를 받지 못했나요? 다시 보내기",
+ "subtitle": "비밀번호를 재설정하려면",
+ "subtitle_email": "먼저 이메일 주소로 보낸 코드를 입력하세요.",
+ "subtitle_phone": "먼저 전화로 받은 코드를 입력하세요.",
+ "title": "비밀번호 재설정"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "비밀번호 재설정",
+ "label__alternativeMethods": "또는 다른 방법으로 로그인",
+ "title": "비밀번호를 잊으셨나요?"
+ },
+ "noAvailableMethods": {
+      "message": "로그인을 진행할 수 없습니다. 사용 가능한 인증 수단이 없습니다.",
+ "subtitle": "오류가 발생했습니다.",
+ "title": "로그인할 수 없음"
+ },
+ "passkey": {
+      "subtitle": "패스키를 사용하여 본인임을 확인하세요. 기기에서 지문, 얼굴 인식 또는 화면 잠금을 요청할 수 있습니다.",
+ "title": "패스키 사용"
+ },
+ "password": {
+ "actionLink": "다른 방법 사용",
+ "subtitle": "계정과 관련된 비밀번호를 입력하세요.",
+ "title": "비밀번호 입력"
+ },
+ "passwordPwned": {
+ "title": "비밀번호가 노출되었습니다."
+ },
+ "phoneCode": {
+ "formTitle": "인증 코드",
+ "resendButton": "코드를 받지 못했나요? 다시 보내기",
+ "subtitle": "{{applicationName}}으로 계속하려면",
+ "title": "전화 확인"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "인증 코드",
+ "resendButton": "코드를 받지 못했나요? 다시 보내기",
+ "subtitle": "계속하려면 전화로 받은 인증 코드를 입력하세요.",
+ "title": "전화 확인"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "비밀번호 재설정",
+ "requiredMessage": "보안상의 이유로 비밀번호를 재설정해야 합니다.",
+ "successMessage": "비밀번호가 성공적으로 변경되었습니다. 로그인 중입니다. 잠시 기다려 주세요.",
+ "title": "새 비밀번호 설정"
+ },
+ "resetPasswordMfa": {
+      "detailsLabel": "비밀번호를 재설정하기 전에 본인 확인이 필요합니다."
+ },
+ "start": {
+ "actionLink": "가입하기",
+ "actionLink__use_email": "이메일 사용",
+ "actionLink__use_email_username": "이메일 또는 사용자 이름 사용",
+ "actionLink__use_passkey": "패스키 사용",
+ "actionLink__use_phone": "전화 사용",
+ "actionLink__use_username": "사용자 이름 사용",
+ "actionText": "계정이 없으신가요?",
+ "subtitle": "다시 오신 것을 환영합니다! 계속하려면 로그인하세요.",
+ "title": "{{applicationName}}에 로그인"
+ },
+ "totpMfa": {
+ "formTitle": "인증 코드",
+ "subtitle": "계속하려면 인증 앱에서 생성된 인증 코드를 입력하세요.",
+ "title": "이중 인증"
+ }
+ },
+ "signInEnterPasswordTitle": "비밀번호 입력",
+ "signUp": {
+ "continue": {
+ "actionLink": "로그인",
+ "actionText": "계정이 있으신가요?",
+ "subtitle": "계속하려면 남은 정보를 입력하세요.",
+ "title": "누락된 필드 입력"
+ },
+ "emailCode": {
+ "formSubtitle": "이메일 주소로 보낸 확인 코드를 입력하세요.",
+ "formTitle": "인증 코드",
+ "resendButton": "코드를 받지 못했나요? 다시 보내기",
+ "subtitle": "이메일로 보낸 확인 코드를 입력하세요.",
+ "title": "이메일 확인"
+ },
+ "emailLink": {
+ "formSubtitle": "이메일 주소로 보낸 확인 링크 사용",
+ "formTitle": "확인 링크",
+ "loading": {
+ "title": "가입 중..."
+ },
+ "resendButton": "링크를 받지 못했나요? 다시 보내기",
+ "subtitle": "{{applicationName}}으로 계속하려면",
+ "title": "이메일 확인",
+ "verified": {
+ "title": "가입이 완료되었습니다."
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "계속하려면 새로 열린 탭으로 이동하세요.",
+        "subtitleNewTab": "계속하려면 이전 탭으로 돌아가세요.",
+ "title": "이메일 확인 완료"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "전화번호로 받은 인증 코드를 입력하세요.",
+ "formTitle": "인증 코드",
+ "resendButton": "코드를 받지 못했나요? 다시 보내기",
+ "subtitle": "전화로 받은 인증 코드를 입력하세요.",
+ "title": "전화 확인"
+ },
+ "start": {
+ "actionLink": "로그인",
+ "actionText": "계정이 있으신가요?",
+ "subtitle": "환영합니다! 시작하려면 세부 정보를 입력하세요.",
+ "title": "계정 생성"
+ }
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}로 계속하기",
+ "unstable__errors": {
+ "captcha_invalid": "보안 확인에 실패하여 가입할 수 없습니다. 다시 시도하려면 페이지를 새로 고치거나 지원팀에 문의하십시오.",
+ "captcha_unavailable": "봇 확인에 실패하여 가입할 수 없습니다. 다시 시도하려면 페이지를 새로 고치거나 지원팀에 문의하십시오.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "이 이메일 주소는 사용 중입니다. 다른 이메일 주소를 시도하십시오.",
+ "form_identifier_exists__phone_number": "이 전화번호는 사용 중입니다. 다른 번호를 시도하십시오.",
+ "form_identifier_exists__username": "이 사용자 이름은 사용 중입니다. 다른 이름을 시도하십시오.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "이메일 주소는 유효한 이메일 주소여야 합니다.",
+ "form_param_format_invalid__phone_number": "전화번호는 유효한 국제 형식이어야 합니다.",
+ "form_param_max_length_exceeded__first_name": "이름은 256자를 초과할 수 없습니다.",
+ "form_param_max_length_exceeded__last_name": "성은 256자를 초과할 수 없습니다.",
+ "form_param_max_length_exceeded__name": "이름은 256자를 초과할 수 없습니다.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "비밀번호가 충분히 강력하지 않습니다.",
+ "form_password_pwned": "이 비밀번호는 누출 사고의 일부로 확인되었으므로 사용할 수 없습니다. 대신 다른 비밀번호를 시도하십시오.",
+ "form_password_pwned__sign_in": "이 비밀번호는 누출 사고의 일부로 확인되었으므로 사용할 수 없습니다. 비밀번호를 재설정하십시오.",
+ "form_password_size_in_bytes_exceeded": "비밀번호가 허용된 최대 바이트 수를 초과했습니다. 짧게 만들거나 일부 특수 문자를 제거하십시오.",
+ "form_password_validation_failed": "잘못된 비밀번호",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "마지막 식별 정보를 삭제할 수 없습니다.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "이 장치에 이미 등록된 패스키가 있습니다.",
+ "passkey_not_supported": "이 장치에서 패스키는 지원되지 않습니다.",
+ "passkey_pa_not_supported": "등록에는 플랫폼 인증기가 필요하지만 장치가 지원하지 않습니다.",
+ "passkey_registration_cancelled": "패스키 등록이 취소되었거나 시간이 초과되었습니다.",
+ "passkey_retrieval_cancelled": "패스키 확인이 취소되었거나 시간이 초과되었습니다.",
+ "passwordComplexity": {
+ "maximumLength": "{{length}}자 미만",
+ "minimumLength": "{{length}}자 이상",
+ "requireLowercase": "소문자",
+ "requireNumbers": "숫자",
+ "requireSpecialCharacter": "특수 문자",
+ "requireUppercase": "대문자",
+ "sentencePrefix": "비밀번호는 다음을 포함해야 합니다."
+ },
+ "phone_number_exists": "이 전화번호는 사용 중입니다. 다른 번호를 시도하십시오.",
+ "zxcvbn": {
+ "couldBeStronger": "비밀번호가 작동하지만 더 강력해질 수 있습니다. 더 많은 문자를 추가해보세요.",
+ "goodPassword": "비밀번호는 필요한 모든 요구 사항을 충족합니다.",
+ "notEnough": "비밀번호가 충분히 강력하지 않습니다.",
+ "suggestions": {
+ "allUppercase": "모든 문자를 대문자로 바꾸세요.",
+ "anotherWord": "덜 일반적인 단어를 추가하세요.",
+ "associatedYears": "자신과 연관된 연도를 피하세요.",
+ "capitalization": "첫 글자 이상을 대문자로 바꾸세요.",
+ "dates": "자신과 연관된 날짜를 피하세요.",
+ "l33t": "'a'를 '@'로 대체하는 예측 가능한 문자 대체를 피하세요.",
+ "longerKeyboardPattern": "더 긴 키보드 패턴을 사용하고 여러 번 타이핑 방향을 변경하세요.",
+ "noNeed": "기호, 숫자 또는 대문자를 사용하지 않고도 강력한 비밀번호를 만들 수 있습니다.",
+ "pwned": "다른 곳에서 이 비밀번호를 사용한다면 변경해야 합니다.",
+ "recentYears": "최근 연도를 피하세요.",
+ "repeated": "단어와 문자를 반복하지 마세요.",
+ "reverseWords": "일반 단어의 역순 철자를 피하세요.",
+ "sequences": "일반적인 문자 시퀀스를 피하세요.",
+ "useWords": "여러 단어를 사용하지만 일반적인 구문을 피하세요."
+ },
+ "warnings": {
+ "common": "이 비밀번호는 흔히 사용됩니다.",
+ "commonNames": "일반적인 이름과 성은 추측하기 쉽습니다.",
+ "dates": "날짜는 추측하기 쉽습니다.",
+ "extendedRepeat": "\"abcabcabc\"와 같은 반복된 문자 패턴은 추측하기 쉽습니다.",
+ "keyPattern": "짧은 키보드 패턴은 추측하기 쉽습니다.",
+ "namesByThemselves": "단일 이름 또는 성은 추측하기 쉽습니다.",
+ "pwned": "인터넷의 데이터 누출로 비밀번호가 노출되었습니다.",
+ "recentYears": "최근 연도는 추측하기 쉽습니다.",
+ "sequences": "\"abc\"와 같은 일반적인 문자 시퀀스는 추측하기 쉽습니다.",
+ "similarToCommon": "이것은 흔히 사용되는 비밀번호와 유사합니다.",
+ "simpleRepeat": "\"aaa\"와 같이 반복된 문자는 추측하기 쉽습니다.",
+ "straightRow": "키보드의 직선 행은 추측하기 쉽습니다.",
+ "topHundred": "이것은 자주 사용되는 비밀번호입니다.",
+ "topTen": "이것은 많이 사용되는 비밀번호입니다.",
+ "userInputs": "개인 또는 페이지 관련 데이터를 사용해서는 안 됩니다.",
+ "wordByItself": "단일 단어는 추측하기 쉽습니다."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "계정 추가",
+ "action__manageAccount": "계정 관리",
+ "action__signOut": "로그아웃",
+ "action__signOutAll": "모든 계정에서 로그아웃"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "복사됨!",
+ "actionLabel__copy": "모두 복사",
+ "actionLabel__download": "다운로드 .txt",
+ "actionLabel__print": "인쇄",
+ "infoText1": "이 계정에 대해 백업 코드가 활성화됩니다.",
+ "infoText2": "백업 코드를 비밀리에 보관하고 안전하게 저장하세요. 만약 유출되었다고 의심된다면 백업 코드를 재생성할 수 있습니다.",
+ "subtitle__codelist": "안전하게 보관하고 비밀리에 유지하세요.",
+ "successMessage": "백업 코드가 이제 활성화되었습니다. 계정에 로그인할 때 인증 장치에 액세스할 수 없는 경우 이 중 하나를 사용할 수 있습니다. 각 코드는 한 번만 사용할 수 있습니다.",
+ "successSubtitle": "계정에 로그인할 때 인증 장치에 액세스할 수 없는 경우 이 중 하나를 사용할 수 있습니다.",
+ "title": "백업 코드 확인 추가",
+ "title__codelist": "백업 코드"
+ },
+ "connectedAccountPage": {
+ "formHint": "계정을 연결할 공급업체를 선택하세요.",
+ "formHint__noAccounts": "사용 가능한 외부 계정 공급업체가 없습니다.",
+ "removeResource": {
+ "messageLine1": "{{identifier}}이(가) 이 계정에서 제거됩니다.",
+        "messageLine2": "더 이상 이 연결된 계정을 사용할 수 없으며, 이에 의존하는 기능도 작동하지 않습니다.",
+ "successMessage": "{{connectedAccount}}이(가) 계정에서 제거되었습니다.",
+ "title": "연결된 계정 제거"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+      "successMessage": "공급업체가 계정에 추가되었습니다.",
+ "title": "연결된 계정 추가"
+ },
+ "deletePage": {
+ "actionDescription": "계속하려면 \"계정 삭제\"를 입력하세요.",
+ "confirm": "계정 삭제",
+ "messageLine1": "계정을 삭제하시겠습니까?",
+ "messageLine2": "이 작업은 영구적이며 되돌릴 수 없습니다.",
+ "title": "계정 삭제"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "이 이메일 주소로 인증 코드가 포함된 이메일이 전송됩니다.",
+ "formSubtitle": "{{identifier}}로 전송된 이메일의 인증 코드를 입력하세요.",
+ "formTitle": "인증 코드",
+ "resendButton": "코드를 받지 못했나요? 다시 보내기",
+ "successMessage": "이 이메일 {{identifier}}이(가) 계정에 추가되었습니다."
+ },
+ "emailLink": {
+ "formHint": "이 이메일 주소로 인증 링크가 포함된 이메일이 전송됩니다.",
+ "formSubtitle": "{{identifier}}로 전송된 이메일의 인증 링크를 클릭하세요.",
+ "formTitle": "인증 링크",
+ "resendButton": "링크를 받지 못했나요? 다시 보내기",
+ "successMessage": "이 이메일 {{identifier}}이(가) 계정에 추가되었습니다."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}}이(가) 이 계정에서 제거됩니다.",
+ "messageLine2": "이제 이 이메일 주소를 사용하여 로그인할 수 없습니다.",
+ "successMessage": "{{emailAddress}}이(가) 계정에서 제거되었습니다.",
+ "title": "이메일 주소 제거"
+ },
+ "title": "이메일 주소 추가",
+ "verifyTitle": "이메일 주소 확인"
+ },
+ "formButtonPrimary__add": "추가",
+ "formButtonPrimary__continue": "계속",
+ "formButtonPrimary__finish": "완료",
+ "formButtonPrimary__remove": "제거",
+ "formButtonPrimary__save": "저장",
+ "formButtonReset": "취소",
+ "mfaPage": {
+ "formHint": "추가할 방법을 선택하세요.",
+ "title": "이중 인증 추가"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "기존 번호 사용",
+ "primaryButton__addPhoneNumber": "전화번호 추가",
+ "removeResource": {
+ "messageLine1": "이 번호로부터의 인증 코드 수신이 중지됩니다.",
+ "messageLine2": "계정이 덜 안전해질 수 있습니다. 계속하시겠습니까?",
+ "successMessage": "{{mfaPhoneCode}}에 대한 SMS 코드 이중 인증이 제거되었습니다",
+ "title": "이중 인증 제거"
+ },
+ "subtitle__availablePhoneNumbers": "SMS 코드 이중 인증을 위해 기존 전화번호를 선택하거나 새로 추가하세요.",
+ "subtitle__unavailablePhoneNumbers": "SMS 코드 이중 인증을 위해 사용 가능한 전화번호가 없습니다. 새로 추가하세요.",
+ "successMessage1": "로그인할 때 이 번호로 전송된 인증 코드를 추가 단계로 입력해야 합니다.",
+      "successMessage2": "이 백업 코드를 안전한 곳에 저장해 두세요. 인증 장치에 액세스할 수 없게 되면 백업 코드를 사용할 수 있습니다.",
+ "successTitle": "SMS 코드 확인이 활성화되었습니다",
+ "title": "SMS 코드 확인 추가"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "대신 QR 코드 스캔",
+ "buttonUnableToScan__nonPrimary": "QR 코드를 스캔할 수 없음?",
+ "infoText__ableToScan": "인증 앱에서 새로운 로그인 방법을 설정하고 계정에 연결할 QR 코드를 스캔하세요.",
+ "infoText__unableToScan": "인증 앱에서 새로운 로그인 방법을 설정하고 아래 제공된 키를 입력하세요.",
+ "inputLabel__unableToScan1": "시간 기반 또는 일회용 비밀번호가 활성화되어 있는지 확인한 후 계정 연결을 완료하세요.",
+ "inputLabel__unableToScan2": "또는 인증 앱이 TOTP URI를 지원하는 경우 전체 URI를 복사할 수도 있습니다."
+ },
+ "removeResource": {
+ "messageLine1": "이 인증 앱에서의 인증 코드는 더 이상 로그인할 때 필요하지 않습니다.",
+ "messageLine2": "계정이 덜 안전해질 수 있습니다. 계속하시겠습니까?",
+ "successMessage": "인증 앱을 통한 이중 인증이 제거되었습니다.",
+ "title": "이중 인증 제거"
+ },
+ "successMessage": "이제 이중 인증이 활성화되었습니다. 로그인할 때 이 인증 앱에서 생성된 인증 코드를 추가 단계로 입력해야 합니다.",
+ "title": "인증 앱 추가",
+ "verifySubtitle": "인증 앱에서 생성된 인증 코드를 입력하세요",
+ "verifyTitle": "인증 코드"
+ },
+ "mobileButton__menu": "메뉴",
+ "navbar": {
+ "account": "프로필",
+ "description": "계정 정보를 관리합니다.",
+ "security": "보안",
+ "title": "계정"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}}이(가) 이 계정에서 제거됩니다.",
+ "title": "패스키 제거"
+ },
+ "subtitle__rename": "패스키 이름을 변경하여 찾기 쉽게 할 수 있습니다.",
+ "title__rename": "패스키 이름 변경"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "이전 비밀번호를 사용했을 수 있는 모든 기기에서 로그아웃하는 것이 좋습니다.",
+ "readonly": "현재 비밀번호는 기업 연결을 통해서만 로그인할 수 있기 때문에 편집할 수 없습니다.",
+ "successMessage__set": "비밀번호가 설정되었습니다.",
+ "successMessage__signOutOfOtherSessions": "다른 모든 기기에서 로그아웃되었습니다.",
+ "successMessage__update": "비밀번호가 업데이트되었습니다.",
+ "title__set": "비밀번호 설정",
+ "title__update": "비밀번호 업데이트"
+ },
+ "phoneNumberPage": {
+ "infoText": "인증 코드가 포함된 텍스트 메시지가 이 전화번호로 전송됩니다. 메시지 및 데이터 요금이 부과될 수 있습니다.",
+ "removeResource": {
+ "messageLine1": "{{identifier}}이(가) 이 계정에서 제거됩니다.",
+ "messageLine2": "더 이상 이 전화번호를 사용하여 로그인할 수 없습니다.",
+ "successMessage": "{{phoneNumber}}이(가) 계정에서 제거되었습니다.",
+ "title": "전화번호 제거"
+ },
+ "successMessage": "{{identifier}}이(가) 계정에 추가되었습니다.",
+ "title": "전화번호 추가",
+ "verifySubtitle": "{{identifier}}로 전송된 인증 코드를 입력하세요.",
+ "verifyTitle": "전화번호 인증"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "권장 크기 1:1, 최대 10MB.",
+ "imageFormDestructiveActionSubtitle": "제거",
+ "imageFormSubtitle": "업로드",
+ "imageFormTitle": "프로필 이미지",
+ "readonly": "프로필 정보는 기업 연결을 통해 제공되어 편집할 수 없습니다.",
+ "successMessage": "프로필이 업데이트되었습니다.",
+ "title": "프로필 업데이트"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "기기에서 로그아웃",
+ "title": "활성 기기"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "다시 시도",
+ "actionLabel__reauthorize": "지금 승인",
+ "destructiveActionTitle": "제거",
+ "primaryButton": "계정 연결",
+ "subtitle__reauthorize": "필요한 범위가 업데이트되어 기능이 제한될 수 있습니다. 문제를 피하기 위해 이 애플리케이션을 다시 승인하십시오.",
+ "title": "연결된 계정"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "계정 삭제",
+ "title": "계정 삭제"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "이메일 제거",
+ "detailsAction__nonPrimary": "기본 설정으로 설정",
+ "detailsAction__primary": "인증 완료",
+ "detailsAction__unverified": "인증",
+ "primaryButton": "이메일 주소 추가",
+ "title": "이메일 주소"
+ },
+ "enterpriseAccountsSection": {
+ "title": "기업 계정"
+ },
+ "headerTitle__account": "프로필 세부정보",
+ "headerTitle__security": "보안",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "재생성",
+ "headerTitle": "백업 코드",
+ "subtitle__regenerate": "새로운 안전한 백업 코드를 받으세요. 이전 백업 코드는 삭제되고 사용할 수 없습니다.",
+ "title__regenerate": "백업 코드 재생성"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "기본 설정으로 설정",
+ "destructiveActionLabel": "제거"
+ },
+ "primaryButton": "이중 인증 추가",
+ "title": "이중 인증",
+ "totp": {
+ "destructiveActionTitle": "제거",
+ "headerTitle": "인증 애플리케이션"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "제거",
+ "menuAction__rename": "이름 바꾸기",
+ "title": "패스키"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "비밀번호 설정",
+ "primaryButton__updatePassword": "비밀번호 업데이트",
+ "title": "비밀번호"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "전화번호 제거",
+ "detailsAction__nonPrimary": "기본 설정으로 설정",
+ "detailsAction__primary": "인증 완료",
+ "detailsAction__unverified": "전화번호 인증",
+ "primaryButton": "전화번호 추가",
+ "title": "전화번호"
+ },
+ "profileSection": {
+ "primaryButton": "프로필 업데이트",
+ "title": "프로필"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "사용자 이름 설정",
+ "primaryButton__updateUsername": "사용자 이름 업데이트",
+ "title": "사용자 이름"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "지갑 제거",
+ "primaryButton": "Web3 지갑",
+ "title": "Web3 지갑"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "사용자 이름이 업데이트되었습니다.",
+ "title__set": "사용자 이름 설정",
+ "title__update": "사용자 이름 업데이트"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}}이(가) 이 계정에서 제거됩니다.",
+ "messageLine2": "더 이상 이 Web3 지갑을 사용하여 로그인할 수 없습니다.",
+ "successMessage": "{{web3Wallet}}이(가) 계정에서 제거되었습니다.",
+ "title": "Web3 지갑 제거"
+ },
+ "subtitle__availableWallets": "계정에 연결할 Web3 지갑을 선택하세요.",
+ "subtitle__unavailableWallets": "사용 가능한 Web3 지갑이 없습니다.",
+ "successMessage": "지갑이 계정에 추가되었습니다.",
+ "title": "Web3 지갑 추가"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/common.json b/DigitalHumanWeb/locales/ko-KR/common.json
new file mode 100644
index 0000000..bb45541
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "소개",
+ "advanceSettings": "고급 설정",
+ "alert": {
+ "cloud": {
+ "action": "무료 체험",
+ "desc": "우리는 모든 등록 사용자에게 {{credit}}의 무료 계산 포인트를 제공합니다. 복잡한 설정이 필요 없이 즉시 사용할 수 있으며, 무제한 대화 기록 및 전역 클라우드 동기화를 지원합니다. 더 많은 고급 기능을 함께 탐험해 보세요.",
+ "descOnMobile": "모든 등록 사용자에게 {{credit}}의 무료 계산 포인트를 제공하며, 복잡한 설정 없이 즉시 사용할 수 있습니다.",
+ "title": "환영합니다 {{name}}"
+ }
+ },
+ "appInitializing": "앱 초기화 중...",
+ "autoGenerate": "자동 생성",
+ "autoGenerateTooltip": "힌트 단어를 기반으로 에이전트 설명을 자동으로 완성합니다",
+  "autoGenerateTooltipDisabled": "힌트 단어를 입력한 후 자동 완성 기능을 사용할 수 있습니다",
+ "back": "뒤로",
+ "batchDelete": "일괄 삭제",
+ "blog": "제품 블로그",
+ "cancel": "취소",
+ "changelog": "변경 로그",
+ "close": "닫기",
+ "contact": "연락처",
+ "copy": "복사",
+ "copyFail": "복사 실패",
+ "copySuccess": "복사 성공",
+ "dataStatistics": {
+ "messages": "메시지",
+ "sessions": "세션",
+ "today": "오늘",
+ "topics": "주제"
+ },
+ "defaultAgent": "기본 에이전트",
+ "defaultSession": "기본 세션",
+ "delete": "삭제",
+ "document": "사용 설명서",
+ "download": "다운로드",
+  "duplicate": "복사본 만들기",
+ "edit": "편집",
+ "export": "내보내기",
+ "exportType": {
+ "agent": "에이전트 설정 내보내기",
+ "agentWithMessage": "에이전트 및 메시지 내보내기",
+ "all": "전역 설정 및 모든 에이전트 데이터 내보내기",
+ "allAgent": "모든 에이전트 설정 내보내기",
+ "allAgentWithMessage": "모든 에이전트 및 메시지 내보내기",
+ "globalSetting": "전역 설정 내보내기"
+ },
+ "feedback": "피드백 및 제안",
+ "follow": "{{name}}에서 우리를 팔로우하세요",
+ "footer": {
+ "action": {
+ "feedback": "소중한 의견 공유",
+ "star": "GitHub에서 별표 추가"
+ },
+ "and": "및",
+ "feedback": {
+ "action": "피드백 공유",
+      "desc": "귀하의 모든 아이디어와 제안은 저희에게 소중합니다. 귀하의 의견을 듣고 싶습니다! 제품 기능 및 사용 경험에 대한 피드백으로 LobeChat이 더 나아지도록 도와주세요.",
+ "title": "GitHub에서 소중한 피드백 공유"
+ },
+ "later": "나중에",
+ "star": {
+ "action": "별표 표시",
+ "desc": "만약 당신이 우리 제품을 좋아하고 우리를 지원하고 싶다면, GitHub에서 우리에게 별표를 주실 수 있을까요? 이 작은 행동은 우리에게 큰 의미가 있으며, 지속적으로 특별한 경험을 제공할 수 있도록 우리를 격려할 수 있습니다.",
+ "title": "GitHub에서 우리에게 별표 표시"
+ },
+ "title": "우리 제품을 좋아하십니까?"
+ },
+ "fullscreen": "전체 화면",
+ "historyRange": "기록 범위",
+ "import": "가져오기",
+ "importModal": {
+ "error": {
+      "desc": "데이터 가져오기 중에 문제가 발생했습니다. 다시 시도하거나 <1>문제 제출</1>을 클릭하여 문제를 보고하면 우리가 즉시 도와드리겠습니다.",
+ "title": "데이터 가져오기 실패"
+ },
+ "finish": {
+ "onlySettings": "시스템 설정 가져오기 성공",
+ "start": "시작하기",
+ "subTitle": "데이터 가져오기 완료, 소요 시간 {{duration}} 초. 가져오기 세부 정보는 다음과 같습니다:",
+ "title": "데이터 가져오기 완료"
+ },
+ "loading": "데이터 가져오는 중입니다. 잠시 기다려주세요...",
+ "preparing": "데이터 가져오기 모듈 준비 중...",
+ "result": {
+ "added": "가져오기 성공",
+ "errors": "가져오기 오류",
+ "messages": "메시지",
+ "sessionGroups": "세션 그룹",
+ "sessions": "에이전트",
+ "skips": "중복 건너뛰기",
+ "topics": "주제",
+ "type": "데이터 유형"
+ },
+ "title": "데이터 가져오기",
+ "uploading": {
+ "desc": "현재 파일이 크기 때문에 업로드 중입니다...",
+ "restTime": "남은 시간",
+ "speed": "업로드 속도"
+ }
+ },
+ "information": "커뮤니티 및 정보",
+ "installPWA": "브라우저 앱 설치",
+ "lang": {
+ "ar": "아랍어",
+ "bg-BG": "불가리아어",
+ "bn": "벵골어",
+ "cs-CZ": "체코어",
+ "da-DK": "덴마크어",
+ "de-DE": "독일어",
+ "el-GR": "그리스어",
+ "en": "영어",
+ "en-US": "영어",
+ "es-ES": "스페인어",
+ "fi-FI": "핀란드어",
+ "fr-FR": "프랑스어",
+ "hi-IN": "힌디어",
+ "hu-HU": "헝가리어",
+ "id-ID": "인도네시아어",
+ "it-IT": "이탈리아어",
+ "ja-JP": "일본어",
+ "ko-KR": "한국어",
+ "nl-NL": "네덜란드어",
+ "no-NO": "노르웨이어",
+ "pl-PL": "폴란드어",
+ "pt-BR": "포르투갈어",
+ "pt-PT": "포르투갈어",
+ "ro-RO": "루마니아어",
+ "ru-RU": "러시아어",
+ "sk-SK": "슬로바키아어",
+ "sr-RS": "세르비아어",
+ "sv-SE": "스웨덴어",
+ "th-TH": "태국어",
+ "tr-TR": "터키어",
+ "uk-UA": "우크라이나어",
+ "vi-VN": "베트남어",
+ "zh": "중국어",
+ "zh-CN": "중국어(간체)",
+ "zh-TW": "중국어(번체)"
+ },
+ "layoutInitializing": "레이아웃을 불러오는 중...",
+ "legal": "법적 고지",
+ "loading": "로딩 중...",
+ "mail": {
+ "business": "비즈니스 협력",
+ "support": "이메일 지원"
+ },
+ "oauth": "SSO 로그인",
+ "officialSite": "공식 사이트",
+ "ok": "확인",
+ "password": "비밀번호",
+ "pin": "고정",
+ "pinOff": "고정 해제",
+ "privacy": "개인정보 보호 정책",
+ "regenerate": "재생성",
+ "rename": "이름 바꾸기",
+ "reset": "재설정",
+ "retry": "재시도",
+ "send": "보내기",
+ "setting": "설정",
+ "share": "공유",
+ "stop": "중지",
+ "sync": {
+ "actions": {
+ "settings": "동기화 설정",
+ "sync": "즉시 동기화"
+ },
+ "awareness": {
+ "current": "현재 장치"
+ },
+ "channel": "채널",
+ "disabled": {
+ "actions": {
+ "enable": "클라우드 동기화 활성화",
+ "settings": "동기화 설정 구성"
+ },
+ "desc": "현재 세션 데이터는 이 브라우저에만 저장됩니다. 여러 장치 간에 데이터를 동기화해야 하는 경우 클라우드 동기화를 구성하고 활성화하세요.",
+ "title": "데이터 동기화가 비활성화됨"
+ },
+ "enabled": {
+ "title": "데이터 동기화"
+ },
+ "status": {
+ "connecting": "연결 중",
+ "disabled": "동기화가 비활성화됨",
+ "ready": "연결됨",
+ "synced": "동기화됨",
+ "syncing": "동기화 중",
+ "unconnected": "연결 실패"
+ },
+ "title": "동기화 상태",
+ "unconnected": {
+ "tip": "시그널 서버 연결 실패로 인해 피어 투 피어 통신 채널을 설정할 수 없습니다. 네트워크를 확인한 후 다시 시도하세요."
+ }
+ },
+ "tab": {
+ "chat": "채팅",
+ "discover": "발견하기",
+ "files": "파일",
+ "me": "나",
+ "setting": "설정"
+ },
+ "telemetry": {
+ "allow": "허용",
+ "deny": "거부",
+ "desc": "우리는 익명으로 당신의 사용 정보를 수집하여 LobeChat을 개선하고 더 나은 제품 경험을 제공하기를 희망합니다. \"설정\" - \"정보\"에서 언제든지 비활성화할 수 있습니다.",
+ "learnMore": "더 알아보기",
+ "title": "LobeChat을 더 나아지게 하는 데 도와주세요"
+ },
+ "temp": "임시",
+ "terms": "이용 약관",
+ "updateAgent": "에이전트 정보 업데이트",
+ "upgradeVersion": {
+ "action": "업그레이드",
+ "hasNew": "사용 가능한 업데이트가 있습니다",
+ "newVersion": "새 버전 사용 가능: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "익명 사용자",
+ "billing": "결제 관리",
+ "cloud": "체험 {{name}}",
+ "data": "데이터 저장",
+ "defaultNickname": "커뮤니티 사용자",
+ "discord": "커뮤니티 지원",
+ "docs": "사용 설명서",
+ "email": "이메일 지원",
+ "feedback": "피드백 및 제안",
+ "help": "도움말 센터",
+ "moveGuide": "설정 버튼을 여기로 이동했습니다",
+ "plans": "요금제",
+ "preview": "미리보기",
+ "profile": "계정 관리",
+ "setting": "앱 설정",
+ "usages": "사용량 통계"
+ },
+ "version": "버전"
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/components.json b/DigitalHumanWeb/locales/ko-KR/components.json
new file mode 100644
index 0000000..bbbc458
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "여기에 파일을 드래그하여 여러 이미지를 업로드할 수 있습니다.",
+ "dragFileDesc": "여기에 이미지와 파일을 드래그하여 여러 이미지와 파일을 업로드할 수 있습니다.",
+ "dragFileTitle": "파일 업로드",
+ "dragTitle": "이미지 업로드"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "지식 베이스에 추가",
+ "addToOtherKnowledgeBase": "다른 지식 베이스에 추가",
+ "batchChunking": "배치 청크 분할",
+ "chunking": "청크 분할",
+ "chunkingTooltip": "파일을 여러 텍스트 블록으로 분할하고 벡터화한 후, 의미 검색 및 파일 대화에 사용할 수 있습니다.",
+ "confirmDelete": "해당 파일을 삭제하려고 합니다. 삭제 후에는 복구할 수 없으니, 작업을 확인해 주세요.",
+ "confirmDeleteMultiFiles": "선택한 {{count}} 개 파일을 삭제하려고 합니다. 삭제 후에는 복구할 수 없으니, 작업을 확인해 주세요.",
+ "confirmRemoveFromKnowledgeBase": "선택한 {{count}} 개 파일을 지식 베이스에서 제거하려고 합니다. 제거 후에도 파일은 모든 파일에서 볼 수 있으니, 작업을 확인해 주세요.",
+ "copyUrl": "링크 복사",
+ "copyUrlSuccess": "파일 주소가 성공적으로 복사되었습니다.",
+ "createChunkingTask": "준비 중...",
+ "deleteSuccess": "파일이 성공적으로 삭제되었습니다.",
+ "downloading": "파일 다운로드 중...",
+ "removeFromKnowledgeBase": "지식 베이스에서 제거",
+ "removeFromKnowledgeBaseSuccess": "파일이 성공적으로 제거되었습니다."
+ },
+ "bottom": "끝까지 도달했습니다.",
+ "config": {
+ "showFilesInKnowledgeBase": "지식 베이스의 내용 표시"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "파일 업로드",
+ "folder": "폴더 업로드",
+ "knowledgeBase": "새 지식 베이스 만들기"
+ },
+ "or": "또는",
+ "title": "파일 또는 폴더를 여기에 드래그하세요."
+ },
+ "title": {
+ "createdAt": "생성 시간",
+ "size": "크기",
+ "title": "파일"
+ },
+ "total": {
+ "fileCount": "총 {{count}} 항목",
+ "selectedCount": "선택된 {{count}} 항목"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "텍스트 블록이 완전히 벡터화되지 않았습니다. 이는 의미 검색 기능을 사용할 수 없게 만듭니다. 검색 품질을 향상시키기 위해 텍스트 블록을 벡터화해 주세요.",
+ "error": "벡터화 실패",
+ "errorResult": "벡터화에 실패했습니다. 다시 확인한 후 재시도해 주세요. 실패 원인:",
+ "processing": "텍스트 블록이 벡터화되고 있습니다. 잠시 기다려 주세요.",
+ "success": "현재 텍스트 블록이 모두 벡터화되었습니다."
+ },
+ "embeddings": "벡터화",
+ "status": {
+ "error": "청크 분할 실패",
+ "errorResult": "청크 분할에 실패했습니다. 다시 시도하기 전에 확인해 주세요. 실패 원인:",
+ "processing": "청크 분할 중",
+ "processingTip": "서버에서 텍스트 블록을 분할하고 있습니다. 페이지를 닫아도 분할 진행에는 영향을 미치지 않습니다."
+ }
+ }
+ },
+ "GoBack": {
+ "back": "뒤로 가기"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "사용자 정의 모델, 기본적으로 함수 호출 및 시각 인식을 모두 지원하며, 실제 기능을 확인하세요",
+ "file": "이 모델은 파일 업로드 및 인식을 지원합니다",
+ "functionCall": "이 모델은 함수 호출을 지원합니다",
+ "tokens": "이 모델은 단일 세션당 최대 {{tokens}} 토큰을 지원합니다",
+ "vision": "이 모델은 시각 인식을 지원합니다"
+ },
+ "removed": "모델이 목록에서 제거되었습니다. 선택이 취소되면 자동으로 제거됩니다."
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "활성화된 모델이 없습니다. 설정으로 이동하여 활성화하세요",
+ "provider": "제공자"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/discover.json b/DigitalHumanWeb/locales/ko-KR/discover.json
new file mode 100644
index 0000000..ba698e0
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "도우미 추가",
+ "addAgentAndConverse": "도우미 추가 및 대화하기",
+ "addAgentSuccess": "추가 성공",
+ "conversation": {
+ "l1": "안녕하세요, 저는 **{{name}}**입니다. 궁금한 점이 있으면 무엇이든 물어보세요. 최선을 다해 답변하겠습니다 ~",
+ "l2": "다음은 제 능력 소개입니다: ",
+ "l3": "대화를 시작해 볼까요!"
+ },
+ "description": "도우미 소개",
+ "detail": "상세 정보",
+ "list": "도우미 목록",
+ "more": "더 보기",
+ "plugins": "플러그인 통합",
+ "recentSubmits": "최근 업데이트",
+ "suggestions": "추천 항목",
+ "systemRole": "도우미 설정",
+ "try": "해보기"
+ },
+ "back": "발견으로 돌아가기",
+ "category": {
+ "assistant": {
+ "academic": "학술",
+ "all": "전체",
+ "career": "직업",
+ "copywriting": "카피라이팅",
+ "design": "디자인",
+ "education": "교육",
+ "emotions": "감정",
+ "entertainment": "엔터테인먼트",
+ "games": "게임",
+ "general": "일반",
+ "life": "생활",
+ "marketing": "마케팅",
+ "office": "사무",
+ "programming": "프로그래밍",
+ "translation": "번역"
+ },
+ "plugin": {
+ "all": "모두",
+ "gaming-entertainment": "게임 및 엔터테인먼트",
+ "life-style": "라이프스타일",
+ "media-generate": "미디어 생성",
+ "science-education": "과학 및 교육",
+ "social": "소셜 미디어",
+ "stocks-finance": "주식 및 금융",
+ "tools": "유용한 도구",
+ "web-search": "웹 검색"
+ }
+ },
+ "cleanFilter": "필터 지우기",
+ "create": "창작",
+ "createGuide": {
+ "func1": {
+ "desc1": "대화 창의 오른쪽 상단 설정을 통해 제출할 도우미의 설정 페이지로 이동하세요;",
+ "desc2": "오른쪽 상단의 도우미 마켓에 제출 버튼을 클릭하세요.",
+ "tag": "방법 1",
+ "title": "LobeChat을 통해 제출하기"
+ },
+ "func2": {
+ "button": "Github 도우미 저장소로 이동",
+ "desc": "도우미를 인덱스에 추가하려면 plugins 디렉토리에 agent-template.json 또는 agent-template-full.json을 사용하여 항목을 만들고, 간단한 설명을 작성한 후 적절히 태그를 추가하고, 풀 리퀘스트를 생성하세요.",
+ "tag": "방법 2",
+ "title": "Github을 통해 제출하기"
+ }
+ },
+ "dislike": "싫어요",
+ "filter": "필터",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "모든 저자",
+ "followed": "팔로우한 저자",
+ "title": "저자 범위"
+ },
+ "contentLength": "최소 맥락 길이",
+ "maxToken": {
+ "title": "최대 길이 설정 (Token)",
+ "unlimited": "무제한"
+ },
+ "other": {
+ "functionCall": "함수 호출 지원",
+ "title": "기타",
+ "vision": "비주얼 인식 지원",
+ "withKnowledge": "지식 베이스 포함",
+ "withTool": "플러그인 포함"
+ },
+ "pricing": "모델 가격",
+ "timePeriod": {
+ "all": "전체 시간",
+ "day": "최근 24시간",
+ "month": "최근 30일",
+ "title": "시간 범위",
+ "week": "최근 7일",
+ "year": "최근 1년"
+ }
+ },
+ "home": {
+ "featuredAssistants": "추천 도우미",
+ "featuredModels": "추천 모델",
+ "featuredProviders": "추천 모델 서비스 제공자",
+ "featuredTools": "추천 플러그인",
+ "more": "더 많은 발견"
+ },
+ "like": "좋아요",
+ "models": {
+ "chat": "대화 시작",
+ "contentLength": "최대 맥락 길이",
+ "free": "무료",
+ "guide": "설정 가이드",
+ "list": "모델 목록",
+ "more": "더 보기",
+ "parameterList": {
+ "defaultValue": "기본값",
+ "docs": "문서 보기",
+ "frequency_penalty": {
+ "desc": "이 설정은 모델이 입력에서 이미 나타난 특정 단어의 사용 빈도를 조정합니다. 높은 값은 이러한 반복의 가능성을 줄이고, 음수 값은 반대의 효과를 생성합니다. 단어 처벌은 출현 횟수에 따라 증가하지 않습니다. 음수 값은 단어의 반복 사용을 장려합니다.",
+ "title": "빈도 처벌"
+ },
+ "max_tokens": {
+ "desc": "이 설정은 모델이 한 번의 응답에서 생성할 수 있는 최대 길이를 정의합니다. 높은 값을 설정하면 모델이 더 긴 응답을 생성할 수 있으며, 낮은 값은 응답의 길이를 제한하여 더 간결하게 만듭니다. 다양한 응용 프로그램에 따라 이 값을 적절히 조정하면 예상되는 응답 길이와 세부 사항을 달성하는 데 도움이 될 수 있습니다.",
+ "title": "한 번의 응답 제한"
+ },
+ "presence_penalty": {
+ "desc": "이 설정은 입력에서 단어의 출현 빈도에 따라 단어의 반복 사용을 제어하는 데 목적이 있습니다. 입력에서 많이 나타나는 단어는 덜 사용하려고 하며, 사용 빈도는 출현 빈도와 비례합니다. 단어 처벌은 출현 횟수에 따라 증가합니다. 음수 값은 단어의 반복 사용을 장려합니다.",
+ "title": "주제 신선도"
+ },
+ "range": "범위",
+ "temperature": {
+ "desc": "이 설정은 모델 응답의 다양성에 영향을 미칩니다. 낮은 값은 더 예측 가능하고 전형적인 응답을 유도하며, 높은 값은 더 다양하고 드문 응답을 장려합니다. 값이 0으로 설정되면 모델은 주어진 입력에 대해 항상 동일한 응답을 제공합니다.",
+ "title": "무작위성"
+ },
+ "title": "모델 매개변수",
+ "top_p": {
+ "desc": "이 설정은 모델의 선택을 가능성이 가장 높은 특정 비율의 단어로 제한합니다: 누적 확률이 P에 도달하는 최상위 단어만 선택합니다. 낮은 값은 모델의 응답을 더 예측 가능하게 만들고, 기본 설정은 모델이 전체 범위의 단어에서 선택할 수 있도록 허용합니다.",
+ "title": "핵 샘플링"
+ },
+ "type": "유형"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat은 이 제공자에 대해 사용자 정의 API 키를 지원합니다.",
+ "input": "입력 가격",
+ "inputTooltip": "백만 개의 Token당 비용",
+ "latency": "지연",
+ "latencyTooltip": "제공자가 첫 번째 Token을 보내는 평균 응답 시간",
+ "maxOutput": "최대 출력 길이",
+ "maxOutputTooltip": "이 엔드포인트가 생성할 수 있는 최대 Token 수",
+ "officialTooltip": "LobeHub 공식 서비스",
+ "output": "출력 가격",
+ "outputTooltip": "백만 개의 Token당 비용",
+ "streamCancellationTooltip": "이 제공자는 스트림 취소 기능을 지원합니다.",
+ "throughput": "처리량",
+ "throughputTooltip": "스트림 요청당 초당 전송되는 평균 Token 수"
+ },
+ "suggestions": "관련 모델",
+ "supportedProviders": "이 모델을 지원하는 서비스 제공자"
+ },
+ "plugins": {
+ "community": "커뮤니티 플러그인",
+ "install": "플러그인 설치",
+ "installed": "설치됨",
+ "list": "플러그인 목록",
+ "meta": {
+ "description": "설명",
+ "parameter": "매개변수",
+ "title": "도구 매개변수",
+ "type": "유형"
+ },
+ "more": "더보기",
+ "official": "공식 플러그인",
+ "recentSubmits": "최근 업데이트",
+ "suggestions": "추천 항목"
+ },
+ "providers": {
+ "config": "서비스 제공자 설정",
+ "list": "모델 서비스 제공자 목록",
+ "modelCount": "{{count}} 개 모델",
+ "modelSite": "모델 문서",
+ "more": "더보기",
+ "officialSite": "공식 웹사이트",
+ "showAllModels": "모든 모델 보기",
+ "suggestions": "관련 서비스 제공자",
+ "supportedModels": "지원되는 모델"
+ },
+ "search": {
+ "placeholder": "이름, 소개 또는 키워드 검색...",
+ "result": "{{count}} 개의 {{keyword}}에 대한 검색 결과",
+ "searching": "검색 중..."
+ },
+ "sort": {
+ "mostLiked": "가장 좋아요",
+ "mostUsed": "가장 많이 사용됨",
+ "newest": "최신순",
+ "oldest": "오래된 순",
+ "recommended": "추천"
+ },
+ "tab": {
+ "assistants": "도우미",
+ "home": "홈",
+ "models": "모델",
+ "plugins": "플러그인",
+ "providers": "모델 제공자"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/error.json b/DigitalHumanWeb/locales/ko-KR/error.json
new file mode 100644
index 0000000..dfb815f
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "계속하기",
+ "desc": "{{greeting}}님, 다시 뵙게 되어 기쁩니다. 이전 대화를 이어나갈까요?",
+ "title": "다시 환영합니다, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "홈페이지로 돌아가기",
+ "desc": "잠시 후 다시 시도하거나 익숙한 세계로 돌아가세요",
+ "retry": "다시 시도",
+ "title": "페이지에서 문제가 발생했습니다."
+ },
+ "fetchError": "요청 실패",
+ "fetchErrorDetail": "오류 상세",
+ "notFound": {
+ "backHome": "홈페이지로 돌아가기",
+ "check": "URL이 올바른지 확인해 주세요.",
+ "desc": "찾고 있는 페이지를 찾을 수 없습니다.",
+ "title": "알 수 없는 영역에 들어갔나요?"
+ },
+ "pluginSettings": {
+ "desc": "다음 구성을 완료하면 플러그인을 사용할 수 있습니다.",
+ "title": "{{name}} 플러그인 설정"
+ },
+ "response": {
+ "400": "죄송합니다. 서버가 요청을 이해하지 못했습니다. 요청 매개변수가 올바른지 확인해주세요.",
+ "401": "죄송합니다. 서버가 요청을 거부했습니다. 권한이 부족하거나 유효한 인증 정보를 제공하지 않았을 수 있습니다.",
+ "403": "죄송합니다. 서버가 요청을 거부했습니다. 이 콘텐츠에 대한 액세스 권한이 없습니다.",
+ "404": "죄송합니다. 서버가 요청한 페이지나 리소스를 찾을 수 없습니다. URL이 올바른지 확인해주세요.",
+ "405": "죄송합니다. 서버가 사용한 요청 메서드를 지원하지 않습니다. 요청 메서드가 올바른지 확인해주세요.",
+ "406": "죄송합니다. 서버는 요청한 콘텐츠 특성에 따라 요청을 완료할 수 없습니다",
+ "407": "죄송합니다. 이 요청을 계속하려면 프록시 인증이 필요합니다",
+ "408": "죄송합니다. 서버가 요청을 대기하는 동안 시간이 초과되었습니다. 네트워크 연결을 확인한 후 다시 시도해 주세요",
+ "409": "죄송합니다. 충돌로 인해 요청을 처리할 수 없습니다. 이는 리소스 상태와 요청이 호환되지 않을 수 있습니다",
+ "410": "죄송합니다. 요청한 리소스가 영구적으로 제거되어 찾을 수 없습니다",
+ "411": "죄송합니다. 유효한 콘텐츠 길이를 포함하지 않는 요청을 서버가 처리할 수 없습니다",
+ "412": "죄송합니다. 요청이 서버 측 조건을 충족시키지 못하여 완료할 수 없습니다",
+ "413": "죄송합니다. 요청 데이터 양이 너무 많아 서버가 처리할 수 없습니다",
+ "414": "죄송합니다. 요청 URI가 너무 깁니다. 서버가 처리할 수 없습니다",
+ "415": "죄송합니다. 서버가 요청과 함께 제공된 미디어 형식을 처리할 수 없습니다",
+ "416": "죄송합니다. 서버가 요청한 범위를 충족시킬 수 없습니다",
+ "417": "죄송합니다. 서버가 귀하의 기대에 부응할 수 없습니다",
+ "422": "죄송합니다. 요청 형식이 올바르지만 의미 오류로 인해 응답할 수 없습니다",
+ "423": "죄송합니다. 요청한 리소스가 잠겨 있습니다",
+ "424": "죄송합니다. 이전 요청 실패로 현재 요청을 완료할 수 없습니다",
+ "426": "죄송합니다. 서버가 클라이언트를 더 높은 프로토콜 버전으로 업그레이드하도록 요구합니다",
+ "428": "죄송합니다. 서버가 선행 조건을 요구하여 요청이 올바른 조건 헤더를 포함해야 합니다",
+ "429": "죄송합니다. 요청이 너무 많아 서버가 조금 피곤한 상태입니다. 잠시 후에 다시 시도해 주세요",
+ "431": "죄송합니다. 요청 헤더 필드가 너무 크기 때문에 서버가 처리할 수 없습니다",
+ "451": "죄송합니다. 법적 이유로 인해 서버가 이 리소스를 제공하는 것을 거부합니다",
+ "500": "죄송합니다. 서버에 문제가 발생하여 요청을 완료할 수 없습니다. 잠시 후에 다시 시도해주세요.",
+ "502": "죄송합니다. 서버가 잠시 서비스를 제공할 수 없는 상태입니다. 잠시 후에 다시 시도해주세요.",
+ "503": "죄송합니다. 서버가 현재 요청을 처리할 수 없습니다. 과부하 또는 유지 보수 중일 수 있습니다. 잠시 후에 다시 시도해주세요.",
+ "504": "죄송합니다. 서버가 상위 서버의 응답을 기다리지 못했습니다. 잠시 후에 다시 시도해주세요.",
+ "AgentRuntimeError": "Lobe 언어 모델 실행 중 오류가 발생했습니다. 아래 정보를 확인하고 다시 시도하십시오.",
+ "FreePlanLimit": "현재 무료 사용자이므로 이 기능을 사용할 수 없습니다. 유료 요금제로 업그레이드한 후 계속 사용하십시오.",
+ "InvalidAccessCode": "액세스 코드가 잘못되었거나 비어 있습니다. 올바른 액세스 코드를 입력하거나 사용자 지정 API 키를 추가하십시오.",
+ "InvalidBedrockCredentials": "Bedrock 인증에 실패했습니다. AccessKeyId/SecretAccessKey를 확인한 후 다시 시도하십시오.",
+ "InvalidClerkUser": "죄송합니다. 현재 로그인되어 있지 않습니다. 계속하려면 먼저 로그인하거나 계정을 등록해주세요.",
+ "InvalidGithubToken": "Github 개인 액세스 토큰이 올바르지 않거나 비어 있습니다. Github 개인 액세스 토큰을 확인한 후 다시 시도해 주십시오.",
+ "InvalidOllamaArgs": "Ollama 구성이 잘못되었습니다. Ollama 구성을 확인한 후 다시 시도하십시오.",
+ "InvalidProviderAPIKey": "{{provider}} API 키가 잘못되었거나 비어 있습니다. {{provider}} API 키를 확인하고 다시 시도하십시오.",
+ "LocationNotSupportError": "죄송합니다. 귀하의 현재 위치는 해당 모델 서비스를 지원하지 않습니다. 지역 제한 또는 서비스 미개통으로 인한 것일 수 있습니다. 현재 위치가 해당 서비스를 지원하는지 확인하거나 다른 위치 정보를 사용해 보십시오.",
+ "NoOpenAIAPIKey": "OpenAI API 키가 비어 있습니다. 사용자 정의 OpenAI API 키를 추가해주세요.",
+ "OllamaBizError": "Ollama 서비스 요청 중 오류가 발생했습니다. 아래 정보를 확인하고 다시 시도하십시오.",
+ "OllamaServiceUnavailable": "Ollama 서비스를 사용할 수 없습니다. Ollama가 올바르게 작동하는지 또는 Ollama의 교차 도메인 구성이 올바르게 설정되었는지 확인하십시오.",
+ "OpenAIBizError": "OpenAI 서비스 요청 중 오류가 발생했습니다. 아래 정보를 확인하고 다시 시도해주세요.",
+ "PluginApiNotFound": "죄송합니다. 플러그인 설명서에 해당 API가 없습니다. 요청 메서드와 플러그인 설명서 API가 일치하는지 확인해주세요.",
+ "PluginApiParamsError": "죄송합니다. 플러그인 요청의 입력 매개변수 유효성 검사에 실패했습니다. 입력 매개변수와 API 설명 정보가 일치하는지 확인해주세요.",
+ "PluginFailToTransformArguments": "죄송합니다. 플러그인 호출 인수 변환에 실패했습니다. 도우미 메시지를 다시 생성하거나 더 강력한 AI 모델로 Tools Calling 능력을 변경한 후 다시 시도해주세요.",
+ "PluginGatewayError": "죄송합니다. 플러그인 게이트웨이에 오류가 발생했습니다. 플러그인 게이트웨이 구성을 확인해주세요.",
+ "PluginManifestInvalid": "죄송합니다. 해당 플러그인의 설명서 유효성 검사에 실패했습니다. 설명서 형식이 올바른지 확인해주세요.",
+ "PluginManifestNotFound": "죄송합니다. 서버에서 해당 플러그인의 설명서 (manifest.json)를 찾을 수 없습니다. 플러그인 설명 파일 주소가 올바른지 확인해주세요.",
+ "PluginMarketIndexInvalid": "죄송합니다. 플러그인 인덱스 유효성 검사에 실패했습니다. 인덱스 파일 형식이 올바른지 확인해주세요.",
+ "PluginMarketIndexNotFound": "죄송합니다. 서버에서 플러그인 인덱스를 찾을 수 없습니다. 인덱스 주소가 올바른지 확인해주세요.",
+ "PluginMetaInvalid": "죄송합니다. 해당 플러그인의 메타 정보 유효성 검사에 실패했습니다. 플러그인 메타 정보 형식이 올바른지 확인해주세요.",
+ "PluginMetaNotFound": "죄송합니다. 인덱스에서 해당 플러그인을 찾을 수 없습니다. 플러그인의 구성 정보를 인덱스에서 확인해주세요.",
+ "PluginOpenApiInitError": "죄송합니다. OpenAPI 클라이언트 초기화에 실패했습니다. OpenAPI 구성 정보를 확인해주세요.",
+ "PluginServerError": "플러그인 서버 요청이 오류로 반환되었습니다. 플러그인 설명 파일, 플러그인 구성 또는 서버 구현을 확인해주세요.",
+ "PluginSettingsInvalid": "플러그인을 사용하려면 올바른 구성이 필요합니다. 구성이 올바른지 확인해주세요.",
+ "ProviderBizError": "요청한 {{provider}} 서비스에서 오류가 발생했습니다. 아래 정보를 확인하고 다시 시도해주세요.",
+ "StreamChunkError": "스트리밍 요청의 메시지 블록 구문 분석 오류입니다. 현재 API 인터페이스가 표준 규격에 부합하는지 확인하거나 API 공급자에게 문의하십시오.",
+ "SubscriptionPlanLimit": "구독 한도를 모두 사용했으므로 이 기능을 사용할 수 없습니다. 더 높은 요금제로 업그레이드하거나 리소스 패키지를 구매하여 계속 사용하십시오.",
+ "UnknownChatFetchError": "죄송합니다. 알 수 없는 요청 오류가 발생했습니다. 아래 정보를 참고하여 문제를 해결하거나 다시 시도해 주세요."
+ },
+ "stt": {
+ "responseError": "서비스 요청이 실패했습니다. 구성을 확인하거나 다시 시도해주세요."
+ },
+ "tts": {
+ "responseError": "서비스 요청이 실패했습니다. 구성을 확인하거나 다시 시도해주세요."
+ },
+ "unlock": {
+ "addProxyUrl": "OpenAI 프록시 주소 추가(선택 사항)",
+ "apiKey": {
+ "description": "{{name}} API 키를 입력하면 세션을 시작할 수 있습니다.",
+ "title": "사용자 정의 {{name}} API 키 사용"
+ },
+ "closeMessage": "알림 닫기",
+ "confirm": "확인 및 다시 시도",
+ "oauth": {
+ "description": "관리자가 통합 로그인 인증을 활성화했습니다. 아래 버튼을 클릭하여 로그인하면 앱을 잠금 해제할 수 있습니다.",
+ "success": "로그인 성공",
+ "title": "계정 로그인",
+ "welcome": "환영합니다!"
+ },
+ "password": {
+ "description": "관리자가 애플리케이션 암호화를 활성화했습니다. 애플리케이션을 잠금 해제하려면 애플리케이션 비밀번호를 입력하십시오. 비밀번호는 한 번만 입력하면 됩니다.",
+ "placeholder": "비밀번호를 입력하세요",
+ "title": "암호 입력하여 애플리케이션 잠금 해제"
+ },
+ "tabs": {
+ "apiKey": "사용자 정의 API Key",
+ "password": "비밀번호"
+ }
+ },
+ "upload": {
+ "desc": "상세 내용: {{detail}}",
+ "fileOnlySupportInServerMode": "현재 배포 모드에서는 이미지 파일이 아닌 파일 업로드를 지원하지 않습니다. {{ext}} 형식을 업로드하려면 서버 데이터베이스 배포로 전환하거나 {{cloud}} 서비스를 사용하세요.",
+ "networkError": "네트워크가 정상인지 확인하고 파일 저장 서비스의 교차 출처 구성도 올바른지 확인하세요.",
+ "title": "파일 업로드 실패, 네트워크 연결을 확인하거나 나중에 다시 시도해주세요",
+ "unknownError": "오류 원인: {{reason}}",
+ "uploadFailed": "파일 업로드에 실패했습니다."
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/file.json b/DigitalHumanWeb/locales/ko-KR/file.json
new file mode 100644
index 0000000..0dbbe35
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "파일 및 지식 베이스 관리",
+ "detail": {
+ "basic": {
+ "createdAt": "생성 시간",
+ "filename": "파일 이름",
+ "size": "파일 크기",
+ "title": "기본 정보",
+ "type": "형식",
+ "updatedAt": "업데이트 시간"
+ },
+ "data": {
+ "chunkCount": "청크 수",
+ "embedding": {
+ "default": "벡터화되지 않음",
+ "error": "실패",
+ "pending": "시작 대기 중",
+ "processing": "처리 중",
+ "success": "완료"
+ },
+ "embeddingStatus": "벡터화"
+ }
+ },
+ "empty": "업로드된 파일/폴더가 없습니다.",
+ "header": {
+ "actions": {
+ "newFolder": "새 폴더 만들기",
+ "uploadFile": "파일 업로드",
+ "uploadFolder": "폴더 업로드"
+ },
+ "uploadButton": "업로드"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "해당 지식 베이스를 삭제하면 파일은 삭제되지 않고 모든 파일로 이동됩니다. 지식 베이스 삭제 후에는 복구할 수 없으니 신중하게 작업하세요.",
+ "empty": "지식 베이스 생성을 시작하려면 <1>+</1>를 클릭하세요."
+ },
+ "new": "새 지식 베이스 만들기",
+ "title": "지식 베이스"
+ },
+ "networkError": "지식베이스를 가져오는 데 실패했습니다. 네트워크 연결을 확인한 후 다시 시도해 주세요.",
+ "notSupportGuide": {
+ "desc": "현재 배포된 인스턴스는 클라이언트 데이터베이스 모드로, 파일 관리 기능을 사용할 수 없습니다. <1>서버 데이터베이스 배포 모드</1>로 전환하거나 직접 <3>LobeChat Cloud</3>를 사용하세요.",
+ "features": {
+ "allKind": {
+ "desc": "Word, PPT, Excel, PDF, TXT 등 일반 문서 형식과 JS, Python 등 주요 코드 파일을 포함한 다양한 파일 형식을 지원합니다.",
+ "title": "다양한 파일 형식 분석"
+ },
+ "embeddings": {
+ "desc": "고성능 벡터 모델을 사용하여 텍스트 청크를 벡터화하여 파일 내용의 의미 기반 검색을 구현합니다.",
+ "title": "벡터 의미화"
+ },
+ "repos": {
+ "desc": "지식 베이스를 생성하고 다양한 유형의 파일을 추가하여 나만의 분야 지식을 구축할 수 있습니다.",
+ "title": "지식 베이스"
+ }
+ },
+ "title": "현재 배포 모드는 파일 관리를 지원하지 않습니다."
+ },
+ "preview": {
+ "downloadFile": "파일 다운로드",
+ "unsupportedFileAndContact": "이 파일 형식은 온라인 미리보기를 지원하지 않습니다. 미리보기가 필요하신 경우, <1>저희에게 피드백을 주시기 바랍니다</1>."
+ },
+ "searchFilePlaceholder": "파일 검색",
+ "tab": {
+ "all": "모든 파일",
+ "audios": "음성",
+ "documents": "문서",
+ "images": "이미지",
+ "videos": "비디오",
+ "websites": "웹사이트"
+ },
+ "title": "파일",
+ "uploadDock": {
+ "body": {
+ "collapse": "접기",
+ "item": {
+ "done": "업로드 완료",
+ "error": "업로드 실패, 다시 시도하세요.",
+ "pending": "업로드 준비 중...",
+ "processing": "파일 처리 중...",
+ "restTime": "남은 시간 {{time}}"
+ }
+ },
+ "totalCount": "총 {{count}} 항목",
+ "uploadStatus": {
+ "error": "업로드 오류",
+ "pending": "업로드 대기 중",
+ "processing": "업로드 중",
+ "success": "업로드 완료",
+ "uploading": "업로드 진행 중"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/knowledgeBase.json b/DigitalHumanWeb/locales/ko-KR/knowledgeBase.json
new file mode 100644
index 0000000..075b956
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "파일이 성공적으로 추가되었습니다. <1>즉시 확인하기</1>",
+ "confirm": "추가",
+ "id": {
+ "placeholder": "추가할 지식베이스를 선택하세요",
+ "required": "지식베이스를 선택하세요",
+ "title": "대상 지식베이스"
+ },
+ "title": "지식베이스에 추가",
+ "totalFiles": "총 {{count}} 개의 파일이 선택되었습니다"
+ },
+ "createNew": {
+ "confirm": "새로 만들기",
+ "description": {
+ "placeholder": "지식베이스 소개 (선택 사항)"
+ },
+ "formTitle": "기본 정보",
+ "name": {
+ "placeholder": "지식베이스 이름",
+ "required": "지식베이스 이름을 입력하세요"
+ },
+ "title": "새 지식베이스 만들기"
+ },
+ "tab": {
+ "evals": "평가",
+ "files": "문서",
+ "settings": "설정",
+ "testing": "리콜 테스트"
+ },
+ "title": "지식베이스"
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/market.json b/DigitalHumanWeb/locales/ko-KR/market.json
new file mode 100644
index 0000000..8b4abcb
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "보조 프로그램 추가",
+ "addAgentAndConverse": "에이전트 추가 및 대화",
+ "addAgentSuccess": "추가 성공",
+ "guide": {
+ "func1": {
+ "desc1": "세션 창에서 오른쪽 상단 설정으로 이동하여 도우미를 제출할 설정 페이지로 이동합니다.",
+ "desc2": "도우미 마켓에 제출 버튼을 클릭합니다.",
+ "tag": "방법 1",
+ "title": "LobeChat을 통해 제출하기"
+ },
+ "func2": {
+ "button": "Github 도우미 저장소로 이동",
+ "desc": "도우미를 색인에 추가하려면 agent-template.json 또는 agent-template-full.json을 사용하여 plugins 디렉토리에 항목을 작성하고 간단한 설명과 적절한 태그를 추가한 다음 풀 리퀘스트를 생성하십시오.",
+ "tag": "방법 2",
+ "title": "Github을 통해 제출하기"
+ }
+ },
+ "search": {
+ "placeholder": "보조 프로그램 이름, 설명 또는 키워드 검색..."
+ },
+ "sidebar": {
+ "comment": "의견",
+ "prompt": "프롬프트",
+ "title": "보조 프로그램 세부 정보"
+ },
+ "submitAgent": "보조 프로그램 제출",
+ "title": {
+ "allAgents": "모든 보조 프로그램",
+ "recentSubmits": "최근 추가"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/metadata.json b/DigitalHumanWeb/locales/ko-KR/metadata.json
new file mode 100644
index 0000000..6cd12c3
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}}가 제공하는 최고의 ChatGPT, Claude, Gemini, OLLaMA WebUI 사용 경험",
+ "title": "{{appName}}: 개인 AI 효율 도구, 더 똑똑한 두뇌를 위한 선택"
+ },
+ "discover": {
+ "assistants": {
+ "description": "콘텐츠 제작, 카피라이팅, Q&A, 이미지 생성, 비디오 생성, 음성 생성, 스마트 에이전트, 자동화 워크플로우, 나만의 AI / GPTs / OLLaMA 스마트 어시스턴트를 맞춤 설정하세요.",
+ "title": "AI 도우미"
+ },
+ "description": "콘텐츠 제작, 카피라이팅, Q&A, 이미지 생성, 비디오 생성, 음성 생성, 스마트 에이전트, 자동화 워크플로우, 맞춤형 AI 애플리케이션, 나만의 AI 애플리케이션 작업 공간을 맞춤 설정하세요.",
+ "models": {
+ "description": "주요 AI 모델 탐색: OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "AI 모델"
+ },
+ "plugins": {
+ "description": "차트 생성, 학술, 이미지 생성, 비디오 생성, 음성 생성, 자동화 워크플로우를 검색하여 당신의 도우미에 풍부한 플러그인 기능을 통합하세요.",
+ "title": "AI 플러그인"
+ },
+ "providers": {
+ "description": "주요 모델 공급업체 탐색: OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "AI 모델 서비스 제공자"
+ },
+ "search": "검색",
+ "title": "발견"
+ },
+ "plugins": {
+ "description": "검색, 차트 생성, 학술, 이미지 생성, 비디오 생성, 음성 생성, 자동화 워크플로우, ChatGPT / Claude 전용 ToolCall 플러그인 기능을 맞춤 설정하세요",
+ "title": "플러그인 마켓"
+ },
+ "welcome": {
+ "description": "{{appName}}가 제공하는 최고의 ChatGPT, Claude, Gemini, OLLaMA WebUI 사용 경험",
+ "title": "{{appName}}에 오신 것을 환영합니다: 개인 AI 효율 도구, 더 똑똑한 두뇌를 위한 선택"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/migration.json b/DigitalHumanWeb/locales/ko-KR/migration.json
new file mode 100644
index 0000000..3731212
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "로컬 데이터 지우기",
+ "downloadBackup": "데이터 백업 다운로드",
+ "reUpgrade": "재설치",
+ "start": "시작",
+ "upgrade": "원 클릭 업그레이드"
+ },
+ "clear": {
+ "confirm": "로컬 데이터를 지우려고 합니다(전역 설정은 영향을 받지 않습니다). 데이터 백업을 이미 다운로드했는지 확인하세요."
+ },
+ "description": "{{appName}}의 데이터 저장소는 새로운 버전에서 큰 도약을 이루었습니다. 따라서 우리는 구버전 데이터를 업그레이드하여 더 나은 사용자 경험을 제공하고자 합니다.",
+ "features": {
+ "capability": {
+ "desc": "IndexedDB 기술을 기반으로 하여, 당신의 평생 대화 메시지를 저장할 수 있습니다.",
+ "title": "대용량"
+ },
+ "performance": {
+ "desc": "백만 개의 메시지가 자동으로 인덱싱되어, 검색 쿼리에 밀리초 단위로 응답합니다.",
+ "title": "고성능"
+ },
+ "use": {
+ "desc": "제목, 설명, 태그, 메시지 내용 및 번역 텍스트 검색을 지원하여 일상적인 검색 효율이 크게 향상되었습니다.",
+ "title": "더욱 사용하기 쉬움"
+ }
+ },
+ "title": "{{appName}} 데이터 진화",
+ "upgrade": {
+ "error": {
+ "subTitle": "죄송합니다. 데이터베이스 업그레이드 과정에서 문제가 발생했습니다. 다음 방법을 시도해 보세요: A. 로컬 데이터를 지운 후 백업 데이터를 다시 가져오기; B. '다시 업그레이드' 버튼을 클릭하세요. 여전히 문제가 발생하면 <1>문제를 제출</1>해 주시면, 저희가 신속하게 문제를 해결해 드리겠습니다.",
+ "title": "데이터베이스 업그레이드 실패"
+ },
+ "success": {
+ "subTitle": "{{appName}}의 데이터베이스가 최신 버전으로 업그레이드되었습니다. 지금 바로 경험해 보세요.",
+ "title": "데이터베이스 업그레이드 성공"
+ }
+ },
+ "upgradeTip": "업그레이드는 대략 10~20초가 소요되며, 업그레이드 과정에서 {{appName}}를 닫지 마세요."
+ },
+ "migrateError": {
+ "missVersion": "데이터 가져오기에 버전 번호가 누락되었습니다. 파일을 확인한 후 다시 시도하십시오.",
+ "noMigration": "현재 버전에 해당하는 마이그레이션 솔루션이 없습니다. 버전 번호를 확인한 후 다시 시도하세요. 계속 문제가 발생하면 문제를 제출하여 피드백을 받으세요"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/modelProvider.json b/DigitalHumanWeb/locales/ko-KR/modelProvider.json
new file mode 100644
index 0000000..39ffd89
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Azure의 API 버전은 YYYY-MM-DD 형식을 따릅니다. [최신 버전 확인](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "목록 가져오기",
+ "title": "Azure API 버전"
+ },
+ "empty": "모델 ID를 입력하여 첫 번째 모델을 추가하세요.",
+ "endpoint": {
+ "desc": "Azure 포털에서 리소스를 확인할 때 '키 및 엔드포인트' 섹션에서 이 값을 찾을 수 있습니다.",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Azure API 주소"
+ },
+ "modelListPlaceholder": "배포한 OpenAI 모델을 선택하거나 추가하세요.",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Azure 포털에서 리소스를 확인할 때 '키 및 엔드포인트' 섹션에서 이 값을 찾을 수 있습니다. KEY1 또는 KEY2를 사용할 수 있습니다.",
+ "placeholder": "Azure API 키",
+ "title": "API 키"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "AWS 액세스 키 ID를 입력하세요.",
+ "placeholder": "AWS 액세스 키 ID",
+ "title": "AWS 액세스 키 ID"
+ },
+ "checker": {
+ "desc": "AccessKeyId / SecretAccessKey를 올바르게 입력했는지 테스트합니다."
+ },
+ "region": {
+ "desc": "AWS 지역을 입력하세요.",
+ "placeholder": "AWS 지역",
+ "title": "AWS 지역"
+ },
+ "secretAccessKey": {
+ "desc": "AWS 비밀 액세스 키를 입력하세요.",
+ "placeholder": "AWS 비밀 액세스 키",
+ "title": "AWS 비밀 액세스 키"
+ },
+ "sessionToken": {
+ "desc": "AWS SSO/STS를 사용 중이라면 AWS 세션 토큰을 입력하세요.",
+ "placeholder": "AWS 세션 토큰",
+ "title": "AWS 세션 토큰 (선택 사항)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "사용자 정의 서비스 지역",
+ "customSessionToken": "사용자 정의 세션 토큰",
+ "description": "AWS AccessKeyId / SecretAccessKey를 입력하면 세션이 시작됩니다. 애플리케이션은 인증 구성을 기록하지 않습니다.",
+ "title": "사용자 정의 Bedrock 인증 정보 사용"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "당신의 Github PAT를 입력하세요. [여기](https://github.com/settings/tokens)를 클릭하여 생성하세요.",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "프록시 주소가 올바르게 입력되었는지 테스트합니다",
+ "title": "연결성 검사"
+ },
+ "customModelName": {
+ "desc": "사용자 정의 모델을 추가하려면 쉼표(,)로 구분하여 여러 모델을 입력하세요",
+ "placeholder": "vicuna,llama,codellama,llama2:13b-text",
+ "title": "사용자 정의 모델 이름"
+ },
+ "download": {
+ "desc": "Ollama가 모델을 다운로드하고 있습니다. 이 페이지를 닫지 마세요. 다시 다운로드하면 중단된 지점부터 이어서 진행됩니다.",
+ "remainingTime": "남은 시간",
+ "speed": "다운로드 속도",
+ "title": "모델 {{model}} 다운로드 중"
+ },
+ "endpoint": {
+ "desc": "Ollama 인터페이스 프록시 주소를 입력하세요. 로컬에서 별도로 지정하지 않은 경우 비워둘 수 있습니다",
+ "title": "인터페이스 프록시 주소"
+ },
+ "setup": {
+ "cors": {
+ "description": "브라우저 보안 제한으로 인해 Ollama를 사용하려면 CORS 구성이 필요합니다.",
+ "linux": {
+ "env": "[Service] 섹션에 `Environment`를 추가하고 OLLAMA_ORIGINS 환경 변수를 추가하십시오:",
+ "reboot": "systemd를 다시로드하고 Ollama를 다시 시작하십시오.",
+ "systemd": "systemd를 호출하여 ollama 서비스를 편집하십시오: "
+ },
+ "macos": "「터미널」앱을 열고 다음 명령을 붙여넣고 Enter를 눌러 실행하십시오.",
+ "reboot": "작업을 완료한 후 Ollama 서비스를 다시 시작하십시오.",
+ "title": "CORS 액세스를 허용하도록 Ollama 구성",
+ "windows": "Windows에서는 '제어판'을 클릭하여 시스템 환경 변수를 편집하십시오. 사용자 계정에 'OLLAMA_ORIGINS'이라는 환경 변수를 만들고 값으로 *을 입력한 후 '확인/적용'을 클릭하여 저장하십시오."
+ },
+ "install": {
+ "description": "Ollama가 활성화되어 있는지 확인하고, Ollama를 다운로드하지 않았다면 공식 웹사이트<1>에서 다운로드</1>하십시오.",
+ "docker": "Docker를 사용하는 것을 선호하는 경우 Ollama는 공식 Docker 이미지도 제공하며 다음 명령을 사용하여 가져올 수 있습니다:",
+ "linux": {
+ "command": "다음 명령을 사용하여 설치하십시오:",
+ "manual": "또는 <1>Linux 수동 설치 안내</1>를 참조하여 직접 설치할 수도 있습니다."
+ },
+ "title": "로컬에서 Ollama 애플리케이션을 설치하고 시작하십시오",
+ "windowsTab": "Windows (미리보기판)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "다운로드 취소",
+ "confirm": "다운로드",
+ "description": "세션을 계속하려면 Ollama 모델 태그를 입력하세요",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "다운로드 시작 중...",
+ "title": "지정된 Ollama 모델 다운로드"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Zero One All Things"
+ },
+ "zhipu": {
+ "title": "지푸"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/models.json b/DigitalHumanWeb/locales/ko-KR/models.json
new file mode 100644
index 0000000..cf0fd1a
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B는 풍부한 훈련 샘플을 통해 산업 응용에서 우수한 성능을 제공합니다."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B는 16K 토큰을 지원하며, 효율적이고 매끄러운 언어 생성 능력을 제공합니다."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro는 360 AI 모델 시리즈의 중요한 구성원으로, 다양한 자연어 응용 시나리오에 맞춘 효율적인 텍스트 처리 능력을 갖추고 있으며, 긴 텍스트 이해 및 다중 회화 기능을 지원합니다."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo는 강력한 계산 및 대화 능력을 제공하며, 뛰어난 의미 이해 및 생성 효율성을 갖추고 있어 기업 및 개발자에게 이상적인 스마트 어시스턴트 솔루션입니다."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K는 의미 안전성과 책임 지향성을 강조하며, 콘텐츠 안전에 대한 높은 요구가 있는 응용 시나리오를 위해 설계되어 사용자 경험의 정확성과 안정성을 보장합니다."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro는 360 회사에서 출시한 고급 자연어 처리 모델로, 뛰어난 텍스트 생성 및 이해 능력을 갖추고 있으며, 특히 생성 및 창작 분야에서 뛰어난 성능을 발휘하여 복잡한 언어 변환 및 역할 연기 작업을 처리할 수 있습니다."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra는 스타크 대형 모델 시리즈 중 가장 강력한 버전으로, 업그레이드된 네트워크 검색 링크와 함께 텍스트 내용의 이해 및 요약 능력을 향상시킵니다. 사무 생산성을 높이고 정확한 요구에 응답하기 위한 종합 솔루션으로, 업계를 선도하는 스마트 제품입니다."
+ },
+ "Baichuan2-Turbo": {
+ "description": "검색 강화 기술을 통해 대형 모델과 분야 지식, 전 세계 지식의 완전한 연결을 실현합니다. PDF, Word 등 다양한 문서 업로드 및 웹사이트 입력을 지원하며, 정보 획득이 신속하고 포괄적이며, 출력 결과가 정확하고 전문적입니다."
+ },
+ "Baichuan3-Turbo": {
+ "description": "기업의 고빈도 시나리오에 최적화되어 효과가 크게 향상되었으며, 높은 비용 효율성을 자랑합니다. Baichuan2 모델에 비해 콘텐츠 창작이 20%, 지식 질문 응답이 17%, 역할 수행 능력이 40% 향상되었습니다. 전체적인 성능은 GPT3.5보다 우수합니다."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "128K 초장기 컨텍스트 창을 갖추고 있으며, 기업의 고빈도 시나리오에 최적화되어 효과가 크게 향상되었으며, 높은 비용 효율성을 자랑합니다. Baichuan2 모델에 비해 콘텐츠 창작이 20%, 지식 질문 응답이 17%, 역할 수행 능력이 40% 향상되었습니다. 전체적인 성능은 GPT3.5보다 우수합니다."
+ },
+ "Baichuan4": {
+ "description": "모델 능력 국내 1위로, 지식 백과, 긴 텍스트, 생성 창작 등 중국어 작업에서 해외 주류 모델을 초월합니다. 또한 업계 선도적인 다중 모달 능력을 갖추고 있으며, 여러 권위 있는 평가 기준에서 우수한 성과를 보입니다."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B)는 혁신적인 모델로, 다양한 분야의 응용과 복잡한 작업에 적합합니다."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K는 대규모 컨텍스트 처리 능력을 갖추고 있으며, 더 강력한 컨텍스트 이해 및 논리 추론 능력을 제공합니다. 32K 토큰의 텍스트 입력을 지원하며, 긴 문서 읽기, 개인 지식 질문 응답 등 다양한 상황에 적합합니다."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO는 뛰어난 창의적 경험을 제공하기 위해 설계된 고도로 유연한 다중 모델 통합입니다."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B)는 고정밀 지시 모델로, 복잡한 계산에 적합합니다."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B)는 최적화된 언어 출력과 다양한 응용 가능성을 제공합니다."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Phi-3-mini 모델의 새로 고침 버전입니다."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "같은 Phi-3-medium 모델이지만 RAG 또는 몇 가지 샷 프롬프트를 위한 더 큰 컨텍스트 크기를 가지고 있습니다."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "14B 매개변수 모델로, Phi-3-mini보다 더 나은 품질을 제공하며, 고품질의 추론 밀집 데이터에 중점을 두고 있습니다."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "같은 Phi-3-mini 모델이지만 RAG 또는 몇 가지 샷 프롬프트를 위한 더 큰 컨텍스트 크기를 가지고 있습니다."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "Phi-3 가족의 가장 작은 구성원으로, 품질과 낮은 대기 시간 모두에 최적화되어 있습니다."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "같은 Phi-3-small 모델이지만 RAG 또는 몇 가지 샷 프롬프트를 위한 더 큰 컨텍스트 크기를 가지고 있습니다."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "7B 매개변수 모델로, Phi-3-mini보다 더 나은 품질을 제공하며, 고품질의 추론 밀집 데이터에 중점을 두고 있습니다."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K는 초대형 컨텍스트 처리 능력을 갖추고 있으며, 최대 128K의 컨텍스트 정보를 처리할 수 있어, 특히 전체 분석 및 장기 논리 연관 처리가 필요한 긴 문서 콘텐츠에 적합합니다. 복잡한 텍스트 커뮤니케이션에서 매끄럽고 일관된 논리와 다양한 인용 지원을 제공합니다."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Qwen2의 테스트 버전인 Qwen1.5는 대규모 데이터를 사용하여 더 정밀한 대화 기능을 구현하였습니다."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B)는 빠른 응답과 자연스러운 대화 능력을 제공하며, 다국어 환경에 적합합니다."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2는 다양한 지시 유형을 지원하는 고급 범용 언어 모델입니다."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5는 지시형 작업 처리를 최적화하기 위해 설계된 새로운 대형 언어 모델 시리즈입니다."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5는 지시형 작업 처리를 최적화하기 위해 설계된 새로운 대형 언어 모델 시리즈입니다."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5는 더 강력한 이해 및 생성 능력을 가진 새로운 대형 언어 모델 시리즈입니다."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5는 지시형 작업 처리를 최적화하기 위해 설계된 새로운 대형 언어 모델 시리즈입니다."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder는 코드 작성을 전문으로 합니다."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math는 수학 분야의 문제 해결에 중점을 두고 있으며, 고난이도 문제에 대한 전문적인 해답을 제공합니다."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B 오픈 소스 버전으로, 대화 응용을 위한 최적화된 대화 경험을 제공합니다."
+ },
+ "abab5.5-chat": {
+ "description": "생산성 시나리오를 위해 설계되었으며, 복잡한 작업 처리 및 효율적인 텍스트 생성을 지원하여 전문 분야 응용에 적합합니다."
+ },
+ "abab5.5s-chat": {
+ "description": "중국어 캐릭터 대화 시나리오를 위해 설계되었으며, 고품질의 중국어 대화 생성 능력을 제공하여 다양한 응용 시나리오에 적합합니다."
+ },
+ "abab6.5g-chat": {
+ "description": "다국어 캐릭터 대화를 위해 설계되었으며, 영어 및 기타 여러 언어의 고품질 대화 생성을 지원합니다."
+ },
+ "abab6.5s-chat": {
+ "description": "텍스트 생성, 대화 시스템 등 다양한 자연어 처리 작업에 적합합니다."
+ },
+ "abab6.5t-chat": {
+ "description": "중국어 캐릭터 대화 시나리오에 최적화되어 있으며, 유창하고 중국어 표현 습관에 맞는 대화 생성 능력을 제공합니다."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Fireworks 오픈 소스 함수 호출 모델로, 뛰어난 지시 실행 능력과 개방형 커스터마이징 기능을 제공합니다."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Fireworks 회사의 최신 Firefunction-v2는 Llama-3를 기반으로 개발된 뛰어난 함수 호출 모델로, 많은 최적화를 통해 함수 호출, 대화 및 지시 따르기 등의 시나리오에 특히 적합합니다."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b는 이미지와 텍스트 입력을 동시에 수용할 수 있는 비주얼 언어 모델로, 고품질 데이터로 훈련되어 다중 모달 작업에 적합합니다."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Gemma 2 9B 지시 모델은 이전 Google 기술을 기반으로 하여 질문 응답, 요약 및 추론 등 다양한 텍스트 생성 작업에 적합합니다."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Llama 3 70B 지시 모델은 다국어 대화 및 자연어 이해를 위해 최적화되어 있으며, 대부분의 경쟁 모델보다 성능이 우수합니다."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Llama 3 70B 지시 모델(HF 버전)은 공식 구현 결과와 일치하며, 고품질의 지시 따르기 작업에 적합합니다."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Llama 3 8B 지시 모델은 대화 및 다국어 작업을 위해 최적화되어 있으며, 뛰어난 성능과 효율성을 제공합니다."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Llama 3 8B 지시 모델(HF 버전)은 공식 구현 결과와 일치하며, 높은 일관성과 크로스 플랫폼 호환성을 갖추고 있습니다."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Llama 3.1 405B 지시 모델은 초대규모 매개변수를 갖추고 있어 복잡한 작업과 고부하 환경에서의 지시 따르기에 적합합니다."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Llama 3.1 70B 지시 모델은 뛰어난 자연어 이해 및 생성 능력을 제공하며, 대화 및 분석 작업에 이상적인 선택입니다."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Llama 3.1 8B 지시 모델은 다국어 대화를 위해 최적화되어 있으며, 일반 산업 기준에서 대부분의 오픈 소스 및 폐쇄 소스 모델을 초월합니다."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Mixtral MoE 8x22B 지시 모델은 대규모 매개변수와 다수의 전문가 아키텍처를 통해 복잡한 작업의 효율적인 처리를 전방위적으로 지원합니다."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Mixtral MoE 8x7B 지시 모델은 다수의 전문가 아키텍처를 통해 효율적인 지시 따르기 및 실행을 제공합니다."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Mixtral MoE 8x7B 지시 모델(HF 버전)은 성능이 공식 구현과 일치하며, 다양한 효율적인 작업 시나리오에 적합합니다."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "MythoMax L2 13B 모델은 혁신적인 통합 기술을 결합하여 서사 및 역할 수행에 강점을 보입니다."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Phi 3 Vision 지시 모델은 경량 다중 모달 모델로, 복잡한 시각 및 텍스트 정보를 처리할 수 있으며, 강력한 추론 능력을 갖추고 있습니다."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "StarCoder 15.5B 모델은 고급 프로그래밍 작업을 지원하며, 다국어 능력이 강화되어 복잡한 코드 생성 및 이해에 적합합니다."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "StarCoder 7B 모델은 80개 이상의 프로그래밍 언어를 대상으로 훈련되어 뛰어난 프로그래밍 완성 능력과 문맥 이해를 제공합니다."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Yi-Large 모델은 뛰어난 다국어 처리 능력을 갖추고 있으며, 다양한 언어 생성 및 이해 작업에 사용될 수 있습니다."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "398B 매개변수(94B 활성)의 다국어 모델로, 256K 긴 컨텍스트 창, 함수 호출, 구조화된 출력 및 기반 생성 기능을 제공합니다."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "52B 매개변수(12B 활성)의 다국어 모델로, 256K 긴 컨텍스트 창, 함수 호출, 구조화된 출력 및 기반 생성 기능을 제공합니다."
+ },
+ "ai21-jamba-instruct": {
+ "description": "최고 수준의 성능, 품질 및 비용 효율성을 달성하기 위해 제작된 Mamba 기반 LLM 모델입니다."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet는 업계 표준을 향상시켜 경쟁 모델 및 Claude 3 Opus를 초월하며, 광범위한 평가에서 뛰어난 성능을 보이고, 중간 수준 모델의 속도와 비용을 갖추고 있습니다."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku는 Anthropic의 가장 빠르고 간결한 모델로, 거의 즉각적인 응답 속도를 제공합니다. 간단한 질문과 요청에 신속하게 답변할 수 있습니다. 고객은 인간 상호작용을 모방하는 원활한 AI 경험을 구축할 수 있습니다. Claude 3 Haiku는 이미지를 처리하고 텍스트 출력을 반환할 수 있으며, 200K의 컨텍스트 창을 갖추고 있습니다."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus는 Anthropic의 가장 강력한 AI 모델로, 매우 복잡한 작업에서 최첨단 성능을 발휘합니다. 개방형 프롬프트와 이전에 보지 못한 장면을 처리할 수 있으며, 뛰어난 유창성과 인간과 유사한 이해 능력을 갖추고 있습니다. Claude 3 Opus는 생성 AI의 가능성을 보여줍니다. Claude 3 Opus는 이미지를 처리하고 텍스트 출력을 반환할 수 있으며, 200K의 컨텍스트 창을 갖추고 있습니다."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Anthropic의 Claude 3 Sonnet는 지능과 속도 간의 이상적인 균형을 이루어 기업 작업 부하에 특히 적합합니다. 경쟁 모델보다 낮은 가격으로 최대의 효용을 제공하며, 신뢰할 수 있고 내구성이 뛰어난 주력 모델로 설계되어 대규모 AI 배포에 적합합니다. Claude 3 Sonnet는 이미지를 처리하고 텍스트 출력을 반환할 수 있으며, 200K의 컨텍스트 창을 갖추고 있습니다."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "일상 대화, 텍스트 분석, 요약 및 문서 질문 응답을 포함한 다양한 작업을 처리할 수 있는 빠르고 경제적이며 여전히 매우 유능한 모델입니다."
+ },
+ "anthropic.claude-v2": {
+ "description": "Anthropic은 복잡한 대화 및 창의적 콘텐츠 생성에서부터 세부 지시 준수에 이르기까지 광범위한 작업에서 높은 능력을 발휘하는 모델입니다."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "Claude 2의 업데이트 버전으로, 두 배의 컨텍스트 창을 갖추고 있으며, 긴 문서 및 RAG 컨텍스트에서의 신뢰성, 환각률 및 증거 기반 정확성이 개선되었습니다."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku는 Anthropic의 가장 빠르고 컴팩트한 모델로, 거의 즉각적인 응답을 목표로 합니다. 빠르고 정확한 방향성 성능을 제공합니다."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus는 Anthropic이 복잡한 작업을 처리하기 위해 개발한 가장 강력한 모델입니다. 성능, 지능, 유창성 및 이해력에서 뛰어난 성과를 보입니다."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet은 Opus를 초월하는 능력과 Sonnet보다 더 빠른 속도를 제공하며, Sonnet과 동일한 가격을 유지합니다. Sonnet은 프로그래밍, 데이터 과학, 비주얼 처리 및 에이전트 작업에 특히 강합니다."
+ },
+ "aya": {
+ "description": "Aya 23은 Cohere에서 출시한 다국어 모델로, 23개 언어를 지원하여 다양한 언어 응용에 편리함을 제공합니다."
+ },
+ "aya:35b": {
+ "description": "Aya 23은 Cohere에서 출시한 다국어 모델로, 23개 언어를 지원하여 다양한 언어 응용에 편리함을 제공합니다."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3는 역할 수행 및 감정 동반을 위해 설계된 모델로, 초장 다회 기억 및 개인화된 대화를 지원하여 광범위하게 사용됩니다."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o는 동적 모델로, 최신 버전을 유지하기 위해 실시간으로 업데이트됩니다. 강력한 언어 이해 및 생성 능력을 결합하여 고객 서비스, 교육 및 기술 지원을 포함한 대규모 응용 프로그램에 적합합니다."
+ },
+ "claude-2.0": {
+ "description": "Claude 2는 기업에 중요한 능력의 발전을 제공하며, 업계 최고의 200K 토큰 컨텍스트, 모델 환각 발생률 대폭 감소, 시스템 프롬프트 및 새로운 테스트 기능인 도구 호출을 포함합니다."
+ },
+ "claude-2.1": {
+ "description": "Claude 2는 기업에 중요한 능력의 발전을 제공하며, 업계 최고의 200K 토큰 컨텍스트, 모델 환각 발생률 대폭 감소, 시스템 프롬프트 및 새로운 테스트 기능인 도구 호출을 포함합니다."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet은 Opus를 초월하는 능력과 Sonnet보다 더 빠른 속도를 제공하며, Sonnet과 동일한 가격을 유지합니다. Sonnet은 프로그래밍, 데이터 과학, 시각 처리 및 대리 작업에 특히 강합니다."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku는 Anthropic의 가장 빠르고 컴팩트한 모델로, 거의 즉각적인 응답을 목표로 합니다. 빠르고 정확한 방향성 성능을 갖추고 있습니다."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus는 Anthropic이 고도로 복잡한 작업을 처리하기 위해 개발한 가장 강력한 모델입니다. 성능, 지능, 유창성 및 이해력에서 뛰어난 성능을 보입니다."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet은 기업 작업 부하에 이상적인 균형을 제공하며, 더 낮은 가격으로 최대 효용을 제공합니다. 신뢰성이 높고 대규모 배포에 적합합니다."
+ },
+ "claude-instant-1.2": {
+ "description": "Anthropic의 모델은 낮은 지연 시간과 높은 처리량의 텍스트 생성을 위해 설계되었으며, 수백 페이지의 텍스트 생성을 지원합니다."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4는 강력한 AI 프로그래밍 도우미로, 다양한 프로그래밍 언어에 대한 스마트 Q&A 및 코드 완성을 지원하여 개발 효율성을 높입니다."
+ },
+ "codegemma": {
+ "description": "CodeGemma는 다양한 프로그래밍 작업을 위한 경량 언어 모델로, 빠른 반복 및 통합을 지원합니다."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma는 다양한 프로그래밍 작업을 위한 경량 언어 모델로, 빠른 반복 및 통합을 지원합니다."
+ },
+ "codellama": {
+ "description": "Code Llama는 코드 생성 및 논의에 중점을 둔 LLM으로, 광범위한 프로그래밍 언어 지원을 결합하여 개발자 환경에 적합합니다."
+ },
+ "codellama:13b": {
+ "description": "Code Llama는 코드 생성 및 논의에 중점을 둔 LLM으로, 광범위한 프로그래밍 언어 지원을 결합하여 개발자 환경에 적합합니다."
+ },
+ "codellama:34b": {
+ "description": "Code Llama는 코드 생성 및 논의에 중점을 둔 LLM으로, 광범위한 프로그래밍 언어 지원을 결합하여 개발자 환경에 적합합니다."
+ },
+ "codellama:70b": {
+ "description": "Code Llama는 코드 생성 및 논의에 중점을 둔 LLM으로, 광범위한 프로그래밍 언어 지원을 결합하여 개발자 환경에 적합합니다."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5는 대량의 코드 데이터로 훈련된 대형 언어 모델로, 복잡한 프로그래밍 작업을 해결하기 위해 설계되었습니다."
+ },
+ "codestral": {
+ "description": "Codestral은 Mistral AI의 첫 번째 코드 모델로, 코드 생성 작업에 뛰어난 지원을 제공합니다."
+ },
+ "codestral-latest": {
+ "description": "Codestral은 코드 생성을 전문으로 하는 최첨단 생성 모델로, 중간 채우기 및 코드 완성 작업을 최적화했습니다."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B는 지시 준수, 대화 및 프로그래밍을 위해 설계된 모델입니다."
+ },
+ "cohere-command-r": {
+ "description": "Command R은 RAG 및 도구 사용을 목표로 하는 확장 가능한 생성 모델로, 기업을 위한 생산 규모 AI를 가능하게 합니다."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+는 기업급 작업을 처리하기 위해 설계된 최첨단 RAG 최적화 모델입니다."
+ },
+ "command-r": {
+ "description": "Command R은 대화 및 긴 컨텍스트 작업에 최적화된 LLM으로, 동적 상호작용 및 지식 관리에 특히 적합합니다."
+ },
+ "command-r-plus": {
+ "description": "Command R+는 실제 기업 환경 및 복잡한 응용을 위해 설계된 고성능 대형 언어 모델입니다."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct는 높은 신뢰성을 가진 지시 처리 능력을 제공하며, 다양한 산업 응용을 지원합니다."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5는 이전 버전의 우수한 기능을 집약하여 일반 및 인코딩 능력을 강화했습니다."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B는 고복잡성 대화를 위해 훈련된 고급 모델입니다."
+ },
+ "deepseek-chat": {
+ "description": "일반 및 코드 능력을 융합한 새로운 오픈 소스 모델로, 기존 Chat 모델의 일반 대화 능력과 Coder 모델의 강력한 코드 처리 능력을 유지하면서 인간의 선호에 더 잘 맞춰졌습니다. 또한, DeepSeek-V2.5는 작문 작업, 지시 따르기 등 여러 측면에서 큰 향상을 이루었습니다."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2는 오픈 소스 혼합 전문가 코드 모델로, 코드 작업에서 뛰어난 성능을 발휘하며, GPT4-Turbo와 경쟁할 수 있습니다."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2는 오픈 소스 혼합 전문가 코드 모델로, 코드 작업에서 뛰어난 성능을 발휘하며, GPT4-Turbo와 경쟁할 수 있습니다."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2는 경제적이고 효율적인 처리 요구에 적합한 Mixture-of-Experts 언어 모델입니다."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B는 DeepSeek의 설계 코드 모델로, 강력한 코드 생성 능력을 제공합니다."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "일반 및 코드 능력을 통합한 새로운 오픈 소스 모델로, 기존 Chat 모델의 일반 대화 능력과 Coder 모델의 강력한 코드 처리 능력을 유지하면서 인간의 선호에 더 잘 맞춰졌습니다. 또한, DeepSeek-V2.5는 작문 작업, 지시 따르기 등 여러 분야에서 큰 향상을 이루었습니다."
+ },
+ "emohaa": {
+ "description": "Emohaa는 심리 모델로, 전문 상담 능력을 갖추고 있어 사용자가 감정 문제를 이해하는 데 도움을 줍니다."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Tuning)은 안정적이고 조정 가능한 성능을 제공하며, 복잡한 작업 솔루션의 이상적인 선택입니다."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Tuning)은 뛰어난 다중 모달 지원을 제공하며, 복잡한 작업의 효과적인 해결에 중점을 둡니다."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro는 Google의 고성능 AI 모델로, 광범위한 작업 확장을 위해 설계되었습니다."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001은 효율적인 다중 모달 모델로, 광범위한 응용 프로그램 확장을 지원합니다."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002는 효율적인 다중 모달 모델로, 광범위한 응용 프로그램의 확장을 지원합니다."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827은 대규모 작업 시나리오 처리를 위해 설계되었으며, 비할 데 없는 처리 속도를 제공합니다."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924는 최신 실험 모델로, 텍스트 및 다중 모달 사용 사례에서 상당한 성능 향상을 보여줍니다."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827은 최적화된 다중 모달 처리 능력을 제공하며, 다양한 복잡한 작업 시나리오에 적합합니다."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash는 Google의 최신 다중 모달 AI 모델로, 빠른 처리 능력을 갖추고 있으며 텍스트, 이미지 및 비디오 입력을 지원하여 다양한 작업에 효율적으로 확장할 수 있습니다."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001은 확장 가능한 다중 모달 AI 솔루션으로, 광범위한 복잡한 작업을 지원합니다."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002는 최신 생산 준비 모델로, 특히 수학, 긴 문맥 및 시각적 작업에서 더 높은 품질의 출력을 제공합니다."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801은 뛰어난 다중 모달 처리 능력을 제공하여 응용 프로그램 개발에 더 큰 유연성을 제공합니다."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827은 최신 최적화 기술을 결합하여 더 효율적인 다중 모달 데이터 처리 능력을 제공합니다."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro는 최대 200만 개의 토큰을 지원하며, 중형 다중 모달 모델의 이상적인 선택으로 복잡한 작업에 대한 다각적인 지원을 제공합니다."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B는 중소 규모 작업 처리에 적합하며, 비용 효과성을 갖추고 있습니다."
+ },
+ "gemma2": {
+ "description": "Gemma 2는 Google에서 출시한 효율적인 모델로, 소형 응용 프로그램부터 복잡한 데이터 처리까지 다양한 응용 시나리오를 포함합니다."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B는 특정 작업 및 도구 통합을 위해 최적화된 모델입니다."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2는 Google에서 출시한 효율적인 모델로, 소형 응용 프로그램부터 복잡한 데이터 처리까지 다양한 응용 시나리오를 포함합니다."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2는 Google에서 출시한 효율적인 모델로, 소형 응용 프로그램부터 복잡한 데이터 처리까지 다양한 응용 시나리오를 포함합니다."
+ },
+ "general": {
+ "description": "Spark Lite는 경량 대형 언어 모델로, 매우 낮은 지연 시간과 효율적인 처리 능력을 갖추고 있으며, 완전 무료로 개방되어 실시간 온라인 검색 기능을 지원합니다. 빠른 응답 특성 덕분에 저전력 장치에서의 추론 응용 및 모델 미세 조정에서 뛰어난 성능을 발휘하여 사용자에게 뛰어난 비용 효율성과 지능적인 경험을 제공합니다. 특히 지식 질문 응답, 콘텐츠 생성 및 검색 시나리오에서 두각을 나타냅니다."
+ },
+ "generalv3": {
+ "description": "Spark Pro는 전문 분야에 최적화된 고성능 대형 언어 모델로, 수학, 프로그래밍, 의료, 교육 등 여러 분야에 중점을 두고 있으며, 네트워크 검색 및 내장된 날씨, 날짜 등의 플러그인을 지원합니다. 최적화된 모델은 복잡한 지식 질문 응답, 언어 이해 및 고급 텍스트 창작에서 뛰어난 성능과 효율성을 보여주며, 전문 응용 시나리오에 적합한 이상적인 선택입니다."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max는 기능이 가장 포괄적인 버전으로, 네트워크 검색 및 다양한 내장 플러그인을 지원합니다. 전면적으로 최적화된 핵심 능력과 시스템 역할 설정 및 함수 호출 기능 덕분에 다양한 복잡한 응용 시나리오에서 매우 우수한 성능을 발휘합니다."
+ },
+ "glm-4": {
+ "description": "GLM-4는 2024년 1월에 출시된 구형 플래그십 버전으로, 현재 더 강력한 GLM-4-0520으로 대체되었습니다."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520은 최신 모델 버전으로, 매우 복잡하고 다양한 작업을 위해 설계되어 뛰어난 성능을 발휘합니다."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air는 가성비가 높은 버전으로, GLM-4에 가까운 성능을 제공하며 빠른 속도와 저렴한 가격을 자랑합니다."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX는 GLM-4-Air의 효율적인 버전으로, 추론 속도가 최대 2.6배에 달합니다."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools는 복잡한 지시 계획 및 도구 호출을 지원하도록 최적화된 다기능 지능형 모델로, 웹 브라우징, 코드 해석 및 텍스트 생성을 포함한 다중 작업 실행에 적합합니다."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash는 간단한 작업을 처리하는 데 이상적인 선택으로, 가장 빠른 속도와 가장 저렴한 가격을 자랑합니다."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long는 초장 텍스트 입력을 지원하여 기억형 작업 및 대규모 문서 처리에 적합합니다."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus는 고지능 플래그십 모델로, 긴 텍스트 및 복잡한 작업 처리 능력이 뛰어나며 성능이 전반적으로 향상되었습니다."
+ },
+ "glm-4v": {
+ "description": "GLM-4V는 강력한 이미지 이해 및 추론 능력을 제공하며, 다양한 시각적 작업을 지원합니다."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus는 비디오 콘텐츠 및 다수의 이미지에 대한 이해 능력을 갖추고 있어 다중 모드 작업에 적합합니다."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827은 최적화된 다중 모달 처리 능력을 제공하며, 다양한 복잡한 작업 시나리오에 적합합니다."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827은 최신 최적화 기술을 결합하여 더 효율적인 다중 모달 데이터 처리 능력을 제공합니다."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2는 경량화와 효율적인 설계를 이어갑니다."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2는 Google의 경량화된 오픈 소스 텍스트 모델 시리즈입니다."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2는 Google의 경량화된 오픈 소스 텍스트 모델 시리즈입니다."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B)는 기본적인 지시 처리 능력을 제공하며, 경량 애플리케이션에 적합합니다."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo는 다양한 텍스트 생성 및 이해 작업에 적합하며, 현재 gpt-3.5-turbo-0125를 가리킵니다."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo는 다양한 텍스트 생성 및 이해 작업에 적합하며, 현재 gpt-3.5-turbo-0125를 가리킵니다."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo는 다양한 텍스트 생성 및 이해 작업에 적합하며, 현재 gpt-3.5-turbo-0125를 가리킵니다."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo는 다양한 텍스트 생성 및 이해 작업에 적합하며, 현재 gpt-3.5-turbo-0125를 가리킵니다."
+ },
+ "gpt-4": {
+ "description": "GPT-4는 더 큰 컨텍스트 창을 제공하여 더 긴 텍스트 입력을 처리할 수 있으며, 광범위한 정보 통합 및 데이터 분석이 필요한 상황에 적합합니다."
+ },
+ "gpt-4-0125-preview": {
+ "description": "최신 GPT-4 Turbo 모델은 시각적 기능을 갖추고 있습니다. 이제 시각적 요청은 JSON 형식과 함수 호출을 사용하여 처리할 수 있습니다. GPT-4 Turbo는 다중 모드 작업을 위한 비용 효율적인 지원을 제공하는 향상된 버전입니다. 정확성과 효율성 간의 균형을 찾아 실시간 상호작용이 필요한 응용 프로그램에 적합합니다."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4는 더 큰 컨텍스트 창을 제공하여 더 긴 텍스트 입력을 처리할 수 있으며, 광범위한 정보 통합 및 데이터 분석이 필요한 상황에 적합합니다."
+ },
+ "gpt-4-1106-preview": {
+ "description": "최신 GPT-4 Turbo 모델은 시각적 기능을 갖추고 있습니다. 이제 시각적 요청은 JSON 형식과 함수 호출을 사용하여 처리할 수 있습니다. GPT-4 Turbo는 다중 모드 작업을 위한 비용 효율적인 지원을 제공하는 향상된 버전입니다. 정확성과 효율성 간의 균형을 찾아 실시간 상호작용이 필요한 응용 프로그램에 적합합니다."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "최신 GPT-4 Turbo 모델은 시각적 기능을 갖추고 있습니다. 이제 시각적 요청은 JSON 형식과 함수 호출을 사용하여 처리할 수 있습니다. GPT-4 Turbo는 다중 모드 작업을 위한 비용 효율적인 지원을 제공하는 향상된 버전입니다. 정확성과 효율성 간의 균형을 찾아 실시간 상호작용이 필요한 응용 프로그램에 적합합니다."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4는 더 큰 컨텍스트 창을 제공하여 더 긴 텍스트 입력을 처리할 수 있으며, 광범위한 정보 통합 및 데이터 분석이 필요한 상황에 적합합니다."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4는 더 큰 컨텍스트 창을 제공하여 더 긴 텍스트 입력을 처리할 수 있으며, 광범위한 정보 통합 및 데이터 분석이 필요한 상황에 적합합니다."
+ },
+ "gpt-4-turbo": {
+ "description": "최신 GPT-4 Turbo 모델은 시각적 기능을 갖추고 있습니다. 이제 시각적 요청은 JSON 형식과 함수 호출을 사용하여 처리할 수 있습니다. GPT-4 Turbo는 다중 모드 작업을 위한 비용 효율적인 지원을 제공하는 향상된 버전입니다. 정확성과 효율성 간의 균형을 찾아 실시간 상호작용이 필요한 응용 프로그램에 적합합니다."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "최신 GPT-4 Turbo 모델은 시각적 기능을 갖추고 있습니다. 이제 시각적 요청은 JSON 형식과 함수 호출을 사용하여 처리할 수 있습니다. GPT-4 Turbo는 다중 모드 작업을 위한 비용 효율적인 지원을 제공하는 향상된 버전입니다. 정확성과 효율성 간의 균형을 찾아 실시간 상호작용이 필요한 응용 프로그램에 적합합니다."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "최신 GPT-4 Turbo 모델은 시각적 기능을 갖추고 있습니다. 이제 시각적 요청은 JSON 형식과 함수 호출을 사용하여 처리할 수 있습니다. GPT-4 Turbo는 다중 모드 작업을 위한 비용 효율적인 지원을 제공하는 향상된 버전입니다. 정확성과 효율성 간의 균형을 찾아 실시간 상호작용이 필요한 응용 프로그램에 적합합니다."
+ },
+ "gpt-4-vision-preview": {
+ "description": "최신 GPT-4 Turbo 모델은 시각적 기능을 갖추고 있습니다. 이제 시각적 요청은 JSON 형식과 함수 호출을 사용하여 처리할 수 있습니다. GPT-4 Turbo는 다중 모드 작업을 위한 비용 효율적인 지원을 제공하는 향상된 버전입니다. 정확성과 효율성 간의 균형을 찾아 실시간 상호작용이 필요한 응용 프로그램에 적합합니다."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o는 동적 모델로, 최신 버전을 유지하기 위해 실시간으로 업데이트됩니다. 강력한 언어 이해 및 생성 능력을 결합하여 고객 서비스, 교육 및 기술 지원을 포함한 대규모 응용 프로그램에 적합합니다."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o는 동적 모델로, 최신 버전을 유지하기 위해 실시간으로 업데이트됩니다. 강력한 언어 이해 및 생성 능력을 결합하여 고객 서비스, 교육 및 기술 지원을 포함한 대규모 응용 프로그램에 적합합니다."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o는 동적 모델로, 최신 버전을 유지하기 위해 실시간으로 업데이트됩니다. 강력한 언어 이해 및 생성 능력을 결합하여 고객 서비스, 교육 및 기술 지원을 포함한 대규모 응용 프로그램에 적합합니다."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini는 OpenAI가 GPT-4 Omni 이후에 출시한 최신 모델로, 텍스트와 이미지를 입력받아 텍스트를 출력합니다. 이 모델은 최신의 소형 모델로, 최근의 다른 최첨단 모델보다 훨씬 저렴하며, GPT-3.5 Turbo보다 60% 이상 저렴합니다. 최첨단의 지능을 유지하면서도 뛰어난 가성비를 자랑합니다. GPT-4o mini는 MMLU 테스트에서 82%의 점수를 기록했으며, 현재 채팅 선호도에서 GPT-4보다 높은 순위를 차지하고 있습니다."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B는 여러 최상위 모델을 통합한 창의성과 지능이 결합된 언어 모델입니다."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "혁신적인 오픈 소스 모델 InternLM2.5는 대규모 파라미터를 통해 대화의 지능을 향상시킵니다."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5는 다양한 시나리오에서 스마트 대화 솔루션을 제공합니다."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct 모델은 70B 매개변수를 갖추고 있으며, 대규모 텍스트 생성 및 지시 작업에서 뛰어난 성능을 제공합니다."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B는 더 강력한 AI 추론 능력을 제공하며, 복잡한 응용 프로그램에 적합하고, 많은 계산 처리를 지원하며 효율성과 정확성을 보장합니다."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B는 효율적인 모델로, 빠른 텍스트 생성 능력을 제공하며, 대규모 효율성과 비용 효과성이 필요한 응용 프로그램에 매우 적합합니다."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct 모델은 8B 매개변수를 갖추고 있으며, 화면 지시 작업의 효율적인 실행을 지원하고 우수한 텍스트 생성 능력을 제공합니다."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Llama 3.1 Sonar Huge Online 모델은 405B 매개변수를 갖추고 있으며, 약 127,000개의 토큰의 컨텍스트 길이를 지원하여 복잡한 온라인 채팅 애플리케이션을 위해 설계되었습니다."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Llama 3.1 Sonar Large Chat 모델은 70B 매개변수를 갖추고 있으며, 약 127,000개의 토큰의 컨텍스트 길이를 지원하여 복잡한 오프라인 채팅 작업에 적합합니다."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Llama 3.1 Sonar Large Online 모델은 70B 매개변수를 갖추고 있으며, 약 127,000개의 토큰의 컨텍스트 길이를 지원하여 대용량 및 다양한 채팅 작업에 적합합니다."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Llama 3.1 Sonar Small Chat 모델은 8B 매개변수를 갖추고 있으며, 오프라인 채팅을 위해 설계되어 약 127,000개의 토큰의 컨텍스트 길이를 지원합니다."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Llama 3.1 Sonar Small Online 모델은 8B 매개변수를 갖추고 있으며, 약 127,000개의 토큰의 컨텍스트 길이를 지원하여 온라인 채팅을 위해 설계되었습니다."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B는 비할 데 없는 복잡성 처리 능력을 제공하며, 높은 요구 사항을 가진 프로젝트에 맞춤형으로 설계되었습니다."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B는 우수한 추론 성능을 제공하며, 다양한 응용 프로그램 요구에 적합합니다."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use는 강력한 도구 호출 능력을 제공하며, 복잡한 작업의 효율적인 처리를 지원합니다."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use는 효율적인 도구 사용을 위해 최적화된 모델로, 빠른 병렬 계산을 지원합니다."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1은 Meta에서 출시한 선도적인 모델로, 최대 405B 매개변수를 지원하며, 복잡한 대화, 다국어 번역 및 데이터 분석 분야에 적용될 수 있습니다."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1은 Meta에서 출시한 선도적인 모델로, 최대 405B 매개변수를 지원하며, 복잡한 대화, 다국어 번역 및 데이터 분석 분야에 적용될 수 있습니다."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1은 Meta에서 출시한 선도적인 모델로, 최대 405B 매개변수를 지원하며, 복잡한 대화, 다국어 번역 및 데이터 분석 분야에 적용될 수 있습니다."
+ },
+ "llava": {
+ "description": "LLaVA는 시각 인코더와 Vicuna를 결합한 다중 모달 모델로, 강력한 시각 및 언어 이해를 제공합니다."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B는 시각 처리 능력을 융합하여, 시각 정보 입력을 통해 복잡한 출력을 생성합니다."
+ },
+ "llava:13b": {
+ "description": "LLaVA는 시각 인코더와 Vicuna를 결합한 다중 모달 모델로, 강력한 시각 및 언어 이해를 제공합니다."
+ },
+ "llava:34b": {
+ "description": "LLaVA는 시각 인코더와 Vicuna를 결합한 다중 모달 모델로, 강력한 시각 및 언어 이해를 제공합니다."
+ },
+ "mathstral": {
+ "description": "MathΣtral은 과학 연구 및 수학 추론을 위해 설계되었으며, 효과적인 계산 능력과 결과 해석을 제공합니다."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "추론, 코딩 및 광범위한 언어 응용 프로그램에서 뛰어난 성능을 발휘하는 강력한 70억 매개변수 모델입니다."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "대화 및 텍스트 생성 작업에 최적화된 다재다능한 8억 매개변수 모델입니다."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 지침 조정된 텍스트 전용 모델은 다국어 대화 사용 사례에 최적화되어 있으며, 일반 산업 벤치마크에서 많은 오픈 소스 및 폐쇄형 채팅 모델보다 우수한 성능을 보입니다."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 지침 조정된 텍스트 전용 모델은 다국어 대화 사용 사례에 최적화되어 있으며, 일반 산업 벤치마크에서 많은 오픈 소스 및 폐쇄형 채팅 모델보다 우수한 성능을 보입니다."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 지침 조정된 텍스트 전용 모델은 다국어 대화 사용 사례에 최적화되어 있으며, 일반 산업 벤치마크에서 많은 오픈 소스 및 폐쇄형 채팅 모델보다 우수한 성능을 보입니다."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B)는 뛰어난 언어 처리 능력과 우수한 상호작용 경험을 제공합니다."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B)는 강력한 채팅 모델로, 복잡한 대화 요구를 지원합니다."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B)는 다국어 지원을 제공하며, 풍부한 분야 지식을 포함합니다."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite는 효율성과 낮은 지연 시간이 필요한 환경에 적합합니다."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo는 뛰어난 언어 이해 및 생성 능력을 제공하며, 가장 까다로운 계산 작업에 적합합니다."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite는 자원이 제한된 환경에 적합하며, 뛰어난 균형 성능을 제공합니다."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo는 효율적인 대형 언어 모델로, 광범위한 응용 분야를 지원합니다."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B는 사전 훈련 및 지시 조정의 강력한 모델입니다."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "405B Llama 3.1 Turbo 모델은 대규모 데이터 처리를 위한 초대용량의 컨텍스트 지원을 제공하며, 초대규모 인공지능 애플리케이션에서 뛰어난 성능을 발휘합니다."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B는 다국어의 효율적인 대화 지원을 제공합니다."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Llama 3.1 70B 모델은 정밀 조정되어 고부하 애플리케이션에 적합하며, FP8로 양자화되어 더 높은 효율의 계산 능력과 정확성을 제공합니다."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1은 다국어 지원을 제공하며, 업계에서 선도하는 생성 모델 중 하나입니다."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Llama 3.1 8B 모델은 FP8 양자화를 사용하여 최대 131,072개의 컨텍스트 토큰을 지원하며, 오픈 소스 모델 중에서 뛰어난 성능을 발휘하여 복잡한 작업에 적합합니다."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct는 고품질 대화 시나리오에 최적화되어 있으며, 다양한 인간 평가에서 뛰어난 성능을 보여줍니다."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct는 고품질 대화 시나리오에 최적화되어 있으며, 많은 폐쇄형 모델보다 우수한 성능을 보입니다."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct는 Meta에서 새롭게 출시한 버전으로, 고품질 대화 생성을 위해 최적화되어 있으며, 많은 선도적인 폐쇄형 모델을 초월합니다."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct는 고품질 대화를 위해 설계되었으며, 인간 평가에서 뛰어난 성능을 보여주고, 특히 높은 상호작용 시나리오에 적합합니다."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct는 Meta에서 출시한 최신 버전으로, 고품질 대화 시나리오에 최적화되어 있으며, 많은 선도적인 폐쇄형 모델보다 우수한 성능을 보입니다."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1은 다국어 지원을 제공하며, 업계 최고의 생성 모델 중 하나입니다."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct는 Llama 3.1 Instruct 모델 중 가장 크고 강력한 모델로, 고도로 발전된 대화 추론 및 합성 데이터 생성 모델입니다. 특정 분야에서 전문적인 지속적 사전 훈련 또는 미세 조정의 기초로도 사용될 수 있습니다. Llama 3.1이 제공하는 다국어 대형 언어 모델(LLMs)은 8B, 70B 및 405B 크기의 사전 훈련된 지시 조정 생성 모델로 구성되어 있습니다(텍스트 입력/출력). Llama 3.1 지시 조정 텍스트 모델(8B, 70B, 405B)은 다국어 대화 사용 사례에 최적화되어 있으며, 일반 산업 벤치마크 테스트에서 많은 오픈 소스 채팅 모델을 초과했습니다. Llama 3.1은 다양한 언어의 상업적 및 연구 용도로 설계되었습니다. 지시 조정 텍스트 모델은 비서와 유사한 채팅에 적합하며, 사전 훈련 모델은 다양한 자연어 생성 작업에 적응할 수 있습니다. Llama 3.1 모델은 또한 모델의 출력을 활용하여 다른 모델을 개선하는 것을 지원하며, 합성 데이터 생성 및 정제에 사용될 수 있습니다. Llama 3.1은 최적화된 변압기 아키텍처를 사용한 자기 회귀 언어 모델입니다. 조정된 버전은 감독 미세 조정(SFT) 및 인간 피드백이 포함된 강화 학습(RLHF)을 사용하여 인간의 도움 및 안전성 선호에 부합하도록 설계되었습니다."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 70B Instruct의 업데이트 버전으로, 확장된 128K 컨텍스트 길이, 다국어 지원 및 개선된 추론 능력을 포함합니다. Llama 3.1이 제공하는 다국어 대형 언어 모델(LLMs)은 사전 훈련된 지침 조정 생성 모델의 집합으로, 8B, 70B 및 405B 크기(텍스트 입력/출력)를 포함합니다. Llama 3.1 지침 조정 텍스트 모델(8B, 70B, 405B)은 다국어 대화 사용 사례에 최적화되어 있으며, 일반적인 산업 벤치마크 테스트에서 많은 사용 가능한 오픈 소스 채팅 모델을 초월했습니다. Llama 3.1은 다양한 언어의 상업적 및 연구 용도로 사용되도록 설계되었습니다. 지침 조정 텍스트 모델은 비서와 유사한 채팅에 적합하며, 사전 훈련된 모델은 다양한 자연어 생성 작업에 적응할 수 있습니다. Llama 3.1 모델은 또한 모델의 출력을 활용하여 다른 모델을 개선하는 데 지원하며, 합성 데이터 생성 및 정제 작업을 포함합니다. Llama 3.1은 최적화된 변압기 아키텍처를 사용한 자기 회귀 언어 모델입니다. 조정된 버전은 감독 미세 조정(SFT) 및 인간 피드백을 통한 강화 학습(RLHF)을 사용하여 인간의 도움 및 안전성 선호에 부합하도록 설계되었습니다."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 8B Instruct의 업데이트 버전으로, 확장된 128K 컨텍스트 길이, 다국어 지원 및 개선된 추론 능력을 포함합니다. Llama 3.1이 제공하는 다국어 대형 언어 모델(LLMs)은 사전 훈련된 지침 조정 생성 모델의 집합으로, 8B, 70B 및 405B 크기(텍스트 입력/출력)를 포함합니다. Llama 3.1 지침 조정 텍스트 모델(8B, 70B, 405B)은 다국어 대화 사용 사례에 최적화되어 있으며, 일반적인 산업 벤치마크 테스트에서 많은 사용 가능한 오픈 소스 채팅 모델을 초월했습니다. Llama 3.1은 다양한 언어의 상업적 및 연구 용도로 사용되도록 설계되었습니다. 지침 조정 텍스트 모델은 비서와 유사한 채팅에 적합하며, 사전 훈련된 모델은 다양한 자연어 생성 작업에 적응할 수 있습니다. Llama 3.1 모델은 또한 모델의 출력을 활용하여 다른 모델을 개선하는 데 지원하며, 합성 데이터 생성 및 정제 작업을 포함합니다. Llama 3.1은 최적화된 변압기 아키텍처를 사용한 자기 회귀 언어 모델입니다. 조정된 버전은 감독 미세 조정(SFT) 및 인간 피드백을 통한 강화 학습(RLHF)을 사용하여 인간의 도움 및 안전성 선호에 부합하도록 설계되었습니다."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3은 개발자, 연구자 및 기업을 위한 오픈 대형 언어 모델(LLM)로, 생성 AI 아이디어를 구축하고 실험하며 책임감 있게 확장하는 데 도움을 주기 위해 설계되었습니다. 전 세계 커뮤니티 혁신의 기초 시스템의 일환으로, 콘텐츠 생성, 대화 AI, 언어 이해, 연구 개발 및 기업 응용에 매우 적합합니다."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3은 개발자, 연구자 및 기업을 위한 오픈 대형 언어 모델(LLM)로, 생성 AI 아이디어를 구축하고 실험하며 책임감 있게 확장하는 데 도움을 주기 위해 설계되었습니다. 전 세계 커뮤니티 혁신의 기초 시스템의 일환으로, 계산 능력과 자원이 제한된 환경, 엣지 장치 및 더 빠른 훈련 시간에 매우 적합합니다."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B는 Microsoft AI의 최신 경량 모델로, 기존 오픈 소스 선도 모델의 성능에 근접합니다."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B는 마이크로소프트 AI의 최첨단 Wizard 모델로, 매우 경쟁력 있는 성능을 보여줍니다."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V는 OpenBMB에서 출시한 차세대 다중 모달 대형 모델로, 뛰어난 OCR 인식 및 다중 모달 이해 능력을 갖추고 있으며, 다양한 응용 프로그램을 지원합니다."
+ },
+ "mistral": {
+ "description": "Mistral은 Mistral AI에서 출시한 7B 모델로, 변화하는 언어 처리 요구에 적합합니다."
+ },
+ "mistral-large": {
+ "description": "Mixtral Large는 Mistral의 플래그십 모델로, 코드 생성, 수학 및 추론 능력을 결합하여 128k 컨텍스트 창을 지원합니다."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407)는 최첨단 추론, 지식 및 코딩 능력을 갖춘 고급 대형 언어 모델(LLM)입니다."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large는 플래그십 대형 모델로, 다국어 작업, 복잡한 추론 및 코드 생성에 능숙하여 고급 응용 프로그램에 이상적인 선택입니다."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo는 Mistral AI와 NVIDIA가 협력하여 출시한 고효율 12B 모델입니다."
+ },
+ "mistral-small": {
+ "description": "Mistral Small은 높은 효율성과 낮은 대기 시간이 필요한 모든 언어 기반 작업에 사용할 수 있습니다."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small은 번역, 요약 및 감정 분석과 같은 사용 사례에 적합한 비용 효율적이고 빠르며 신뢰할 수 있는 옵션입니다."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct는 높은 성능으로 유명하며, 다양한 언어 작업에 적합합니다."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B는 필요에 따라 미세 조정된 모델로, 작업에 최적화된 해답을 제공합니다."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3는 효율적인 계산 능력과 자연어 이해를 제공하며, 광범위한 응용에 적합합니다."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B)는 슈퍼 대형 언어 모델로, 극도의 처리 요구를 지원합니다."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B는 일반 텍스트 작업을 위한 사전 훈련된 희소 혼합 전문가 모델입니다."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct는 속도 최적화와 긴 컨텍스트 지원을 갖춘 고성능 산업 표준 모델입니다."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo는 다국어 지원과 고성능 프로그래밍을 위한 7.3B 파라미터 모델입니다."
+ },
+ "mixtral": {
+ "description": "Mixtral은 Mistral AI의 전문가 모델로, 오픈 소스 가중치를 가지고 있으며, 코드 생성 및 언어 이해를 지원합니다."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B는 높은 내결함성을 가진 병렬 계산 능력을 제공하며, 복잡한 작업에 적합합니다."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral은 Mistral AI의 전문가 모델로, 오픈 소스 가중치를 가지고 있으며, 코드 생성 및 언어 이해를 지원합니다."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K는 초장기 컨텍스트 처리 능력을 갖춘 모델로, 초장문 생성을 위해 설계되었으며, 복잡한 생성 작업 요구를 충족하고 최대 128,000개의 토큰을 처리할 수 있어, 연구, 학술 및 대형 문서 생성 등 응용 시나리오에 매우 적합합니다."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K는 중간 길이의 컨텍스트 처리 능력을 제공하며, 32,768개의 토큰을 처리할 수 있어, 다양한 장문 및 복잡한 대화 생성을 위해 특히 적합하며, 콘텐츠 생성, 보고서 작성 및 대화 시스템 등 분야에 활용됩니다."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K는 짧은 텍스트 작업 생성을 위해 설계되었으며, 효율적인 처리 성능을 갖추고 있어 8,192개의 토큰을 처리할 수 있으며, 짧은 대화, 속기 및 빠른 콘텐츠 생성에 매우 적합합니다."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B는 Nous Hermes 2의 업그레이드 버전으로, 최신 내부 개발 데이터 세트를 포함하고 있습니다."
+ },
+ "o1-mini": {
+ "description": "o1-mini는 프로그래밍, 수학 및 과학 응용 프로그램을 위해 설계된 빠르고 경제적인 추론 모델입니다. 이 모델은 128K의 컨텍스트와 2023년 10월의 지식 기준일을 가지고 있습니다."
+ },
+ "o1-preview": {
+ "description": "o1은 OpenAI의 새로운 추론 모델로, 광범위한 일반 지식이 필요한 복잡한 작업에 적합합니다. 이 모델은 128K의 컨텍스트와 2023년 10월의 지식 기준일을 가지고 있습니다."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba는 코드 생성을 전문으로 하는 Mamba 2 언어 모델로, 고급 코드 및 추론 작업에 강력한 지원을 제공합니다."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B는 컴팩트하지만 고성능 모델로, 분류 및 텍스트 생성과 같은 간단한 작업 및 배치 처리에 능숙하며, 우수한 추론 능력을 갖추고 있습니다."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo는 NVIDIA와 협력하여 개발된 12B 모델로, 뛰어난 추론 및 인코딩 성능을 제공하며, 통합 및 교체가 용이합니다."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B는 더 큰 전문가 모델로, 복잡한 작업에 중점을 두고 뛰어난 추론 능력과 더 높은 처리량을 제공합니다."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B는 희소 전문가 모델로, 여러 매개변수를 활용하여 추론 속도를 높이며, 다국어 및 코드 생성 작업 처리에 적합합니다."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o는 동적 모델로, 최신 버전을 유지하기 위해 실시간으로 업데이트됩니다. 강력한 언어 이해 및 생성 능력을 결합하여 고객 서비스, 교육 및 기술 지원을 포함한 대규모 응용 프로그램에 적합합니다."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini는 OpenAI가 GPT-4 Omni 이후에 출시한 최신 모델로, 이미지와 텍스트 입력을 지원하며 텍스트를 출력합니다. 가장 진보된 소형 모델로, 최근의 다른 최첨단 모델보다 훨씬 저렴하며, GPT-3.5 Turbo보다 60% 이상 저렴합니다. 최첨단 지능을 유지하면서도 뛰어난 가성비를 자랑합니다. GPT-4o mini는 MMLU 테스트에서 82%의 점수를 기록했으며, 현재 채팅 선호도에서 GPT-4보다 높은 순위를 차지하고 있습니다."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini는 프로그래밍, 수학 및 과학 응용 프로그램을 위해 설계된 빠르고 경제적인 추론 모델입니다. 이 모델은 128K의 컨텍스트와 2023년 10월의 지식 기준일을 가지고 있습니다."
+ },
+ "openai/o1-preview": {
+ "description": "o1은 OpenAI의 새로운 추론 모델로, 광범위한 일반 지식이 필요한 복잡한 작업에 적합합니다. 이 모델은 128K의 컨텍스트와 2023년 10월의 지식 기준일을 가지고 있습니다."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B는 'C-RLFT(조건 강화 학습 미세 조정)' 전략으로 정교하게 조정된 오픈 소스 언어 모델 라이브러리입니다."
+ },
+ "openrouter/auto": {
+ "description": "요청은 컨텍스트 길이, 주제 및 복잡성에 따라 Llama 3 70B Instruct, Claude 3.5 Sonnet(자기 조정) 또는 GPT-4o로 전송됩니다."
+ },
+ "phi3": {
+ "description": "Phi-3는 Microsoft에서 출시한 경량 오픈 모델로, 효율적인 통합 및 대규모 지식 추론에 적합합니다."
+ },
+ "phi3:14b": {
+ "description": "Phi-3는 Microsoft에서 출시한 경량 오픈 모델로, 효율적인 통합 및 대규모 지식 추론에 적합합니다."
+ },
+ "pixtral-12b-2409": {
+ "description": "Pixtral 모델은 차트 및 이미지 이해, 문서 질문 응답, 다중 모드 추론 및 지시 준수와 같은 작업에서 강력한 능력을 발휘하며, 자연 해상도와 가로 세로 비율로 이미지를 입력할 수 있고, 최대 128K 토큰의 긴 컨텍스트 창에서 임의의 수의 이미지를 처리할 수 있습니다."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "통의천문 코드 모델입니다."
+ },
+ "qwen-long": {
+ "description": "통의천문 초대규모 언어 모델로, 긴 텍스트 컨텍스트를 지원하며, 긴 문서 및 다수의 문서에 기반한 대화 기능을 제공합니다."
+ },
+ "qwen-math-plus-latest": {
+ "description": "통의천문 수학 모델은 수학 문제 해결을 위해 특별히 설계된 언어 모델입니다."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "통의천문 수학 모델은 수학 문제 해결을 위해 특별히 설계된 언어 모델입니다."
+ },
+ "qwen-max-latest": {
+ "description": "통의천문 1000억급 초대규모 언어 모델로, 중국어, 영어 등 다양한 언어 입력을 지원하며, 현재 통의천문 2.5 제품 버전의 API 모델입니다."
+ },
+ "qwen-plus-latest": {
+ "description": "통의천문 초대규모 언어 모델의 강화판으로, 중국어, 영어 등 다양한 언어 입력을 지원합니다."
+ },
+ "qwen-turbo-latest": {
+ "description": "통의천문 초대규모 언어 모델로, 중국어, 영어 등 다양한 언어 입력을 지원합니다."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "통의천문 VL은 다중 이미지, 다중 회차 질문 응답, 창작 등 유연한 상호작용 방식을 지원하는 모델입니다."
+ },
+ "qwen-vl-max": {
+ "description": "통의천문 초대규모 시각 언어 모델로, 강화 버전보다 시각적 추론 능력과 지시 준수 능력을 다시 향상시켜 더 높은 시각적 인식 및 인지 수준을 제공합니다."
+ },
+ "qwen-vl-plus": {
+ "description": "통의천문 대규모 시각 언어 모델의 강화 버전으로, 세부 사항 인식 능력과 문자 인식 능력을 크게 향상시켰으며, 백만 화소 이상의 해상도와 임의의 가로 세로 비율의 이미지를 지원합니다."
+ },
+ "qwen-vl-v1": {
+ "description": "Qwen-7B 언어 모델로 초기화된 모델로, 이미지 모델을 추가하여 이미지 입력 해상도가 448인 사전 훈련 모델입니다."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2는 더 강력한 이해 및 생성 능력을 갖춘 새로운 대형 언어 모델 시리즈입니다."
+ },
+ "qwen2": {
+ "description": "Qwen2는 Alibaba의 차세대 대규모 언어 모델로, 뛰어난 성능으로 다양한 응용 요구를 지원합니다."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "통의천문 2.5 외부 오픈 소스 14B 규모 모델입니다."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "통의천문 2.5 외부 오픈 소스 32B 규모 모델입니다."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "통의천문 2.5 외부 오픈 소스 72B 규모 모델입니다."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "통의천문 2.5 외부 오픈 소스 7B 규모 모델입니다."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "통의천문 코드 모델 오픈 소스 버전입니다."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "통의천문 코드 모델 오픈 소스 버전입니다."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Qwen-Math 모델은 강력한 수학 문제 해결 능력을 가지고 있습니다."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Qwen-Math 모델은 강력한 수학 문제 해결 능력을 가지고 있습니다."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Qwen-Math 모델은 강력한 수학 문제 해결 능력을 가지고 있습니다."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2는 Alibaba의 차세대 대규모 언어 모델로, 뛰어난 성능으로 다양한 응용 요구를 지원합니다."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2는 Alibaba의 차세대 대규모 언어 모델로, 뛰어난 성능으로 다양한 응용 요구를 지원합니다."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2는 Alibaba의 차세대 대규모 언어 모델로, 뛰어난 성능으로 다양한 응용 요구를 지원합니다."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini는 컴팩트한 LLM으로, GPT-3.5보다 성능이 우수하며, 영어와 한국어를 지원하는 강력한 다국어 능력을 갖추고 있어 효율적이고 컴팩트한 솔루션을 제공합니다."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja)는 Solar Mini의 능력을 확장하여 일본어에 중점을 두고 있으며, 영어와 한국어 사용에서도 효율적이고 뛰어난 성능을 유지합니다."
+ },
+ "solar-pro": {
+ "description": "Solar Pro는 Upstage에서 출시한 고지능 LLM으로, 단일 GPU에서의 지시 따르기 능력에 중점을 두고 있으며, IFEval 점수가 80 이상입니다. 현재 영어를 지원하며, 정식 버전은 2024년 11월에 출시될 예정이며, 언어 지원 및 컨텍스트 길이를 확장할 계획입니다."
+ },
+ "step-1-128k": {
+ "description": "성능과 비용의 균형을 맞추어 일반적인 시나리오에 적합합니다."
+ },
+ "step-1-256k": {
+ "description": "초장기 컨텍스트 처리 능력을 갖추고 있으며, 특히 긴 문서 분석에 적합합니다."
+ },
+ "step-1-32k": {
+ "description": "중간 길이의 대화를 지원하며, 다양한 응용 시나리오에 적합합니다."
+ },
+ "step-1-8k": {
+ "description": "소형 모델로, 경량 작업에 적합합니다."
+ },
+ "step-1-flash": {
+ "description": "고속 모델로, 실시간 대화에 적합합니다."
+ },
+ "step-1v-32k": {
+ "description": "시각 입력을 지원하여 다중 모달 상호작용 경험을 강화합니다."
+ },
+ "step-1v-8k": {
+ "description": "소형 비주얼 모델로, 기본적인 텍스트 및 이미지 작업에 적합합니다."
+ },
+ "step-2-16k": {
+ "description": "대규모 컨텍스트 상호작용을 지원하며, 복잡한 대화 시나리오에 적합합니다."
+ },
+ "taichu_llm": {
+ "description": "자이동 태초 언어 대모델은 뛰어난 언어 이해 능력과 텍스트 창작, 지식 질문 응답, 코드 프로그래밍, 수학 계산, 논리 추론, 감정 분석, 텍스트 요약 등의 능력을 갖추고 있습니다. 혁신적으로 대규모 데이터 사전 훈련과 다원적 풍부한 지식을 결합하여 알고리즘 기술을 지속적으로 다듬고, 방대한 텍스트 데이터에서 어휘, 구조, 문법, 의미 등의 새로운 지식을 지속적으로 흡수하여 모델 성능을 지속적으로 진화시킵니다. 사용자에게 보다 편리한 정보와 서비스, 그리고 더 지능적인 경험을 제공합니다."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V는 이미지 이해, 지식 이전, 논리적 귀속 등의 능력을 통합하여, 텍스트와 이미지 질문 응답 분야에서 뛰어난 성능을 발휘합니다."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B)는 효율적인 전략과 모델 아키텍처를 통해 향상된 계산 능력을 제공합니다."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B)는 세밀한 지시 작업에 적합하며, 뛰어난 언어 처리 능력을 제공합니다."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2는 Microsoft AI에서 제공하는 언어 모델로, 복잡한 대화, 다국어, 추론 및 스마트 어시스턴트 분야에서 특히 뛰어난 성능을 발휘합니다."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2는 Microsoft AI에서 제공하는 언어 모델로, 복잡한 대화, 다국어, 추론 및 스마트 어시스턴트 분야에서 특히 뛰어난 성능을 발휘합니다."
+ },
+ "yi-large": {
+ "description": "새로운 1000억 매개변수 모델로, 강력한 질문 응답 및 텍스트 생성 능력을 제공합니다."
+ },
+ "yi-large-fc": {
+ "description": "yi-large 모델을 기반으로 도구 호출 능력을 지원하고 강화하여 다양한 에이전트 또는 워크플로우 구축이 필요한 비즈니스 시나리오에 적합합니다."
+ },
+ "yi-large-preview": {
+ "description": "초기 버전으로, yi-large(신버전) 사용을 권장합니다."
+ },
+ "yi-large-rag": {
+ "description": "yi-large 초강력 모델을 기반으로 한 고급 서비스로, 검색 및 생성 기술을 결합하여 정확한 답변을 제공하며, 실시간으로 전 세계 정보를 검색하는 서비스를 제공합니다."
+ },
+ "yi-large-turbo": {
+ "description": "뛰어난 가성비와 탁월한 성능을 자랑합니다. 성능과 추론 속도, 비용을 기준으로 균형 잡힌 고정밀 조정을 수행합니다."
+ },
+ "yi-medium": {
+ "description": "중형 모델 업그레이드 및 미세 조정으로, 능력이 균형 잡히고 가성비가 높습니다. 지시 따르기 능력을 깊이 최적화하였습니다."
+ },
+ "yi-medium-200k": {
+ "description": "200K 초장기 컨텍스트 창을 지원하여 긴 텍스트의 깊은 이해 및 생성 능력을 제공합니다."
+ },
+ "yi-spark": {
+ "description": "작고 강력한 경량 모델로, 강화된 수학 연산 및 코드 작성 능력을 제공합니다."
+ },
+ "yi-vision": {
+ "description": "복잡한 시각 작업 모델로, 고성능 이미지 이해 및 분석 능력을 제공합니다."
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/plugin.json b/DigitalHumanWeb/locales/ko-KR/plugin.json
new file mode 100644
index 0000000..9862528
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "함수 호출 인수",
+ "function_call": "함수 호출",
+ "off": "디버그 끄기",
+ "on": "플러그인 호출 정보 보기",
+ "payload": "페이로드",
+ "response": "응답",
+ "tool_call": "도구 호출"
+ },
+ "detailModal": {
+ "info": {
+ "description": "API 설명",
+ "name": "API 이름"
+ },
+ "tabs": {
+ "info": "플러그인 능력",
+ "manifest": "설치 파일",
+ "settings": "설정"
+ },
+ "title": "플러그인 상세정보"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "로컬 플러그인을 삭제하시겠습니까? 삭제 후에는 복구할 수 없습니다.",
+ "customParams": {
+ "useProxy": {
+ "label": "프록시 사용 (크로스 도메인 오류가 발생할 경우 이 옵션을 활성화한 후 다시 설치해 보세요)"
+ }
+ },
+ "deleteSuccess": "플러그인이 성공적으로 삭제되었습니다.",
+ "manifest": {
+ "identifier": {
+ "desc": "플러그인의 고유 식별자",
+ "label": "식별자"
+ },
+ "mode": {
+ "local": "시각적 구성",
+ "local-tooltip": "시각적 구성은 일시적으로 지원되지 않습니다.",
+ "url": "온라인 링크"
+ },
+ "name": {
+ "desc": "플러그인 제목",
+ "label": "제목",
+ "placeholder": "검색 엔진"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "플러그인 작성자",
+ "label": "작성자"
+ },
+ "avatar": {
+ "desc": "플러그인 아이콘으로는 Emoji 또는 URL을 사용할 수 있습니다.",
+ "label": "아이콘"
+ },
+ "description": {
+ "desc": "플러그인 설명",
+ "label": "설명",
+ "placeholder": "검색 엔진에서 정보 가져오기"
+ },
+ "formFieldRequired": "이 필드는 필수 입력 사항입니다.",
+ "homepage": {
+ "desc": "플러그인 홈페이지",
+ "label": "홈페이지"
+ },
+ "identifier": {
+ "desc": "플러그인의 고유 식별자는 manifest에서 자동으로 인식됩니다.",
+ "errorDuplicate": "식별자가 이미 있는 플러그인과 중복되었습니다. 식별자를 수정해주세요.",
+ "label": "식별자",
+ "pattenErrorMessage": "영문자, 숫자, - 및 _만 입력할 수 있습니다."
+ },
+ "manifest": {
+ "desc": "{{appName}}는 이 링크를 통해 플러그인을 설치합니다.",
+ "label": "Manifest 파일 URL",
+ "preview": "Manifest 미리보기",
+ "refresh": "새로 고침"
+ },
+ "title": {
+ "desc": "플러그인 제목",
+ "label": "제목",
+ "placeholder": "검색 엔진"
+ }
+ },
+ "metaConfig": "플러그인 메타 정보 구성",
+ "modalDesc": "사용자 정의 플러그인을 추가하면 플러그인 개발을 검증하거나 세션에서 직접 사용할 수 있습니다. 플러그인 개발은 <1>개발 문서↗</1>를 참조하세요.",
+ "openai": {
+ "importUrl": "URL 링크에서 가져오기",
+ "schema": "스키마"
+ },
+ "preview": {
+ "card": "플러그인 미리보기",
+ "desc": "플러그인 설명 미리보기",
+ "title": "플러그인 이름 미리보기"
+ },
+ "save": "플러그인 설치",
+ "saveSuccess": "플러그인 설정이 성공적으로 저장되었습니다.",
+ "tabs": {
+ "manifest": "기능 설명 목록 (Manifest)",
+ "meta": "플러그인 메타 정보"
+ },
+ "title": {
+ "create": "사용자 정의 플러그인 추가",
+ "edit": "사용자 정의 플러그인 편집"
+ },
+ "type": {
+ "lobe": "LobeChat 플러그인",
+ "openai": "OpenAI 플러그인"
+ },
+ "update": "업데이트",
+ "updateSuccess": "플러그인 설정이 성공적으로 업데이트되었습니다."
+ },
+ "error": {
+ "fetchError": "해당 manifest 링크를 요청하는 중 오류가 발생했습니다. 링크의 유효성을 확인하고, 링크가 크로스 도메인 액세스를 허용하는지 확인하세요.",
+ "installError": "플러그인 {{name}} 설치 실패",
+ "manifestInvalid": "manifest가 규격에 맞지 않습니다. 유효성 검사 결과: \n\n {{error}}",
+ "noManifest": "설명 파일이 없습니다",
+ "openAPIInvalid": "OpenAPI 파싱에 실패했습니다. 오류: \n\n {{error}}",
+ "reinstallError": "플러그인 {{name}} 다시 설치 중 오류가 발생했습니다.",
+ "urlError": "이 링크는 JSON 형식의 내용을 반환하지 않습니다. 유효한 링크인지 확인하세요."
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "삭제됨",
+ "local.config": "구성",
+ "local.title": "사용자 정의"
+ }
+ },
+ "loading": {
+ "content": "플러그인 호출 중...",
+ "plugin": "플러그인 실행 중..."
+ },
+ "pluginList": "플러그인 목록",
+ "setting": "플러그인 설정",
+ "settings": {
+ "indexUrl": {
+ "title": "마켓 인덱스",
+ "tooltip": "온라인 편집은 지원되지 않습니다. 배포 환경 변수를 통해 설정해주세요."
+ },
+ "modalDesc": "플러그인 마켓의 주소를 구성하면 사용자 정의 플러그인 마켓을 사용할 수 있습니다.",
+ "title": "플러그인 마켓 설정"
+ },
+ "showInPortal": "작업 영역에서 자세히 확인하세요",
+ "store": {
+ "actions": {
+ "confirmUninstall": "이 플러그인을 제거하려고 합니다. 제거하면 플러그인 구성이 지워지므로 작업을 확인하세요.",
+ "detail": "상세정보",
+ "install": "설치",
+ "manifest": "설치 파일 편집",
+ "settings": "설정",
+ "uninstall": "제거"
+ },
+ "communityPlugin": "커뮤니티 플러그인",
+ "customPlugin": "사용자 정의 플러그인",
+ "empty": "설치된 플러그인이 없습니다",
+ "installAllPlugins": "모두 설치",
+ "networkError": "플러그인 스토어를 가져오는 데 실패했습니다. 네트워크 연결을 확인한 후 다시 시도하십시오",
+ "placeholder": "플러그인 이름 또는 키워드를 검색하세요...",
+ "releasedAt": "{{createdAt}}에 출시",
+ "tabs": {
+ "all": "모두",
+ "installed": "설치됨"
+ },
+ "title": "플러그인 스토어"
+ },
+ "unknownPlugin": "알 수 없는 플러그인"
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/portal.json b/DigitalHumanWeb/locales/ko-KR/portal.json
new file mode 100644
index 0000000..bb9c1a7
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "아티팩트",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "청크",
+ "file": "파일"
+ }
+ },
+ "Plugins": "플러그인",
+ "actions": {
+ "genAiMessage": "AI 메시지 생성",
+ "summary": "요약",
+ "summaryTooltip": "현재 콘텐츠를 요약합니다"
+ },
+ "artifacts": {
+ "display": {
+ "code": "코드",
+ "preview": "미리보기"
+ },
+ "svg": {
+ "copyAsImage": "이미지로 복사",
+ "copyFail": "복사 실패, 오류 원인: {{error}}",
+ "copySuccess": "이미지 복사 성공",
+ "download": {
+ "png": "PNG로 다운로드",
+ "svg": "SVG로 다운로드"
+ }
+ }
+ },
+ "emptyArtifactList": "현재 아티팩트 목록이 비어 있습니다. 플러그인을 사용한 후에 다시 확인해주세요.",
+ "emptyKnowledgeList": "현재 지식 목록이 비어 있습니다. 대화 중에 필요에 따라 지식 베이스를 활성화한 후 다시 확인해 주세요.",
+ "files": "파일",
+ "messageDetail": "메시지 세부정보",
+ "title": "확장 창"
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/providers.json b/DigitalHumanWeb/locales/ko-KR/providers.json
new file mode 100644
index 0000000..54f6bdc
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI는 360 회사가 출시한 AI 모델 및 서비스 플랫폼으로, 360GPT2 Pro, 360GPT Pro, 360GPT Turbo 및 360GPT Turbo Responsibility 8K를 포함한 다양한 고급 자연어 처리 모델을 제공합니다. 이러한 모델은 대규모 매개변수와 다중 모드 능력을 결합하여 텍스트 생성, 의미 이해, 대화 시스템 및 코드 생성 등 다양한 분야에 널리 사용됩니다. 유연한 가격 전략을 통해 360 AI는 다양한 사용자 요구를 충족하고 개발자가 통합할 수 있도록 지원하여 스마트화 응용 프로그램의 혁신과 발전을 촉진합니다."
+ },
+ "anthropic": {
+ "description": "Anthropic은 인공지능 연구 및 개발에 집중하는 회사로, Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus 및 Claude 3 Haiku와 같은 고급 언어 모델을 제공합니다. 이러한 모델은 지능, 속도 및 비용 간의 이상적인 균형을 이루며, 기업급 작업 부하에서부터 빠른 응답이 필요한 다양한 응용 프로그램에 적합합니다. Claude 3.5 Sonnet은 최신 모델로, 여러 평가에서 우수한 성능을 보이며 높은 비용 효율성을 유지하고 있습니다."
+ },
+ "azure": {
+ "description": "Azure는 GPT-3.5 및 최신 GPT-4 시리즈를 포함한 다양한 고급 AI 모델을 제공하며, 다양한 데이터 유형과 복잡한 작업을 지원하고 안전하고 신뢰할 수 있으며 지속 가능한 AI 솔루션을 목표로 하고 있습니다."
+ },
+ "baichuan": {
+ "description": "百川智能은 인공지능 대형 모델 연구 개발에 집중하는 회사로, 그 모델은 국내 지식 백과, 긴 텍스트 처리 및 생성 창작 등 중국어 작업에서 뛰어난 성능을 보이며, 해외 주류 모델을 초월합니다. 百川智能은 업계 선도적인 다중 모드 능력을 갖추고 있으며, 여러 권위 있는 평가에서 우수한 성능을 보였습니다. 그 모델에는 Baichuan 4, Baichuan 3 Turbo 및 Baichuan 3 Turbo 128k 등이 포함되어 있으며, 각각 다른 응용 시나리오에 최적화되어 비용 효율적인 솔루션을 제공합니다."
+ },
+ "bedrock": {
+ "description": "Bedrock은 아마존 AWS가 제공하는 서비스로, 기업에 고급 AI 언어 모델과 비주얼 모델을 제공합니다. 그 모델 가족에는 Anthropic의 Claude 시리즈, Meta의 Llama 3.1 시리즈 등이 포함되어 있으며, 경량형부터 고성능까지 다양한 선택지를 제공하고 텍스트 생성, 대화, 이미지 처리 등 여러 작업을 지원하여 다양한 규모와 요구의 기업 응용 프로그램에 적합합니다."
+ },
+ "deepseek": {
+ "description": "DeepSeek는 인공지능 기술 연구 및 응용에 집중하는 회사로, 최신 모델인 DeepSeek-V2.5는 일반 대화 및 코드 처리 능력을 통합하고 인간의 선호 정렬, 작문 작업 및 지시 따르기 등에서 상당한 향상을 이루었습니다."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI는 기능 호출 및 다중 모드 처리를 전문으로 하는 선도적인 고급 언어 모델 서비스 제공업체입니다. 최신 모델인 Firefunction V2는 Llama-3를 기반으로 하며, 함수 호출, 대화 및 지시 따르기에 최적화되어 있습니다. 비주얼 언어 모델인 FireLLaVA-13B는 이미지와 텍스트 혼합 입력을 지원합니다. 기타 주목할 만한 모델로는 Llama 시리즈와 Mixtral 시리즈가 있으며, 효율적인 다국어 지시 따르기 및 생성 지원을 제공합니다."
+ },
+ "github": {
+ "description": "GitHub 모델을 통해 개발자는 AI 엔지니어가 되어 업계 최고의 AI 모델로 구축할 수 있습니다."
+ },
+ "google": {
+ "description": "Google의 Gemini 시리즈는 Google DeepMind가 개발한 가장 진보된 범용 AI 모델로, 다중 모드 설계를 통해 텍스트, 코드, 이미지, 오디오 및 비디오의 원활한 이해 및 처리를 지원합니다. 데이터 센터에서 모바일 장치에 이르기까지 다양한 환경에 적합하며 AI 모델의 효율성과 응용 범위를 크게 향상시킵니다."
+ },
+ "groq": {
+ "description": "Groq의 LPU 추론 엔진은 최신 독립 대형 언어 모델(LLM) 벤치마크 테스트에서 뛰어난 성능을 보이며, 놀라운 속도와 효율성으로 AI 솔루션의 기준을 재정의하고 있습니다. Groq는 즉각적인 추론 속도의 대표주자로, 클라우드 기반 배포에서 우수한 성능을 보여줍니다."
+ },
+ "minimax": {
+ "description": "MiniMax는 2021년에 설립된 일반 인공지능 기술 회사로, 사용자와 함께 지능을 공동 창출하는 데 전념하고 있습니다. MiniMax는 다양한 모드의 일반 대형 모델을 독자적으로 개발하였으며, 여기에는 조 단위의 MoE 텍스트 대형 모델, 음성 대형 모델 및 이미지 대형 모델이 포함됩니다. 또한 해마 AI와 같은 응용 프로그램을 출시하였습니다."
+ },
+ "mistral": {
+ "description": "Mistral은 고급 일반, 전문 및 연구형 모델을 제공하며, 복잡한 추론, 다국어 작업, 코드 생성 등 다양한 분야에 널리 사용됩니다. 기능 호출 인터페이스를 통해 사용자는 사용자 정의 기능을 통합하여 특정 응용 프로그램을 구현할 수 있습니다."
+ },
+ "moonshot": {
+ "description": "Moonshot은 베이징 월의 어두운 면 기술 회사가 출시한 오픈 소스 플랫폼으로, 다양한 자연어 처리 모델을 제공하며, 콘텐츠 창작, 학술 연구, 스마트 추천, 의료 진단 등 다양한 분야에 적용됩니다. 긴 텍스트 처리 및 복잡한 생성 작업을 지원합니다."
+ },
+ "novita": {
+ "description": "Novita AI는 다양한 대형 언어 모델과 AI 이미지 생성을 제공하는 API 서비스 플랫폼으로, 유연하고 신뢰할 수 있으며 비용 효율적입니다. Llama3, Mistral 등 최신 오픈 소스 모델을 지원하며, 생성적 AI 응용 프로그램 개발을 위한 포괄적이고 사용자 친화적이며 자동 확장 가능한 API 솔루션을 제공하여 AI 스타트업의 빠른 발전에 적합합니다."
+ },
+ "ollama": {
+ "description": "Ollama가 제공하는 모델은 코드 생성, 수학 연산, 다국어 처리 및 대화 상호작용 등 다양한 분야를 포괄하며, 기업급 및 로컬 배포의 다양한 요구를 지원합니다."
+ },
+ "openai": {
+ "description": "OpenAI는 세계 최고의 인공지능 연구 기관으로, 개발한 모델인 GPT 시리즈는 자연어 처리의 최전선에서 혁신을 이끌고 있습니다. OpenAI는 혁신적이고 효율적인 AI 솔루션을 통해 여러 산업을 변화시키는 데 전념하고 있습니다. 그들의 제품은 뛰어난 성능과 경제성을 갖추고 있어 연구, 비즈니스 및 혁신적인 응용 프로그램에서 널리 사용됩니다."
+ },
+ "openrouter": {
+ "description": "OpenRouter는 OpenAI, Anthropic, LLaMA 등 다양한 최첨단 대형 모델 인터페이스를 제공하는 서비스 플랫폼으로, 다양한 개발 및 응용 요구에 적합합니다. 사용자는 자신의 필요에 따라 최적의 모델과 가격을 유연하게 선택하여 AI 경험을 향상시킬 수 있습니다."
+ },
+ "perplexity": {
+ "description": "Perplexity는 선도적인 대화 생성 모델 제공업체로, 다양한 고급 Llama 3.1 모델을 제공하며, 온라인 및 오프라인 응용 프로그램을 지원하고 복잡한 자연어 처리 작업에 특히 적합합니다."
+ },
+ "qwen": {
+ "description": "통의천문은 알리바바 클라우드가 자체 개발한 초대형 언어 모델로, 강력한 자연어 이해 및 생성 능력을 갖추고 있습니다. 다양한 질문에 답변하고, 텍스트 콘텐츠를 창작하며, 의견을 표현하고, 코드를 작성하는 등 여러 분야에서 활용됩니다."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow는 AGI를 가속화하여 인류에 혜택을 주기 위해 사용하기 쉽고 비용이 저렴한 GenAI 스택을 통해 대규모 AI 효율성을 향상시키는 데 전념하고 있습니다."
+ },
+ "spark": {
+ "description": "科大讯飞星火 대모델은 다중 분야 및 다국어의 강력한 AI 능력을 제공하며, 고급 자연어 처리 기술을 활용하여 스마트 하드웨어, 스마트 의료, 스마트 금융 등 다양한 수직 분야에 적합한 혁신적인 응용 프로그램을 구축합니다."
+ },
+ "stepfun": {
+ "description": "阶跃星辰 대모델은 업계 선도적인 다중 모드 및 복잡한 추론 능력을 갖추고 있으며, 초장 텍스트 이해 및 강력한 자율 스케줄링 검색 엔진 기능을 지원합니다."
+ },
+ "taichu": {
+ "description": "중국과학원 자동화 연구소와 우한 인공지능 연구원이 출시한 차세대 다중 모드 대형 모델은 다중 회차 질문 응답, 텍스트 창작, 이미지 생성, 3D 이해, 신호 분석 등 포괄적인 질문 응답 작업을 지원하며, 더 강력한 인지, 이해 및 창작 능력을 갖추고 있어 새로운 상호작용 경험을 제공합니다."
+ },
+ "togetherai": {
+ "description": "Together AI는 혁신적인 AI 모델을 통해 선도적인 성능을 달성하는 데 전념하며, 빠른 확장 지원 및 직관적인 배포 프로세스를 포함한 광범위한 사용자 정의 기능을 제공하여 기업의 다양한 요구를 충족합니다."
+ },
+ "upstage": {
+ "description": "Upstage는 Solar LLM 및 문서 AI를 포함하여 다양한 비즈니스 요구를 위한 AI 모델 개발에 집중하고 있으며, 인공지능 일반 지능(AGI)을 실현하는 것을 목표로 하고 있습니다. Chat API를 통해 간단한 대화 에이전트를 생성하고 기능 호출, 번역, 임베딩 및 특정 분야 응용 프로그램을 지원합니다."
+ },
+ "zeroone": {
+ "description": "01.AI는 AI 2.0 시대의 인공지능 기술에 집중하며, '인간 + 인공지능'의 혁신과 응용을 적극적으로 추진하고, 초강력 모델과 고급 AI 기술을 활용하여 인간의 생산성을 향상시키고 기술의 힘을 실현합니다."
+ },
+ "zhipu": {
+ "description": "智谱 AI는 다중 모드 및 언어 모델의 개방형 플랫폼을 제공하며, 텍스트 처리, 이미지 이해 및 프로그래밍 지원 등 광범위한 AI 응용 프로그램 시나리오를 지원합니다."
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/ragEval.json b/DigitalHumanWeb/locales/ko-KR/ragEval.json
new file mode 100644
index 0000000..5293898
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "새로 만들기",
+ "description": {
+ "placeholder": "데이터셋 설명 (선택 사항)"
+ },
+ "name": {
+ "placeholder": "데이터셋 이름",
+ "required": "데이터셋 이름을 입력해 주세요"
+ },
+ "title": "데이터셋 추가"
+ },
+ "dataset": {
+ "addNewButton": "데이터셋 생성",
+ "emptyGuide": "현재 데이터셋이 비어 있습니다. 데이터셋을 생성해 주세요.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "데이터 가져오기"
+ },
+ "columns": {
+ "actions": "작업",
+ "ideal": {
+ "title": "기대 답변"
+ },
+ "question": {
+ "title": "질문"
+ },
+ "referenceFiles": {
+ "title": "참조 파일"
+ }
+ },
+ "notSelected": "왼쪽에서 데이터셋을 선택해 주세요",
+ "title": "데이터셋 상세 정보"
+ },
+ "title": "데이터셋"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "새로 만들기",
+ "datasetId": {
+ "placeholder": "평가 데이터셋을 선택해 주세요",
+ "required": "평가 데이터셋을 선택해 주세요"
+ },
+ "description": {
+ "placeholder": "평가 작업 설명 (선택 사항)"
+ },
+ "name": {
+ "placeholder": "평가 작업 이름",
+ "required": "평가 작업 이름을 입력해 주세요"
+ },
+ "title": "평가 작업 추가"
+ },
+ "addNewButton": "평가 생성",
+ "emptyGuide": "현재 평가 작업이 비어 있습니다. 평가를 시작해 주세요.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "상태 확인",
+ "confirmDelete": "이 평가를 삭제하시겠습니까?",
+ "confirmRun": "실행을 시작하시겠습니까? 실행이 시작되면 백그라운드에서 비동기적으로 평가 작업이 수행됩니다. 페이지를 닫아도 비동기 작업의 실행에는 영향을 미치지 않습니다.",
+ "downloadRecords": "평가 다운로드",
+ "retry": "재시도",
+ "run": "실행",
+ "title": "작업"
+ },
+ "datasetId": {
+ "title": "데이터셋"
+ },
+ "name": {
+ "title": "평가 작업 이름"
+ },
+ "records": {
+ "title": "평가 기록 수"
+ },
+ "referenceFiles": {
+ "title": "참조 파일"
+ },
+ "status": {
+ "error": "실행 오류",
+ "pending": "대기 중",
+ "processing": "실행 중",
+ "success": "실행 성공",
+ "title": "상태"
+ }
+ },
+ "title": "평가 작업 목록"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/setting.json b/DigitalHumanWeb/locales/ko-KR/setting.json
new file mode 100644
index 0000000..9815830
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "소개"
+ },
+ "agentTab": {
+ "chat": "채팅 환경",
+ "meta": "도우미 정보",
+ "modal": "모델 설정",
+ "plugin": "플러그인 설정",
+ "prompt": "역할 설정",
+ "tts": "음성 서비스"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "{{appName}}의 전반적인 사용자 경험을 개선하는 데 도움을 주기 위해 원격 측정 데이터를 전송하도록 선택할 수 있습니다.",
+ "title": "익명 사용 데이터 전송"
+ },
+ "title": "분석"
+ },
+ "danger": {
+ "clear": {
+ "action": "모두 지우기",
+ "confirm": "모든 채팅 데이터를 지우시겠습니까?",
+ "desc": "도우미, 파일, 메시지, 플러그인 등 모든 세션 데이터가 지워집니다",
+ "success": "모든 세션 메시지가 지워졌습니다",
+ "title": "모든 세션 메시지 지우기"
+ },
+ "reset": {
+ "action": "모두 재설정",
+ "confirm": "모든 설정을 재설정하시겠습니까?",
+ "currentVersion": "현재 버전",
+ "desc": "모든 설정을 기본값으로 재설정합니다",
+ "success": "모든 설정이 재설정되었습니다",
+ "title": "모든 설정 재설정"
+ }
+ },
+ "header": {
+ "desc": "설정 및 모델 설정.",
+ "global": "전역 설정",
+ "session": "세션 설정",
+ "sessionDesc": "캐릭터 설정 및 세션 환경 설정.",
+ "sessionWithName": "세션 설정 · {{name}}",
+ "title": "설정"
+ },
+ "llm": {
+ "aesGcm": "귀하의 키 및 프록시 주소는 <1>AES-GCM</1> 암호화 알고리즘을 사용하여 암호화됩니다",
+ "apiKey": {
+ "desc": "당신의 {{name}} API 키를 입력해주세요",
+ "placeholder": "{{name}} API 키",
+ "title": "API 키"
+ },
+ "checker": {
+ "button": "확인",
+ "desc": "API Key 및 프록시 주소가 올바르게 입력되었는지 테스트합니다",
+ "pass": "확인 통과",
+ "title": "연결성 확인"
+ },
+ "customModelCards": {
+ "addNew": "{{id}} 모델을 생성하고 추가합니다",
+ "config": "모델 구성",
+ "confirmDelete": "사용자 정의 모델을 삭제하려고 합니다. 삭제 후에는 복구할 수 없으니 신중하게 작업하세요.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Azure OpenAI에서 실제 요청하는 필드",
+ "placeholder": "Azure에서 모델 배포 이름을 입력하세요",
+ "title": "모델 배포 이름"
+ },
+ "displayName": {
+ "placeholder": "ChatGPT, GPT-4 등과 같은 모델의 표시 이름을 입력하세요",
+ "title": "모델 표시 이름"
+ },
+ "files": {
+ "extra": "현재 파일 업로드 구현은 단순한 해킹 방법으로, 개인적인 시도에만 해당됩니다. 전체 파일 업로드 기능은 추후 구현을 기다려 주시기 바랍니다.",
+ "title": "파일 업로드 지원"
+ },
+ "functionCall": {
+ "extra": "이 설정은 애플리케이션 내에서 함수 호출 기능만 활성화합니다. 함수 호출 지원 여부는 모델 자체에 따라 다르므로, 해당 모델의 함수 호출 가능성을 직접 테스트해 보시기 바랍니다.",
+ "title": "함수 호출 지원"
+ },
+ "id": {
+ "extra": "모델을 식별하는 데 사용될 것입니다",
+ "placeholder": "gpt-4-turbo-preview 또는 claude-2.1과 같은 모델 ID를 입력하세요",
+ "title": "모델 ID"
+ },
+ "modalTitle": "사용자 정의 모델 구성",
+ "tokens": {
+ "title": "최대 토큰 수",
+ "unlimited": "제한 없음"
+ },
+ "vision": {
+ "extra": "이 설정은 애플리케이션 내에서 이미지 업로드 기능만 활성화합니다. 인식 지원 여부는 모델 자체에 따라 다르므로, 해당 모델의 시각 인식 가능성을 직접 테스트해 보시기 바랍니다.",
+ "title": "시각 인식 지원"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "브라우저에서 직접 세션 요청을 시작하는 클라이언트 요청 모드는 응답 속도를 향상시킬 수 있습니다",
+ "title": "클라이언트 요청 모드 사용"
+ },
+ "fetcher": {
+ "fetch": "모델 목록 가져오기",
+ "fetching": "모델 목록을 가져오는 중...",
+ "latestTime": "마지막 업데이트 시간: {{time}}",
+ "noLatestTime": "목록을 아직 가져오지 않았습니다"
+ },
+ "helpDoc": "구성 안내",
+ "modelList": {
+ "desc": "대화에서 표시할 모델을 선택하세요. 선택한 모델은 모델 목록에 표시됩니다",
+ "placeholder": "모델을 선택하세요",
+ "title": "모델 목록",
+ "total": "총 {{count}} 개 모델 사용 가능"
+ },
+ "proxyUrl": {
+ "desc": "기본 주소 이외에 http(s)://를 포함해야 합니다.",
+ "title": "API 프록시 주소"
+ },
+ "waitingForMore": "<1>계획에 따라 더 많은 모델이 추가될 예정</1>이니 기대해 주세요"
+ },
+ "plugin": {
+ "addTooltip": "플러그인 추가",
+ "clearDeprecated": "사용되지 않는 플러그인 제거",
+ "empty": "설치된 플러그인이 없습니다. <1>플러그인 스토어</1>에서 새로운 플러그인을 찾아보세요.",
+ "installStatus": {
+ "deprecated": "설치 해제됨"
+ },
+ "settings": {
+ "hint": "설명에 따라 아래 구성을 입력하십시오",
+ "title": "{{id}} 플러그인 설정",
+ "tooltip": "플러그인 설정"
+ },
+ "store": "플러그인 스토어"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "아바타"
+ },
+ "backgroundColor": {
+ "title": "배경색"
+ },
+ "description": {
+ "placeholder": "도우미 설명을 입력하세요",
+ "title": "도우미 설명"
+ },
+ "name": {
+ "placeholder": "도우미 이름을 입력하세요",
+ "title": "이름"
+ },
+ "prompt": {
+ "placeholder": "역할 프롬프트 단어를 입력하세요",
+ "title": "역할 설정"
+ },
+ "tag": {
+ "placeholder": "태그를 입력하세요",
+ "title": "태그"
+ },
+ "title": "도우미 정보"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "현재 메시지 수가 이 값 이상이면 자동으로 주제가 생성됩니다",
+ "title": "메시지 임계값"
+ },
+ "chatStyleType": {
+ "title": "채팅 창 스타일",
+ "type": {
+ "chat": "대화 모드",
+ "docs": "문서 모드"
+ }
+ },
+ "compressThreshold": {
+ "desc": "압축되지 않은 이전 메시지가 이 값 이상이면 압축됩니다",
+ "title": "이전 메시지 길이 압축 임계값"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "대화 중에 자동으로 주제를 만들지 여부를 설정합니다. 일시적인 주제에서만 작동합니다",
+ "title": "자동 주제 생성 활성화"
+ },
+ "enableCompressThreshold": {
+ "title": "이전 메시지 길이 압축 임계값 활성화"
+ },
+ "enableHistoryCount": {
+ "alias": "제한 없음",
+ "limited": "{{number}}개의 대화 메시지만 포함",
+ "setlimited": "사용할 메시지 수 설정",
+ "title": "이전 메시지 수 제한",
+ "unlimited": "이전 메시지 수 제한 없음"
+ },
+ "historyCount": {
+ "desc": "요청당 포함되는 이전 메시지 수",
+ "title": "이전 메시지 수"
+ },
+ "inputTemplate": {
+ "desc": "사용자의 최신 메시지가 이 템플릿에 채워집니다",
+ "placeholder": "입력 템플릿 {{text}}은 실시간 입력 정보로 대체됩니다",
+ "title": "사용자 입력 전처리"
+ },
+ "title": "채팅 설정"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "단일 응답 제한 활성화"
+ },
+ "frequencyPenalty": {
+ "desc": "값이 클수록 반복 단어가 줄어듭니다",
+ "title": "빈도 패널티"
+ },
+ "maxTokens": {
+ "desc": "단일 상호 작용에 사용되는 최대 토큰 수",
+ "title": "단일 응답 제한"
+ },
+ "model": {
+ "desc": "{{provider}} 모델",
+ "title": "모델"
+ },
+ "presencePenalty": {
+ "desc": "값이 클수록 새로운 주제로 확장될 가능성이 높아집니다",
+ "title": "주제 신선도"
+ },
+ "temperature": {
+ "desc": "값이 클수록 응답이 더 무작위해집니다",
+ "title": "랜덤성",
+ "titleWithValue": "랜덤성 {{value}}"
+ },
+ "title": "모델 설정",
+ "topP": {
+ "desc": "랜덤성과 유사하지만 함께 변경하지 마세요",
+ "title": "상위 P 샘플링"
+ }
+ },
+ "settingPlugin": {
+ "title": "플러그인 목록"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "관리자가 암호화된 액세스를 활성화했습니다",
+ "placeholder": "액세스 암호를 입력하세요",
+ "title": "액세스 암호"
+ },
+ "oauth": {
+ "info": {
+ "desc": "로그인됨",
+ "title": "계정 정보"
+ },
+ "signin": {
+ "action": "로그인",
+ "desc": "SSO를 사용하여 앱 잠금 해제",
+ "title": "계정 로그인"
+ },
+ "signout": {
+ "action": "로그아웃",
+ "confirm": "로그아웃 하시겠습니까?",
+ "success": "로그아웃 성공"
+ }
+ },
+ "title": "시스템 설정"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "OpenAI 음성 인식 모델",
+ "title": "OpenAI",
+ "ttsModel": "OpenAI 음성 합성 모델"
+ },
+ "showAllLocaleVoice": {
+ "desc": "현재 언어의 음성만 표시하려면 닫으세요",
+ "title": "모든 로캘 음성 표시"
+ },
+ "stt": "음성 인식 설정",
+ "sttAutoStop": {
+ "desc": "비활성화하면 음성 인식이 자동으로 종료되지 않으며 수동으로 종료 버튼을 클릭해야 합니다",
+ "title": "음성 인식 자동 중지"
+ },
+ "sttLocale": {
+ "desc": "음성 입력의 언어로 음성 인식 정확도를 향상시킬 수 있습니다",
+ "title": "음성 인식 언어"
+ },
+ "sttService": {
+ "desc": "브라우저는 기본 음성 인식 서비스입니다",
+ "title": "음성 인식 서비스"
+ },
+ "title": "음성 서비스",
+ "tts": "음성 합성 설정",
+ "ttsService": {
+ "desc": "OpenAI 음성 합성 서비스를 사용하는 경우 OpenAI 모델 서비스가 활성화되어 있어야 합니다",
+ "title": "음성 합성 서비스"
+ },
+ "voice": {
+ "desc": "현재 어시스턴트에 대한 음성을 선택하십시오. 각 TTS 서비스는 다른 음성을 지원합니다",
+ "preview": "음성 미리듣기",
+ "title": "음성 합성 음성"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "아바타"
+ },
+ "fontSize": {
+ "desc": "채팅 내용의 글꼴 크기",
+ "marks": {
+ "normal": "표준"
+ },
+ "title": "글꼴 크기"
+ },
+ "lang": {
+ "autoMode": "시스템에 따름",
+ "title": "언어"
+ },
+ "neutralColor": {
+ "desc": "다양한 색상 선호도에 따른 중립적인 사용자 정의",
+ "title": "중립색"
+ },
+ "primaryColor": {
+ "desc": "사용자 정의 주제 색상",
+ "title": "주제 색상"
+ },
+ "themeMode": {
+ "auto": "자동",
+ "dark": "다크 모드",
+ "light": "라이트 모드",
+ "title": "테마"
+ },
+ "title": "테마 설정"
+ },
+ "submitAgentModal": {
+ "button": "에이전트 제출",
+ "identifier": "에이전트 식별자",
+ "metaMiss": "에이전트 정보를 입력한 후 제출하십시오. 이름, 설명 및 태그를 포함해야 합니다.",
+ "placeholder": "에이전트 식별자를 입력하세요. 고유해야 하며, 예: 웹 개발",
+ "tooltips": "에이전트 마켓에 공유"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "추가 이름을 입력하여 식별할 수 있게 합니다",
+ "placeholder": "장치 이름을 입력하세요",
+ "title": "장치 이름"
+ },
+ "title": "장치 정보",
+ "unknownBrowser": "알 수 없는 브라우저",
+ "unknownOS": "알 수 없는 OS"
+ },
+ "warning": {
+ "tip": "커뮤니티 베타 테스트를 거친 후, WebRTC 동기화는 일반 데이터 동기화 요구를 안정적으로 충족시키지 못할 수 있습니다. <1>시그널링 서버를 배포</1>한 후 사용하십시오."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC는 이 이름을 사용하여 동기화 채널을 생성하며 채널 이름이 고유한지 확인하세요",
+ "placeholder": "동기화 채널 이름을 입력하세요",
+ "shuffle": "랜덤 생성",
+ "title": "동기화 채널 이름"
+ },
+ "channelPassword": {
+ "desc": "채널의 개인 정보를 보호하기 위해 비밀번호를 추가하세요. 장치는 올바른 비밀번호를 입력해야 채널에 참여할 수 있습니다",
+ "placeholder": "동기화 채널 비밀번호를 입력하세요",
+ "title": "동기화 채널 비밀번호"
+ },
+ "desc": "실시간, 피어 투 피어 데이터 통신으로 장치가 동시에 온라인 상태여야만 동기화할 수 있습니다",
+ "enabled": {
+ "invalid": "시그널링 서버와 동기화 채널 이름을 입력한 후에 활성화하십시오.",
+ "title": "동기화 활성화"
+ },
+ "signaling": {
+ "desc": "WebRTC는 이 주소를 사용하여 동기화합니다.",
+ "placeholder": "시그널링 서버 주소를 입력하세요",
+ "title": "시그널링 서버"
+ },
+ "title": "WebRTC 동기화"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "어시스턴트 메타 생성 모델",
+ "modelDesc": "어시스턴트 이름, 설명, 프로필 이미지, 레이블을 생성하는 데 사용되는 모델을 지정합니다.",
+ "title": "어시스턴트 정보 자동 생성"
+ },
+ "queryRewrite": {
+ "label": "질문 재작성 모델",
+ "modelDesc": "사용자의 질문을 최적화하는 데 사용되는 모델 지정",
+ "title": "지식 베이스"
+ },
+ "title": "시스템 도우미",
+ "topic": {
+ "label": "주제 명명 모델",
+ "modelDesc": "주제 자동 재명명에 사용되는 모델 지정",
+ "title": "주제 자동 명명"
+ },
+ "translation": {
+ "label": "번역 모델",
+ "modelDesc": "번역에 사용되는 모델 지정",
+ "title": "번역 도우미 설정"
+ }
+ },
+ "tab": {
+ "about": "소개",
+ "agent": "기본 에이전트",
+ "common": "일반 설정",
+ "experiment": "실험",
+ "llm": "언어 모델",
+ "sync": "클라우드 동기화",
+ "system-agent": "시스템 도우미",
+ "tts": "음성 서비스"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "내장"
+ },
+ "disabled": "현재 모델은 함수 호출을 지원하지 않으며 플러그인을 사용할 수 없습니다",
+ "plugins": {
+ "enabled": "활성화됨 {{num}}",
+ "groupName": "플러그인",
+ "noEnabled": "활성화된 플러그인이 없음",
+ "store": "플러그인 스토어"
+ },
+ "title": "확장 도구"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/tool.json b/DigitalHumanWeb/locales/ko-KR/tool.json
new file mode 100644
index 0000000..eebaab6
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "자동 생성",
+ "downloading": "DallE3로 생성된 이미지 링크는 1시간 동안 유효하며, 로컬에 이미지를 캐시하는 중입니다...",
+ "generate": "생성",
+ "generating": "생성 중...",
+ "images": "이미지:",
+ "prompt": "프롬프트"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ko-KR/welcome.json b/DigitalHumanWeb/locales/ko-KR/welcome.json
new file mode 100644
index 0000000..dee1de3
--- /dev/null
+++ b/DigitalHumanWeb/locales/ko-KR/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "구성 가져오기",
+ "market": "시장 구경하기",
+ "start": "지금 시작"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "다른 것으로 바꾸기",
+ "title": "새로운 도우미 추천: "
+ },
+ "defaultMessage": "저는 당신의 개인 스마트 어시스턴트 {{appName}}입니다. 지금 무엇을 도와드릴까요?\n더 전문적이거나 맞춤형 어시스턴트가 필요하시면 `+`를 클릭하여 사용자 정의 어시스턴트를 생성하세요.",
+ "defaultMessageWithoutCreate": "저는 당신의 개인 스마트 어시스턴트 {{appName}}입니다. 지금 무엇을 도와드릴까요?",
+ "qa": {
+ "q01": "LobeHub란 무엇인가요?",
+ "q02": "{{appName}}란 무엇인가요?",
+ "q03": "{{appName}}는 커뮤니티 지원이 있나요?",
+ "q04": "{{appName}}는 어떤 기능을 지원하나요?",
+ "q05": "{{appName}}는 어떻게 배포하고 사용하나요?",
+ "q06": "{{appName}}의 가격은 어떻게 되나요?",
+ "q07": "{{appName}}는 무료인가요?",
+ "q08": "클라우드 서비스 버전이 있나요?",
+ "q09": "로컬 언어 모델을 지원하나요?",
+ "q10": "이미지 인식 및 생성 기능을 지원하나요?",
+ "q11": "음성 합성 및 음성 인식을 지원하나요?",
+ "q12": "플러그인 시스템을 지원하나요?",
+ "q13": "GPT를 얻기 위한 자체 마켓이 있나요?",
+ "q14": "여러 AI 서비스 제공업체를 지원하나요?",
+ "q15": "사용 중 문제가 발생하면 어떻게 해야 하나요?"
+ },
+ "questions": {
+ "moreBtn": "더 알아보기",
+ "title": "자주 묻는 질문: "
+ },
+ "welcome": {
+ "afternoon": "안녕하세요",
+ "morning": "좋은 아침",
+ "night": "안녕히 주무세요",
+ "noon": "안녕하세요"
+ }
+ },
+ "header": "환영합니다",
+ "pickAgent": "또는 다음 도우미 템플릿 중 하나를 선택하세요",
+ "skip": "생성 건너뛰기",
+ "slogan": {
+ "desc1": "뇌 클러스터를 시작하여 아이디어를 자극하세요. 당신의 지능형 어시스턴트가 항상 여기에 있습니다.",
+ "desc2": "첫 번째 어시스턴트를 만들어 보세요. 시작해 봅시다~",
+ "title": "더 똑똑한 뇌를 위해 스스로에게 선물하세요"
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/auth.json b/DigitalHumanWeb/locales/nl-NL/auth.json
new file mode 100644
index 0000000..94c5ab9
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Inloggen",
+ "loginOrSignup": "Inloggen / Registreren",
+ "profile": "Profiel",
+ "security": "Veiligheid",
+ "signout": "Uitloggen",
+ "signup": "Registreren"
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/chat.json b/DigitalHumanWeb/locales/nl-NL/chat.json
new file mode 100644
index 0000000..478299f
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Modellen"
+ },
+ "agentDefaultMessage": "Hallo, ik ben **{{name}}**. Je kunt meteen met me beginnen te praten, of je kunt naar [Assistentinstellingen]({{url}}) gaan om mijn informatie aan te vullen.",
+ "agentDefaultMessageWithSystemRole": "Hallo, ik ben **{{name}}**, {{systemRole}}, laten we beginnen met praten!",
+ "agentDefaultMessageWithoutEdit": "Hallo, ik ben **{{name}}**. Laten we beginnen met een gesprek!",
+ "agents": "Assistent",
+ "artifact": {
+ "generating": "Genereren",
+ "thinking": "Denken",
+ "thought": "Denkproces",
+ "unknownTitle": "Onbenoemd werk"
+ },
+ "backToBottom": "Terug naar onderen",
+ "chatList": {
+ "longMessageDetail": "Bekijk details"
+ },
+ "clearCurrentMessages": "Huidige berichten wissen",
+ "confirmClearCurrentMessages": "Huidige berichten worden gewist en kunnen niet worden hersteld. Bevestig je actie.",
+ "confirmRemoveSessionItemAlert": "Deze assistent wordt verwijderd en kan niet worden hersteld. Bevestig je actie.",
+ "confirmRemoveSessionSuccess": "Sessie succesvol verwijderd",
+ "defaultAgent": "Standaard assistent",
+ "defaultList": "Standaardlijst",
+ "defaultSession": "Standaard assistent",
+ "duplicateSession": {
+ "loading": "Bezig met kopiëren...",
+ "success": "Kopiëren gelukt",
+ "title": "{{title}} Kopie"
+ },
+ "duplicateTitle": "{{title}} Kopie",
+ "emptyAgent": "Geen assistent beschikbaar",
+ "historyRange": "Geschiedenisbereik",
+ "inbox": {
+ "desc": "Activeer de hersencluster en laat de vonken van gedachten overslaan. Je slimme assistent, hier om met je over alles te praten.",
+ "title": "Praat maar raak"
+ },
+ "input": {
+ "addAi": "Voeg een AI-bericht toe",
+ "addUser": "Voeg een gebruikersbericht toe",
+ "more": "Meer",
+ "send": "Verzenden",
+ "sendWithCmdEnter": "Verzenden met {{meta}} + Enter",
+ "sendWithEnter": "Verzenden met Enter",
+ "stop": "Stoppen",
+ "warp": "Nieuwe regel"
+ },
+ "knowledgeBase": {
+ "all": "Alle inhoud",
+ "allFiles": "Alle bestanden",
+ "allKnowledgeBases": "Alle kennisbanken",
+ "disabled": "De huidige implementatiemodus ondersteunt geen kennisbankgesprekken. Als je dit wilt gebruiken, schakel dan over naar serverdatabase-implementatie of gebruik de {{cloud}}-dienst.",
+ "library": {
+ "action": {
+ "add": "Toevoegen",
+ "detail": "Details",
+ "remove": "Verwijderen"
+ },
+ "title": "Bestand/Kennisbank"
+ },
+ "relativeFilesOrKnowledgeBases": "Gerelateerde bestanden/kennisbanken",
+ "title": "Kennisbank",
+ "uploadGuide": "Geüploade bestanden kunnen worden bekeken in de 'Kennisbank'.",
+ "viewMore": "Bekijk meer"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Verwijderen en opnieuw genereren",
+ "regenerate": "Opnieuw genereren"
+ },
+ "newAgent": "Nieuwe assistent",
+ "pin": "Vastzetten",
+ "pinOff": "Losmaken",
+ "rag": {
+ "referenceChunks": "Referentiestukken",
+ "userQuery": {
+ "actions": {
+ "delete": "Queryherschrijving verwijderen",
+ "regenerate": "Query opnieuw genereren"
+ }
+ }
+ },
+ "regenerate": "Opnieuw genereren",
+ "roleAndArchive": "Rol en archief",
+ "searchAgentPlaceholder": "Assistent zoeken...",
+ "sendPlaceholder": "Voer chatbericht in...",
+ "sessionGroup": {
+ "config": "Groepsbeheer",
+ "confirmRemoveGroupAlert": "Je staat op het punt deze groep te verwijderen. Na verwijdering zullen de assistenten van deze groep worden verplaatst naar de standaardlijst. Bevestig je actie.",
+ "createAgentSuccess": "Assistent succesvol aangemaakt",
+ "createGroup": "Nieuwe groep toevoegen",
+ "createSuccess": "Succesvol aangemaakt",
+ "creatingAgent": "Assistent wordt aangemaakt...",
+ "inputPlaceholder": "Voer de naam van de groep in...",
+ "moveGroup": "Verplaatsen naar groep",
+ "newGroup": "Nieuwe groep",
+ "rename": "Groepsnaam wijzigen",
+ "renameSuccess": "Naam succesvol gewijzigd",
+ "sortSuccess": "Sorteren succesvol voltooid",
+ "sorting": "Groepsordening wordt bijgewerkt...",
+ "tooLong": "De groepsnaam moet tussen 1 en 20 tekens lang zijn"
+ },
+ "shareModal": {
+ "download": "Screenshot downloaden",
+ "imageType": "Afbeeldingstype",
+ "screenshot": "Screenshot",
+ "settings": "Exportinstellingen",
+ "shareToShareGPT": "Genereer ShareGPT-deellink",
+ "withBackground": "Met achtergrondafbeelding",
+ "withFooter": "Met voettekst",
+ "withPluginInfo": "Met plug-in informatie",
+ "withSystemRole": "Met assistentrolinstelling"
+ },
+ "stt": {
+ "action": "Spraakinvoer",
+ "loading": "Bezig met herkennen...",
+ "prettifying": "Aan het verfraaien..."
+ },
+ "temp": "Tijdelijk",
+ "tokenDetails": {
+ "chats": "Chats",
+ "rest": "Resterend",
+ "systemRole": "Systeemrol",
+ "title": "Contextuele details",
+ "tools": "Tools",
+ "total": "Totaal",
+ "used": "Gebruikt"
+ },
+ "tokenTag": {
+ "overload": "Overschrijding van limiet",
+ "remained": "Resterend",
+ "used": "Gebruikt"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Automatisch hernoemen",
+ "duplicate": "Dupliceren",
+ "export": "Exporteren"
+ },
+ "checkOpenNewTopic": "Is het openen van een nieuw onderwerp ingeschakeld?",
+ "checkSaveCurrentMessages": "Wil je het huidige gesprek opslaan als onderwerp?",
+ "confirmRemoveAll": "Alle onderwerpen worden verwijderd en kunnen niet worden hersteld. Wees voorzichtig.",
+ "confirmRemoveTopic": "Dit onderwerp wordt verwijderd en kan niet worden hersteld. Wees voorzichtig.",
+ "confirmRemoveUnstarred": "Niet-gefavoriseerde onderwerpen worden verwijderd en kunnen niet worden hersteld. Wees voorzichtig.",
+ "defaultTitle": "Standaard onderwerp",
+ "duplicateLoading": "Onderwerp dupliceren...",
+ "duplicateSuccess": "Onderwerp succesvol gedupliceerd",
+ "guide": {
+ "desc": "Klik op de knop aan de linkerkant om het huidige gesprek op te slaan als een historisch onderwerp en een nieuw gesprek te starten",
+ "title": "Onderwerplijst"
+ },
+ "openNewTopic": "Nieuw onderwerp openen",
+ "removeAll": "Alle onderwerpen verwijderen",
+ "removeUnstarred": "Niet-gefavoriseerde onderwerpen verwijderen",
+ "saveCurrentMessages": "Huidig gesprek opslaan als onderwerp",
+ "searchPlaceholder": "Zoek onderwerpen...",
+ "title": "Onderwerpenlijst"
+ },
+ "translate": {
+ "action": "Vertalen",
+ "clear": "Vertaling verwijderen"
+ },
+ "tts": {
+ "action": "Tekst-naar-spraak",
+ "clear": "Spraak verwijderen"
+ },
+ "updateAgent": "Assistentgegevens bijwerken",
+ "upload": {
+ "action": {
+ "fileUpload": "Bestand uploaden",
+ "folderUpload": "Map uploaden",
+ "imageDisabled": "Dit model ondersteunt momenteel geen visuele herkenning. Schakel over naar een ander model.",
+ "imageUpload": "Afbeelding uploaden",
+ "tooltip": "Uploaden"
+ },
+ "clientMode": {
+ "actionFiletip": "Bestand uploaden",
+ "actionTooltip": "Uploaden",
+ "disabled": "Dit model ondersteunt momenteel geen visuele herkenning en bestandsanalyse. Schakel over naar een ander model."
+ },
+ "preview": {
+ "prepareTasks": "Voorbereiden van blokken...",
+ "status": {
+ "pending": "Voorbereiden om te uploaden...",
+ "processing": "Bestand wordt verwerkt..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/clerk.json b/DigitalHumanWeb/locales/nl-NL/clerk.json
new file mode 100644
index 0000000..08e06da
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Terug",
+ "badge__default": "Standaard",
+ "badge__otherImpersonatorDevice": "Ander impersonatorapparaat",
+ "badge__primary": "Primair",
+ "badge__requiresAction": "Vereist actie",
+ "badge__thisDevice": "Dit apparaat",
+ "badge__unverified": "Ongeverifieerd",
+ "badge__userDevice": "Gebruikersapparaat",
+ "badge__you": "Jij",
+ "createOrganization": {
+ "formButtonSubmit": "Organisatie aanmaken",
+ "invitePage": {
+ "formButtonReset": "Overslaan"
+ },
+ "title": "Organisatie aanmaken"
+ },
+ "dates": {
+ "lastDay": "Gisteren om {{ date | timeString('nl-NL') }}",
+ "next6Days": "{{ date | weekday('nl-NL','long') }} om {{ date | timeString('nl-NL') }}",
+ "nextDay": "Morgen om {{ date | timeString('nl-NL') }}",
+ "numeric": "{{ date | numeric('nl-NL') }}",
+ "previous6Days": "Vorige {{ date | weekday('nl-NL','long') }} om {{ date | timeString('nl-NL') }}",
+ "sameDay": "Vandaag om {{ date | timeString('nl-NL') }}"
+ },
+ "dividerText": "of",
+ "footerActionLink__useAnotherMethod": "Een andere methode gebruiken",
+ "footerPageLink__help": "Help",
+ "footerPageLink__privacy": "Privacy",
+ "footerPageLink__terms": "Voorwaarden",
+ "formButtonPrimary": "Doorgaan",
+ "formButtonPrimary__verify": "Verifiëren",
+ "formFieldAction__forgotPassword": "Wachtwoord vergeten?",
+ "formFieldError__matchingPasswords": "Wachtwoorden komen overeen.",
+ "formFieldError__notMatchingPasswords": "Wachtwoorden komen niet overeen.",
+ "formFieldError__verificationLinkExpired": "De verificatielink is verlopen. Vraag een nieuwe link aan.",
+ "formFieldHintText__optional": "Optioneel",
+ "formFieldHintText__slug": "Een slug is een leesbare ID die uniek moet zijn. Het wordt vaak gebruikt in URL's.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Account verwijderen",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "voorbeeld@email.com, voorbeeld2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "mijn-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Automatische uitnodigingen inschakelen voor dit domein",
+ "formFieldLabel__backupCode": "Back-upcode",
+ "formFieldLabel__confirmDeletion": "Bevestiging",
+ "formFieldLabel__confirmPassword": "Wachtwoord bevestigen",
+ "formFieldLabel__currentPassword": "Huidig wachtwoord",
+ "formFieldLabel__emailAddress": "E-mailadres",
+ "formFieldLabel__emailAddress_username": "E-mailadres of gebruikersnaam",
+ "formFieldLabel__emailAddresses": "E-mailadressen",
+ "formFieldLabel__firstName": "Voornaam",
+ "formFieldLabel__lastName": "Achternaam",
+ "formFieldLabel__newPassword": "Nieuw wachtwoord",
+ "formFieldLabel__organizationDomain": "Domein",
+ "formFieldLabel__organizationDomainDeletePending": "Verwijder uitnodigingen en suggesties in behandeling",
+ "formFieldLabel__organizationDomainEmailAddress": "Verificatie e-mailadres",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Voer een e-mailadres onder dit domein in om een code te ontvangen en dit domein te verifiëren.",
+ "formFieldLabel__organizationName": "Naam",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Naam van passkey",
+ "formFieldLabel__password": "Wachtwoord",
+ "formFieldLabel__phoneNumber": "Telefoonnummer",
+ "formFieldLabel__role": "Rol",
+ "formFieldLabel__signOutOfOtherSessions": "Afmelden bij alle andere apparaten",
+ "formFieldLabel__username": "Gebruikersnaam",
+ "impersonationFab": {
+ "action__signOut": "Afmelden",
+ "title": "Aangemeld als {{identifier}}"
+ },
+ "locale": "nl-NL",
+ "maintenanceMode": "We zijn momenteel bezig met onderhoud, maar maak je geen zorgen, het zou niet langer dan een paar minuten moeten duren.",
+ "membershipRole__admin": "Admin",
+ "membershipRole__basicMember": "Lid",
+ "membershipRole__guestMember": "Gast",
+ "organizationList": {
+ "action__createOrganization": "Organisatie creëren",
+ "action__invitationAccept": "Deelnemen",
+ "action__suggestionsAccept": "Verzoek om deel te nemen",
+ "createOrganization": "Organisatie creëren",
+ "invitationAcceptedLabel": "Toegetreden",
+ "subtitle": "om door te gaan naar {{applicationName}}",
+ "suggestionsAcceptedLabel": "In afwachting van goedkeuring",
+ "title": "Kies een account",
+ "titleWithoutPersonal": "Kies een organisatie"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Automatische uitnodigingen",
+ "badge__automaticSuggestion": "Automatische suggesties",
+ "badge__manualInvitation": "Geen automatische inschrijving",
+ "badge__unverified": "Ongeverifieerd",
+ "createDomainPage": {
+ "subtitle": "Voeg het domein toe ter verificatie. Gebruikers met e-mailadressen op dit domein kunnen automatisch lid worden van de organisatie of een verzoek indienen om lid te worden.",
+ "title": "Domein toevoegen"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "De uitnodigingen konden niet worden verstuurd. Er zijn al uitstaande uitnodigingen voor de volgende e-mailadressen: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Uitnodigingen versturen",
+ "selectDropdown__role": "Rol selecteren",
+ "subtitle": "Voer één of meer e-mailadressen in of plak ze, gescheiden door spaties of komma's.",
+ "successMessage": "Uitnodigingen succesvol verstuurd",
+ "title": "Nieuwe leden uitnodigen"
+ },
+ "membersPage": {
+ "action__invite": "Uitnodigen",
+ "activeMembersTab": {
+ "menuAction__remove": "Lid verwijderen",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Toegetreden",
+ "tableHeader__role": "Rol",
+ "tableHeader__user": "Gebruiker"
+ },
+ "detailsTitle__emptyRow": "Geen leden om weer te geven",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Nodig gebruikers uit door een e-maildomein te koppelen aan je organisatie. Iedereen die zich aanmeldt met een overeenkomstig e-maildomein kan op elk moment lid worden van de organisatie.",
+ "headerTitle": "Automatische uitnodigingen",
+ "primaryButton": "Beheer geverifieerde domeinen"
+ },
+ "table__emptyRow": "Geen uitnodigingen om weer te geven"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Uitnodiging intrekken",
+ "tableHeader__invited": "Uitgenodigd"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Gebruikers die zich aanmelden met een overeenkomstig e-maildomein, kunnen een suggestie zien om een verzoek in te dienen om lid te worden van je organisatie.",
+ "headerTitle": "Automatische suggesties",
+ "primaryButton": "Beheer geverifieerde domeinen"
+ },
+ "menuAction__approve": "Goedkeuren",
+ "menuAction__reject": "Afwijzen",
+ "tableHeader__requested": "Toegang aangevraagd",
+ "table__emptyRow": "Geen verzoeken om weer te geven"
+ },
+ "start": {
+ "headerTitle__invitations": "Uitnodigingen",
+ "headerTitle__members": "Leden",
+ "headerTitle__requests": "Verzoeken"
+ }
+ },
+ "navbar": {
+ "description": "Beheer je organisatie.",
+ "general": "Algemeen",
+ "members": "Leden",
+ "title": "Organisatie"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Typ \"{{organizationName}}\" hieronder om door te gaan.",
+ "messageLine1": "Weet je zeker dat je deze organisatie wilt verwijderen?",
+ "messageLine2": "Deze actie is permanent en onomkeerbaar.",
+ "successMessage": "Je hebt de organisatie verwijderd.",
+ "title": "Organisatie verwijderen"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Typ \"{{organizationName}}\" hieronder om door te gaan.",
+ "messageLine1": "Weet je zeker dat je deze organisatie wilt verlaten? Je verliest toegang tot deze organisatie en de bijbehorende applicaties.",
+ "messageLine2": "Deze actie is permanent en onomkeerbaar.",
+ "successMessage": "Je hebt de organisatie verlaten.",
+ "title": "Organisatie verlaten"
+ },
+ "title": "Gevaar"
+ },
+ "domainSection": {
+ "menuAction__manage": "Beheren",
+ "menuAction__remove": "Verwijderen",
+ "menuAction__verify": "Verifiëren",
+ "primaryButton": "Domein toevoegen",
+ "subtitle": "Laat gebruikers automatisch lid worden van de organisatie of een verzoek indienen om lid te worden op basis van een geverifieerd e-maildomein.",
+ "title": "Geverifieerde domeinen"
+ },
+ "successMessage": "De organisatie is bijgewerkt.",
+ "title": "Profiel bijwerken"
+ },
+ "removeDomainPage": {
+ "messageLine1": "Het e-maildomein {{domain}} wordt verwijderd.",
+ "messageLine2": "Gebruikers kunnen zich niet langer automatisch bij de organisatie aansluiten na deze actie.",
+ "successMessage": "{{domain}} is verwijderd.",
+ "title": "Domein verwijderen"
+ },
+ "start": {
+ "headerTitle__general": "Algemeen",
+ "headerTitle__members": "Leden",
+ "profileSection": {
+ "primaryButton": "Profiel bijwerken",
+ "title": "Organisatieprofiel",
+ "uploadAction__title": "Logo"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "Het verwijderen van dit domein zal van invloed zijn op uitgenodigde gebruikers.",
+ "removeDomainActionLabel__remove": "Domein verwijderen",
+ "removeDomainSubtitle": "Verwijder dit domein uit je geverifieerde domeinen",
+ "removeDomainTitle": "Domein verwijderen"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Gebruikers worden automatisch uitgenodigd om lid te worden van de organisatie wanneer ze zich aanmelden en kunnen op elk moment lid worden.",
+ "automaticInvitationOption__label": "Automatische uitnodigingen",
+ "automaticSuggestionOption__description": "Gebruikers ontvangen een suggestie om lid te worden, maar moeten worden goedgekeurd door een beheerder voordat ze lid kunnen worden van de organisatie.",
+ "automaticSuggestionOption__label": "Automatische suggesties",
+ "calloutInfoLabel": "Het wijzigen van de inschrijvingsmodus heeft alleen invloed op nieuwe gebruikers.",
+ "calloutInvitationCountLabel": "Uitstaande uitnodigingen verzonden naar gebruikers: {{count}}",
+ "calloutSuggestionCountLabel": "Uitstaande suggesties verzonden naar gebruikers: {{count}}",
+ "manualInvitationOption__description": "Gebruikers kunnen alleen handmatig worden uitgenodigd voor de organisatie.",
+ "manualInvitationOption__label": "Geen automatische inschrijving",
+ "subtitle": "Kies hoe gebruikers van dit domein lid kunnen worden van de organisatie."
+ },
+ "start": {
+ "headerTitle__danger": "Gevaar",
+ "headerTitle__enrollment": "Inschrijvingsopties"
+ },
+ "subtitle": "Het domein {{domain}} is nu geverifieerd. Ga verder door de inschrijvingsmodus te selecteren.",
+ "title": "{{domain}} bijwerken"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Voer de verificatiecode in die naar je e-mailadres is gestuurd",
+ "formTitle": "Verificatiecode",
+ "resendButton": "Code niet ontvangen? Opnieuw verzenden",
+ "subtitle": "Het domein {{domainName}} moet worden geverifieerd via e-mail.",
+ "subtitleVerificationCodeScreen": "Er is een verificatiecode naar {{emailAddress}} gestuurd. Voer de code in om door te gaan.",
+ "title": "Domein verifiëren"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Organisatie creëren",
+ "action__invitationAccept": "Deelnemen",
+ "action__manageOrganization": "Beheren",
+ "action__suggestionsAccept": "Verzoek om deel te nemen",
+ "notSelected": "Geen organisatie geselecteerd",
+ "personalWorkspace": "Persoonlijk account",
+ "suggestionsAcceptedLabel": "In afwachting van goedkeuring"
+ },
+ "paginationButton__next": "Volgende",
+ "paginationButton__previous": "Vorige",
+ "paginationRowText__displaying": "Weergeven",
+ "paginationRowText__of": "van",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Account toevoegen",
+ "action__signOutAll": "Afmelden bij alle accounts",
+ "subtitle": "Selecteer het account waarmee je wilt doorgaan.",
+ "title": "Kies een account"
+ },
+ "alternativeMethods": {
+ "actionLink": "Hulp krijgen",
+ "actionText": "Heb je geen van deze?",
+ "blockButton__backupCode": "Een back-upcode gebruiken",
+ "blockButton__emailCode": "Code e-mailen naar {{identifier}}",
+ "blockButton__emailLink": "Link e-mailen naar {{identifier}}",
+ "blockButton__passkey": "Inloggen met je passkey",
+ "blockButton__password": "Inloggen met je wachtwoord",
+ "blockButton__phoneCode": "SMS-code verzenden naar {{identifier}}",
+ "blockButton__totp": "Je authenticator-app gebruiken",
+ "getHelp": {
+ "blockButton__emailSupport": "E-mailondersteuning",
+ "content": "Als je problemen ondervindt bij het inloggen op je account, stuur ons een e-mail en we zullen samen met jou werken om zo snel mogelijk toegang te herstellen.",
+ "title": "Hulp krijgen"
+ },
+ "subtitle": "Problemen? Je kunt een van deze methoden gebruiken om in te loggen.",
+ "title": "Een andere methode gebruiken"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Je back-upcode is degene die je hebt gekregen bij het instellen van tweestapsverificatie.",
+ "title": "Voer een back-upcode in"
+ },
+ "emailCode": {
+ "formTitle": "Verificatiecode",
+ "resendButton": "Geen code ontvangen? Opnieuw verzenden",
+ "subtitle": "om door te gaan naar {{applicationName}}",
+ "title": "Controleer je e-mail"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Ga terug naar het oorspronkelijke tabblad om door te gaan.",
+ "title": "Deze verificatielink is verlopen"
+ },
+ "failed": {
+ "subtitle": "Ga terug naar het oorspronkelijke tabblad om door te gaan.",
+ "title": "Deze verificatielink is ongeldig"
+ },
+ "formSubtitle": "Gebruik de verificatielink die naar je e-mail is verzonden",
+ "formTitle": "Verificatielink",
+ "loading": {
+ "subtitle": "Je wordt binnenkort doorgestuurd",
+ "title": "Aanmelden..."
+ },
+ "resendButton": "Geen link ontvangen? Opnieuw verzenden",
+ "subtitle": "om door te gaan naar {{applicationName}}",
+ "title": "Controleer je e-mail",
+ "unusedTab": {
+ "title": "Je kunt dit tabblad sluiten"
+ },
+ "verified": {
+ "subtitle": "Je wordt binnenkort doorgestuurd",
+ "title": "Succesvol aangemeld"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Ga terug naar het oorspronkelijke tabblad om door te gaan",
+ "subtitleNewTab": "Ga terug naar het nieuw geopende tabblad om door te gaan",
+ "titleNewTab": "Aangemeld op ander tabblad"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Wachtwoord resetcode",
+ "resendButton": "Geen code ontvangen? Opnieuw verzenden",
+ "subtitle": "om je wachtwoord opnieuw in te stellen",
+ "subtitle_email": "Voer eerst de code in die naar je e-mailadres is verzonden",
+ "subtitle_phone": "Voer eerst de code in die naar je telefoon is verzonden",
+ "title": "Wachtwoord opnieuw instellen"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Je wachtwoord opnieuw instellen",
+ "label__alternativeMethods": "Of meld je aan met een andere methode",
+ "title": "Wachtwoord vergeten?"
+ },
+ "noAvailableMethods": {
+ "message": "Kan niet doorgaan met aanmelden. Er is geen beschikbare authenticatiefactor.",
+ "subtitle": "Er is een fout opgetreden",
+ "title": "Kan niet aanmelden"
+ },
+ "passkey": {
+ "subtitle": "Het gebruik van je passkey bevestigt dat jij het bent. Je apparaat kan vragen om je vingerafdruk, gezicht of schermvergrendeling.",
+ "title": "Gebruik je passkey"
+ },
+ "password": {
+ "actionLink": "Een andere methode gebruiken",
+ "subtitle": "Voer het wachtwoord in dat is gekoppeld aan je account",
+ "title": "Voer je wachtwoord in"
+ },
+ "passwordPwned": {
+ "title": "Wachtwoord gecompromitteerd"
+ },
+ "phoneCode": {
+ "formTitle": "Verificatiecode",
+ "resendButton": "Geen code ontvangen? Opnieuw verzenden",
+ "subtitle": "om door te gaan naar {{applicationName}}",
+ "title": "Controleer je telefoon"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Verificatiecode",
+ "resendButton": "Geen code ontvangen? Opnieuw verzenden",
+ "subtitle": "Voer de verificatiecode in die naar je telefoon is verzonden",
+ "title": "Controleer je telefoon"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Wachtwoord opnieuw instellen",
+ "requiredMessage": "Om veiligheidsredenen moet je je wachtwoord opnieuw instellen.",
+ "successMessage": "Je wachtwoord is succesvol gewijzigd. We melden je aan, even geduld a.u.b.",
+ "title": "Stel een nieuw wachtwoord in"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "We moeten je identiteit verifiëren voordat we je wachtwoord opnieuw instellen."
+ },
+ "start": {
+ "actionLink": "Aanmelden",
+ "actionLink__use_email": "Gebruik e-mail",
+ "actionLink__use_email_username": "Gebruik e-mail of gebruikersnaam",
+ "actionLink__use_passkey": "Gebruik in plaats daarvan een passkey",
+ "actionLink__use_phone": "Gebruik telefoon",
+ "actionLink__use_username": "Gebruik gebruikersnaam",
+ "actionText": "Heb je nog geen account?",
+ "subtitle": "Welkom terug! Meld je aan om door te gaan",
+ "title": "Meld je aan bij {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "Verificatiecode",
+ "subtitle": "Voer de verificatiecode in die is gegenereerd door je authenticator-app",
+ "title": "Tweestapsverificatie"
+ }
+ },
+ "signInEnterPasswordTitle": "Voer je wachtwoord in",
+ "signUp": {
+ "continue": {
+ "actionLink": "Aanmelden",
+ "actionText": "Heb je al een account?",
+ "subtitle": "Vul de overgebleven gegevens in om door te gaan",
+ "title": "Ontbrekende velden invullen"
+ },
+ "emailCode": {
+ "formSubtitle": "Voer de verificatiecode in die naar je e-mailadres is verzonden",
+ "formTitle": "Verificatiecode",
+ "resendButton": "Geen code ontvangen? Opnieuw verzenden",
+ "subtitle": "Voer de verificatiecode in die naar je e-mail is verzonden",
+ "title": "Je e-mail verifiëren"
+ },
+ "emailLink": {
+ "formSubtitle": "Gebruik de verificatielink die naar je e-mailadres is verzonden",
+ "formTitle": "Verificatielink",
+ "loading": {
+ "title": "Aanmelden..."
+ },
+ "resendButton": "Geen link ontvangen? Opnieuw verzenden",
+ "subtitle": "om door te gaan naar {{applicationName}}",
+ "title": "Je e-mail verifiëren",
+ "verified": {
+ "title": "Succesvol aangemeld"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Ga terug naar het nieuw geopende tabblad om door te gaan",
+ "subtitleNewTab": "Ga terug naar het vorige tabblad om door te gaan",
+ "title": "E-mail succesvol geverifieerd"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Voer de verificatiecode in die naar je telefoonnummer is verzonden",
+ "formTitle": "Verificatiecode",
+ "resendButton": "Geen code ontvangen? Opnieuw verzenden",
+ "subtitle": "Voer de verificatiecode in die naar je telefoon is verzonden",
+ "title": "Je telefoon verifiëren"
+ },
+ "start": {
+ "actionLink": "Aanmelden",
+ "actionText": "Heb je al een account?",
+ "subtitle": "Welkom! Vul de gegevens in om te beginnen",
+ "title": "Maak je account aan"
+ }
+ },
+ "socialButtonsBlockButton": "Doorgaan met {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "Aanmelding mislukt vanwege mislukte beveiligingsvalidaties. Vernieuw de pagina om het opnieuw te proberen of neem contact op met de ondersteuning voor meer hulp.",
+ "captcha_unavailable": "Aanmelding mislukt vanwege mislukte botvalidatie. Vernieuw de pagina om het opnieuw te proberen of neem contact op met de ondersteuning voor meer hulp.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Dit e-mailadres is al in gebruik. Probeer een ander e-mailadres.",
+ "form_identifier_exists__phone_number": "Dit telefoonnummer is al in gebruik. Probeer een ander telefoonnummer.",
+ "form_identifier_exists__username": "Deze gebruikersnaam is al in gebruik. Probeer een andere gebruikersnaam.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "E-mailadres moet een geldig e-mailadres zijn.",
+ "form_param_format_invalid__phone_number": "Telefoonnummer moet in een geldig internationaal formaat zijn.",
+ "form_param_max_length_exceeded__first_name": "Voornaam mag niet meer dan 256 tekens bevatten.",
+ "form_param_max_length_exceeded__last_name": "Achternaam mag niet meer dan 256 tekens bevatten.",
+ "form_param_max_length_exceeded__name": "Naam mag niet meer dan 256 tekens bevatten.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Uw wachtwoord is niet sterk genoeg.",
+ "form_password_pwned": "Dit wachtwoord is gevonden als onderdeel van een datalek en kan niet worden gebruikt, probeer in plaats daarvan een ander wachtwoord.",
+ "form_password_pwned__sign_in": "Dit wachtwoord is gevonden als onderdeel van een datalek en kan niet worden gebruikt, reset alstublieft uw wachtwoord.",
+ "form_password_size_in_bytes_exceeded": "Uw wachtwoord heeft het maximale aantal bytes overschreden, verkort het of verwijder enkele speciale tekens.",
+ "form_password_validation_failed": "Onjuist wachtwoord",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "U kunt uw laatste identificatie niet verwijderen.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Er is al een passkey geregistreerd op dit apparaat.",
+ "passkey_not_supported": "Passkeys worden niet ondersteund op dit apparaat.",
+ "passkey_pa_not_supported": "Registratie vereist een platformauthenticator, maar het apparaat ondersteunt dit niet.",
+ "passkey_registration_cancelled": "Registratie van de passkey is geannuleerd of verlopen.",
+ "passkey_retrieval_cancelled": "Verificatie van de passkey is geannuleerd of verlopen.",
+ "passwordComplexity": {
+ "maximumLength": "minder dan {{length}} tekens",
+ "minimumLength": "{{length}} of meer tekens",
+ "requireLowercase": "een kleine letter",
+ "requireNumbers": "een cijfer",
+ "requireSpecialCharacter": "een speciaal teken",
+ "requireUppercase": "een hoofdletter",
+ "sentencePrefix": "Uw wachtwoord moet bevatten"
+ },
+ "phone_number_exists": "Dit telefoonnummer is al in gebruik. Probeer een ander telefoonnummer.",
+ "zxcvbn": {
+ "couldBeStronger": "Uw wachtwoord werkt, maar kan sterker zijn. Probeer meer tekens toe te voegen.",
+ "goodPassword": "Uw wachtwoord voldoet aan alle vereisten.",
+ "notEnough": "Uw wachtwoord is niet sterk genoeg.",
+ "suggestions": {
+ "allUppercase": "Maak sommige letters hoofdletters, maar niet allemaal.",
+ "anotherWord": "Voeg meer woorden toe die minder gebruikelijk zijn.",
+ "associatedYears": "Vermijd jaren die met u geassocieerd zijn.",
+ "capitalization": "Maak meer dan alleen de eerste letter hoofdletter.",
+ "dates": "Vermijd data en jaren die met u geassocieerd zijn.",
+ "l33t": "Vermijd voorspelbare lettervervangingen zoals '@' voor 'a'.",
+ "longerKeyboardPattern": "Gebruik langere toetsenbordpatronen en verander meerdere keren van typrichting.",
+ "noNeed": "U kunt sterke wachtwoorden maken zonder symbolen, cijfers of hoofdletters te gebruiken.",
+ "pwned": "Als u dit wachtwoord elders gebruikt, moet u het wijzigen.",
+ "recentYears": "Vermijd recente jaren.",
+ "repeated": "Vermijd herhaalde woorden en tekens.",
+ "reverseWords": "Vermijd omgekeerde spellingen van veelvoorkomende woorden.",
+ "sequences": "Vermijd veelvoorkomende tekenreeksen.",
+ "useWords": "Gebruik meerdere woorden, maar vermijd veelvoorkomende zinnen."
+ },
+ "warnings": {
+ "common": "Dit is een veelgebruikt wachtwoord.",
+ "commonNames": "Veelvoorkomende voor- en achternamen zijn gemakkelijk te raden.",
+ "dates": "Data zijn gemakkelijk te raden.",
+ "extendedRepeat": "Herhaalde tekenpatronen zoals \"abcabcabc\" zijn gemakkelijk te raden.",
+ "keyPattern": "Korte toetsenbordpatronen zijn gemakkelijk te raden.",
+ "namesByThemselves": "Enkele namen of achternamen zijn gemakkelijk te raden.",
+ "pwned": "Uw wachtwoord is blootgesteld door een datalek op het internet.",
+ "recentYears": "Recente jaren zijn gemakkelijk te raden.",
+ "sequences": "Veelvoorkomende tekenreeksen zoals \"abc\" zijn gemakkelijk te raden.",
+ "similarToCommon": "Dit lijkt op een veelgebruikt wachtwoord.",
+ "simpleRepeat": "Herhaalde tekens zoals \"aaa\" zijn gemakkelijk te raden.",
+ "straightRow": "Rechte rijen toetsen op uw toetsenbord zijn gemakkelijk te raden.",
+ "topHundred": "Dit is een veelgebruikt wachtwoord.",
+ "topTen": "Dit is een zeer veelgebruikt wachtwoord.",
+ "userInputs": "Vermijd persoonlijke of paginagerelateerde gegevens.",
+ "wordByItself": "Enkele woorden zijn gemakkelijk te raden."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Account toevoegen",
+ "action__manageAccount": "Account beheren",
+ "action__signOut": "Afmelden",
+ "action__signOutAll": "Afmelden bij alle accounts"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Gekopieerd!",
+ "actionLabel__copy": "Alles kopiëren",
+ "actionLabel__download": "Download .txt",
+ "actionLabel__print": "Afdrukken",
+ "infoText1": "Back-upcodes worden ingeschakeld voor dit account.",
+ "infoText2": "Houd de back-upcodes geheim en bewaar ze veilig. U kunt back-upcodes opnieuw genereren als u vermoedt dat ze zijn gecompromitteerd.",
+ "subtitle__codelist": "Bewaar ze veilig en houd ze geheim.",
+ "successMessage": "Back-upcodes zijn nu ingeschakeld. U kunt een van deze gebruiken om in te loggen op uw account als u geen toegang meer heeft tot uw verificatieapparaat. Elke code kan slechts eenmaal worden gebruikt.",
+ "successSubtitle": "U kunt een van deze gebruiken om in te loggen op uw account als u geen toegang meer heeft tot uw verificatieapparaat.",
+ "title": "Back-upcodeverificatie toevoegen",
+ "title__codelist": "Back-upcodes"
+ },
+ "connectedAccountPage": {
+ "formHint": "Selecteer een provider om uw account te koppelen.",
+ "formHint__noAccounts": "Er zijn geen beschikbare externe accountproviders.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} wordt verwijderd van dit account.",
+ "messageLine2": "U kunt dit gekoppelde account niet langer gebruiken en eventuele afhankelijke functies zullen niet meer werken.",
+ "successMessage": "{{connectedAccount}} is verwijderd van uw account.",
+ "title": "Gekoppeld account verwijderen"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "De provider is toegevoegd aan uw account",
+ "title": "Gekoppeld account toevoegen"
+ },
+ "deletePage": {
+ "actionDescription": "Typ \"Account verwijderen\" hieronder om door te gaan.",
+ "confirm": "Account verwijderen",
+ "messageLine1": "Weet u zeker dat u uw account wilt verwijderen?",
+ "messageLine2": "Deze actie is permanent en onomkeerbaar.",
+ "title": "Account verwijderen"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "Er wordt een e-mail met een verificatiecode naar dit e-mailadres gestuurd.",
+ "formSubtitle": "Voer de verificatiecode in die naar {{identifier}} is gestuurd.",
+ "formTitle": "Verificatiecode",
+ "resendButton": "Geen code ontvangen? Opnieuw verzenden",
+ "successMessage": "Het e-mailadres {{identifier}} is toegevoegd aan uw account."
+ },
+ "emailLink": {
+ "formHint": "Er wordt een e-mail met een verificatielink naar dit e-mailadres gestuurd.",
+ "formSubtitle": "Klik op de verificatielink in de e-mail die naar {{identifier}} is gestuurd.",
+ "formTitle": "Verificatielink",
+ "resendButton": "Geen link ontvangen? Opnieuw verzenden",
+ "successMessage": "Het e-mailadres {{identifier}} is toegevoegd aan uw account."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} wordt verwijderd van dit account.",
+ "messageLine2": "U kunt niet langer inloggen met dit e-mailadres.",
+ "successMessage": "{{emailAddress}} is verwijderd van uw account.",
+ "title": "E-mailadres verwijderen"
+ },
+ "title": "E-mailadres toevoegen",
+ "verifyTitle": "E-mailadres verifiëren"
+ },
+ "formButtonPrimary__add": "Toevoegen",
+ "formButtonPrimary__continue": "Doorgaan",
+ "formButtonPrimary__finish": "Voltooien",
+ "formButtonPrimary__remove": "Verwijderen",
+ "formButtonPrimary__save": "Opslaan",
+ "formButtonReset": "Annuleren",
+ "mfaPage": {
+ "formHint": "Selecteer een methode om toe te voegen.",
+ "title": "Tweestapsverificatie toevoegen"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Bestaand nummer gebruiken",
+ "primaryButton__addPhoneNumber": "Telefoonnummer toevoegen",
+ "removeResource": {
+ "messageLine1": "{{identifier}} ontvangt geen verificatiecodes meer bij het inloggen.",
+ "messageLine2": "Uw account is mogelijk minder veilig. Weet u zeker dat u wilt doorgaan?",
+ "successMessage": "SMS-code tweestapsverificatie is verwijderd voor {{mfaPhoneCode}}",
+ "title": "Tweestapsverificatie verwijderen"
+ },
+ "subtitle__availablePhoneNumbers": "Selecteer een bestaand telefoonnummer om te registreren voor SMS-code tweestapsverificatie of voeg een nieuw nummer toe.",
+ "subtitle__unavailablePhoneNumbers": "Er zijn geen beschikbare telefoonnummers om te registreren voor SMS-code tweestapsverificatie, voeg alstublieft een nieuw nummer toe.",
+ "successMessage1": "Bij het inloggen moet u een verificatiecode invoeren die naar dit telefoonnummer is gestuurd als extra stap.",
+ "successMessage2": "Sla deze back-upcodes op en bewaar ze op een veilige plek. Als u geen toegang meer heeft tot uw verificatieapparaat, kunt u back-upcodes gebruiken om in te loggen.",
+ "successTitle": "SMS-codeverificatie ingeschakeld",
+ "title": "SMS-codeverificatie toevoegen"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "In plaats daarvan QR-code scannen",
+ "buttonUnableToScan__nonPrimary": "Kan QR-code niet scannen?",
+ "infoText__ableToScan": "Stel een nieuwe aanmeldmethode in in uw authenticator-app en scan de volgende QR-code om deze aan uw account te koppelen.",
+ "infoText__unableToScan": "Stel een nieuwe aanmeldmethode in in uw authenticator en voer de onderstaande sleutel in.",
+ "inputLabel__unableToScan1": "Zorg ervoor dat Tijdgebaseerde of Eenmalige wachtwoorden zijn ingeschakeld en voltooi vervolgens het koppelen van uw account.",
+ "inputLabel__unableToScan2": "Als uw authenticator TOTP URI's ondersteunt, kunt u ook de volledige URI kopiëren."
+ },
+ "removeResource": {
+ "messageLine1": "Verificatiecodes van deze authenticator zijn niet langer vereist bij het inloggen.",
+ "messageLine2": "Uw account is mogelijk minder veilig. Weet u zeker dat u wilt doorgaan?",
+ "successMessage": "Tweestapsverificatie via authenticator-applicatie is verwijderd.",
+ "title": "Tweestapsverificatie verwijderen"
+ },
+ "successMessage": "Tweestapsverificatie is nu ingeschakeld. Bij het inloggen moet u een verificatiecode van deze authenticator invoeren als extra stap.",
+ "title": "Authenticator-applicatie toevoegen",
+ "verifySubtitle": "Voer de verificatiecode in die is gegenereerd door uw authenticator",
+ "verifyTitle": "Verificatiecode"
+ },
+ "mobileButton__menu": "Menu",
+ "navbar": {
+ "account": "Profiel",
+ "description": "Beheer uw accountgegevens.",
+ "security": "Beveiliging",
+ "title": "Account"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} wordt van dit account verwijderd.",
+ "title": "Passkey verwijderen"
+ },
+ "subtitle__rename": "U kunt de naam van de passkey wijzigen om deze gemakkelijker te vinden.",
+ "title__rename": "Passkey hernoemen"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "Het wordt aanbevolen om uit te loggen op alle andere apparaten die uw oude wachtwoord hebben gebruikt.",
+ "readonly": "Uw wachtwoord kan momenteel niet worden bewerkt omdat u alleen kunt inloggen via de bedrijfsverbinding.",
+ "successMessage__set": "Uw wachtwoord is ingesteld.",
+ "successMessage__signOutOfOtherSessions": "Alle andere apparaten zijn uitgelogd.",
+ "successMessage__update": "Uw wachtwoord is bijgewerkt.",
+ "title__set": "Wachtwoord instellen",
+ "title__update": "Wachtwoord bijwerken"
+ },
+ "phoneNumberPage": {
+ "infoText": "Een sms met een verificatiecode wordt naar dit telefoonnummer gestuurd. Bericht- en datatarieven kunnen van toepassing zijn.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} wordt van dit account verwijderd.",
+ "messageLine2": "U kunt niet langer inloggen met dit telefoonnummer.",
+ "successMessage": "{{phoneNumber}} is verwijderd uit uw account.",
+ "title": "Telefoonnummer verwijderen"
+ },
+ "successMessage": "{{identifier}} is toegevoegd aan uw account.",
+ "title": "Telefoonnummer toevoegen",
+ "verifySubtitle": "Voer de verificatiecode in die naar {{identifier}} is gestuurd.",
+ "verifyTitle": "Telefoonnummer verifiëren"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Aanbevolen grootte 1:1, tot 10 MB.",
+ "imageFormDestructiveActionSubtitle": "Verwijderen",
+ "imageFormSubtitle": "Uploaden",
+ "imageFormTitle": "Profielafbeelding",
+ "readonly": "Uw profielinformatie is verstrekt via de bedrijfsverbinding en kan niet worden bewerkt.",
+ "successMessage": "Uw profiel is bijgewerkt.",
+ "title": "Profiel bijwerken"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Uitloggen van apparaat",
+ "title": "Actieve apparaten"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Probeer opnieuw",
+ "actionLabel__reauthorize": "Nu autoriseren",
+ "destructiveActionTitle": "Verwijderen",
+ "primaryButton": "Verbind account",
+ "subtitle__reauthorize": "De vereiste rechten zijn bijgewerkt en u kunt beperkte functionaliteit ervaren. Autoriseer deze applicatie opnieuw om problemen te voorkomen",
+ "title": "Gekoppelde accounts"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Account verwijderen",
+ "title": "Account verwijderen"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "E-mail verwijderen",
+ "detailsAction__nonPrimary": "Instellen als primair",
+ "detailsAction__primary": "Verificatie voltooien",
+ "detailsAction__unverified": "Verifiëren",
+ "primaryButton": "E-mailadres toevoegen",
+ "title": "E-mailadressen"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Zakelijke accounts"
+ },
+ "headerTitle__account": "Profielgegevens",
+ "headerTitle__security": "Beveiliging",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Regenereren",
+ "headerTitle": "Back-upcodes",
+ "subtitle__regenerate": "Ontvang een nieuwe set veilige back-upcodes. Eerdere back-upcodes worden verwijderd en kunnen niet meer worden gebruikt.",
+ "title__regenerate": "Back-upcodes regenereren"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Als standaard instellen",
+ "destructiveActionLabel": "Verwijderen"
+ },
+ "primaryButton": "Tweestapsverificatie toevoegen",
+ "title": "Tweestapsverificatie",
+ "totp": {
+ "destructiveActionTitle": "Verwijderen",
+ "headerTitle": "Authenticator-applicatie"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Verwijderen",
+ "menuAction__rename": "Hernoemen",
+ "title": "Wachtwoorden"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Wachtwoord instellen",
+ "primaryButton__updatePassword": "Wachtwoord bijwerken",
+ "title": "Wachtwoord"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Telefoonnummer verwijderen",
+ "detailsAction__nonPrimary": "Instellen als primair",
+ "detailsAction__primary": "Verificatie voltooien",
+ "detailsAction__unverified": "Telefoonnummer verifiëren",
+ "primaryButton": "Telefoonnummer toevoegen",
+ "title": "Telefoonnummers"
+ },
+ "profileSection": {
+ "primaryButton": "Profiel bijwerken",
+ "title": "Profiel"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Gebruikersnaam instellen",
+ "primaryButton__updateUsername": "Gebruikersnaam bijwerken",
+ "title": "Gebruikersnaam"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Portemonnee verwijderen",
+ "primaryButton": "Web3-portefeuilles",
+ "title": "Web3-portefeuilles"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Uw gebruikersnaam is bijgewerkt.",
+ "title__set": "Gebruikersnaam instellen",
+ "title__update": "Gebruikersnaam bijwerken"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} wordt van dit account verwijderd.",
+ "messageLine2": "U kunt niet langer inloggen met deze web3-portemonnee.",
+ "successMessage": "{{web3Wallet}} is verwijderd uit uw account.",
+ "title": "Web3-portemonnee verwijderen"
+ },
+ "subtitle__availableWallets": "Selecteer een web3-portemonnee om verbinding te maken met uw account.",
+ "subtitle__unavailableWallets": "Er zijn geen beschikbare web3-portemonnees.",
+ "successMessage": "De portemonnee is toegevoegd aan uw account.",
+ "title": "Web3-portemonnee toevoegen"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/common.json b/DigitalHumanWeb/locales/nl-NL/common.json
new file mode 100644
index 0000000..1603c76
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "Over",
+ "advanceSettings": "Geavanceerde instellingen",
+ "alert": {
+ "cloud": {
+ "action": "Gratis proberen",
+ "desc": "We bieden alle geregistreerde gebruikers {{credit}} gratis rekenpunten aan, zonder ingewikkelde configuratie, direct klaar voor gebruik. Ondersteunt onbeperkte chatgeschiedenis en wereldwijde cloudsynchronisatie. Ontdek samen meer geavanceerde functies.",
+ "descOnMobile": "We bieden alle geregistreerde gebruikers {{credit}} gratis rekenpunten aan, zonder ingewikkelde configuratie direct klaar voor gebruik.",
+ "title": "Welkom bij {{name}}"
+ }
+ },
+ "appInitializing": "Applicatie wordt gestart...",
+ "autoGenerate": "Automatisch genereren",
+ "autoGenerateTooltip": "Automatisch assistentbeschrijving genereren op basis van suggesties",
+ "autoGenerateTooltipDisabled": "Schakel de automatische aanvulling in nadat u een suggestiewoord heeft ingevoerd",
+ "back": "Terug",
+ "batchDelete": "Batch verwijderen",
+ "blog": "Product Blog",
+ "cancel": "Annuleren",
+ "changelog": "Wijzigingslogboek",
+ "close": "Sluiten",
+ "contact": "Contacteer ons",
+ "copy": "Kopiëren",
+ "copyFail": "Kopiëren mislukt",
+ "copySuccess": "Kopiëren gelukt",
+ "dataStatistics": {
+ "messages": "Berichten",
+ "sessions": "Sessies",
+ "today": "Vandaag",
+ "topics": "Onderwerpen"
+ },
+ "defaultAgent": "Standaard assistent",
+ "defaultSession": "Standaard assistent",
+ "delete": "Verwijderen",
+ "document": "Gebruiksaanwijzing",
+ "download": "Downloaden",
+ "duplicate": "Dupliceren",
+ "edit": "Bewerken",
+ "export": "Exporteren",
+ "exportType": {
+ "agent": "Assistentinstellingen exporteren",
+ "agentWithMessage": "Assistent en berichten exporteren",
+ "all": "Algemene instellingen en alle assistentgegevens exporteren",
+ "allAgent": "Alle assistentinstellingen exporteren",
+ "allAgentWithMessage": "Alle assistenten en berichten exporteren",
+ "globalSetting": "Algemene instellingen exporteren"
+ },
+ "feedback": "Feedback en suggesties",
+ "follow": "Volg ons op {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Deel uw waardevolle feedback",
+ "star": "Voeg een ster toe op GitHub"
+ },
+ "and": "en",
+ "feedback": {
+ "action": "Feedback delen",
+ "desc": "Elk van uw ideeën en suggesties is van onschatbare waarde. We horen graag uw mening! Neem gerust contact met ons op voor feedback over productfuncties en gebruikerservaring om LobeChat te helpen verbeteren.",
+ "title": "Deel uw waardevolle feedback op GitHub"
+ },
+ "later": "Later",
+ "star": {
+ "action": "Geef een ster",
+ "desc": "Als u van ons product houdt en ons wilt steunen, kunt u ons dan een ster geven op GitHub? Deze kleine daad is voor ons van grote betekenis en zal ons motiveren om u voortdurend een geweldige ervaring te bieden.",
+ "title": "Geef ons een ster op GitHub"
+ },
+ "title": "Houdt u van ons product?"
+ },
+ "fullscreen": "Volledig scherm",
+ "historyRange": "Geschiedenisbereik",
+ "import": "Importeren",
+ "importModal": {
+ "error": {
+ "desc": "Sorry, er is een uitzondering opgetreden tijdens het importeren van gegevens. Probeer opnieuw te importeren of <1>diend een probleem in1>, we zullen het probleem zo snel mogelijk voor je onderzoeken.",
+ "title": "Gegevens importeren mislukt"
+ },
+ "finish": {
+ "onlySettings": "Systeeminstellingen succesvol geïmporteerd",
+ "start": "Beginnen met gebruiken",
+ "subTitle": "Gegevens succesvol geïmporteerd, duurde {{duration}} seconden. Details van de import:",
+ "title": "Gegevensimport voltooid"
+ },
+ "loading": "Gegevens worden geïmporteerd, even geduld a.u.b...",
+ "preparing": "Voorbereiden van gegevensimportmodule...",
+ "result": {
+ "added": "Succesvol geïmporteerd",
+ "errors": "Fouten bij importeren",
+ "messages": "Berichten",
+ "sessionGroups": "Sessiegroepen",
+ "sessions": "Assistenten",
+ "skips": "Overslaan van duplicaten",
+ "topics": "Onderwerpen",
+ "type": "Gegevenstype"
+ },
+ "title": "Gegevens importeren",
+ "uploading": {
+ "desc": "Het bestand is momenteel aan het uploaden vanwege de grote omvang...",
+ "restTime": "Resterende tijd",
+ "speed": "Uploadsnelheid"
+ }
+ },
+ "information": "Gemeenschap en Informatie",
+ "installPWA": "Installeer de browser-app",
+ "lang": {
+ "ar": "Arabisch",
+ "bg-BG": "Bulgaars",
+ "bn": "Bengaals",
+ "cs-CZ": "Tsjechisch",
+ "da-DK": "Deens",
+ "de-DE": "Duits",
+ "el-GR": "Grieks",
+ "en": "Engels",
+ "en-US": "Engels",
+ "es-ES": "Spaans",
+ "fi-FI": "Fins",
+ "fr-FR": "Frans",
+ "hi-IN": "Hindi",
+ "hu-HU": "Hongaars",
+ "id-ID": "Indonesisch",
+ "it-IT": "Italiaans",
+ "ja-JP": "Japans",
+ "ko-KR": "Koreaans",
+ "nl-NL": "Nederlands",
+ "no-NO": "Noors",
+ "pl-PL": "Pools",
+ "pt-BR": "Braziliaans Portugees",
+ "pt-PT": "Portugees",
+ "ro-RO": "Roemeens",
+ "ru-RU": "Russisch",
+ "sk-SK": "Slowaaks",
+ "sr-RS": "Servisch",
+ "sv-SE": "Zweeds",
+ "th-TH": "Thais",
+ "tr-TR": "Turks",
+ "uk-UA": "Oekraïens",
+ "vi-VN": "Vietnamees",
+ "zh": "Chinees",
+ "zh-CN": "Vereenvoudigd Chinees",
+ "zh-TW": "Traditioneel Chinees"
+ },
+ "layoutInitializing": "Lay-out wordt geladen...",
+ "legal": "Juridisch",
+ "loading": "Laden...",
+ "mail": {
+ "business": "Zakelijke samenwerking",
+ "support": "E-mailondersteuning"
+ },
+ "oauth": "SSO inloggen",
+ "officialSite": "Officiële website",
+ "ok": "Oké",
+ "password": "Wachtwoord",
+ "pin": "Vastzetten",
+ "pinOff": "Vastzetten uitschakelen",
+ "privacy": "Privacybeleid",
+ "regenerate": "Opnieuw genereren",
+ "rename": "Naam wijzigen",
+ "reset": "Resetten",
+ "retry": "Opnieuw proberen",
+ "send": "Verzenden",
+ "setting": "Instelling",
+ "share": "Delen",
+ "stop": "Stoppen",
+ "sync": {
+ "actions": {
+ "settings": "Synchronisatie-instellingen",
+ "sync": "Nu synchroniseren"
+ },
+ "awareness": {
+ "current": "Huidig apparaat"
+ },
+ "channel": "Kanaal",
+ "disabled": {
+ "actions": {
+ "enable": "Cloudsynchronisatie inschakelen",
+ "settings": "Synchronisatie-instellingen configureren"
+ },
+ "desc": "De huidige gespreksgegevens worden alleen opgeslagen in deze browser. Als u gegevens wilt synchroniseren tussen meerdere apparaten, configureer en schakel dan cloudsynchronisatie in.",
+ "title": "Gegevenssynchronisatie is uitgeschakeld"
+ },
+ "enabled": {
+ "title": "Gegevenssynchronisatie"
+ },
+ "status": {
+ "connecting": "Verbinding maken",
+ "disabled": "Synchronisatie is uitgeschakeld",
+ "ready": "Verbonden",
+ "synced": "Gesynchroniseerd",
+ "syncing": "Synchroniseren",
+ "unconnected": "Verbinding mislukt"
+ },
+ "title": "Synchronisatiestatus",
+ "unconnected": {
+ "tip": "Verbindingsfout met de signaleringsserver. Er kan geen point-to-point-communicatiekanaal worden opgezet. Controleer het netwerk en probeer het opnieuw."
+ }
+ },
+ "tab": {
+ "chat": "Chat",
+ "discover": "Ontdekken",
+ "files": "Bestanden",
+ "me": "Ik",
+ "setting": "Instellingen"
+ },
+ "telemetry": {
+ "allow": "Toestaan",
+ "deny": "Weigeren",
+ "desc": "We willen graag anonieme gebruiksgegevens verzamelen om LobeChat te verbeteren en een betere productervaring te bieden. Je kunt dit op elk moment uitschakelen in 'Instellingen' - 'Over'.",
+ "learnMore": "Meer informatie",
+ "title": "Help LobeChat verbeteren"
+ },
+ "temp": "tijdelijk",
+ "terms": "algemene voorwaarden",
+ "updateAgent": "update assistent",
+ "upgradeVersion": {
+ "action": "upgraden",
+ "hasNew": "nieuwe versie beschikbaar",
+ "newVersion": "nieuwe versie beschikbaar: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "anonieme gebruiker",
+ "billing": "facturatie",
+ "cloud": "Ervaar {{name}}",
+ "data": "gegevensopslag",
+ "defaultNickname": "communitygebruiker",
+ "discord": "communityondersteuning",
+ "docs": "gebruiksaanwijzing",
+ "email": "e-mailondersteuning",
+ "feedback": "feedback en suggesties",
+ "help": "helpcentrum",
+ "moveGuide": "instellingen verplaatst naar hier",
+ "plans": "abonnementen",
+ "preview": "voorbeeldversie",
+ "profile": "accountbeheer",
+ "setting": "app-instellingen",
+ "usages": "gebruiksstatistieken"
+ },
+ "version": "Versie"
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/components.json b/DigitalHumanWeb/locales/nl-NL/components.json
new file mode 100644
index 0000000..83734c7
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Sleep bestanden hierheen om meerdere afbeeldingen te uploaden.",
+ "dragFileDesc": "Sleep afbeeldingen en bestanden hierheen om meerdere afbeeldingen en bestanden te uploaden.",
+ "dragFileTitle": "Bestanden uploaden",
+ "dragTitle": "Afbeeldingen uploaden"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Toevoegen aan kennisbank",
+ "addToOtherKnowledgeBase": "Toevoegen aan andere kennisbank",
+ "batchChunking": "Batchverdeling",
+ "chunking": "Verdeling",
+ "chunkingTooltip": "Splits het bestand in meerdere tekstblokken en vectoriseer deze voor semantische zoekopdrachten en bestandsdialoog",
+ "confirmDelete": "Je staat op het punt dit bestand te verwijderen. Na verwijdering kan het niet meer worden hersteld. Bevestig je actie.",
+ "confirmDeleteMultiFiles": "Je staat op het punt de geselecteerde {{count}} bestanden te verwijderen. Na verwijdering kunnen ze niet meer worden hersteld. Bevestig je actie.",
+ "confirmRemoveFromKnowledgeBase": "Je staat op het punt de geselecteerde {{count}} bestanden uit de kennisbank te verwijderen. Na verwijdering zijn de bestanden nog steeds zichtbaar in alle bestanden. Bevestig je actie.",
+ "copyUrl": "Kopieer link",
+ "copyUrlSuccess": "Bestandsadres succesvol gekopieerd",
+ "createChunkingTask": "Voorbereiden...",
+ "deleteSuccess": "Bestand succesvol verwijderd",
+ "downloading": "Bestand aan het downloaden...",
+ "removeFromKnowledgeBase": "Verwijderen uit kennisbank",
+ "removeFromKnowledgeBaseSuccess": "Bestand succesvol verwijderd"
+ },
+ "bottom": "Je bent onderaan aangekomen",
+ "config": {
+ "showFilesInKnowledgeBase": "Toon inhoud in kennisbank"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Bestand uploaden",
+ "folder": "Map uploaden",
+ "knowledgeBase": "Nieuwe kennisbank aanmaken"
+ },
+ "or": "of",
+ "title": "Sleep bestanden of mappen hierheen"
+ },
+ "title": {
+ "createdAt": "Aangemaakt op",
+ "size": "Grootte",
+ "title": "Bestand"
+ },
+ "total": {
+ "fileCount": "Totaal {{count}} items",
+ "selectedCount": "Geselecteerd {{count}} items"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Tekstblokken zijn nog niet volledig gevectoriseerd, wat de semantische zoekfunctie kan uitschakelen. Om de zoekkwaliteit te verbeteren, vectoriseer de tekstblokken.",
+ "error": "Vectorisatie mislukt",
+ "errorResult": "Vectorisatie mislukt, controleer en probeer het opnieuw. Reden van falen:",
+ "processing": "Tekstblokken worden gevectoriseerd, graag even geduld.",
+ "success": "Huidige tekstblokken zijn allemaal gevectoriseerd"
+ },
+ "embeddings": "Vectorisatie",
+ "status": {
+ "error": "Verdeling mislukt",
+ "errorResult": "Verdeling mislukt, controleer en probeer het opnieuw. Reden van falen:",
+ "processing": "Bezig met verdelen",
+ "processingTip": "De server is bezig met het splitsen van tekstblokken, het sluiten van de pagina heeft geen invloed op de voortgang van de verdeling."
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Terug"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Custom model, by default, supports both function call and visual recognition. Please verify the availability of the above capabilities based on actual needs.",
+ "file": "This model supports file upload for reading and recognition.",
+ "functionCall": "This model supports function call.",
+ "tokens": "This model supports up to {{tokens}} tokens in a single session.",
+ "vision": "This model supports visual recognition."
+ },
+ "removed": "Dit model staat niet meer in de lijst. Als je het deselecteert, wordt het automatisch verwijderd."
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "No enabled model, please go to settings to enable.",
+ "provider": "Provider"
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/discover.json b/DigitalHumanWeb/locales/nl-NL/discover.json
new file mode 100644
index 0000000..5ff3bd3
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Voeg assistent toe",
+ "addAgentAndConverse": "Voeg assistent toe en begin een gesprek",
+ "addAgentSuccess": "Toevoegen geslaagd",
+ "conversation": {
+ "l1": "Hallo, ik ben **{{name}}**, je kunt me alles vragen en ik zal mijn best doen om je te helpen ~",
+ "l2": "Hier zijn mijn mogelijkheden: ",
+ "l3": "Laten we het gesprek beginnen!"
+ },
+ "description": "Assistent introductie",
+ "detail": "Details",
+ "list": "Assistentenlijst",
+ "more": "Meer",
+ "plugins": "Geïntegreerde plugins",
+ "recentSubmits": "Recent bijgewerkt",
+ "suggestions": "Gerelateerde aanbevelingen",
+ "systemRole": "Assistent instellingen",
+ "try": "Probeer het"
+ },
+ "back": "Terug naar Ontdekken",
+ "category": {
+ "assistant": {
+ "academic": "Academisch",
+ "all": "Alle",
+ "career": "Carrière",
+ "copywriting": "Copywriting",
+ "design": "Ontwerp",
+ "education": "Onderwijs",
+ "emotions": "Emoties",
+ "entertainment": "Entertainment",
+ "games": "Spellen",
+ "general": "Algemeen",
+ "life": "Leven",
+ "marketing": "Marketing",
+ "office": "Kantoor",
+ "programming": "Programmeren",
+ "translation": "Vertaling"
+ },
+ "plugin": {
+ "all": "Alles",
+ "gaming-entertainment": "Gaming en Entertainment",
+ "life-style": "Levensstijl",
+ "media-generate": "Media Generatie",
+ "science-education": "Wetenschap en Educatie",
+ "social": "Sociale Media",
+ "stocks-finance": "Aandelen en Financiën",
+ "tools": "Hulpmiddelen",
+ "web-search": "Webzoektocht"
+ }
+ },
+ "cleanFilter": "Filter wissen",
+ "create": "Creëren",
+ "createGuide": {
+ "func1": {
+ "desc1": "Ga naar de instellingenpagina van de assistent die je wilt indienen via de instellingen in de rechterbovenhoek van het gespreksscherm;",
+ "desc2": "Klik op de knop 'Indienen naar assistentmarkt' in de rechterbovenhoek.",
+ "tag": "Methode één",
+ "title": "Indienen via LobeChat"
+ },
+ "func2": {
+ "button": "Ga naar de Github assistent repository",
+ "desc": "Als je de assistent aan de index wilt toevoegen, maak dan een item aan in de plugins map met agent-template.json of agent-template-full.json, schrijf een korte beschrijving en markeer het correct, en dien vervolgens een pull request in.",
+ "tag": "Methode twee",
+ "title": "Indienen via Github"
+ }
+ },
+ "dislike": "Niet leuk",
+ "filter": "Filter",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Alle auteurs",
+ "followed": "Gevolgde auteurs",
+ "title": "Auteur bereik"
+ },
+ "contentLength": "Minimale contextlengte",
+ "maxToken": {
+ "title": "Stel maximale lengte in (Token)",
+ "unlimited": "Onbeperkt"
+ },
+ "other": {
+ "functionCall": "Ondersteunt functie-aanroep",
+ "title": "Overige",
+ "vision": "Ondersteunt visuele herkenning",
+ "withKnowledge": "Met kennisbank",
+ "withTool": "Met plugin"
+ },
+ "pricing": "Modelprijs",
+ "timePeriod": {
+ "all": "Alle tijd",
+ "day": "Laatste 24 uur",
+ "month": "Laatste 30 dagen",
+ "title": "Tijd bereik",
+ "week": "Laatste 7 dagen",
+ "year": "Laatste jaar"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Aanbevolen assistenten",
+ "featuredModels": "Aanbevolen modellen",
+ "featuredProviders": "Aanbevolen modelproviders",
+ "featuredTools": "Aanbevolen plugins",
+ "more": "Ontdek meer"
+ },
+ "like": "Leuk",
+ "models": {
+ "chat": "Begin gesprek",
+ "contentLength": "Maximale contextlengte",
+ "free": "Gratis",
+ "guide": "Configuratiegids",
+ "list": "Modellenlijst",
+ "more": "Meer",
+ "parameterList": {
+ "defaultValue": "Standaardwaarde",
+ "docs": "Bekijk documentatie",
+ "frequency_penalty": {
+ "desc": "Deze instelling past de frequentie aan waarmee het model specifieke woorden die al in de invoer zijn verschenen, herhaalt. Hogere waarden verminderen de kans op herhaling, terwijl negatieve waarden het tegenovergestelde effect hebben. Woordstraffen nemen niet toe met het aantal verschijningen. Negatieve waarden moedigen herhaling aan.",
+ "title": "Frequentie straf"
+ },
+ "max_tokens": {
+ "desc": "Deze instelling definieert de maximale lengte die het model kan genereren in één enkele reactie. Een hogere waarde stelt het model in staat om langere antwoorden te genereren, terwijl een lagere waarde de lengte van de reactie beperkt, waardoor deze beknopter wordt. Het is raadzaam om deze waarde aan te passen op basis van verschillende toepassingsscenario's om de gewenste lengte en detailniveau van de reactie te bereiken.",
+ "title": "Beperking van de reactie in één keer"
+ },
+ "presence_penalty": {
+ "desc": "Deze instelling is bedoeld om de herhaling van woorden te beheersen op basis van hun frequentie in de invoer. Het probeert minder gebruik te maken van woorden die vaker in de invoer voorkomen, in verhouding tot hun gebruiksfrequentie. Woordstraffen nemen toe met het aantal verschijningen. Negatieve waarden moedigen herhaling aan.",
+ "title": "Onderwerp versheid"
+ },
+ "range": "Bereik",
+ "temperature": {
+ "desc": "Deze instelling beïnvloedt de diversiteit van de reacties van het model. Lagere waarden leiden tot meer voorspelbare en typische reacties, terwijl hogere waarden meer diverse en ongebruikelijke reacties aanmoedigen. Wanneer de waarde op 0 is ingesteld, geeft het model altijd dezelfde reactie op een gegeven invoer.",
+ "title": "Willekeurigheid"
+ },
+ "title": "Modelparameters",
+ "top_p": {
+ "desc": "Deze instelling beperkt de keuze van het model tot een bepaald percentage van de meest waarschijnlijke woorden: alleen die woorden waarvan de cumulatieve waarschijnlijkheid P bereikt. Lagere waarden maken de reacties van het model voorspelbaarder, terwijl de standaardinstelling het model toestaat om uit het volledige bereik van woorden te kiezen.",
+ "title": "Nucleus sampling"
+ },
+ "type": "Type"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat ondersteunt het gebruik van een aangepaste API-sleutel voor deze provider.",
+ "input": "Invoerkosten",
+ "inputTooltip": "Kosten per miljoen tokens",
+ "latency": "Latentie",
+ "latencyTooltip": "Gemiddelde responstijd voor de eerste token van de provider",
+ "maxOutput": "Maximale uitvoerlengte",
+ "maxOutputTooltip": "Maximaal aantal tokens dat deze endpoint kan genereren",
+ "officialTooltip": "Officiële LobeHub service",
+ "output": "Uitvoerkosten",
+ "outputTooltip": "Kosten per miljoen tokens",
+ "streamCancellationTooltip": "Deze provider ondersteunt stream annulering.",
+ "throughput": "Doorvoer",
+ "throughputTooltip": "Gemiddeld aantal tokens dat per seconde wordt verzonden in stream aanvragen"
+ },
+ "suggestions": "Gerelateerde modellen",
+ "supportedProviders": "Providers die dit model ondersteunen"
+ },
+ "plugins": {
+ "community": "Gemeenschapsplugins",
+ "install": "Plugin installeren",
+ "installed": "Geïnstalleerd",
+ "list": "Pluginlijst",
+ "meta": {
+ "description": "Beschrijving",
+ "parameter": "Parameter",
+ "title": "Hulpmiddelparameters",
+ "type": "Type"
+ },
+ "more": "Meer",
+ "official": "Officiële plugins",
+ "recentSubmits": "Recent ingediend",
+ "suggestions": "Gerelateerde aanbevelingen"
+ },
+ "providers": {
+ "config": "Configuratie aanbieder",
+ "list": "Lijst van modelproviders",
+ "modelCount": "{{count}} modellen",
+ "modelSite": "Modeldocumentatie",
+ "more": "Meer",
+ "officialSite": "Officiële website",
+ "showAllModels": "Toon alle modellen",
+ "suggestions": "Gerelateerde aanbieders",
+ "supportedModels": "Ondersteunde modellen"
+ },
+ "search": {
+ "placeholder": "Zoek naam, beschrijving of trefwoord...",
+ "result": "{{count}} resultaten over {{keyword}}",
+ "searching": "Zoeken..."
+ },
+ "sort": {
+ "mostLiked": "Meest Geliket",
+ "mostUsed": "Meest Gebruikt",
+ "newest": "Nieuwste Eerst",
+ "oldest": "Oudste Eerst",
+ "recommended": "Aanbevolen"
+ },
+ "tab": {
+ "assistants": "Assistenten",
+ "home": "Startpagina",
+ "models": "Modellen",
+ "plugins": "Plugins",
+ "providers": "Modelleveranciers"
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/error.json b/DigitalHumanWeb/locales/nl-NL/error.json
new file mode 100644
index 0000000..bdbc950
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Continue session",
+ "desc": "{{greeting}}, it's great to continue serving you. Let's pick up where we left off.",
+ "title": "Welcome back, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Terug naar startpagina",
+ "desc": "Probeer het later opnieuw of keer terug naar de bekende wereld",
+ "retry": "Opnieuw proberen",
+ "title": "Er is een probleem opgetreden op de pagina.."
+ },
+ "fetchError": "Verzoek mislukt",
+ "fetchErrorDetail": "Foutdetails",
+ "notFound": {
+ "backHome": "Terug naar startpagina",
+ "check": "Controleer of je URL correct is",
+ "desc": "We kunnen de pagina die je zoekt niet vinden",
+ "title": "Betreden onbekend terrein?"
+ },
+ "pluginSettings": {
+ "desc": "Voltooi de volgende instellingen om de plugin te gebruiken",
+ "title": "{{name}} Plugin Instellingen"
+ },
+ "response": {
+ "400": "Sorry, de server begrijpt uw verzoek niet. Controleer of uw verzoekparameters juist zijn",
+ "401": "Sorry, de server heeft uw verzoek geweigerd vanwege onvoldoende rechten of ongeldige authenticatie",
+ "403": "Sorry, de server heeft uw verzoek geweigerd omdat u geen toegang heeft tot deze inhoud",
+ "404": "Sorry, de server kan de door u gevraagde pagina of bron niet vinden. Controleer of uw URL juist is",
+ "405": "Sorry, de server ondersteunt de gebruikte verzoekmethode niet. Controleer of uw verzoekmethode juist is",
+ "406": "Sorry, de server kan het verzoek niet voltooien op basis van de kenmerken van de door u aangevraagde inhoud",
+ "407": "Sorry, u moet zich eerst aanmelden bij de proxy om door te gaan met dit verzoek",
+ "408": "Sorry, de server heeft een time-out tijdens het wachten op het verzoek. Controleer uw netwerkverbinding en probeer het opnieuw",
+ "409": "Sorry, er is een conflict met het verzoek en het kan niet worden verwerkt, mogelijk omdat de status van de bron niet compatibel is met het verzoek",
+ "410": "Sorry, de door u aangevraagde bron is permanent verwijderd en kan niet worden gevonden",
+ "411": "Sorry, de server kan het verzoek zonder geldige inhoudslengte niet verwerken",
+ "412": "Sorry, uw verzoek voldoet niet aan de voorwaarden van de server en kan niet worden voltooid",
+ "413": "Sorry, uw verzoek is te groot en kan niet worden verwerkt door de server",
+ "414": "Sorry, de URI van uw verzoek is te lang en kan niet worden verwerkt door de server",
+ "415": "Sorry, de server kan het verzoek met de bijgevoegde media-indeling niet verwerken",
+ "416": "Sorry, de server kan niet voldoen aan het bereik van uw verzoek",
+ "417": "Sorry, de server kan niet voldoen aan uw verwachtingen",
+ "422": "Sorry, uw verzoek is correct opgemaakt, maar vanwege semantische fouten kan er niet op worden gereageerd",
+ "423": "Sorry, de bron die u heeft aangevraagd is vergrendeld",
+ "424": "Sorry, vanwege een eerdere mislukte aanvraag kan het huidige verzoek niet worden voltooid",
+ "426": "Sorry, de server vereist dat uw client wordt geüpgraded naar een hogere protocolversie",
+ "428": "Sorry, de server vereist voorwaarden en uw verzoek moet de juiste voorwaardelijke kop bevatten",
+ "429": "Sorry, uw verzoek is te veel voor de server, probeer het later opnieuw",
+ "431": "Sorry, de kop van uw verzoek is te groot en kan niet worden verwerkt door de server",
+ "451": "Sorry, vanwege juridische redenen weigert de server deze bron te leveren",
+ "500": "Sorry, de server lijkt problemen te ondervinden en kan uw verzoek tijdelijk niet voltooien. Probeer het later opnieuw",
+ "502": "Sorry, de server lijkt de weg kwijt te zijn en kan tijdelijk geen service verlenen. Probeer het later opnieuw",
+ "503": "Sorry, de server kan uw verzoek momenteel niet verwerken vanwege overbelasting of onderhoud. Probeer het later opnieuw",
+ "504": "Sorry, de server heeft geen reactie ontvangen van de upstream server. Probeer het later opnieuw",
+ "AgentRuntimeError": "Lobe language model runtime execution error, please troubleshoot or retry based on the following information",
+ "FreePlanLimit": "U bent momenteel een gratis gebruiker en kunt deze functie niet gebruiken. Upgrade naar een betaald plan om door te gaan met gebruiken.",
+ "InvalidAccessCode": "Ongeldige toegangscode: het wachtwoord is onjuist of leeg. Voer de juiste toegangscode in of voeg een aangepaste API-sleutel toe.",
+ "InvalidBedrockCredentials": "Bedrock authentication failed, please check AccessKeyId/SecretAccessKey and retry",
+ "InvalidClerkUser": "Sorry, you are not currently logged in. Please log in or register an account to continue.",
+ "InvalidGithubToken": "Github Persoonlijke Toegangstoken is ongeldig of leeg, controleer de Github Persoonlijke Toegangstoken en probeer het opnieuw.",
+ "InvalidOllamaArgs": "Ollama-configuratie is onjuist, controleer de Ollama-configuratie en probeer het opnieuw",
+ "InvalidProviderAPIKey": "{{provider}} API-sleutel is onjuist of leeg. Controleer de {{provider}} API-sleutel en probeer het opnieuw.",
+ "LocationNotSupportError": "Sorry, your current location does not support this model service, possibly due to regional restrictions or service not being available. Please confirm if the current location supports using this service, or try using other location information.",
+ "NoOpenAIAPIKey": "OpenAI API-sleutel ontbreekt. Voeg een aangepaste OpenAI API-sleutel toe",
+ "OllamaBizError": "Fout bij het aanroepen van de Ollama-service, controleer de onderstaande informatie en probeer opnieuw",
+ "OllamaServiceUnavailable": "Ollama-service niet beschikbaar. Controleer of Ollama correct werkt en of de cross-origin configuratie van Ollama juist is ingesteld.",
+ "OpenAIBizError": "Er is een fout opgetreden bij het aanvragen van de OpenAI-service. Controleer de volgende informatie of probeer het opnieuw.",
+ "PluginApiNotFound": "Sorry, de API van de plug-inbeschrijvingslijst bestaat niet. Controleer of uw verzoeksmethode overeenkomt met de plug-inbeschrijvingslijst API",
+ "PluginApiParamsError": "Sorry, de validatie van de invoerparameters van de plug-in is mislukt. Controleer of de invoerparameters overeenkomen met de API-beschrijving",
+ "PluginFailToTransformArguments": "Sorry, the plugin failed to parse the arguments. Please try regenerating the assistant message or retry with a more powerful AI model with Tools Calling capability.",
+ "PluginGatewayError": "Sorry, er is een fout opgetreden in de plug-in gateway. Controleer of de plug-in gatewayconfiguratie juist is",
+ "PluginManifestInvalid": "Sorry, de validatie van de beschrijvingslijst van de plug-in is mislukt. Controleer of het formaat van de beschrijvingslijst correct is",
+ "PluginManifestNotFound": "Sorry, de server kon de beschrijvingslijst (manifest.json) van de plug-in niet vinden. Controleer of het adres van de plug-inbeschrijvingsbestand juist is",
+ "PluginMarketIndexInvalid": "Sorry, de plug-inindexvalidatie is mislukt. Controleer of het indexbestandsformaat correct is",
+ "PluginMarketIndexNotFound": "Sorry, de server kon de plug-inindex niet vinden. Controleer of het indexadres juist is",
+ "PluginMetaInvalid": "Sorry, de validatie van de plug-inmetadata is mislukt. Controleer of het formaat van de plug-inmetadata correct is",
+ "PluginMetaNotFound": "Sorry, de plug-in is niet gevonden in de index. Controleer of de plug-inconfiguratie in de index staat",
+ "PluginOpenApiInitError": "Sorry, initialisatie van de OpenAPI-client is mislukt. Controleer of de configuratie van OpenAPI juist is",
+ "PluginServerError": "Fout bij serverrespons voor plug-in. Controleer de foutinformatie hieronder voor uw plug-inbeschrijvingsbestand, plug-inconfiguratie of serverimplementatie",
+ "PluginSettingsInvalid": "Deze plug-in moet correct geconfigureerd zijn voordat deze kan worden gebruikt. Controleer of uw configuratie juist is",
+ "ProviderBizError": "Er is een fout opgetreden bij het aanvragen van de {{provider}}-service. Controleer de volgende informatie of probeer het opnieuw.",
+ "StreamChunkError": "Fout bij het parseren van het berichtblok van de streamingaanroep. Controleer of de huidige API-interface voldoet aan de standaardnormen, of neem contact op met uw API-leverancier voor advies.",
+ "SubscriptionPlanLimit": "Uw abonnementslimiet is bereikt en u kunt deze functie niet gebruiken. Upgrade naar een hoger plan of koop een resourcepakket om door te gaan met gebruiken.",
+ "UnknownChatFetchError": "Het spijt me, er is een onbekende verzoekfout opgetreden. Controleer de onderstaande informatie of probeer het opnieuw."
+ },
+ "stt": {
+ "responseError": "Serviceverzoek mislukt. Controleer de configuratie of probeer opnieuw"
+ },
+ "tts": {
+ "responseError": "Serviceverzoek mislukt. Controleer de configuratie of probeer opnieuw"
+ },
+ "unlock": {
+ "addProxyUrl": "Voeg een optionele OpenAI-proxy-URL toe",
+ "apiKey": {
+ "description": "Voer uw {{name}} API-sleutel in om de sessie te starten.",
+ "title": "Gebruik aangepaste {{name}} API-sleutel"
+ },
+ "closeMessage": "Sluit bericht",
+ "confirm": "Bevestigen en opnieuw proberen",
+ "oauth": {
+ "description": "De beheerder heeft een uniforme aanmeldingsverificatie ingeschakeld. Klik op de onderstaande knop om in te loggen en de app te ontgrendelen.",
+ "success": "Succesvol ingelogd",
+ "title": "Account inloggen",
+ "welcome": "Welkom!"
+ },
+ "password": {
+ "description": "De beheerder heeft app-encryptie ingeschakeld. Voer het app-wachtwoord in om de app te ontgrendelen. Het wachtwoord hoeft slechts één keer te worden ingevoerd.",
+ "placeholder": "Voer het wachtwoord in",
+ "title": "Voer het wachtwoord in om de app te ontgrendelen"
+ },
+ "tabs": {
+ "apiKey": "Custom API Key",
+ "password": "Password"
+ }
+ },
+ "upload": {
+ "desc": "Details: {{detail}}",
+ "fileOnlySupportInServerMode": "De huidige implementatiemodus ondersteunt het uploaden van niet-afbeeldingsbestanden niet. Als u bestanden in {{ext}}-formaat wilt uploaden, schakelt u over naar de serverdatabase-implementatie of gebruikt u de {{cloud}}-dienst.",
+ "networkError": "Controleer of je netwerk goed werkt en of de cross-origin configuratie van de bestandsopslagservice correct is.",
+ "title": "Bestand uploaden mislukt, controleer uw internetverbinding of probeer het later opnieuw",
+ "unknownError": "Foutreden: {{reason}}",
+ "uploadFailed": "Bestand uploaden is mislukt."
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/file.json b/DigitalHumanWeb/locales/nl-NL/file.json
new file mode 100644
index 0000000..837cd60
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Beheer je bestanden en kennisbank",
+ "detail": {
+ "basic": {
+ "createdAt": "Aanmaakdatum",
+ "filename": "Bestandsnaam",
+ "size": "Bestandsgrootte",
+ "title": "Basisinformatie",
+ "type": "Formaat",
+ "updatedAt": "Bijwerkdatum"
+ },
+ "data": {
+ "chunkCount": "Aantal delen",
+ "embedding": {
+ "default": "Nog niet gevectoriseerd",
+ "error": "Mislukt",
+ "pending": "Te starten",
+ "processing": "Bezig",
+ "success": "Voltooid"
+ },
+ "embeddingStatus": "Vectorisatie"
+ }
+ },
+ "empty": "Geen bestanden/mappen geüpload",
+ "header": {
+ "actions": {
+ "newFolder": "Nieuwe map",
+ "uploadFile": "Bestand uploaden",
+ "uploadFolder": "Map uploaden"
+ },
+ "uploadButton": "Uploaden"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Je staat op het punt deze kennisbank te verwijderen. De bestanden hierin worden niet verwijderd, maar verplaatst naar 'Alle bestanden'. Een verwijderde kennisbank kan niet worden hersteld, wees voorzichtig.",
+ "empty": "Klik op <1>+1> om een kennisbank te maken"
+ },
+ "new": "Nieuwe kennisbank",
+ "title": "Kennisbank"
+ },
+ "networkError": "Kon de kennisbank niet ophalen, controleer uw netwerkverbinding en probeer het opnieuw",
+ "notSupportGuide": {
+ "desc": "De huidige implementatie is in client-database modus, waardoor de bestandsbeheerfunctie niet beschikbaar is. Schakel over naar <1>server-database implementatiemodus1>, of gebruik direct <3>LobeChat Cloud3>",
+ "features": {
+ "allKind": {
+ "desc": "Ondersteunt gangbare bestandstypen, waaronder Word, PPT, Excel, PDF, TXT en andere veelvoorkomende documentformaten, evenals JS, Python en andere gangbare codebestanden",
+ "title": "Meerdere bestandstype-analyse"
+ },
+ "embeddings": {
+ "desc": "Gebruik hoogwaardige vectormodellen om tekstdelen te vectoriseren, waardoor semantische zoekopdrachten in de inhoud van bestanden mogelijk zijn",
+ "title": "Vector-semantisering"
+ },
+ "repos": {
+ "desc": "Ondersteunt het maken van kennisbanken en staat het toevoegen van verschillende bestandstypen toe, zodat je jouw eigen domeinkennis kunt opbouwen",
+ "title": "Kennisbank"
+ }
+ },
+ "title": "De huidige implementatiemodus ondersteunt geen bestandsbeheer"
+ },
+ "preview": {
+ "downloadFile": "Bestand downloaden",
+ "unsupportedFileAndContact": "Dit bestandsformaat wordt momenteel niet ondersteund voor online preview. Als u een preview wilt, neem dan gerust <1>contact met ons op1>."
+ },
+ "searchFilePlaceholder": "Zoek bestand",
+ "tab": {
+ "all": "Alle bestanden",
+ "audios": "Audio's",
+ "documents": "Documenten",
+ "images": "Afbeeldingen",
+ "videos": "Video's",
+ "websites": "Websites"
+ },
+ "title": "Bestanden",
+ "uploadDock": {
+ "body": {
+ "collapse": "Samenvouwen",
+ "item": {
+ "done": "Geüpload",
+ "error": "Upload mislukt, probeer het opnieuw",
+ "pending": "Voorbereiden om te uploaden...",
+ "processing": "Bestand wordt verwerkt...",
+ "restTime": "Overgebleven {{time}}"
+ }
+ },
+ "totalCount": "Totaal {{count}} items",
+ "uploadStatus": {
+ "error": "Uploadfout",
+ "pending": "Wachten op upload",
+ "processing": "Bezig met uploaden",
+ "success": "Upload voltooid",
+ "uploading": "Bezig met uploaden"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/knowledgeBase.json b/DigitalHumanWeb/locales/nl-NL/knowledgeBase.json
new file mode 100644
index 0000000..b7958b9
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "Bestand succesvol toegevoegd, <1>direct bekijken1>",
+ "confirm": "Toevoegen",
+ "id": {
+ "placeholder": "Selecteer de kennisbank die u wilt toevoegen",
+ "required": "Selecteer een kennisbank",
+ "title": "Doelkennisbank"
+ },
+ "title": "Toevoegen aan kennisbank",
+ "totalFiles": "{{count}} bestanden geselecteerd"
+ },
+ "createNew": {
+ "confirm": "Nieuw aanmaken",
+ "description": {
+ "placeholder": "Kennisbank beschrijving (optioneel)"
+ },
+ "formTitle": "Basisinformatie",
+ "name": {
+ "placeholder": "Kennisbank naam",
+ "required": "Vul de naam van de kennisbank in"
+ },
+ "title": "Nieuwe kennisbank aanmaken"
+ },
+ "tab": {
+ "evals": "Beoordelingen",
+ "files": "Documenten",
+ "settings": "Instellingen",
+ "testing": "Herinneringstest"
+ },
+ "title": "Kennisbank"
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/market.json b/DigitalHumanWeb/locales/nl-NL/market.json
new file mode 100644
index 0000000..161d683
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Voeg een assistent toe",
+ "addAgentAndConverse": "Voeg een assistent toe en start een gesprek",
+ "addAgentSuccess": "Succesvol toegevoegd",
+ "guide": {
+ "func1": {
+ "desc1": "Ga naar de instellingen in de rechterbovenhoek van het chatvenster om naar de pagina te gaan waar je de assistent kunt toevoegen.",
+ "desc2": "Klik op de knop 'Indienen bij de assistentenmarkt' in de rechterbovenhoek.",
+ "tag": "Methode 1",
+ "title": "Indienen via LobeChat"
+ },
+ "func2": {
+ "button": "Ga naar de Github-assistentenopslagplaats",
+ "desc": "Als je een assistent aan de index wilt toevoegen, maak dan een vermelding in de plugins-map met behulp van agent-template.json of agent-template-full.json, schrijf een korte beschrijving en markeer deze op de juiste manier, en maak dan een pull-verzoek aan.",
+ "tag": "Methode 2",
+ "title": "Indienen via Github"
+ }
+ },
+ "search": {
+ "placeholder": "Zoek assistentnaam, beschrijving of trefwoord..."
+ },
+ "sidebar": {
+ "comment": "Opmerkingen",
+ "prompt": "Hints",
+ "title": "Assistentdetails"
+ },
+ "submitAgent": "Indienen bij de assistentenmarkt",
+ "title": {
+ "allAgents": "Alle assistenten",
+ "recentSubmits": "Recent toegevoegd"
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/metadata.json b/DigitalHumanWeb/locales/nl-NL/metadata.json
new file mode 100644
index 0000000..2bf20a5
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} biedt je de beste ervaring met ChatGPT, Claude, Gemini, en OLLaMA WebUI",
+ "title": "{{appName}}: Persoonlijke AI-efficiëntietool, geef jezelf een slimmer brein"
+ },
+ "discover": {
+ "assistants": {
+ "description": "Inhoud creatie, copywriting, vraag-en-antwoord, beeldgeneratie, video-generatie, spraakgeneratie, slimme agenten, geautomatiseerde workflows, pas je eigen AI / GPTs / OLLaMA slimme assistent aan",
+ "title": "AI-assistenten"
+ },
+ "description": "Inhoud creatie, copywriting, vraag-en-antwoord, beeldgeneratie, video-generatie, spraakgeneratie, slimme agenten, geautomatiseerde workflows, aangepaste AI-toepassingen, pas je eigen AI-toepassingswerkplek aan",
+ "models": {
+ "description": "Verken populaire AI-modellen zoals OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "AI-modellen"
+ },
+ "plugins": {
+ "description": "Zoek naar grafiekgeneratie, academische toepassingen, afbeeldingsgeneratie, videogeneratie, spraakgeneratie en geautomatiseerde workflows om rijke plug-in mogelijkheden voor je assistent te integreren.",
+ "title": "AI-plug-ins"
+ },
+ "providers": {
+ "description": "Verken populaire modelproviders zoals OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "AI-modelleveranciers"
+ },
+ "search": "Zoeken",
+ "title": "Ontdekken"
+ },
+ "plugins": {
+ "description": "Zoeken, grafiekgeneratie, academisch, beeldgeneratie, video-generatie, spraakgeneratie, geautomatiseerde workflows, pas de ToolCall-pluginmogelijkheden van ChatGPT / Claude aan",
+ "title": "Pluginmarkt"
+ },
+ "welcome": {
+ "description": "{{appName}} biedt je de beste ervaring met ChatGPT, Claude, Gemini, en OLLaMA WebUI",
+ "title": "Welkom bij {{appName}}: Persoonlijke AI-efficiëntietool, geef jezelf een slimmer brein"
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/migration.json b/DigitalHumanWeb/locales/nl-NL/migration.json
new file mode 100644
index 0000000..5ad55b4
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Wis lokale gegevens op",
+ "downloadBackup": "Download gegevensback-up",
+ "reUpgrade": "Opnieuw upgraden",
+ "start": "Beginnen met gebruiken",
+ "upgrade": "Upgraden"
+ },
+ "clear": {
+ "confirm": "Lokale gegevens worden binnenkort gewist (globale instellingen blijven ongewijzigd). Zorg ervoor dat je een gegevensback-up hebt gedownload."
+ },
+ "description": "In de nieuwe versie heeft de gegevensopslag van {{appName}} een enorme sprong voorwaarts gemaakt. Daarom moeten we de oude gegevens upgraden om je een betere gebruikerservaring te bieden.",
+ "features": {
+ "capability": {
+ "desc": "Gebaseerd op IndexedDB-technologie, genoeg om al je levenslange chatberichten op te slaan.",
+ "title": "Grote capaciteit"
+ },
+ "performance": {
+ "desc": "Miljoenen berichten worden automatisch geïndexeerd, met milliseconde-respons op zoekopdrachten.",
+ "title": "Hoge prestaties"
+ },
+ "use": {
+ "desc": "Ondersteunt het doorzoeken van titels, beschrijvingen, labels, berichtinhoud en zelfs vertaalde teksten, waardoor de dagelijkse zoekefficiëntie aanzienlijk is verbeterd.",
+ "title": "Gebruiksvriendelijker"
+ }
+ },
+ "title": "Evolutie van {{appName}}-gegevens",
+ "upgrade": {
+ "error": {
+ "subTitle": "Het spijt ons, er is een fout opgetreden tijdens het upgraden van de database. Probeer de volgende oplossingen: A. Wis lokale gegevens en importeer de back-upgegevens opnieuw; B. Klik op de knop 'Opnieuw upgraden'.
Als het probleem aanhoudt, <1>dien een probleem in1> en we zullen je zo snel mogelijk helpen.",
+ "title": "Database-upgrade mislukt"
+ },
+ "success": {
+ "subTitle": "De database van {{appName}} is succesvol geüpgraded naar de nieuwste versie, begin nu met ervaren!",
+ "title": "Database-upgrade succesvol"
+ }
+ },
+ "upgradeTip": "De upgrade duurt ongeveer 10-20 seconden, sluit {{appName}} niet tijdens het upgraden."
+ },
+ "migrateError": {
+ "missVersion": "De geïmporteerde gegevens missen een versienummer. Controleer het bestand en probeer het opnieuw.",
+ "noMigration": "Er is geen migratieplan gevonden voor de huidige versie. Controleer het versienummer en probeer het opnieuw. Als het probleem aanhoudt, dien dan een probleem in."
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/modelProvider.json b/DigitalHumanWeb/locales/nl-NL/modelProvider.json
new file mode 100644
index 0000000..92f21fc
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "De API-versie van Azure, volgt het formaat YYYY-MM-DD, raadpleeg [de nieuwste versie](https://learn.microsoft.com/nl-nl/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Lijst ophalen",
+ "title": "Azure API Versie"
+ },
+ "empty": "Voer een model-ID in om het eerste model toe te voegen",
+ "endpoint": {
+ "desc": "Dit waarde kan gevonden worden in de 'Sleutels en eindpunt' sectie wanneer je een bron in Azure Portal controleert",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Azure API Adres"
+ },
+ "modelListPlaceholder": "Selecteer of voeg het OpenAI-model toe dat u hebt ingezet",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Dit waarde kan gevonden worden in de 'Sleutels en eindpunt' sectie wanneer je een bron in Azure Portal controleert. Je kunt KEY1 of KEY2 gebruiken",
+ "placeholder": "Azure API Key",
+ "title": "API Key"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Voer AWS Access Key Id in",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "Test of AccessKeyId / SecretAccessKey correct zijn ingevuld"
+ },
+ "region": {
+ "desc": "Voer AWS Region in",
+ "placeholder": "AWS Region",
+ "title": "AWS Region"
+ },
+ "secretAccessKey": {
+ "desc": "Voer AWS Secret Access Key in",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "Als je AWS SSO/STS gebruikt, voer dan je AWS Sessie Token in",
+ "placeholder": "AWS Sessie Token",
+ "title": "AWS Sessie Token (optioneel)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Aangepaste regio",
+ "customSessionToken": "Aangepaste sessietoken",
+ "description": "Voer uw AWS AccessKeyId / SecretAccessKey in om een sessie te starten. De app zal uw verificatiegegevens niet opslaan",
+ "title": "Gebruik aangepaste Bedrock-verificatiegegevens"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Vul je Github PAT in, klik [hier](https://github.com/settings/tokens) om er een te maken",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Test of het proxyadres correct is ingevuld",
+ "title": "Connectiviteitscontrole"
+ },
+ "customModelName": {
+ "desc": "Voeg aangepaste modellen toe, gebruik een komma (,) om meerdere modellen te scheiden",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Aangepaste Modelnamen"
+ },
+ "download": {
+ "desc": "Ollama is downloading the model. Please try not to close this page. It will resume from where it left off if you restart the download.",
+ "remainingTime": "Remaining Time",
+ "speed": "Download Speed",
+ "title": "Downloading model {{model}}"
+ },
+ "endpoint": {
+ "desc": "Voer het Ollama interface proxyadres in, laat leeg indien niet specifiek aangegeven",
+ "title": "Interface Proxyadres"
+ },
+ "setup": {
+ "cors": {
+ "description": "Due to browser security restrictions, you need to configure cross-origin settings for Ollama to function properly.",
+ "linux": {
+ "env": "Add `Environment` under [Service] section, and set the OLLAMA_ORIGINS environment variable:",
+ "reboot": "Reload systemd and restart Ollama.",
+ "systemd": "Invoke systemd to edit the ollama service:"
+ },
+ "macos": "Open the 'Terminal' application, paste the following command, and press Enter to run it.",
+ "reboot": "Please restart the Ollama service after the execution is complete.",
+ "title": "Configure Ollama for Cross-Origin Access",
+ "windows": "On Windows, go to 'Control Panel' and edit system environment variables. Create a new environment variable named 'OLLAMA_ORIGINS' for your user account, set the value to '*', and click 'OK/Apply' to save."
+ },
+ "install": {
+ "description": "Zorg ervoor dat Ollama is ingeschakeld. Als je Ollama nog niet hebt gedownload, ga dan naar de officiële website om <1>te downloaden1>.",
+ "docker": "If you prefer using Docker, Ollama also provides official Docker images. You can pull them using the following command:",
+ "linux": {
+ "command": "Install using the following command:",
+ "manual": "Alternatively, you can refer to the <1>Linux Manual Installation Guide1> for manual installation."
+ },
+ "title": "Install and Start Ollama Locally",
+ "windowsTab": "Windows (Preview)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Cancel Download",
+ "confirm": "Download",
+ "description": "Enter your Ollama model tag to continue the session",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Starting download...",
+ "title": "Download specified Ollama model"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Nul Een Alles"
+ },
+ "zhipu": {
+ "title": "Intelligent Spectrum"
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/models.json b/DigitalHumanWeb/locales/nl-NL/models.json
new file mode 100644
index 0000000..fdfccbc
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B biedt superieure prestaties in de industrie met rijke trainingsvoorbeelden."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B ondersteunt 16K tokens en biedt efficiënte, vloeiende taalgeneratiecapaciteiten."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro, als een belangrijk lid van de 360 AI-modelreeks, voldoet aan de diverse natuurlijke taaltoepassingsscenario's met efficiënte tekstverwerkingscapaciteiten en ondersteunt lange tekstbegrip en meerdaagse gesprekken."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo biedt krachtige reken- en gesprekscapaciteiten, met uitstekende semantische begrip en generatie-efficiëntie, en is de ideale intelligente assistentoplossing voor bedrijven en ontwikkelaars."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K legt de nadruk op semantische veiligheid en verantwoordelijkheid, speciaal ontworpen voor toepassingen met hoge eisen aan inhoudsveiligheid, en zorgt voor nauwkeurigheid en robuustheid in de gebruikerservaring."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro is een geavanceerd natuurlijk taalverwerkingsmodel dat is ontwikkeld door 360, met uitstekende tekstgeneratie- en begripcapaciteiten, vooral in de generatieve en creatieve domeinen, en kan complexe taaltransformaties en rolinterpretatietaken aan."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra is de krachtigste versie in de Spark-grootmodelserie, die de netwerkintegratie heeft geüpgraded en de tekstbegrip- en samenvattingscapaciteiten heeft verbeterd. Het is een allesomvattende oplossing voor het verbeteren van de kantoorproductiviteit en het nauwkeurig reageren op behoeften, en is een toonaangevend intelligent product in de industrie."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Maakt gebruik van zoekversterkingstechnologie om een uitgebreide koppeling tussen het grote model en domeinspecifieke kennis en wereldwijde kennis te realiseren. Ondersteunt het uploaden van verschillende documenten zoals PDF en Word, evenals URL-invoer, met tijdige en uitgebreide informatieverzameling en nauwkeurige, professionele output."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Geoptimaliseerd voor veelvoorkomende zakelijke scenario's, met aanzienlijke verbeteringen en een hoge prijs-kwaliteitverhouding. In vergelijking met het Baichuan2-model is de inhoudsgeneratie met 20% verbeterd, de kennisvraag met 17% en de rolspelcapaciteit met 40%. De algehele prestaties zijn beter dan die van GPT-3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "Met een 128K ultra-lange contextvenster, geoptimaliseerd voor veelvoorkomende zakelijke scenario's, met aanzienlijke verbeteringen en een hoge prijs-kwaliteitverhouding. In vergelijking met het Baichuan2-model is de inhoudsgeneratie met 20% verbeterd, de kennisvraag met 17% en de rolspelcapaciteit met 40%. De algehele prestaties zijn beter dan die van GPT-3.5."
+ },
+ "Baichuan4": {
+ "description": "Het model heeft de beste prestaties in het binnenland en overtreft buitenlandse mainstream modellen in kennisencyclopedieën, lange teksten en creatieve generaties. Het heeft ook toonaangevende multimodale capaciteiten en presteert uitstekend in verschillende autoritatieve evaluatiebenchmarks."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) is een innovatief model, geschikt voor toepassingen in meerdere domeinen en complexe taken."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K is uitgerust met een grote contextverwerkingscapaciteit, verbeterd begrip van context en logische redeneervaardigheden, ondersteunt tekstinvoer van 32K tokens, geschikt voor het lezen van lange documenten, privé kennisvragen en andere scenario's."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO is een zeer flexibele multi-model combinatie, ontworpen om een uitstekende creatieve ervaring te bieden."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) is een hoogprecisie instructiemodel, geschikt voor complexe berekeningen."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) biedt geoptimaliseerde taaloutput en diverse toepassingsmogelijkheden."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Vernieuwing van het Phi-3-mini model."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "Hetzelfde Phi-3-medium model, maar met een grotere contextgrootte voor RAG of few shot prompting."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "Een model met 14 miljard parameters, biedt betere kwaliteit dan Phi-3-mini, met een focus op hoogwaardige, redeneringsdichte gegevens."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "Hetzelfde Phi-3-mini model, maar met een grotere contextgrootte voor RAG of few shot prompting."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "De kleinste lid van de Phi-3 familie. Geoptimaliseerd voor zowel kwaliteit als lage latentie."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "Hetzelfde Phi-3-small model, maar met een grotere contextgrootte voor RAG of few shot prompting."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "Een model met 7 miljard parameters, biedt betere kwaliteit dan Phi-3-mini, met een focus op hoogwaardige, redeneringsdichte gegevens."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K is uitgerust met een enorme contextverwerkingscapaciteit, in staat om tot 128K contextinformatie te verwerken, bijzonder geschikt voor lange teksten die volledige analyse en langdurige logische verbanden vereisen, en biedt vloeiende en consistente logica met diverse referenties in complexe tekstcommunicatie."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Als testversie van Qwen2 biedt Qwen1.5 nauwkeurigere gespreksfunctionaliteit door gebruik te maken van grootschalige gegevens."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) biedt snelle reacties en natuurlijke gesprekscapaciteiten, geschikt voor meertalige omgevingen."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 is een geavanceerd algemeen taalmodel dat verschillende soorten instructies ondersteunt."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 is een geheel nieuwe serie van grote taalmodellen, ontworpen om de verwerking van instructietaken te optimaliseren."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 is een geheel nieuwe serie van grote taalmodellen, ontworpen om de verwerking van instructietaken te optimaliseren."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 is een geheel nieuwe serie van grote taalmodellen, met sterkere begrip- en generatiecapaciteiten."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 is een geheel nieuwe serie van grote taalmodellen, ontworpen om de verwerking van instructietaken te optimaliseren."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder richt zich op het schrijven van code."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math richt zich op het oplossen van wiskundige vraagstukken en biedt professionele antwoorden op moeilijke vragen."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B is de open-source versie die een geoptimaliseerde gesprekservaring biedt voor gespreksapplicaties."
+ },
+ "abab5.5-chat": {
+ "description": "Gericht op productiviteitsscenario's, ondersteunt complexe taakverwerking en efficiënte tekstgeneratie, geschikt voor professionele toepassingen."
+ },
+ "abab5.5s-chat": {
+ "description": "Speciaal ontworpen voor Chinese personagegesprekken, biedt hoogwaardige Chinese gespreksgeneratiecapaciteiten, geschikt voor diverse toepassingsscenario's."
+ },
+ "abab6.5g-chat": {
+ "description": "Speciaal ontworpen voor meertalige personagegesprekken, ondersteunt hoogwaardige gespreksgeneratie in het Engels en andere talen."
+ },
+ "abab6.5s-chat": {
+ "description": "Geschikt voor een breed scala aan natuurlijke taalverwerkingstaken, waaronder tekstgeneratie, conversatiesystemen, enz."
+ },
+ "abab6.5t-chat": {
+ "description": "Geoptimaliseerd voor Chinese personagegesprekken, biedt vloeiende en cultureel passende gespreksgeneratiecapaciteiten."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Fireworks open-source functie-aanroepmodel biedt uitstekende instructie-uitvoeringscapaciteiten en aanpasbare functies."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Firefunction-v2, ontwikkeld door Fireworks, is een hoogpresterend functie-aanroepmodel, gebaseerd op Llama-3 en geoptimaliseerd voor functie-aanroepen, gesprekken en instructies."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b is een visueel taalmodel dat zowel afbeeldingen als tekstinvoer kan verwerken, getraind op hoogwaardige gegevens, geschikt voor multimodale taken."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Gemma 2 9B instructiemodel, gebaseerd op eerdere Google-technologie, geschikt voor vraagbeantwoording, samenvattingen en redenering."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Llama 3 70B instructiemodel, speciaal geoptimaliseerd voor meertalige gesprekken en natuurlijke taalbegrip, presteert beter dan de meeste concurrerende modellen."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Llama 3 70B instructiemodel (HF-versie), consistent met de officiële implementatieresultaten, geschikt voor hoogwaardige instructietaken."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Llama 3 8B instructiemodel, geoptimaliseerd voor gesprekken en meertalige taken, presteert uitstekend en efficiënt."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Llama 3 8B instructiemodel (HF-versie), consistent met de officiële implementatieresultaten, biedt hoge consistentie en cross-platform compatibiliteit."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Llama 3.1 405B instructiemodel heeft een enorm aantal parameters, geschikt voor complexe taken en instructies in omgevingen met hoge belasting."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Llama 3.1 70B instructiemodel biedt uitstekende natuurlijke taalbegrip en generatiecapaciteiten, ideaal voor gespreks- en analysetaken."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Llama 3.1 8B instructiemodel, geoptimaliseerd voor meertalige gesprekken, kan de meeste open-source en gesloten-source modellen overtreffen op gangbare industriestandaarden."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Mixtral MoE 8x22B instructiemodel, met een groot aantal parameters en een multi-expertarchitectuur, biedt uitgebreide ondersteuning voor de efficiënte verwerking van complexe taken."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Mixtral MoE 8x7B instructiemodel, met een multi-expertarchitectuur die efficiënte instructievolging en uitvoering biedt."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Mixtral MoE 8x7B instructiemodel (HF-versie), met prestaties die overeenkomen met de officiële implementatie, geschikt voor verschillende efficiënte taakscenario's."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "MythoMax L2 13B model, dat gebruik maakt van innovatieve samenvoegtechnologie, is goed in verhalen vertellen en rollenspellen."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Phi 3 Vision instructiemodel, een lichtgewicht multimodaal model dat complexe visuele en tekstuele informatie kan verwerken, met sterke redeneercapaciteiten."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "StarCoder 15.5B model, ondersteunt geavanceerde programmeertaken, met verbeterde meertalige capaciteiten, geschikt voor complexe codegeneratie en -begrip."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "StarCoder 7B model, getraind op meer dan 80 programmeertalen, met uitstekende programmeervulcapaciteiten en contextbegrip."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Yi-Large model, met uitstekende meertalige verwerkingscapaciteiten, geschikt voor verschillende taalgeneratie- en begripstaken."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "Een meertalig model met 398 miljard parameters (94 miljard actief), biedt een contextvenster van 256K, functieaanroep, gestructureerde output en gegronde generatie."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "Een meertalig model met 52 miljard parameters (12 miljard actief), biedt een contextvenster van 256K, functieaanroep, gestructureerde output en gegronde generatie."
+ },
+ "ai21-jamba-instruct": {
+ "description": "Een productieklare Mamba-gebaseerde LLM-model om de beste prestaties, kwaliteit en kostenefficiëntie te bereiken."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet heeft de industrienormen verbeterd, met prestaties die de concurrentiemodellen en Claude 3 Opus overtreffen, en presteert uitstekend in brede evaluaties, met de snelheid en kosten van ons gemiddelde model."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku is het snelste en meest compacte model van Anthropic, met bijna onmiddellijke reactietijden. Het kan snel eenvoudige vragen en verzoeken beantwoorden. Klanten kunnen een naadloze AI-ervaring creëren die menselijke interactie nabootst. Claude 3 Haiku kan afbeeldingen verwerken en tekstoutput retourneren, met een contextvenster van 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus is het krachtigste AI-model van Anthropic, met geavanceerde prestaties op zeer complexe taken. Het kan open prompts en ongeziene scenario's verwerken, met uitstekende vloeiendheid en mensachtige begrip. Claude 3 Opus toont de grenzen van de mogelijkheden van generatieve AI. Claude 3 Opus kan afbeeldingen verwerken en tekstoutput retourneren, met een contextvenster van 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Claude 3 Sonnet van Anthropic bereikt een ideale balans tussen intelligentie en snelheid - bijzonder geschikt voor bedrijfswerkbelasting. Het biedt maximale bruikbaarheid tegen lagere kosten dan concurrenten en is ontworpen als een betrouwbare, duurzame hoofdmachine, geschikt voor grootschalige AI-implementaties. Claude 3 Sonnet kan afbeeldingen verwerken en tekstoutput retourneren, met een contextvenster van 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Een snel, kosteneffectief en toch zeer capabel model dat een reeks taken kan verwerken, waaronder dagelijkse gesprekken, tekstanalyses, samenvattingen en documentvragen."
+ },
+ "anthropic.claude-v2": {
+ "description": "Anthropic's model toont hoge capaciteiten in een breed scala aan taken, van complexe gesprekken en creatieve inhoudgeneratie tot gedetailleerde instructievolging."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "De bijgewerkte versie van Claude 2, met een verdubbeling van het contextvenster en verbeteringen in betrouwbaarheid, hallucinatiepercentages en op bewijs gebaseerde nauwkeurigheid in lange documenten en RAG-contexten."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku is het snelste en meest compacte model van Anthropic, ontworpen voor bijna onmiddellijke reacties. Het biedt snelle en nauwkeurige gerichte prestaties."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus is het krachtigste model van Anthropic voor het verwerken van zeer complexe taken. Het excelleert in prestaties, intelligentie, vloeiendheid en begrip."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet biedt mogelijkheden die verder gaan dan Opus en een snellere snelheid dan Sonnet, terwijl het dezelfde prijs als Sonnet behoudt. Sonnet is bijzonder goed in programmeren, datawetenschap, visuele verwerking en agenttaken."
+ },
+ "aya": {
+ "description": "Aya 23 is een meertalig model van Cohere, ondersteunt 23 talen en biedt gemak voor diverse taaltoepassingen."
+ },
+ "aya:35b": {
+ "description": "Aya 23 is een meertalig model van Cohere, ondersteunt 23 talen en biedt gemak voor diverse taaltoepassingen."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3 is ontworpen voor rollenspellen en emotionele begeleiding, ondersteunt zeer lange meerdaagse herinneringen en gepersonaliseerde gesprekken, met brede toepassingen."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o is een dynamisch model dat in realtime wordt bijgewerkt om de meest actuele versie te behouden. Het combineert krachtige taalbegrip- en generatiecapaciteiten, geschikt voor grootschalige toepassingsscenario's, waaronder klantenservice, onderwijs en technische ondersteuning."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 biedt belangrijke vooruitgangen in capaciteiten voor bedrijven, waaronder de toonaangevende 200K token context, een aanzienlijke vermindering van de frequentie van modelhallucinaties, systeemprompten en een nieuwe testfunctie: functie-aanroepen."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 biedt belangrijke vooruitgangen in capaciteiten voor bedrijven, waaronder de toonaangevende 200K token context, een aanzienlijke vermindering van de frequentie van modelhallucinaties, systeemprompten en een nieuwe testfunctie: functie-aanroepen."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet biedt mogelijkheden die verder gaan dan Opus en is sneller dan Sonnet, terwijl het dezelfde prijs behoudt. Sonnet is bijzonder goed in programmeren, datawetenschap, visuele verwerking en agenttaken."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku is het snelste en meest compacte model van Anthropic, ontworpen voor bijna onmiddellijke reacties. Het heeft snelle en nauwkeurige gerichte prestaties."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus is het krachtigste model van Anthropic voor het verwerken van zeer complexe taken. Het presteert uitstekend op het gebied van prestaties, intelligentie, vloeiendheid en begrip."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet biedt een ideale balans tussen intelligentie en snelheid voor bedrijfswerkbelastingen. Het biedt maximale bruikbaarheid tegen een lagere prijs, betrouwbaar en geschikt voor grootschalige implementatie."
+ },
+ "claude-instant-1.2": {
+ "description": "Het model van Anthropic is ontworpen voor lage latentie en hoge doorvoer in tekstgeneratie, en ondersteunt het genereren van honderden pagina's tekst."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 is een krachtige AI-programmeerassistent die slimme vraag- en antwoordmogelijkheden en code-aanvulling ondersteunt voor verschillende programmeertalen, waardoor de ontwikkelingssnelheid wordt verhoogd."
+ },
+ "codegemma": {
+ "description": "CodeGemma is een lichtgewicht taalmodel dat speciaal is ontworpen voor verschillende programmeertaken, ondersteunt snelle iteratie en integratie."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma is een lichtgewicht taalmodel dat speciaal is ontworpen voor verschillende programmeertaken, ondersteunt snelle iteratie en integratie."
+ },
+ "codellama": {
+ "description": "Code Llama is een LLM dat zich richt op codegeneratie en -discussie, met brede ondersteuning voor programmeertalen, geschikt voor ontwikkelaarsomgevingen."
+ },
+ "codellama:13b": {
+ "description": "Code Llama is een LLM dat zich richt op codegeneratie en -discussie, met brede ondersteuning voor programmeertalen, geschikt voor ontwikkelaarsomgevingen."
+ },
+ "codellama:34b": {
+ "description": "Code Llama is een LLM dat zich richt op codegeneratie en -discussie, met brede ondersteuning voor programmeertalen, geschikt voor ontwikkelaarsomgevingen."
+ },
+ "codellama:70b": {
+ "description": "Code Llama is een LLM dat zich richt op codegeneratie en -discussie, met brede ondersteuning voor programmeertalen, geschikt voor ontwikkelaarsomgevingen."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 is een groot taalmodel dat is getraind op een grote hoeveelheid codegegevens, speciaal ontworpen om complexe programmeertaken op te lossen."
+ },
+ "codestral": {
+ "description": "Codestral is het eerste codemodel van Mistral AI, biedt uitstekende ondersteuning voor codegeneratietaken."
+ },
+ "codestral-latest": {
+ "description": "Codestral is een geavanceerd generatief model dat zich richt op codegeneratie, geoptimaliseerd voor tussentijdse invulling en code-aanvultaken."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B is een model ontworpen voor instructievolging, gesprekken en programmeren."
+ },
+ "cohere-command-r": {
+ "description": "Command R is een schaalbaar generatief model gericht op RAG en Tool Use om productie-schaal AI voor ondernemingen mogelijk te maken."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ is een state-of-the-art RAG-geoptimaliseerd model ontworpen om enterprise-grade workloads aan te pakken."
+ },
+ "command-r": {
+ "description": "Command R is geoptimaliseerd voor conversatie- en lange contexttaken, bijzonder geschikt voor dynamische interactie en kennisbeheer."
+ },
+ "command-r-plus": {
+ "description": "Command R+ is een hoogpresterend groot taalmodel, speciaal ontworpen voor echte zakelijke scenario's en complexe toepassingen."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct biedt betrouwbare instructieverwerkingscapaciteiten en ondersteunt toepassingen in verschillende sectoren."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 combineert de uitstekende kenmerken van eerdere versies en versterkt de algemene en coderingscapaciteiten."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B is een geavanceerd model dat is getraind voor complexe gesprekken."
+ },
+ "deepseek-chat": {
+ "description": "Een nieuw open-source model dat algemene en code-capaciteiten combineert, behoudt niet alleen de algemene conversatiecapaciteiten van het oorspronkelijke Chat-model en de krachtige codeverwerkingscapaciteiten van het Coder-model, maar is ook beter afgestemd op menselijke voorkeuren. Bovendien heeft DeepSeek-V2.5 aanzienlijke verbeteringen gerealiseerd in schrijfopdrachten, instructievolging en andere gebieden."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 is een open-source hybride expertcode-model, presteert uitstekend in code-taken en is vergelijkbaar met GPT4-Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 is een open-source hybride expertcode-model, presteert uitstekend in code-taken en is vergelijkbaar met GPT4-Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 is een efficiënt Mixture-of-Experts taalmodel, geschikt voor kosteneffectieve verwerkingsbehoeften."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B is het ontwerpcode-model van DeepSeek, biedt krachtige codegeneratiecapaciteiten."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Een nieuw open-source model dat algemene en codeercapaciteiten combineert, niet alleen de algemene gespreksvaardigheden van het oorspronkelijke Chat-model en de krachtige codeverwerkingscapaciteiten van het Coder-model behoudt, maar ook beter is afgestemd op menselijke voorkeuren. Bovendien heeft DeepSeek-V2.5 aanzienlijke verbeteringen gerealiseerd in schrijfopdrachten, instructievolging en meer."
+ },
+ "emohaa": {
+ "description": "Emohaa is een psychologisch model met professionele adviescapaciteiten, dat gebruikers helpt emotionele problemen te begrijpen."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Tuning) biedt stabiele en afstelbare prestaties, ideaal voor oplossingen voor complexe taken."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Tuning) biedt uitstekende multimodale ondersteuning, gericht op effectieve oplossingen voor complexe taken."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro is Google's high-performance AI-model, ontworpen voor brede taakuitbreiding."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 is een efficiënt multimodaal model dat ondersteuning biedt voor brede toepassingsuitbreiding."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002 is een efficiënt multimodaal model dat ondersteuning biedt voor een breed scala aan toepassingen."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 is ontworpen voor het verwerken van grootschalige taakscenario's en biedt ongeëvenaarde verwerkingssnelheid."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924 is het nieuwste experimentele model, met aanzienlijke prestatieverbeteringen in tekst- en multimodale toepassingen."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 biedt geoptimaliseerde multimodale verwerkingscapaciteiten, geschikt voor verschillende complexe taakscenario's."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash is Google's nieuwste multimodale AI-model, met snelle verwerkingscapaciteiten, ondersteunt tekst-, beeld- en video-invoer, en is geschikt voor efficiënte opschaling van verschillende taken."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 is een schaalbare multimodale AI-oplossing die ondersteuning biedt voor een breed scala aan complexe taken."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002 is het nieuwste productieklare model, dat hogere kwaliteit output biedt, met name op het gebied van wiskunde, lange contexten en visuele taken."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 biedt uitstekende multimodale verwerkingscapaciteiten en biedt meer flexibiliteit voor applicatieontwikkeling."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 combineert de nieuwste optimalisatietechnologieën en biedt efficiëntere multimodale gegevensverwerkingscapaciteiten."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro ondersteunt tot 2 miljoen tokens en is de ideale keuze voor middelgrote multimodale modellen, geschikt voor veelzijdige ondersteuning van complexe taken."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B is geschikt voor het verwerken van middelgrote taken, met een goede kosteneffectiviteit."
+ },
+ "gemma2": {
+ "description": "Gemma 2 is een efficiënt model van Google, dat een breed scala aan toepassingsscenario's dekt, van kleine toepassingen tot complexe gegevensverwerking."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B is een model dat is geoptimaliseerd voor specifieke taken en toolintegratie."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 is een efficiënt model van Google, dat een breed scala aan toepassingsscenario's dekt, van kleine toepassingen tot complexe gegevensverwerking."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 is een efficiënt model van Google, dat een breed scala aan toepassingsscenario's dekt, van kleine toepassingen tot complexe gegevensverwerking."
+ },
+ "general": {
+ "description": "Spark Lite is een lichtgewicht groot taalmodel met extreem lage latentie en efficiënte verwerkingscapaciteiten, volledig gratis en open, met ondersteuning voor realtime online zoekfunctionaliteit. De snelle respons maakt het uitermate geschikt voor inferentie-toepassingen en modelfijnstelling op apparaten met lage rekenkracht, en biedt gebruikers uitstekende kosteneffectiviteit en een intelligente ervaring, vooral in kennisvragen, inhoudsgeneratie en zoekscenario's."
+ },
+ "generalv3": {
+ "description": "Spark Pro is een hoogwaardig groot taalmodel dat is geoptimaliseerd voor professionele domeinen, met een focus op wiskunde, programmeren, geneeskunde, onderwijs en meer, en ondersteunt online zoeken en ingebouwde plugins voor weer, datum, enz. Het geoptimaliseerde model toont uitstekende prestaties en efficiëntie in complexe kennisvragen, taalbegrip en hoogwaardig tekstcreatie, en is de ideale keuze voor professionele toepassingsscenario's."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max is de meest uitgebreide versie, met ondersteuning voor online zoeken en talrijke ingebouwde plugins. De volledig geoptimaliseerde kerncapaciteiten, systeemrolinstellingen en functieaanroepfunctionaliteit zorgen voor uitstekende prestaties in verschillende complexe toepassingsscenario's."
+ },
+ "glm-4": {
+ "description": "GLM-4 is de oude vlaggenschipversie die in januari 2024 is uitgebracht en is inmiddels vervangen door de krachtigere GLM-4-0520."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 is de nieuwste modelversie, speciaal ontworpen voor zeer complexe en diverse taken, met uitstekende prestaties."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air is een kosteneffectieve versie met prestaties die dicht bij GLM-4 liggen, met snelle snelheid en een betaalbare prijs."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX biedt een efficiënte versie van GLM-4-Air, met een redeneersnelheid tot 2,6 keer sneller."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools is een multifunctioneel intelligent model, geoptimaliseerd om complexe instructieplanning en toolaanroepen te ondersteunen, zoals webbrowser, code-interpretatie en tekstgeneratie, geschikt voor multitasking."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash is de ideale keuze voor het verwerken van eenvoudige taken, met de snelste snelheid en de laagste prijs."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long ondersteunt zeer lange tekstinvoer, geschikt voor geheugenintensieve taken en grootschalige documentverwerking."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus, als vlaggenschip van hoge intelligentie, heeft krachtige capaciteiten voor het verwerken van lange teksten en complexe taken, met algehele prestatieverbeteringen."
+ },
+ "glm-4v": {
+ "description": "GLM-4V biedt krachtige beeldbegrip- en redeneercapaciteiten, ondersteunt verschillende visuele taken."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus heeft de capaciteit om video-inhoud en meerdere afbeeldingen te begrijpen, geschikt voor multimodale taken."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 biedt geoptimaliseerde multimodale verwerkingscapaciteiten, geschikt voor verschillende complexe taakscenario's."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 combineert de nieuwste optimalisatietechnologieën voor efficiëntere multimodale gegevensverwerking."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 behoudt het ontwerpprincipe van lichtgewicht en efficiëntie."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 is een lichtgewicht open-source tekstmodelserie van Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 is een lichtgewicht open-source tekstmodelserie van Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) biedt basis instructieverwerkingscapaciteiten, geschikt voor lichte toepassingen."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo, geschikt voor verschillende tekstgeneratie- en begrijptaken, wijst momenteel naar gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo, geschikt voor verschillende tekstgeneratie- en begrijptaken, wijst momenteel naar gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo, geschikt voor verschillende tekstgeneratie- en begrijptaken, wijst momenteel naar gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo, geschikt voor verschillende tekstgeneratie- en begrijptaken, wijst momenteel naar gpt-3.5-turbo-0125."
+ },
+ "gpt-4": {
+ "description": "GPT-4 biedt een groter contextvenster en kan langere tekstinvoer verwerken, geschikt voor scenario's die uitgebreide informatie-integratie en data-analyse vereisen."
+ },
+ "gpt-4-0125-preview": {
+ "description": "Het nieuwste GPT-4 Turbo-model heeft visuele functies. Nu kunnen visuele verzoeken worden gedaan met behulp van JSON-indeling en functieaanroepen. GPT-4 Turbo is een verbeterde versie die kosteneffectieve ondersteuning biedt voor multimodale taken. Het vindt een balans tussen nauwkeurigheid en efficiëntie, geschikt voor toepassingen die realtime interactie vereisen."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4 biedt een groter contextvenster en kan langere tekstinvoer verwerken, geschikt voor scenario's die uitgebreide informatie-integratie en data-analyse vereisen."
+ },
+ "gpt-4-1106-preview": {
+ "description": "Het nieuwste GPT-4 Turbo-model heeft visuele functies. Nu kunnen visuele verzoeken worden gedaan met behulp van JSON-indeling en functieaanroepen. GPT-4 Turbo is een verbeterde versie die kosteneffectieve ondersteuning biedt voor multimodale taken. Het vindt een balans tussen nauwkeurigheid en efficiëntie, geschikt voor toepassingen die realtime interactie vereisen."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "Het nieuwste GPT-4 Turbo-model heeft visuele functies. Nu kunnen visuele verzoeken worden gedaan met behulp van JSON-indeling en functieaanroepen. GPT-4 Turbo is een verbeterde versie die kosteneffectieve ondersteuning biedt voor multimodale taken. Het vindt een balans tussen nauwkeurigheid en efficiëntie, geschikt voor toepassingen die realtime interactie vereisen."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4 biedt een groter contextvenster en kan langere tekstinvoer verwerken, geschikt voor scenario's die uitgebreide informatie-integratie en data-analyse vereisen."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4 biedt een groter contextvenster en kan langere tekstinvoer verwerken, geschikt voor scenario's die uitgebreide informatie-integratie en data-analyse vereisen."
+ },
+ "gpt-4-turbo": {
+ "description": "Het nieuwste GPT-4 Turbo-model heeft visuele functies. Nu kunnen visuele verzoeken worden gedaan met behulp van JSON-indeling en functieaanroepen. GPT-4 Turbo is een verbeterde versie die kosteneffectieve ondersteuning biedt voor multimodale taken. Het vindt een balans tussen nauwkeurigheid en efficiëntie, geschikt voor toepassingen die realtime interactie vereisen."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "Het nieuwste GPT-4 Turbo-model heeft visuele functies. Nu kunnen visuele verzoeken worden gedaan met behulp van JSON-indeling en functieaanroepen. GPT-4 Turbo is een verbeterde versie die kosteneffectieve ondersteuning biedt voor multimodale taken. Het vindt een balans tussen nauwkeurigheid en efficiëntie, geschikt voor toepassingen die realtime interactie vereisen."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "Het nieuwste GPT-4 Turbo-model heeft visuele functies. Nu kunnen visuele verzoeken worden gedaan met behulp van JSON-indeling en functieaanroepen. GPT-4 Turbo is een verbeterde versie die kosteneffectieve ondersteuning biedt voor multimodale taken. Het vindt een balans tussen nauwkeurigheid en efficiëntie, geschikt voor toepassingen die realtime interactie vereisen."
+ },
+ "gpt-4-vision-preview": {
+ "description": "Het nieuwste GPT-4 Turbo-model heeft visuele functies. Nu kunnen visuele verzoeken worden gedaan met behulp van JSON-indeling en functieaanroepen. GPT-4 Turbo is een verbeterde versie die kosteneffectieve ondersteuning biedt voor multimodale taken. Het vindt een balans tussen nauwkeurigheid en efficiëntie, geschikt voor toepassingen die realtime interactie vereisen."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o is een dynamisch model dat in realtime wordt bijgewerkt om de meest actuele versie te behouden. Het combineert krachtige taalbegrip- en generatiecapaciteiten, geschikt voor grootschalige toepassingsscenario's, waaronder klantenservice, onderwijs en technische ondersteuning."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o is een dynamisch model dat in realtime wordt bijgewerkt om de meest actuele versie te behouden. Het combineert krachtige taalbegrip- en generatiecapaciteiten, geschikt voor grootschalige toepassingsscenario's, waaronder klantenservice, onderwijs en technische ondersteuning."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o is een dynamisch model dat in realtime wordt bijgewerkt om de meest actuele versie te behouden. Het combineert krachtige taalbegrip- en generatiecapaciteiten, geschikt voor grootschalige toepassingsscenario's, waaronder klantenservice, onderwijs en technische ondersteuning."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini is het nieuwste model van OpenAI, gelanceerd na GPT-4 Omni, en ondersteunt zowel tekst- als beeldinvoer met tekstuitvoer. Als hun meest geavanceerde kleine model is het veel goedkoper dan andere recente toonaangevende modellen en meer dan 60% goedkoper dan GPT-3.5 Turbo. Het behoudt de meest geavanceerde intelligentie met een aanzienlijke prijs-kwaliteitverhouding. GPT-4o mini behaalde 82% op de MMLU-test en staat momenteel hoger in chatvoorkeuren dan GPT-4."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B is een taalmodel dat creativiteit en intelligentie combineert door meerdere topmodellen te integreren."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "Het innovatieve open-source model InternLM2.5 verhoogt de gespreksintelligentie door een groot aantal parameters."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 biedt intelligente gespreksoplossingen voor meerdere scenario's."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct model, met 70B parameters, biedt uitstekende prestaties in grote tekstgeneratie- en instructietaken."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B biedt krachtigere AI-inferentiecapaciteiten, geschikt voor complexe toepassingen, ondersteunt een enorme rekenverwerking en garandeert efficiëntie en nauwkeurigheid."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B is een hoogpresterend model dat snelle tekstgeneratiecapaciteiten biedt, zeer geschikt voor toepassingen die grootschalige efficiëntie en kosteneffectiviteit vereisen."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct model, met 8B parameters, ondersteunt de efficiënte uitvoering van visuele instructietaken en biedt hoogwaardige tekstgeneratiecapaciteiten."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Llama 3.1 Sonar Huge Online model, met 405B parameters, ondersteunt een contextlengte van ongeveer 127.000 tokens, ontworpen voor complexe online chattoepassingen."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Llama 3.1 Sonar Large Chat model, met 70B parameters, ondersteunt een contextlengte van ongeveer 127.000 tokens, geschikt voor complexe offline chattaken."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Llama 3.1 Sonar Large Online model, met 70B parameters, ondersteunt een contextlengte van ongeveer 127.000 tokens, geschikt voor hoge capaciteit en diverse chattaken."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Llama 3.1 Sonar Small Chat model, met 8B parameters, speciaal ontworpen voor offline chat, ondersteunt een contextlengte van ongeveer 127.000 tokens."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Llama 3.1 Sonar Small Online model, met 8B parameters, ondersteunt een contextlengte van ongeveer 127.000 tokens, speciaal ontworpen voor online chat en kan efficiënt verschillende tekstinteracties verwerken."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B biedt ongeëvenaarde complexiteitsverwerkingscapaciteiten, op maat gemaakt voor veeleisende projecten."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B biedt hoogwaardige inferentieprestaties, geschikt voor diverse toepassingsbehoeften."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use biedt krachtige tool-aanroepcapaciteiten en ondersteunt efficiënte verwerking van complexe taken."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use is een model dat is geoptimaliseerd voor efficiënt gebruik van tools, ondersteunt snelle parallelle berekeningen."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 is een toonaangevend model van Meta, ondersteunt tot 405B parameters en kan worden toegepast in complexe gesprekken, meertalige vertalingen en data-analyse."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 is een toonaangevend model van Meta, ondersteunt tot 405B parameters en kan worden toegepast in complexe gesprekken, meertalige vertalingen en data-analyse."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 is een toonaangevend model van Meta, ondersteunt tot 405B parameters en kan worden toegepast in complexe gesprekken, meertalige vertalingen en data-analyse."
+ },
+ "llava": {
+ "description": "LLaVA is een multimodaal model dat visuele encoder en Vicuna combineert, voor krachtige visuele en taalbegrip."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B biedt visuele verwerkingscapaciteiten, genereert complexe output via visuele informatie-invoer."
+ },
+ "llava:13b": {
+ "description": "LLaVA is een multimodaal model dat visuele encoder en Vicuna combineert, voor krachtige visuele en taalbegrip."
+ },
+ "llava:34b": {
+ "description": "LLaVA is een multimodaal model dat visuele encoder en Vicuna combineert, voor krachtige visuele en taalbegrip."
+ },
+ "mathstral": {
+ "description": "MathΣtral is ontworpen voor wetenschappelijk onderzoek en wiskundige inferentie, biedt effectieve rekencapaciteiten en resultaatinterpretatie."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Een krachtig model met 70 miljard parameters dat uitblinkt in redeneren, coderen en brede taaltoepassingen."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Een veelzijdig model met 8 miljard parameters, geoptimaliseerd voor dialoog- en tekstgeneratietaken."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "De Llama 3.1 instructie-geoptimaliseerde tekstmodellen zijn geoptimaliseerd voor meertalige dialoogtoepassingen en presteren beter dan veel beschikbare open source en gesloten chatmodellen op gangbare industriële benchmarks."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "De Llama 3.1 instructie-geoptimaliseerde tekstmodellen zijn geoptimaliseerd voor meertalige dialoogtoepassingen en presteren beter dan veel beschikbare open source en gesloten chatmodellen op gangbare industriële benchmarks."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "De Llama 3.1 instructie-geoptimaliseerde tekstmodellen zijn geoptimaliseerd voor meertalige dialoogtoepassingen en presteren beter dan veel beschikbare open source en gesloten chatmodellen op gangbare industriële benchmarks."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) biedt uitstekende taalverwerkingscapaciteiten en een geweldige interactie-ervaring."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) is een krachtig chatmodel dat complexe gespreksbehoeften ondersteunt."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) biedt meertalige ondersteuning en dekt een breed scala aan domeinkennis."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite is geschikt voor omgevingen die hoge prestaties en lage latentie vereisen."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo biedt uitstekende taalbegrip en generatiecapaciteiten, geschikt voor de meest veeleisende rekenkundige taken."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite is geschikt voor omgevingen met beperkte middelen en biedt een uitstekende balans in prestaties."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo is een krachtige grote taalmodel, geschikt voor een breed scala aan toepassingsscenario's."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B is een krachtig model voor voortraining en instructiefijnafstemming."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "405B Llama 3.1 Turbo model biedt enorme contextondersteuning voor big data verwerking en presteert uitstekend in grootschalige AI-toepassingen."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B biedt efficiënte gespreksondersteuning in meerdere talen."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Llama 3.1 70B model is fijn afgesteld voor toepassingen met hoge belasting, gekwantiseerd naar FP8 voor efficiëntere rekenkracht en nauwkeurigheid, en zorgt voor uitstekende prestaties in complexe scenario's."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 biedt meertalige ondersteuning en is een van de toonaangevende generatieve modellen in de industrie."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Llama 3.1 8B model maakt gebruik van FP8-kwantisering en ondersteunt tot 131.072 contexttokens, en is een van de beste open-source modellen, geschikt voor complexe taken en presteert beter dan veel industriestandaarden."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct is geoptimaliseerd voor hoogwaardige gespreksscenario's en presteert uitstekend in verschillende menselijke evaluaties."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct is geoptimaliseerd voor hoogwaardige gespreksscenario's en presteert beter dan veel gesloten modellen."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct is de nieuwste versie van Meta, geoptimaliseerd voor het genereren van hoogwaardige gesprekken en overtreft veel toonaangevende gesloten modellen."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct is ontworpen voor hoogwaardige gesprekken en presteert uitstekend in menselijke evaluaties, vooral in interactieve scenario's."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct is de nieuwste versie van Meta, geoptimaliseerd voor hoogwaardige gespreksscenario's en presteert beter dan veel toonaangevende gesloten modellen."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 biedt ondersteuning voor meerdere talen en is een van de toonaangevende generatiemodellen in de industrie."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct is het grootste en krachtigste model binnen het Llama 3.1 Instruct-model, een geavanceerd model voor conversatie-inferentie en synthetische datageneratie, dat ook kan worden gebruikt als basis voor gespecialiseerde continue pre-training of fine-tuning in specifieke domeinen. De meertalige grote taalmodellen (LLMs) die Llama 3.1 biedt, zijn een set van voorgetrainde, instructie-geoptimaliseerde generatieve modellen, waaronder 8B, 70B en 405B in grootte (tekstinvoer/uitvoer). De tekstmodellen van Llama 3.1, die zijn geoptimaliseerd voor meertalige conversatiegebruik, overtreffen veel beschikbare open-source chatmodellen in gangbare industriële benchmarktests. Llama 3.1 is ontworpen voor commercieel en onderzoeksgebruik in meerdere talen. De instructie-geoptimaliseerde tekstmodellen zijn geschikt voor assistentachtige chats, terwijl de voorgetrainde modellen zich kunnen aanpassen aan verschillende taken voor natuurlijke taalgeneratie. Het Llama 3.1-model ondersteunt ook het verbeteren van andere modellen door gebruik te maken van de output van zijn modellen, inclusief synthetische datageneratie en verfijning. Llama 3.1 is een autoregressief taalmodel dat gebruikmaakt van een geoptimaliseerde transformer-architectuur. De afgestelde versies gebruiken supervisie-finetuning (SFT) en versterkend leren met menselijke feedback (RLHF) om te voldoen aan menselijke voorkeuren voor behulpzaamheid en veiligheid."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "De bijgewerkte versie van Meta Llama 3.1 70B Instruct, met een uitgebreid contextlengte van 128K, meertaligheid en verbeterde redeneercapaciteiten. De meertalige grote taalmodellen (LLMs) die door Llama 3.1 worden aangeboden, zijn een set voorgetrainde, instructie-aangepaste generatieve modellen, inclusief 8B, 70B en 405B in grootte (tekstinvoer/uitvoer). De instructie-aangepaste tekstmodellen (8B, 70B, 405B) zijn geoptimaliseerd voor meertalige dialoogtoepassingen en hebben veel beschikbare open-source chatmodellen overtroffen in gangbare industriële benchmarktests. Llama 3.1 is bedoeld voor commerciële en onderzoeksdoeleinden in meerdere talen. De instructie-aangepaste tekstmodellen zijn geschikt voor assistentachtige chats, terwijl de voorgetrainde modellen kunnen worden aangepast voor verschillende natuurlijke taalgeneratietaken. Llama 3.1-modellen ondersteunen ook het gebruik van hun output om andere modellen te verbeteren, inclusief synthetische gegevensgeneratie en verfijning. Llama 3.1 is een autoregressief taalmodel dat gebruikmaakt van een geoptimaliseerde transformerarchitectuur. De aangepaste versies maken gebruik van supervisie-fijnstelling (SFT) en versterkend leren met menselijke feedback (RLHF) om te voldoen aan menselijke voorkeuren voor behulpzaamheid en veiligheid."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "De bijgewerkte versie van Meta Llama 3.1 8B Instruct, met een uitgebreid contextlengte van 128K, meertaligheid en verbeterde redeneercapaciteiten. De meertalige grote taalmodellen (LLMs) die door Llama 3.1 worden aangeboden, zijn een set voorgetrainde, instructie-aangepaste generatieve modellen, inclusief 8B, 70B en 405B in grootte (tekstinvoer/uitvoer). De instructie-aangepaste tekstmodellen (8B, 70B, 405B) zijn geoptimaliseerd voor meertalige dialoogtoepassingen en hebben veel beschikbare open-source chatmodellen overtroffen in gangbare industriële benchmarktests. Llama 3.1 is bedoeld voor commerciële en onderzoeksdoeleinden in meerdere talen. De instructie-aangepaste tekstmodellen zijn geschikt voor assistentachtige chats, terwijl de voorgetrainde modellen kunnen worden aangepast voor verschillende natuurlijke taalgeneratietaken. Llama 3.1-modellen ondersteunen ook het gebruik van hun output om andere modellen te verbeteren, inclusief synthetische gegevensgeneratie en verfijning. Llama 3.1 is een autoregressief taalmodel dat gebruikmaakt van een geoptimaliseerde transformerarchitectuur. De aangepaste versies maken gebruik van supervisie-fijnstelling (SFT) en versterkend leren met menselijke feedback (RLHF) om te voldoen aan menselijke voorkeuren voor behulpzaamheid en veiligheid."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 is een open groot taalmodel (LLM) gericht op ontwikkelaars, onderzoekers en bedrijven, ontworpen om hen te helpen bij het bouwen, experimenteren en verantwoordelijk opschalen van hun generatieve AI-ideeën. Als onderdeel van het basis systeem voor wereldwijde gemeenschapsinnovatie is het zeer geschikt voor contentcreatie, conversatie-AI, taalbegrip, R&D en zakelijke toepassingen."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 is een open groot taalmodel (LLM) gericht op ontwikkelaars, onderzoekers en bedrijven, ontworpen om hen te helpen bij het bouwen, experimenteren en verantwoordelijk opschalen van hun generatieve AI-ideeën. Als onderdeel van het basis systeem voor wereldwijde gemeenschapsinnovatie is het zeer geschikt voor apparaten met beperkte rekenkracht en middelen, edge-apparaten en snellere trainingstijden."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B is het nieuwste snelle en lichte model van Microsoft AI, met prestaties die bijna 10 keer beter zijn dan de huidige toonaangevende open-source modellen."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B is het meest geavanceerde Wizard-model van Microsoft AI, met een uiterst competitieve prestatie."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V is de nieuwe generatie multimodale grote modellen van OpenBMB, met uitstekende OCR-herkenning en multimodaal begrip, geschikt voor een breed scala aan toepassingsscenario's."
+ },
+ "mistral": {
+ "description": "Mistral is het 7B-model van Mistral AI, geschikt voor variabele taalverwerkingsbehoeften."
+ },
+ "mistral-large": {
+ "description": "Mixtral Large is het vlaggenschipmodel van Mistral, dat de capaciteiten van codegeneratie, wiskunde en inferentie combineert, ondersteunt een contextvenster van 128k."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) is een geavanceerd Large Language Model (LLM) met state-of-the-art redenerings-, kennis- en coderingscapaciteiten."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large is het vlaggenschipmodel, dat uitblinkt in meertalige taken, complexe inferentie en codegeneratie, ideaal voor high-end toepassingen."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo is een 12B-model dat is ontwikkeld in samenwerking met Mistral AI en NVIDIA, biedt efficiënte prestaties."
+ },
+ "mistral-small": {
+ "description": "Mistral Small kan worden gebruikt voor elke taalkundige taak die hoge efficiëntie en lage latentie vereist."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small is een kosteneffectieve, snelle en betrouwbare optie voor gebruikscases zoals vertaling, samenvatting en sentimentanalyse."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct staat bekend om zijn hoge prestaties en is geschikt voor verschillende taalgerelateerde taken."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B is een model dat op aanvraag is fijn afgesteld om geoptimaliseerde antwoorden voor taken te bieden."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 biedt efficiënte rekenkracht en natuurlijke taalbegrip, geschikt voor een breed scala aan toepassingen."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) is een supergroot taalmodel dat extreem hoge verwerkingsbehoeften ondersteunt."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B is een voorgetraind spaarzaam mengexpertmodel, gebruikt voor algemene teksttaken."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct is een hoogwaardig industrieel standaardmodel met snelheidoptimalisatie en ondersteuning voor lange contexten."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo is een model met 7,3 miljard parameters dat meertalige ondersteuning en hoge prestaties biedt."
+ },
+ "mixtral": {
+ "description": "Mixtral is het expertmodel van Mistral AI, met open-source gewichten en biedt ondersteuning voor codegeneratie en taalbegrip."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B biedt hoge fouttolerantie en parallelle verwerkingscapaciteiten, geschikt voor complexe taken."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral is het expertmodel van Mistral AI, met open-source gewichten en biedt ondersteuning voor codegeneratie en taalbegrip."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K is een model met een superlange contextverwerkingscapaciteit, geschikt voor het genereren van zeer lange teksten, voldoet aan de behoeften van complexe generatietaken en kan tot 128.000 tokens verwerken, zeer geschikt voor onderzoek, academische en grote documentgeneratie."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K biedt een gemiddelde contextverwerkingscapaciteit, kan 32.768 tokens verwerken, bijzonder geschikt voor het genereren van verschillende lange documenten en complexe gesprekken, toegepast in contentcreatie, rapportgeneratie en conversatiesystemen."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K is speciaal ontworpen voor het genereren van korte teksttaken, met efficiënte verwerkingsprestaties, kan 8.192 tokens verwerken, zeer geschikt voor korte gesprekken, notities en snelle contentgeneratie."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B is een upgrade van Nous Hermes 2, met de nieuwste intern ontwikkelde datasets."
+ },
+ "o1-mini": {
+ "description": "o1-mini is een snel en kosteneffectief redeneermodel dat is ontworpen voor programmeer-, wiskunde- en wetenschappelijke toepassingen. Dit model heeft een context van 128K en een kennisafkapdatum van oktober 2023."
+ },
+ "o1-preview": {
+ "description": "o1 is het nieuwe redeneermodel van OpenAI, geschikt voor complexe taken die uitgebreide algemene kennis vereisen. Dit model heeft een context van 128K en een kennisafkapdatum van oktober 2023."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba is een Mamba 2-taalmodel dat zich richt op codegeneratie en krachtige ondersteuning biedt voor geavanceerde code- en inferentietaken."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B is een compact maar hoogpresterend model, dat uitblinkt in batchverwerking en eenvoudige taken zoals classificatie en tekstgeneratie, met goede inferentiecapaciteiten."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo is een 12B-model ontwikkeld in samenwerking met Nvidia, biedt uitstekende inferentie- en coderingsprestaties, gemakkelijk te integreren en te vervangen."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B is een groter expertmodel dat zich richt op complexe taken, biedt uitstekende inferentiecapaciteiten en een hogere doorvoer."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B is een spaarzaam expertmodel dat meerdere parameters benut om de inferentiesnelheid te verhogen, geschikt voor het verwerken van meertalige en codegeneratietaken."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o is een dynamisch model dat in realtime wordt bijgewerkt om de meest actuele versie te behouden. Het combineert krachtige taalbegrip- en generatiecapaciteiten, geschikt voor grootschalige toepassingsscenario's, waaronder klantenservice, onderwijs en technische ondersteuning."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini is het nieuwste model van OpenAI, gelanceerd na GPT-4 Omni, dat tekst- en afbeeldingsinvoer ondersteunt en tekstuitvoer genereert. Als hun meest geavanceerde kleine model is het veel goedkoper dan andere recente toonaangevende modellen en meer dan 60% goedkoper dan GPT-3.5 Turbo. Het behoudt de meest geavanceerde intelligentie met een aanzienlijke prijs-kwaliteitverhouding. GPT-4o mini behaalde 82% op de MMLU-test en staat momenteel hoger in chatvoorkeuren dan GPT-4."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini is een snel en kosteneffectief redeneermodel dat is ontworpen voor programmeer-, wiskunde- en wetenschappelijke toepassingen. Dit model heeft een context van 128K en een kennisafkapdatum van oktober 2023."
+ },
+ "openai/o1-preview": {
+ "description": "o1 is het nieuwe redeneermodel van OpenAI, geschikt voor complexe taken die uitgebreide algemene kennis vereisen. Dit model heeft een context van 128K en een kennisafkapdatum van oktober 2023."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B is een open-source taalmodelbibliotheek die is geoptimaliseerd met de 'C-RLFT (Conditionele Versterkingsleer Fijnstelling)' strategie."
+ },
+ "openrouter/auto": {
+ "description": "Afhankelijk van de contextlengte, het onderwerp en de complexiteit, wordt uw verzoek verzonden naar Llama 3 70B Instruct, Claude 3.5 Sonnet (zelfregulerend) of GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 is een lichtgewicht open model van Microsoft, geschikt voor efficiënte integratie en grootschalige kennisinferentie."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 is een lichtgewicht open model van Microsoft, geschikt voor efficiënte integratie en grootschalige kennisinferentie."
+ },
+ "pixtral-12b-2409": {
+ "description": "Het Pixtral model toont sterke capaciteiten in taken zoals grafiek- en beeldbegrip, documentvraag-en-antwoord, multimodale redenering en instructievolging, en kan afbeeldingen met natuurlijke resolutie en beeldverhouding verwerken, evenals een onbeperkt aantal afbeeldingen in een lange contextvenster van maximaal 128K tokens."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "Het Tongyi Qianwen codeermodel."
+ },
+ "qwen-long": {
+ "description": "Qwen is een grootschalig taalmodel dat lange tekstcontexten ondersteunt, evenals dialoogfunctionaliteit op basis van lange documenten en meerdere documenten."
+ },
+ "qwen-math-plus-latest": {
+ "description": "Het Tongyi Qianwen wiskundemodel is speciaal ontworpen voor het oplossen van wiskundige problemen."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "Het Tongyi Qianwen wiskundemodel is speciaal ontworpen voor het oplossen van wiskundige problemen."
+ },
+ "qwen-max-latest": {
+ "description": "Het Tongyi Qianwen model met een schaal van honderden miljarden, ondersteunt invoer in verschillende talen, waaronder Chinees en Engels, en is de API-model achter de huidige Tongyi Qianwen 2.5 productversie."
+ },
+ "qwen-plus-latest": {
+ "description": "De verbeterde versie van het Tongyi Qianwen supergrote taalmodel ondersteunt invoer in verschillende talen, waaronder Chinees en Engels."
+ },
+ "qwen-turbo-latest": {
+ "description": "De Tongyi Qianwen supergrote taalmodel ondersteunt invoer in verschillende talen, waaronder Chinees en Engels."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Qwen VL ondersteunt flexibele interactiemethoden, inclusief meerdere afbeeldingen, meerdere rondes van vraag en antwoord, en creatiecapaciteiten."
+ },
+ "qwen-vl-max": {
+ "description": "Qwen is een grootschalig visueel taalmodel. In vergelijking met de verbeterde versie biedt het een verdere verbetering van de visuele redeneercapaciteit en de naleving van instructies, met een hoger niveau van visuele waarneming en cognitie."
+ },
+ "qwen-vl-plus": {
+ "description": "Qwen is een verbeterde versie van het grootschalige visuele taalmodel. Het verbetert aanzienlijk de detailherkenning en tekstherkenning, en ondersteunt afbeeldingen met een resolutie van meer dan een miljoen pixels en een willekeurige beeldverhouding."
+ },
+ "qwen-vl-v1": {
+ "description": "Geïnitieerd met het Qwen-7B taalmodel, voegt het een afbeeldingsmodel toe, met een invoerresolutie van 448 voor het voorgetrainde model."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 is een gloednieuwe serie grote taalmodellen met sterkere begrip- en generatiecapaciteiten."
+ },
+ "qwen2": {
+ "description": "Qwen2 is Alibaba's nieuwe generatie grootschalig taalmodel, ondersteunt diverse toepassingsbehoeften met uitstekende prestaties."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "Het 14B model van Tongyi Qianwen 2.5 is open source beschikbaar."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "Het 32B model van Tongyi Qianwen 2.5 is open source beschikbaar."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "Het 72B model van Tongyi Qianwen 2.5 is open source beschikbaar."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "Het 7B model van Tongyi Qianwen 2.5 is open source beschikbaar."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "De open source versie van het Tongyi Qianwen codeermodel."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "De open source versie van het Tongyi Qianwen codeermodel."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Het Qwen-Math model heeft krachtige capaciteiten voor het oplossen van wiskundige problemen."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Het Qwen-Math model heeft krachtige capaciteiten voor het oplossen van wiskundige problemen."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Het Qwen-Math model heeft krachtige capaciteiten voor het oplossen van wiskundige problemen."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 is Alibaba's nieuwe generatie grootschalig taalmodel, ondersteunt diverse toepassingsbehoeften met uitstekende prestaties."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 is Alibaba's nieuwe generatie grootschalig taalmodel, ondersteunt diverse toepassingsbehoeften met uitstekende prestaties."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 is Alibaba's nieuwe generatie grootschalig taalmodel, ondersteunt diverse toepassingsbehoeften met uitstekende prestaties."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini is een compact LLM dat beter presteert dan GPT-3.5, met sterke meertalige capaciteiten, ondersteunt Engels en Koreaans, en biedt een efficiënte en compacte oplossing."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) breidt de mogelijkheden van Solar Mini uit, met een focus op de Japanse taal, terwijl het ook efficiënt en uitstekend presteert in het gebruik van Engels en Koreaans."
+ },
+ "solar-pro": {
+ "description": "Solar Pro is een zeer intelligent LLM dat is uitgebracht door Upstage, gericht op instructievolging met één GPU, met een IFEval-score van boven de 80. Momenteel ondersteunt het Engels, met een officiële versie die gepland staat voor november 2024, die de taalondersteuning en contextlengte zal uitbreiden."
+ },
+ "step-1-128k": {
+ "description": "Biedt een balans tussen prestaties en kosten, geschikt voor algemene scenario's."
+ },
+ "step-1-256k": {
+ "description": "Heeft ultra-lange contextverwerkingscapaciteiten, vooral geschikt voor lange documentanalyse."
+ },
+ "step-1-32k": {
+ "description": "Ondersteunt gesprekken van gemiddelde lengte, geschikt voor verschillende toepassingsscenario's."
+ },
+ "step-1-8k": {
+ "description": "Klein model, geschikt voor lichte taken."
+ },
+ "step-1-flash": {
+ "description": "Hogesnelheidsmodel, geschikt voor realtime gesprekken."
+ },
+ "step-1v-32k": {
+ "description": "Ondersteunt visuele invoer, verbetert de multimodale interactie-ervaring."
+ },
+ "step-1v-8k": {
+ "description": "Klein visueel model, geschikt voor basis tekst- en afbeeldingtaken."
+ },
+ "step-2-16k": {
+ "description": "Ondersteunt grootschalige contextinteracties, geschikt voor complexe gespreksscenario's."
+ },
+ "taichu_llm": {
+ "description": "Het Zido Tai Chu-taalmodel heeft een sterke taalbegripcapaciteit en kan tekstcreatie, kennisvragen, codeprogrammering, wiskundige berekeningen, logische redenering, sentimentanalyse, tekstsamenvattingen en meer aan. Het combineert innovatief grote data voortraining met rijke kennis uit meerdere bronnen, door algoritmische technologie continu te verfijnen en voortdurend nieuwe kennis op te nemen uit enorme tekstdata op het gebied van vocabulaire, structuur, grammatica en semantiek, waardoor de modelprestaties voortdurend evolueren. Het biedt gebruikers gemakkelijkere informatie en diensten en een meer intelligente ervaring."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V combineert capaciteiten zoals beeldbegrip, kennisoverdracht en logische toerekening, en presteert uitstekend in het domein van beeld-tekst vraag en antwoord."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) biedt verbeterde rekenkracht door middel van efficiënte strategieën en modelarchitectuur."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) is geschikt voor verfijnde instructietaken en biedt uitstekende taalverwerkingscapaciteiten."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 is een taalmodel van Microsoft AI dat uitblinkt in complexe gesprekken, meertaligheid, inferentie en intelligente assistentie."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 is een taalmodel van Microsoft AI dat uitblinkt in complexe gesprekken, meertaligheid, inferentie en intelligente assistentie."
+ },
+ "yi-large": {
+ "description": "Een nieuw model met honderden miljarden parameters, biedt superieure vraag- en tekstgeneratiecapaciteiten."
+ },
+ "yi-large-fc": {
+ "description": "Bouwt voort op het yi-large model en versterkt de mogelijkheden voor functie-aanroepen, geschikt voor verschillende zakelijke scenario's die agent- of workflowopbouw vereisen."
+ },
+ "yi-large-preview": {
+ "description": "Vroegere versie, aanbevolen om yi-large (nieuwe versie) te gebruiken."
+ },
+ "yi-large-rag": {
+ "description": "Een geavanceerde service op basis van het yi-large model, die retrieval en generatietechnologie combineert om nauwkeurige antwoorden te bieden en realtime informatie van het hele web te doorzoeken."
+ },
+ "yi-large-turbo": {
+ "description": "Biedt een uitstekende prijs-kwaliteitverhouding en prestaties. Voert een nauwkeurige afstemming uit op basis van prestaties, redeneersnelheid en kosten."
+ },
+ "yi-medium": {
+ "description": "Gemiddeld formaat model met geoptimaliseerde afstemming, biedt een evenwichtige prijs-kwaliteitverhouding. Diep geoptimaliseerde instructievolgcapaciteiten."
+ },
+ "yi-medium-200k": {
+ "description": "200K ultra-lange contextvenster, biedt diepgaand begrip en generatiecapaciteiten voor lange teksten."
+ },
+ "yi-spark": {
+ "description": "Klein maar krachtig, een lichtgewicht en snelle model. Biedt versterkte wiskundige berekeningen en codeercapaciteiten."
+ },
+ "yi-vision": {
+ "description": "Model voor complexe visuele taken, biedt hoge prestaties in beeldbegrip en analyse."
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/plugin.json b/DigitalHumanWeb/locales/nl-NL/plugin.json
new file mode 100644
index 0000000..3ada908
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Argumenten",
+ "function_call": "Functieoproep",
+ "off": "Zet debug uit",
+ "on": "Bekijk plug-in oproepinformatie",
+ "payload": "plug-in payload",
+ "response": "Reactie",
+ "tool_call": "Tool-oproepverzoek"
+ },
+ "detailModal": {
+ "info": {
+ "description": "API-beschrijving",
+ "name": "API-naam"
+ },
+ "tabs": {
+ "info": "Plug-in mogelijkheden",
+ "manifest": "Installatiebestand",
+ "settings": "Instellingen"
+ },
+ "title": "Plug-in Details"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Weet u zeker dat u deze lokale plug-in wilt verwijderen? Eenmaal verwijderd, kan het niet worden hersteld.",
+ "customParams": {
+ "useProxy": {
+ "label": "Installeren via proxy (als u problemen ondervindt met toegang tot cross-origin, probeer dan deze optie in te schakelen en opnieuw te installeren)"
+ }
+ },
+ "deleteSuccess": "Plug-in succesvol verwijderd",
+ "manifest": {
+ "identifier": {
+ "desc": "De unieke identificatie van de plug-in",
+ "label": "Identificatie"
+ },
+ "mode": {
+ "local": "Visuele configuratie",
+ "local-tooltip": "Visuele configuratie wordt op dit moment niet ondersteund",
+ "url": "Online link"
+ },
+ "name": {
+ "desc": "De titel van de plug-in",
+ "label": "Titel",
+ "placeholder": "Zoekmachine"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "De auteur van de plug-in",
+ "label": "Auteur"
+ },
+ "avatar": {
+ "desc": "Het pictogram van de plug-in, kan een Emoji of een URL zijn",
+ "label": "Pictogram"
+ },
+ "description": {
+ "desc": "De beschrijving van de plug-in",
+ "label": "Beschrijving",
+ "placeholder": "Informatie verkrijgen van zoekmachines"
+ },
+ "formFieldRequired": "Dit veld is verplicht",
+ "homepage": {
+ "desc": "De startpagina van de plug-in",
+ "label": "Startpagina"
+ },
+ "identifier": {
+ "desc": "De unieke identificatie van de plug-in, ondersteunt alleen alfanumerieke tekens, koppelteken - en underscore _",
+ "errorDuplicate": "De identificatie wordt al gebruikt door een andere plug-in, wijzig de identificatie",
+ "label": "Identificatie",
+ "pattenErrorMessage": "Alleen alfanumerieke tekens, koppelteken - en underscore _ zijn toegestaan"
+ },
+ "manifest": {
+ "desc": "{{appName}} zal de plugin installeren via deze link",
+ "label": "Plug-in Beschrijving (Manifest) URL",
+ "preview": "Voorbeeld",
+ "refresh": "Vernieuwen"
+ },
+ "title": {
+ "desc": "De titel van de plug-in",
+ "label": "Titel",
+ "placeholder": "Zoekmachine"
+ }
+ },
+ "metaConfig": "Configuratie van plug-inmetadata",
+ "modalDesc": "Na het toevoegen van een aangepaste plug-in kan deze worden gebruikt voor verificatie van plug-inontwikkeling of direct in de sessie. Raadpleeg de <1>ontwikkelingsdocumentatie↗> voor plug-inontwikkeling.",
+ "openai": {
+ "importUrl": "Importeren van URL-link",
+ "schema": "Schema"
+ },
+ "preview": {
+ "card": "Voorbeeld van plug-inweergave",
+ "desc": "Voorbeeld van plug-inbeschrijving",
+ "title": "Voorbeeld van plug-innaam"
+ },
+ "save": "Installeer plug-in",
+ "saveSuccess": "Instellingen van plug-in succesvol opgeslagen",
+ "tabs": {
+ "manifest": "Functiebeschrijving Manifest (Manifest)",
+ "meta": "Plug-in Metadata"
+ },
+ "title": {
+ "create": "Aangepaste plug-in toevoegen",
+ "edit": "Aangepaste plug-in bewerken"
+ },
+ "type": {
+ "lobe": "LobeChat-plug-in",
+ "openai": "OpenAI-plug-in"
+ },
+ "update": "Bijwerken",
+ "updateSuccess": "Instellingen van plug-in succesvol bijgewerkt"
+ },
+ "error": {
+ "fetchError": "Het ophalen van de manifest-link is mislukt. Zorg ervoor dat de link geldig is en controleer of de link cross-origin toegang toestaat.",
+ "installError": "Installatie van de plugin {{name}} is mislukt.",
+ "manifestInvalid": "Manifest voldoet niet aan de specificatie. Validatieresultaat: \n\n {{error}}",
+ "noManifest": "Geen manifest beschikbaar",
+ "openAPIInvalid": "OpenAPI-analyse mislukt. Fout: \n\n {{error}}",
+ "reinstallError": "Vernieuwen van de plugin {{name}} is mislukt.",
+ "urlError": "De link retourneert geen JSON-indeling. Zorg ervoor dat het een geldige link is."
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Verouderd",
+ "local.config": "Configuratie",
+ "local.title": "Aangepast"
+ }
+ },
+ "loading": {
+ "content": "Plugin wordt geladen...",
+ "plugin": "Plugin wordt uitgevoerd..."
+ },
+ "pluginList": "Lijst met plugins",
+ "setting": "Plugin-instellingen",
+ "settings": {
+ "indexUrl": {
+ "title": "Marktindex",
+ "tooltip": "Online bewerken wordt momenteel niet ondersteund. Stel in via omgevingsvariabelen tijdens implementatie."
+ },
+ "modalDesc": "Na het instellen van de marktlocatie voor plugins, kunt u een aangepaste pluginmarkt gebruiken.",
+ "title": "Instellingen voor pluginmarkt"
+ },
+ "showInPortal": "Gelieve de details in het portaal te bekijken",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Deze plugin wordt binnenkort verwijderd. Na verwijdering worden de configuraties gewist. Weet u zeker dat u door wilt gaan?",
+ "detail": "Details",
+ "install": "Installeren",
+ "manifest": "Installatiebestand bewerken",
+ "settings": "Instellingen",
+ "uninstall": "Verwijderen"
+ },
+ "communityPlugin": "Community",
+ "customPlugin": "Aangepast",
+ "empty": "Geen geïnstalleerde plugins beschikbaar",
+ "installAllPlugins": "Allemaal installeren",
+ "networkError": "Kan de pluginwinkel niet laden. Controleer de netwerkverbinding en probeer het opnieuw.",
+ "placeholder": "Zoek plugin op naam, beschrijving of trefwoord...",
+ "releasedAt": "Uitgebracht op {{createdAt}}",
+ "tabs": {
+ "all": "Alle",
+ "installed": "Geïnstalleerd"
+ },
+ "title": "Pluginwinkel"
+ },
+ "unknownPlugin": "onbekende plugin"
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/portal.json b/DigitalHumanWeb/locales/nl-NL/portal.json
new file mode 100644
index 0000000..3ffe58c
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Artifacts",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Chunk",
+ "file": "Bestand"
+ }
+ },
+ "Plugins": "Plugins",
+ "actions": {
+ "genAiMessage": "Creëer assistentbericht",
+ "summary": "Samenvatting",
+ "summaryTooltip": "Samenvatting van de huidige inhoud"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Code",
+ "preview": "Voorbeeld"
+ },
+ "svg": {
+ "copyAsImage": "Kopieer als afbeelding",
+ "copyFail": "Kopiëren mislukt, foutmelding: {{error}}",
+ "copySuccess": "Afbeelding succesvol gekopieerd",
+ "download": {
+ "png": "Download als PNG",
+ "svg": "Download als SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "De huidige lijst met Artifacts is leeg. Gebruik plugins in de sessie en bekijk deze later opnieuw.",
+ "emptyKnowledgeList": "De huidige kennislijst is leeg. Gelieve de kennisbank in de sessie te openen voordat u deze bekijkt.",
+ "files": "Bestanden",
+ "messageDetail": "Berichtdetails",
+ "title": "Uitbreidingsvenster"
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/providers.json b/DigitalHumanWeb/locales/nl-NL/providers.json
new file mode 100644
index 0000000..79b4561
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI is een AI-model- en serviceplatform gelanceerd door het bedrijf 360, dat verschillende geavanceerde modellen voor natuurlijke taalverwerking biedt, waaronder 360GPT2 Pro, 360GPT Pro, 360GPT Turbo en 360GPT Turbo Responsibility 8K. Deze modellen combineren grootschalige parameters en multimodale capaciteiten, en worden breed toegepast in tekstgeneratie, semantisch begrip, dialoogsystemen en codegeneratie. Met flexibele prijsstrategieën voldoet 360 AI aan diverse gebruikersbehoeften, ondersteunt het ontwikkelaars bij integratie en bevordert het de innovatie en ontwikkeling van intelligente toepassingen."
+ },
+ "anthropic": {
+ "description": "Anthropic is een bedrijf dat zich richt op onderzoek en ontwikkeling van kunstmatige intelligentie, en biedt een reeks geavanceerde taalmodellen aan, zoals Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus en Claude 3 Haiku. Deze modellen bereiken een ideale balans tussen intelligentie, snelheid en kosten, en zijn geschikt voor een breed scala aan toepassingen, van bedrijfswerkbelasting tot snelle respons. Claude 3.5 Sonnet, als hun nieuwste model, presteert uitstekend in verschillende evaluaties, terwijl het een hoge kosteneffectiviteit behoudt."
+ },
+ "azure": {
+ "description": "Azure biedt een scala aan geavanceerde AI-modellen, waaronder GPT-3.5 en de nieuwste GPT-4-serie, die verschillende datatypes en complexe taken ondersteunen, met een focus op veilige, betrouwbare en duurzame AI-oplossingen."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligent is een bedrijf dat zich richt op de ontwikkeling van grote modellen voor kunstmatige intelligentie, wiens modellen uitblinken in Chinese taken zoals kennisencyclopedieën, lange tekstverwerking en generatieve creatie, en de mainstream modellen uit het buitenland overtreffen. Baichuan Intelligent heeft ook toonaangevende multimodale capaciteiten en presteert uitstekend in verschillende autoritatieve evaluaties. Hun modellen omvatten Baichuan 4, Baichuan 3 Turbo en Baichuan 3 Turbo 128k, die zijn geoptimaliseerd voor verschillende toepassingsscenario's en kosteneffectieve oplossingen bieden."
+ },
+ "bedrock": {
+ "description": "Bedrock is een dienst van Amazon AWS die zich richt op het bieden van geavanceerde AI-taalmodellen en visuele modellen voor bedrijven. De modellenfamilie omvat de Claude-serie van Anthropic, de Llama 3.1-serie van Meta, en meer, met opties variërend van lichtgewicht tot hoge prestaties, en ondersteunt tekstgeneratie, dialogen, beeldverwerking en meer, geschikt voor bedrijfsapplicaties van verschillende schalen en behoeften."
+ },
+ "deepseek": {
+ "description": "DeepSeek is een bedrijf dat zich richt op onderzoek en toepassing van kunstmatige intelligentietechnologie, en hun nieuwste model DeepSeek-V2.5 combineert algemene dialoog- en codeverwerkingscapaciteiten, met significante verbeteringen in het afstemmen op menselijke voorkeuren, schrijfopdrachten en het volgen van instructies."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI is een toonaangevende aanbieder van geavanceerde taalmodellen, met een focus op functionele aanroepen en multimodale verwerking. Hun nieuwste model Firefunction V2 is gebaseerd op Llama-3 en geoptimaliseerd voor functieaanroepen, dialogen en het volgen van instructies. Het visuele taalmodel FireLLaVA-13B ondersteunt gemengde invoer van afbeeldingen en tekst. Andere opmerkelijke modellen zijn de Llama-serie en de Mixtral-serie, die efficiënte ondersteuning bieden voor meertalig volgen van instructies en genereren."
+ },
+ "github": {
+ "description": "Met GitHub-modellen kunnen ontwikkelaars AI-ingenieurs worden en bouwen met de toonaangevende AI-modellen in de industrie."
+ },
+ "google": {
+ "description": "De Gemini-serie van Google is hun meest geavanceerde, algemene AI-modellen, ontwikkeld door Google DeepMind, speciaal ontworpen voor multimodale toepassingen, en ondersteunt naadloze begrip en verwerking van tekst, code, afbeeldingen, audio en video. Geschikt voor verschillende omgevingen, van datacenters tot mobiele apparaten, verhoogt het de efficiëntie en toepasbaarheid van AI-modellen aanzienlijk."
+ },
+ "groq": {
+ "description": "De LPU-inferentie-engine van Groq presteert uitstekend in de nieuwste onafhankelijke benchmarktests voor grote taalmodellen (LLM), en herdefinieert de normen voor AI-oplossingen met zijn verbazingwekkende snelheid en efficiëntie. Groq is een vertegenwoordiger van onmiddellijke inferentiesnelheid en toont goede prestaties in cloudgebaseerde implementaties."
+ },
+ "minimax": {
+ "description": "MiniMax is een algemeen kunstmatige intelligentietechnologiebedrijf dat in 2021 is opgericht, en zich richt op co-creatie van intelligentie met gebruikers. MiniMax heeft verschillende multimodale algemene grote modellen ontwikkeld, waaronder een MoE-tekstgrootmodel met triljoenen parameters, een spraakgrootmodel en een afbeeldingsgrootmodel. Ze hebben ook toepassingen zoals Conch AI gelanceerd."
+ },
+ "mistral": {
+ "description": "Mistral biedt geavanceerde algemene, professionele en onderzoeksmodellen, die breed worden toegepast in complexe redenering, meertalige taken, codegeneratie en meer. Via functionele aanroepinterfaces kunnen gebruikers aangepaste functies integreren voor specifieke toepassingen."
+ },
+ "moonshot": {
+ "description": "Moonshot is een open platform gelanceerd door Beijing Dark Side Technology Co., Ltd., dat verschillende modellen voor natuurlijke taalverwerking biedt, met een breed toepassingsgebied, waaronder maar niet beperkt tot contentcreatie, academisch onderzoek, slimme aanbevelingen, medische diagnose, en ondersteunt lange tekstverwerking en complexe generatietaken."
+ },
+ "novita": {
+ "description": "Novita AI is een platform dat API-diensten biedt voor verschillende grote taalmodellen en AI-beeldgeneratie, flexibel, betrouwbaar en kosteneffectief. Het ondersteunt de nieuwste open-source modellen zoals Llama3 en Mistral, en biedt een uitgebreide, gebruiksvriendelijke en automatisch schaalbare API-oplossing voor de ontwikkeling van generatieve AI-toepassingen, geschikt voor de snelle groei van AI-startups."
+ },
+ "ollama": {
+ "description": "De modellen van Ollama bestrijken een breed scala aan gebieden, waaronder codegeneratie, wiskundige berekeningen, meertalige verwerking en interactieve dialogen, en voldoen aan de diverse behoeften van bedrijfs- en lokale implementaties."
+ },
+ "openai": {
+ "description": "OpenAI is 's werelds toonaangevende onderzoeksinstituut op het gebied van kunstmatige intelligentie, wiens ontwikkelde modellen zoals de GPT-serie de grenzen van natuurlijke taalverwerking verleggen. OpenAI streeft ernaar verschillende industrieën te transformeren door middel van innovatieve en efficiënte AI-oplossingen. Hun producten bieden opmerkelijke prestaties en kosteneffectiviteit, en worden op grote schaal gebruikt in onderzoek, commercie en innovatieve toepassingen."
+ },
+ "openrouter": {
+ "description": "OpenRouter is een serviceplatform dat verschillende vooraanstaande grote modelinterfaces biedt, ondersteunt OpenAI, Anthropic, LLaMA en meer, en is geschikt voor diverse ontwikkelings- en toepassingsbehoeften. Gebruikers kunnen flexibel het optimale model en de prijs kiezen op basis van hun behoeften, wat de AI-ervaring verbetert."
+ },
+ "perplexity": {
+ "description": "Perplexity is een toonaangevende aanbieder van dialooggeneratiemodellen, die verschillende geavanceerde Llama 3.1-modellen aanbiedt, die zowel online als offline toepassingen ondersteunen, en bijzonder geschikt zijn voor complexe natuurlijke taalverwerkingstaken."
+ },
+ "qwen": {
+ "description": "Tongyi Qianwen is een door Alibaba Cloud zelf ontwikkeld grootschalig taalmodel met krachtige mogelijkheden voor natuurlijke taalbegrip en -generatie. Het kan verschillende vragen beantwoorden, tekstinhoud creëren, meningen uiten, code schrijven, en speelt een rol in verschillende domeinen."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow streeft ernaar AGI te versnellen ten behoeve van de mensheid, door de efficiëntie van grootschalige AI te verbeteren met een gebruiksvriendelijke en kosteneffectieve GenAI-stack."
+ },
+ "spark": {
+ "description": "iFlytek's Xinghuo-grootmodel biedt krachtige AI-capaciteiten in meerdere domeinen en talen, en maakt gebruik van geavanceerde natuurlijke taalverwerkingstechnologie om innovatieve toepassingen te bouwen die geschikt zijn voor slimme hardware, slimme gezondheidszorg, slimme financiën en andere verticale scenario's."
+ },
+ "stepfun": {
+ "description": "De Class Star-grootmodel heeft toonaangevende multimodale en complexe redeneringscapaciteiten, ondersteunt het begrijpen van zeer lange teksten en beschikt over krachtige autonome zoekmachinefunctionaliteit."
+ },
+ "taichu": {
+ "description": "Het Instituut voor Automatisering van de Chinese Academie van Wetenschappen en het Wuhan Instituut voor Kunstmatige Intelligentie hebben een nieuwe generatie multimodale grote modellen gelanceerd, die ondersteuning bieden voor meerdaagse vraag-en-antwoord, tekstcreatie, beeldgeneratie, 3D-begrip, signaalanalyse en andere uitgebreide vraag-en-antwoordtaken, met sterkere cognitieve, begrip en creatiecapaciteiten, wat zorgt voor een geheel nieuwe interactie-ervaring."
+ },
+ "togetherai": {
+ "description": "Together AI streeft ernaar toonaangevende prestaties te bereiken door middel van innovatieve AI-modellen, en biedt uitgebreide aanpassingsmogelijkheden, waaronder ondersteuning voor snelle schaling en intuïtieve implementatieprocessen, om aan de verschillende behoeften van bedrijven te voldoen."
+ },
+ "upstage": {
+ "description": "Upstage richt zich op het ontwikkelen van AI-modellen voor verschillende zakelijke behoeften, waaronder Solar LLM en document AI, met als doel het realiseren van kunstmatige algemene intelligentie (AGI). Het creëert eenvoudige dialoogagenten via de Chat API en ondersteunt functionele aanroepen, vertalingen, insluitingen en specifieke domeintoepassingen."
+ },
+ "zeroone": {
+ "description": "01.AI richt zich op kunstmatige intelligentietechnologie in het tijdperk van AI 2.0, en bevordert sterk de innovatie en toepassing van 'mens + kunstmatige intelligentie', met behulp van krachtige modellen en geavanceerde AI-technologie om de productiviteit van de mens te verbeteren en technologische capaciteiten te realiseren."
+ },
+ "zhipu": {
+ "description": "Zhipu AI biedt een open platform voor multimodale en taalmodellen, dat een breed scala aan AI-toepassingsscenario's ondersteunt, waaronder tekstverwerking, beeldbegrip en programmeerondersteuning."
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/ragEval.json b/DigitalHumanWeb/locales/nl-NL/ragEval.json
new file mode 100644
index 0000000..e21556b
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Nieuw aanmaken",
+ "description": {
+ "placeholder": "Beschrijving van de dataset (optioneel)"
+ },
+ "name": {
+ "placeholder": "Naam van de dataset",
+ "required": "Vul alstublieft de naam van de dataset in"
+ },
+ "title": "Dataset toevoegen"
+ },
+ "dataset": {
+ "addNewButton": "Dataset aanmaken",
+ "emptyGuide": "De huidige dataset is leeg, maak alstublieft een dataset aan.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Gegevens importeren"
+ },
+ "columns": {
+ "actions": "Acties",
+ "ideal": {
+ "title": "Gewenst antwoord"
+ },
+ "question": {
+ "title": "Vraag"
+ },
+ "referenceFiles": {
+ "title": "Referentiebestanden"
+ }
+ },
+ "notSelected": "Selecteer alstublieft een dataset aan de linkerkant",
+ "title": "Details van de dataset"
+ },
+ "title": "Dataset"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Nieuw aanmaken",
+ "datasetId": {
+ "placeholder": "Selecteer uw evaluatiedataset",
+ "required": "Selecteer alstublieft een evaluatiedataset"
+ },
+ "description": {
+ "placeholder": "Beschrijving van de evaluatietaak (optioneel)"
+ },
+ "name": {
+ "placeholder": "Naam van de evaluatietaak",
+ "required": "Vul alstublieft de naam van de evaluatietaak in"
+ },
+ "title": "Evaluatietaak toevoegen"
+ },
+ "addNewButton": "Evaluatie aanmaken",
+ "emptyGuide": "De huidige evaluatietaak is leeg, begin met het aanmaken van een evaluatie.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Controleer status",
+ "confirmDelete": "Weet u zeker dat u deze evaluatie wilt verwijderen?",
+ "confirmRun": "Weet u zeker dat u wilt starten? Na het starten wordt de evaluatietaak asynchroon op de achtergrond uitgevoerd, het sluiten van de pagina heeft geen invloed op de uitvoering van de asynchrone taak.",
+ "downloadRecords": "Evaluatie downloaden",
+ "retry": "Opnieuw proberen",
+ "run": "Uitvoeren",
+ "title": "Acties"
+ },
+ "datasetId": {
+ "title": "Dataset"
+ },
+ "name": {
+ "title": "Naam van de evaluatietaak"
+ },
+ "records": {
+ "title": "Aantal evaluatieregisters"
+ },
+ "referenceFiles": {
+ "title": "Referentiebestanden"
+ },
+ "status": {
+ "error": "Uitvoering fout",
+ "pending": "Te uitvoeren",
+ "processing": "Bezig met uitvoeren",
+ "success": "Uitvoering succesvol",
+ "title": "Status"
+ }
+ },
+ "title": "Lijst van evaluatietaken"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/setting.json b/DigitalHumanWeb/locales/nl-NL/setting.json
new file mode 100644
index 0000000..ce4491a
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "Over"
+ },
+ "agentTab": {
+ "chat": "Chatvoorkeur",
+ "meta": "Assistentinformatie",
+ "modal": "Modelinstellingen",
+ "plugin": "Plugin-instellingen",
+ "prompt": "Rolinstelling",
+ "tts": "Tekst-naar-spraakdienst"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "Door te kiezen voor het verzenden van telemetriegegevens, kun je ons helpen de algehele gebruikerservaring van {{appName}} te verbeteren.",
+ "title": "Anonieme gebruiksgegevens verzenden"
+ },
+ "title": "Analytics"
+ },
+ "danger": {
+ "clear": {
+ "action": "Direct verwijderen",
+ "confirm": "Alle chatgegevens wissen bevestigen?",
+ "desc": "Alle gespreksgegevens worden gewist, inclusief assistenten, bestanden, berichten, plug-ins, enz.",
+ "success": "Alle gespreksberichten zijn gewist",
+ "title": "Alle gespreksberichten wissen"
+ },
+ "reset": {
+ "action": "Direct resetten",
+ "confirm": "Alle instellingen resetten bevestigen?",
+ "currentVersion": "Huidige versie",
+ "desc": "Alle instellingen worden teruggezet naar de standaardwaarden",
+ "success": "Alle instellingen zijn succesvol gereset",
+ "title": "Alle instellingen resetten"
+ }
+ },
+ "header": {
+ "desc": "Voorkeuren en modelinstellingen.",
+ "global": "Algemene instellingen",
+ "session": "Sessie-instellingen",
+ "sessionDesc": "Rolinstellingen en sessievoorkeuren.",
+ "sessionWithName": "Sessie-instellingen · {{name}}",
+ "title": "Instellingen"
+ },
+ "llm": {
+ "aesGcm": "Uw sleutel en proxy-adres zullen worden versleuteld met het <1>AES-GCM1> encryptie-algoritme",
+ "apiKey": {
+ "desc": "Vul je {{name}} API-sleutel in",
+ "placeholder": "{{name}} API-sleutel",
+ "title": "API-sleutel"
+ },
+ "checker": {
+ "button": "Controleren",
+ "desc": "Test of de API-sleutel en proxyadres correct zijn ingevuld",
+ "pass": "Succesvol gecontroleerd",
+ "title": "Connectiviteitscontrole"
+ },
+ "customModelCards": {
+ "addNew": "Maak en voeg het {{id}} model toe",
+ "config": "Model configureren",
+ "confirmDelete": "U staat op het punt om dit aangepaste model te verwijderen. Deze actie kan niet ongedaan worden gemaakt. Wees voorzichtig.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Het veld dat daadwerkelijk wordt aangevraagd in Azure OpenAI",
+ "placeholder": "Voer de modelimplementatienaam in Azure in",
+ "title": "Naam van modelimplementatie"
+ },
+ "displayName": {
+ "placeholder": "Voer de weergavenaam van het model in, bijv. ChatGPT, GPT-4, enz.",
+ "title": "Model weergavenaam"
+ },
+ "files": {
+ "extra": "De huidige implementatie van het uploaden van bestanden is slechts een hackoplossing en is alleen bedoeld voor eigen gebruik. Volledige bestandsuploadmogelijkheden zijn in de toekomst te verwachten.",
+ "title": "Ondersteuning voor het uploaden van bestanden"
+ },
+ "functionCall": {
+ "extra": "Deze configuratie zal alleen de functie-aanroepmogelijkheden in de applicatie inschakelen. Of functie-aanroepen worden ondersteund, hangt volledig af van het model zelf. Test de beschikbaarheid van functie-aanroepen van dit model zelf.",
+ "title": "Ondersteuningsfunctie Oproep"
+ },
+ "id": {
+ "extra": "Wordt weergegeven als de modeltag",
+ "placeholder": "Voer de model ID in, bijvoorbeeld gpt-4-turbo-preview of claude-2.1",
+ "title": "Model ID"
+ },
+ "modalTitle": "Aangepaste modelconfiguratie",
+ "tokens": {
+ "title": "Maximaal tokenaantal",
+ "unlimited": "onbeperkt"
+ },
+ "vision": {
+ "extra": "Deze configuratie zal alleen de mogelijkheid voor het uploaden van afbeeldingen in de applicatie inschakelen. Of herkenning wordt ondersteund, hangt volledig af van het model zelf. Test de beschikbaarheid van visuele herkenning van dit model zelf.",
+ "title": "Ondersteuning van visuele herkenning"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "De ophaalmodus aan de clientzijde initieert sessieverzoeken rechtstreeks vanuit de browser, waardoor de reactiesnelheid wordt verbeterd.",
+ "title": "Gebruik de ophaalmodus aan de clientzijde"
+ },
+ "fetcher": {
+ "fetch": "Haal model lijst op",
+ "fetching": "Model lijst wordt opgehaald...",
+ "latestTime": "Laatst bijgewerkt: {{time}}",
+ "noLatestTime": "Geen lijst beschikbaar op dit moment"
+ },
+ "helpDoc": "configuratiehandleiding",
+ "modelList": {
+ "desc": "Selecteer het model dat in de sessie moet worden weergegeven. Het geselecteerde model wordt weergegeven in de modellijst.",
+ "placeholder": "Selecteer een model uit de lijst",
+ "title": "Modellijst",
+ "total": "In totaal {{count}} modellen beschikbaar"
+ },
+ "proxyUrl": {
+ "desc": "Moet http(s):// bevatten, naast het standaardadres",
+ "title": "API Proxy Adres"
+ },
+ "waitingForMore": "Meer modellen worden <1>gepland om te worden toegevoegd1>, dus blijf op de hoogte"
+ },
+ "plugin": {
+ "addTooltip": "Voeg aangepaste plug-in toe",
+ "clearDeprecated": "Verwijder verouderde plug-ins",
+ "empty": "Geen geïnstalleerde plug-ins, ga naar de <1>plug-in store1> om te verkennen",
+ "installStatus": {
+ "deprecated": "Verwijderd"
+ },
+ "settings": {
+ "hint": "Vul de volgende configuratie in op basis van de beschrijving",
+ "title": "{{id}} Plug-inconfiguratie",
+ "tooltip": "Plug-inconfiguratie"
+ },
+ "store": "Plug-in store"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "backgroundColor": {
+ "title": "Achtergrondkleur"
+ },
+ "description": {
+ "placeholder": "Voer assistentbeschrijving in",
+ "title": "Assistentbeschrijving"
+ },
+ "name": {
+ "placeholder": "Voer assistentnaam in",
+ "title": "Naam"
+ },
+ "prompt": {
+ "placeholder": "Voer rol Prompt-woord in",
+ "title": "Rolinstelling"
+ },
+ "tag": {
+ "placeholder": "Voer tag in",
+ "title": "Tag"
+ },
+ "title": "Assistentinformatie"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Automatisch een onderwerp maken wanneer het aantal berichten de ingestelde waarde overschrijdt",
+ "title": "Berichtdrempel"
+ },
+ "chatStyleType": {
+ "title": "Chatvensterstijl",
+ "type": {
+ "chat": "Gespreksmodus",
+ "docs": "Documentmodus"
+ }
+ },
+ "compressThreshold": {
+ "desc": "Comprimeren wanneer het aantal ongecomprimeerde berichten de ingestelde waarde overschrijdt",
+ "title": "Compressiedrempel voor berichtlengte"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Automatisch een onderwerp maken tijdens het gesprek, alleen van toepassing op tijdelijke onderwerpen",
+ "title": "Automatisch onderwerp maken"
+ },
+ "enableCompressThreshold": {
+ "title": "Compressiedrempel voor berichtlengte inschakelen"
+ },
+ "enableHistoryCount": {
+ "alias": "Onbeperkt",
+ "limited": "Bevat alleen {{number}} berichten",
+ "setlimited": "Stel berichtengeschiedenis in",
+ "title": "Berichtgeschiedenis beperken",
+ "unlimited": "Onbeperkt aantal berichten in de geschiedenis"
+ },
+ "historyCount": {
+ "desc": "Aantal berichten dat bij elke aanvraag wordt meegenomen (inclusief de meest recente vraag. Elke vraag en antwoord tellen als 1)",
+ "title": "Berichtaantal meenemen"
+ },
+ "inputTemplate": {
+ "desc": "De meest recente gebruikersboodschap wordt ingevuld in dit sjabloon",
+ "placeholder": "Voorbewerkingssjabloon {{text}} wordt vervangen door realtime invoer",
+ "title": "Voorbewerking van gebruikersinvoer"
+ },
+ "title": "Chatinstellingen"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Limiet voor enkele reacties inschakelen"
+ },
+ "frequencyPenalty": {
+ "desc": "Hoe hoger de waarde, hoe waarschijnlijker het is dat herhaalde woorden worden verminderd",
+ "title": "Frequentieboete"
+ },
+ "maxTokens": {
+ "desc": "Het maximale aantal tokens dat wordt gebruikt voor een enkele interactie",
+ "title": "Limiet voor enkele reacties"
+ },
+ "model": {
+ "desc": "{{provider}} model",
+ "title": "Model"
+ },
+ "presencePenalty": {
+ "desc": "Hoe hoger de waarde, hoe waarschijnlijker het is dat het gesprek naar nieuwe onderwerpen wordt uitgebreid",
+ "title": "Onderwerpnieuwheid"
+ },
+ "temperature": {
+ "desc": "Hoe hoger de waarde, hoe willekeuriger de reactie",
+ "title": "Willekeurigheid",
+ "titleWithValue": "Willekeurigheid {{value}}"
+ },
+ "title": "Modelinstellingen",
+ "topP": {
+ "desc": "Vergelijkbaar met willekeurigheid, maar verander dit niet samen met willekeurigheid",
+ "title": "Top-P-monstername"
+ }
+ },
+ "settingPlugin": {
+ "title": "Plugin-lijst"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "Beheerder heeft versleutelde toegang ingeschakeld",
+ "placeholder": "Voer toegangswachtwoord in",
+ "title": "Toegangswachtwoord"
+ },
+ "oauth": {
+ "info": {
+ "desc": "已登录",
+ "title": "Account Information"
+ },
+ "signin": {
+ "action": "Sign In",
+ "desc": "Sign in using SSO to unlock the app",
+ "title": "Sign In to Your Account"
+ },
+ "signout": {
+ "action": "Sign Out",
+ "confirm": "Confirm sign out?",
+ "success": "Sign out successful"
+ }
+ },
+ "title": "Systeeminstellingen"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "OpenAI spraakherkenningsmodel",
+ "title": "OpenAI",
+ "ttsModel": "OpenAI spraaksynthesemodel"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Als dit is uitgeschakeld, worden alleen stemmen in de huidige taal weergegeven",
+ "title": "Alle taalstemmen weergeven"
+ },
+ "stt": "Spraakherkenning instellingen",
+ "sttAutoStop": {
+ "desc": "Als dit is uitgeschakeld, stopt de spraakherkenning niet automatisch en moet je handmatig op de stopknop klikken",
+ "title": "Automatisch stoppen van spraakherkenning"
+ },
+ "sttLocale": {
+ "desc": "De taal van de gesproken invoer, deze optie kan de nauwkeurigheid van spraakherkenning verbeteren",
+ "title": "Taal voor spraakherkenning"
+ },
+ "sttService": {
+ "desc": "Browser staat voor de native spraakherkenningsservice van de browser",
+ "title": "Spraakherkenningsservice"
+ },
+ "title": "Spraakdienst",
+ "tts": "Spraaksynthese-instellingen",
+ "ttsService": {
+ "desc": "Als je gebruikmaakt van de spraaksynthese-service van OpenAI, zorg er dan voor dat de OpenAI-modelservice is ingeschakeld",
+ "title": "Spraaksynthese-service"
+ },
+ "voice": {
+ "desc": "Kies een stem voor de huidige assistent, verschillende TTS-services ondersteunen verschillende stemmen",
+ "preview": "Stem voorbeluisteren",
+ "title": "Spraaksynthese stem"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Profielfoto"
+ },
+ "fontSize": {
+ "desc": "Lettergrootte van chatberichten",
+ "marks": {
+ "normal": "Normaal"
+ },
+ "title": "Lettergrootte"
+ },
+ "lang": {
+ "autoMode": "Volg systeem",
+ "title": "Taal"
+ },
+ "neutralColor": {
+ "desc": "Aangepaste grijstinten voor verschillende kleurvoorkeuren",
+ "title": "Neutrale kleur"
+ },
+ "primaryColor": {
+ "desc": "Aangepaste themakleur",
+ "title": "Themakleur"
+ },
+ "themeMode": {
+ "auto": "Automatisch",
+ "dark": "Donker",
+ "light": "Licht",
+ "title": "Thema"
+ },
+ "title": "Thema-instellingen"
+ },
+ "submitAgentModal": {
+ "button": "Assistent indienen",
+ "identifier": "Assistent-identificatie",
+ "metaMiss": "Vul alstublieft de assistentinformatie in voordat u deze indient. Dit moet de naam, beschrijving en labels bevatten",
+ "placeholder": "Voer de identificatie van de assistent in, deze moet uniek zijn, bijvoorbeeld web-ontwikkeling",
+ "tooltips": "Delen op de assistentenmarkt"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Voeg een naam toe om het apparaat te identificeren",
+ "placeholder": "Voer de apparaatnaam in",
+ "title": "Apparaatnaam"
+ },
+ "title": "Apparaatinformatie",
+ "unknownBrowser": "Onbekende browser",
+ "unknownOS": "Onbekend besturingssysteem"
+ },
+ "warning": {
+ "tip": "Na een lange periode van openbare tests in de community, kan WebRTC-synchronisatie mogelijk niet stabiel voldoen aan algemene synchronisatiebehoeften. Gelieve zelf een <1>signaleringsserver implementeren1> voordat u het gebruikt."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC zal deze naam gebruiken om een synchronisatiekanaal te maken, zorg ervoor dat de kanaalnaam uniek is",
+ "placeholder": "Voer de synchronisatiekanaalnaam in",
+ "shuffle": "Willekeurig genereren",
+ "title": "Synchronisatiekanaalnaam"
+ },
+ "channelPassword": {
+ "desc": "Voeg een wachtwoord toe om de privacy van het kanaal te waarborgen, alleen apparaten met het juiste wachtwoord kunnen het kanaal betreden",
+ "placeholder": "Voer het synchronisatiekanaalwachtwoord in",
+ "title": "Synchronisatiekanaalwachtwoord"
+ },
+ "desc": "Realtime, point-to-point datacommunicatie, apparaten moeten tegelijkertijd online zijn om te synchroniseren",
+ "enabled": {
+ "invalid": "Vul eerst de signaleringsserver en synchronisatiekanaalnaam in voordat u deze inschakelt",
+ "title": "Synchronisatie inschakelen"
+ },
+ "signaling": {
+ "desc": "WebRTC zal dit adres gebruiken voor synchronisatie",
+ "placeholder": "Voer het adres van de signaleringsserver in",
+ "title": "Signaleringsserver"
+ },
+ "title": "WebRTC Synchronisatie"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Assistentmetadata-generatiemodel",
+ "modelDesc": "Model voor het genereren van assistentnaam, beschrijving, profielfoto en labels",
+ "title": "Automatisch assistentinformatie genereren"
+ },
+ "queryRewrite": {
+ "label": "Vraag herschrijvingsmodel",
+ "modelDesc": "Model dat is opgegeven voor het optimaliseren van gebruikersvragen",
+ "title": "Kennisbank"
+ },
+ "title": "Systeemassistent",
+ "topic": {
+ "label": "Onderwerp Naamgevingsmodel",
+ "modelDesc": "Specificeer het model dat wordt gebruikt voor automatische hernoeming van onderwerpen",
+ "title": "Automatische Onderwerpnaamgeving"
+ },
+ "translation": {
+ "label": "Vertaalmodel",
+ "modelDesc": "Specificeer het model voor vertaling",
+ "title": "Instellingen voor vertaalassistent"
+ }
+ },
+ "tab": {
+ "about": "Over",
+ "agent": "Standaardassistent",
+ "common": "Algemene instellingen",
+ "experiment": "Experiment",
+ "llm": "Taalmodel",
+ "sync": "Cloudsynchronisatie",
+ "system-agent": "Systeemassistent",
+ "tts": "Spraakdienst"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Ingebouwd"
+ },
+ "disabled": "Dit model ondersteunt momenteel geen functieaanroepen en kan geen plug-ins gebruiken",
+ "plugins": {
+ "enabled": "Ingeschakeld {{num}}",
+ "groupName": "Plug-ins",
+ "noEnabled": "Geen plug-ins ingeschakeld",
+ "store": "Plug-in store"
+ },
+ "title": "Uitbreidingsgereedschap"
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/tool.json b/DigitalHumanWeb/locales/nl-NL/tool.json
new file mode 100644
index 0000000..52448ec
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Automatisch genereren",
+ "downloading": "De link naar de afbeelding gegenereerd door DallE3 is slechts 1 uur geldig. De afbeelding wordt lokaal in de cache opgeslagen...",
+ "generate": "Genereren",
+ "generating": "Bezig met genereren...",
+ "images": "Afbeeldingen:",
+ "prompt": "prompt"
+ }
+}
diff --git a/DigitalHumanWeb/locales/nl-NL/welcome.json b/DigitalHumanWeb/locales/nl-NL/welcome.json
new file mode 100644
index 0000000..86e50a5
--- /dev/null
+++ b/DigitalHumanWeb/locales/nl-NL/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Importeer configuratie",
+ "market": "Verken de markt",
+ "start": "Nu beginnen"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Vervang een groep",
+ "title": "Nieuwe aanbeveling assistent: "
+ },
+ "defaultMessage": "Ik ben uw persoonlijke slimme assistent {{appName}}. Hoe kan ik u nu helpen?\nAls u een meer professionele of op maat gemaakte assistent nodig heeft, kunt u op `+` klikken om een aangepaste assistent te maken.",
+ "defaultMessageWithoutCreate": "Ik ben uw persoonlijke slimme assistent {{appName}}. Hoe kan ik u nu helpen?",
+ "qa": {
+ "q01": "Wat is LobeHub?",
+ "q02": "Wat is {{appName}}?",
+ "q03": "Heeft {{appName}} ondersteuning van de gemeenschap?",
+ "q04": "Welke functies ondersteunt {{appName}}?",
+ "q05": "Hoe wordt {{appName}} geïmplementeerd en gebruikt?",
+ "q06": "Wat zijn de prijzen van {{appName}}?",
+ "q07": "Is {{appName}} gratis?",
+ "q08": "Is er een cloudversie beschikbaar?",
+ "q09": "Ondersteunt het lokale taalmodellen?",
+ "q10": "Ondersteunt het beeldherkenning en -generatie?",
+ "q11": "Ondersteunt het spraaksynthese en spraakherkenning?",
+ "q12": "Ondersteunt het een plug-insysteem?",
+ "q13": "Is er een eigen markt om GPT's te verkrijgen?",
+ "q14": "Ondersteunt het meerdere AI-dienstverleners?",
+ "q15": "Wat moet ik doen als ik problemen ondervind tijdens het gebruik?"
+ },
+ "questions": {
+ "moreBtn": "Meer informatie",
+ "title": "Veelgestelde vragen: "
+ },
+ "welcome": {
+ "afternoon": "Goedemiddag",
+ "morning": "Goedemorgen",
+ "night": "Goedenavond",
+ "noon": "Goedemiddag"
+ }
+ },
+ "header": "Welkom",
+ "pickAgent": "Of kies een assistent-sjabloon uit de onderstaande opties",
+ "skip": "Overslaan bij het maken",
+ "slogan": {
+ "desc1": "Activeer denkkracht en ontsteek creatieve vonken in het brein. Jouw intelligente assistent is er altijd voor jou.",
+ "desc2": "Maak je eerste assistent en laten we beginnen!",
+ "title": "Geef jezelf een slimmer brein"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/auth.json b/DigitalHumanWeb/locales/pl-PL/auth.json
new file mode 100644
index 0000000..64f2b15
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Zaloguj się",
+ "loginOrSignup": "Zaloguj się / Zarejestruj się",
+ "profile": "Profil użytkownika",
+ "security": "Bezpieczeństwo",
+ "signout": "Wyloguj",
+ "signup": "Zarejestruj się"
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/chat.json b/DigitalHumanWeb/locales/pl-PL/chat.json
new file mode 100644
index 0000000..fc5d47c
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Przełącz model"
+ },
+ "agentDefaultMessage": "Cześć, jestem **{{name}}**, możesz od razu rozpocząć ze mną rozmowę lub przejść do [ustawień asystenta]({{url}}), aby uzupełnić moje informacje.",
+ "agentDefaultMessageWithSystemRole": "Cześć, jestem **{{name}}**, {{systemRole}}, zacznijmy rozmowę!",
+ "agentDefaultMessageWithoutEdit": "Cześć, jestem **{{name}}**. Zacznijmy rozmowę!",
+ "agents": "Asystent",
+ "artifact": {
+ "generating": "Generowanie",
+ "thinking": "Myślenie",
+ "thought": "Proces myślenia",
+ "unknownTitle": "Nienazwane dzieło"
+ },
+ "backToBottom": "Przewiń na dół",
+ "chatList": {
+ "longMessageDetail": "Zobacz szczegóły"
+ },
+ "clearCurrentMessages": "Wyczyść bieżącą rozmowę",
+ "confirmClearCurrentMessages": "Czy na pewno chcesz wyczyścić bieżącą rozmowę? Tej operacji nie można cofnąć.",
+ "confirmRemoveSessionItemAlert": "Czy na pewno chcesz usunąć tego asystenta? Tej operacji nie można cofnąć.",
+ "confirmRemoveSessionSuccess": "Sesja usunięta pomyślnie",
+ "defaultAgent": "Domyślny asystent",
+ "defaultList": "Domyślna lista",
+ "defaultSession": "Domyślna sesja",
+ "duplicateSession": {
+ "loading": "Kopiowanie...",
+ "success": "Kopiowanie zakończone powodzeniem",
+ "title": "{{title}} - kopia"
+ },
+ "duplicateTitle": "{{title}} kopia",
+ "emptyAgent": "Brak asystenta",
+ "historyRange": "Zakres historii",
+ "inbox": {
+ "desc": "Włącz klastry mózgów, rozpal iskrę myślenia. Twój inteligentny asystent, gotowy do rozmowy o wszystkim.",
+ "title": "Pogadajmy sobie"
+ },
+ "input": {
+ "addAi": "Dodaj wiadomość AI",
+ "addUser": "Dodaj wiadomość użytkownika",
+ "more": "więcej",
+ "send": "Wyślij",
+ "sendWithCmdEnter": "Wyślij za pomocą klawisza {{meta}} + Enter",
+ "sendWithEnter": "Wyślij za pomocą klawisza Enter",
+ "stop": "Zatrzymaj",
+ "warp": "Złamanie wiersza"
+ },
+ "knowledgeBase": {
+ "all": "Wszystkie treści",
+ "allFiles": "Wszystkie pliki",
+ "allKnowledgeBases": "Wszystkie bazy wiedzy",
+ "disabled": "Obecny tryb wdrożenia nie obsługuje rozmów z bazą wiedzy. Aby z niej skorzystać, przełącz się na wdrożenie z bazą danych serwera lub skorzystaj z usługi {{cloud}}.",
+ "library": {
+ "action": {
+ "add": "Dodaj",
+ "detail": "Szczegóły",
+ "remove": "Usuń"
+ },
+ "title": "Plik/Baza wiedzy"
+ },
+ "relativeFilesOrKnowledgeBases": "Powiązane pliki/bazy wiedzy",
+ "title": "Baza wiedzy",
+ "uploadGuide": "Przesłane pliki można przeglądać w „Bazie wiedzy”",
+ "viewMore": "Zobacz więcej"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Usuń i wygeneruj ponownie",
+ "regenerate": "Wygeneruj ponownie"
+ },
+ "newAgent": "Nowy asystent",
+ "pin": "Przypnij",
+ "pinOff": "Odepnij",
+ "rag": {
+ "referenceChunks": "Fragmenty odniesienia",
+ "userQuery": {
+ "actions": {
+ "delete": "Usuń przepisanie zapytania",
+ "regenerate": "Ponownie wygeneruj zapytanie"
+ }
+ }
+ },
+ "regenerate": "Wygeneruj ponownie",
+ "roleAndArchive": "Rola i archiwum",
+ "searchAgentPlaceholder": "Wyszukaj pomocnika...",
+ "sendPlaceholder": "Wpisz treść rozmowy...",
+ "sessionGroup": {
+ "config": "Zarządzanie grupami",
+ "confirmRemoveGroupAlert": "Czy na pewno chcesz usunąć tę grupę? Po usunięciu asystenci z tej grupy zostaną przeniesieni do domyślnej listy. Potwierdź swoje działanie.",
+ "createAgentSuccess": "Utworzenie asystenta zakończone sukcesem",
+ "createGroup": "Dodaj nową grupę",
+ "createSuccess": "Utworzono pomyślnie",
+ "creatingAgent": "Tworzenie asystenta...",
+ "inputPlaceholder": "Wprowadź nazwę grupy...",
+ "moveGroup": "Przenieś do grupy",
+ "newGroup": "Nowa grupa",
+ "rename": "Zmień nazwę grupy",
+ "renameSuccess": "Zmiana nazwy pomyślna",
+ "sortSuccess": "Pomyślne ponowne sortowanie",
+ "sorting": "Aktualizacja sortowania grupy...",
+ "tooLong": "Nazwa grupy musi mieć od 1 do 20 znaków"
+ },
+ "shareModal": {
+ "download": "Pobierz zrzut ekranu",
+ "imageType": "Typ obrazu",
+ "screenshot": "Zrzut ekranu",
+ "settings": "Ustawienia eksportu",
+ "shareToShareGPT": "Generuj link udostępniania ShareGPT",
+ "withBackground": "Z tłem",
+ "withFooter": "Z stopką",
+ "withPluginInfo": "Z informacjami o wtyczce",
+ "withSystemRole": "Z rolą asystenta"
+ },
+ "stt": {
+ "action": "Mowa na tekst",
+ "loading": "Rozpoznawanie...",
+ "prettifying": "Upiększanie..."
+ },
+ "temp": "Tymczasowy",
+ "tokenDetails": {
+ "chats": "Rozmowy",
+ "rest": "Pozostałe",
+ "systemRole": "Rola systemowa",
+ "title": "Szczegóły tokena",
+ "tools": "Narzędzia",
+ "total": "Razem",
+ "used": "Wykorzystane"
+ },
+ "tokenTag": {
+ "overload": "Przekroczenie limitu",
+ "remained": "Pozostało",
+ "used": "Użyte"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Automatyczna zmiana nazwy",
+ "duplicate": "Utwórz kopię",
+ "export": "Eksportuj temat"
+ },
+ "checkOpenNewTopic": "Czy otworzyć nowy temat?",
+ "checkSaveCurrentMessages": "Czy zapisać bieżącą rozmowę jako temat?",
+ "confirmRemoveAll": "Czy na pewno chcesz usunąć wszystkie tematy? Tej operacji nie można cofnąć. Proszę potwierdź swoją decyzję.",
+ "confirmRemoveTopic": "Czy na pewno chcesz usunąć ten temat? Tej operacji nie można cofnąć. Proszę potwierdź swoją decyzję.",
+ "confirmRemoveUnstarred": "Czy na pewno chcesz usunąć nieoznaczone tematy? Tej operacji nie można cofnąć. Proszę potwierdź swoją decyzję.",
+ "defaultTitle": "Domyślne tematy",
+ "duplicateLoading": "Kopiowanie tematu...",
+ "duplicateSuccess": "Temat został skopiowany pomyślnie",
+ "guide": {
+ "desc": "Kliknij przycisk po lewej stronie, aby zapisać bieżącą rozmowę jako historię tematu i rozpocząć nową rundę rozmowy",
+ "title": "Lista tematów"
+ },
+ "openNewTopic": "Otwórz nowy temat",
+ "removeAll": "Usuń wszystkie tematy",
+ "removeUnstarred": "Usuń nieoznaczone tematy",
+ "saveCurrentMessages": "Zapisz bieżącą rozmowę jako temat",
+ "searchPlaceholder": "Szukaj tematów...",
+ "title": "Lista tematów"
+ },
+ "translate": {
+ "action": "Tłumaczenie",
+ "clear": "Wyczyść tłumaczenie"
+ },
+ "tts": {
+ "action": "Czytaj tekst",
+ "clear": "Wyczyść czytanie"
+ },
+ "updateAgent": "Aktualizuj informacje asystenta",
+ "upload": {
+ "action": {
+ "fileUpload": "Prześlij plik",
+ "folderUpload": "Prześlij folder",
+ "imageDisabled": "Aktualny model nie obsługuje rozpoznawania wizualnego, przełącz się na inny model, aby użyć tej funkcji",
+ "imageUpload": "Prześlij obraz",
+ "tooltip": "Prześlij"
+ },
+ "clientMode": {
+ "actionFiletip": "Prześlij plik",
+ "actionTooltip": "Prześlij",
+ "disabled": "Aktualny model nie obsługuje rozpoznawania wizualnego i analizy plików, przełącz się na inny model, aby użyć tej funkcji"
+ },
+ "preview": {
+ "prepareTasks": "Przygotowywanie fragmentów...",
+ "status": {
+ "pending": "Przygotowywanie do przesłania...",
+ "processing": "Przetwarzanie pliku..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/clerk.json b/DigitalHumanWeb/locales/pl-PL/clerk.json
new file mode 100644
index 0000000..e84c2a5
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Wróć",
+ "badge__default": "Domyślny",
+ "badge__otherImpersonatorDevice": "Inne urządzenie podszywające się",
+ "badge__primary": "Podstawowy",
+ "badge__requiresAction": "Wymaga działania",
+ "badge__thisDevice": "To urządzenie",
+ "badge__unverified": "Niezweryfikowany",
+ "badge__userDevice": "Urządzenie użytkownika",
+ "badge__you": "Ty",
+ "createOrganization": {
+ "formButtonSubmit": "Utwórz organizację",
+ "invitePage": {
+ "formButtonReset": "Pomiń"
+ },
+ "title": "Utwórz organizację"
+ },
+ "dates": {
+ "lastDay": "Wczoraj o {{ date | timeString('pl-PL') }}",
+ "next6Days": "{{ date | weekday('pl-PL','long') }} o {{ date | timeString('pl-PL') }}",
+ "nextDay": "Jutro o {{ date | timeString('pl-PL') }}",
+ "numeric": "{{ date | numeric('pl-PL') }}",
+ "previous6Days": "Ostatni {{ date | weekday('pl-PL','long') }} o {{ date | timeString('pl-PL') }}",
+ "sameDay": "Dziś o {{ date | timeString('pl-PL') }}"
+ },
+ "dividerText": "lub",
+ "footerActionLink__useAnotherMethod": "Użyj innej metody",
+ "footerPageLink__help": "Pomoc",
+ "footerPageLink__privacy": "Prywatność",
+ "footerPageLink__terms": "Warunki",
+ "formButtonPrimary": "Kontynuuj",
+ "formButtonPrimary__verify": "Zweryfikuj",
+ "formFieldAction__forgotPassword": "Zapomniałeś hasła?",
+ "formFieldError__matchingPasswords": "Hasła pasują.",
+ "formFieldError__notMatchingPasswords": "Hasła nie pasują.",
+ "formFieldError__verificationLinkExpired": "Link weryfikacyjny wygasł. Proszę poproś o nowy link.",
+ "formFieldHintText__optional": "Opcjonalne",
+ "formFieldHintText__slug": "Slug to czytelne dla człowieka ID, które musi być unikalne. Często używane w adresach URL.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Usuń konto",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "przykład@email.com, przykład2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "moja-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Włącz automatyczne zaproszenia dla tej domeny",
+ "formFieldLabel__backupCode": "Kod zapasowy",
+ "formFieldLabel__confirmDeletion": "Potwierdzenie",
+ "formFieldLabel__confirmPassword": "Potwierdź hasło",
+ "formFieldLabel__currentPassword": "Aktualne hasło",
+ "formFieldLabel__emailAddress": "Adres email",
+ "formFieldLabel__emailAddress_username": "Adres email lub nazwa użytkownika",
+ "formFieldLabel__emailAddresses": "Adresy email",
+ "formFieldLabel__firstName": "Imię",
+ "formFieldLabel__lastName": "Nazwisko",
+ "formFieldLabel__newPassword": "Nowe hasło",
+ "formFieldLabel__organizationDomain": "Domena",
+ "formFieldLabel__organizationDomainDeletePending": "Usuń oczekujące zaproszenia i sugestie",
+ "formFieldLabel__organizationDomainEmailAddress": "Adres email weryfikacyjny",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Wprowadź adres email pod tą domeną, aby otrzymać kod i zweryfikować tę domenę.",
+ "formFieldLabel__organizationName": "Nazwa",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Nazwa klucza dostępu",
+ "formFieldLabel__password": "Hasło",
+ "formFieldLabel__phoneNumber": "Numer telefonu",
+ "formFieldLabel__role": "Rola",
+ "formFieldLabel__signOutOfOtherSessions": "Wyloguj ze wszystkich innych urządzeń",
+ "formFieldLabel__username": "Nazwa użytkownika",
+ "impersonationFab": {
+ "action__signOut": "Wyloguj",
+ "title": "Zalogowany jako {{identifier}}"
+ },
+ "locale": "pl-PL",
+ "maintenanceMode": "Obecnie trwają prace konserwacyjne, ale nie martw się, nie powinny potrwać dłużej niż kilka minut.",
+ "membershipRole__admin": "Administrator",
+ "membershipRole__basicMember": "Członek",
+ "membershipRole__guestMember": "Gość",
+ "organizationList": {
+ "action__createOrganization": "Utwórz organizację",
+ "action__invitationAccept": "Dołącz",
+ "action__suggestionsAccept": "Poproś o dołączenie",
+ "createOrganization": "Utwórz organizację",
+ "invitationAcceptedLabel": "Dołączono",
+ "subtitle": "aby kontynuować jako {{applicationName}}",
+ "suggestionsAcceptedLabel": "Oczekuje na zatwierdzenie",
+ "title": "Wybierz konto",
+ "titleWithoutPersonal": "Wybierz organizację"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Automatyczne zaproszenia",
+ "badge__automaticSuggestion": "Automatyczne sugestie",
+ "badge__manualInvitation": "Brak automatycznego zapisu",
+ "badge__unverified": "Niezweryfikowany",
+ "createDomainPage": {
+ "subtitle": "Dodaj domenę do weryfikacji. Użytkownicy z adresami e-mail na tej domenie mogą automatycznie dołączyć do organizacji lub poprosić o dołączenie.",
+ "title": "Dodaj domenę"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "Zaproszenia nie mogły zostać wysłane. Istnieją już oczekujące zaproszenia dla następujących adresów e-mail: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Wyślij zaproszenia",
+ "selectDropdown__role": "Wybierz rolę",
+ "subtitle": "Wprowadź lub wklej jeden lub więcej adresów e-mail, oddzielając je spacjami lub przecinkami.",
+ "successMessage": "Zaproszenia zostały pomyślnie wysłane",
+ "title": "Zaproś nowych członków"
+ },
+ "membersPage": {
+ "action__invite": "Zaproś",
+ "activeMembersTab": {
+ "menuAction__remove": "Usuń członka",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Dołączono",
+ "tableHeader__role": "Rola",
+ "tableHeader__user": "Użytkownik"
+ },
+ "detailsTitle__emptyRow": "Brak członków do wyświetlenia",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Zaproś użytkowników, łącząc domenę e-mail z Twoją organizacją. Każdy, kto zarejestruje się z pasującą domeną e-mail, będzie mógł dołączyć do organizacji w dowolnym momencie.",
+ "headerTitle": "Automatyczne zaproszenia",
+ "primaryButton": "Zarządzaj zweryfikowanymi domenami"
+ },
+ "table__emptyRow": "Brak zaproszeń do wyświetlenia"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Anuluj zaproszenie",
+ "tableHeader__invited": "Zaproszony"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Użytkownicy, którzy zarejestrują się z pasującą domeną e-mail, będą mogli zobaczyć sugestię, aby poprosić o dołączenie do Twojej organizacji.",
+ "headerTitle": "Automatyczne sugestie",
+ "primaryButton": "Zarządzaj zweryfikowanymi domenami"
+ },
+ "menuAction__approve": "Zatwierdź",
+ "menuAction__reject": "Odrzuć",
+ "tableHeader__requested": "Żądany dostęp",
+ "table__emptyRow": "Brak żądań do wyświetlenia"
+ },
+ "start": {
+ "headerTitle__invitations": "Zaproszenia",
+ "headerTitle__members": "Członkowie",
+ "headerTitle__requests": "Żądania"
+ }
+ },
+ "navbar": {
+ "description": "Zarządzaj swoją organizacją.",
+ "general": "Ogólne",
+ "members": "Członkowie",
+ "title": "Organizacja"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Wpisz „{{organizationName}}” poniżej, aby kontynuować.",
+ "messageLine1": "Czy na pewno chcesz usunąć tę organizację?",
+ "messageLine2": "Ta czynność jest trwała i nieodwracalna.",
+ "successMessage": "Usunąłeś organizację.",
+ "title": "Usuń organizację"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Wpisz „{{organizationName}}” poniżej, aby kontynuować.",
+ "messageLine1": "Czy na pewno chcesz opuścić tę organizację? Stracisz dostęp do tej organizacji i jej aplikacji.",
+ "messageLine2": "Ta czynność jest trwała i nieodwracalna.",
+ "successMessage": "Opuściłeś organizację.",
+ "title": "Opuść organizację"
+ },
+ "title": "Zagrożenie"
+ },
+ "domainSection": {
+ "menuAction__manage": "Zarządzaj",
+ "menuAction__remove": "Usuń",
+ "menuAction__verify": "Zweryfikuj",
+ "primaryButton": "Dodaj domenę",
+ "subtitle": "Pozwól użytkownikom automatycznie dołączać do organizacji lub prosić o dołączenie na podstawie zweryfikowanej domeny e-mail.",
+ "title": "Zweryfikowane domeny"
+ },
+ "successMessage": "Organizacja została zaktualizowana.",
+ "title": "Zaktualizuj profil"
+ },
+ "removeDomainPage": {
+ "messageLine1": "Domena e-mail {{domain}} zostanie usunięta.",
+ "messageLine2": "Użytkownicy nie będą mogli automatycznie dołączać do organizacji po tej czynności.",
+ "successMessage": "{{domain}} została usunięta.",
+ "title": "Usuń domenę"
+ },
+ "start": {
+ "headerTitle__general": "Ogólne",
+ "headerTitle__members": "Członkowie",
+ "profileSection": {
+ "primaryButton": "Zaktualizuj profil",
+ "title": "Profil organizacji",
+ "uploadAction__title": "Logo"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "Usunięcie tej domeny wpłynie na zaproszonych użytkowników.",
+ "removeDomainActionLabel__remove": "Usuń domenę",
+ "removeDomainSubtitle": "Usuń tę domenę z Twoich zweryfikowanych domen",
+ "removeDomainTitle": "Usuń domenę"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Użytkownicy są automatycznie zapraszani do dołączenia do organizacji podczas rejestracji i mogą dołączyć w dowolnym momencie.",
+ "automaticInvitationOption__label": "Automatyczne zaproszenia",
+ "automaticSuggestionOption__description": "Użytkownicy otrzymują sugestię, aby poprosić o dołączenie, ale muszą zostać zatwierdzeni przez administratora przed dołączeniem do organizacji.",
+ "automaticSuggestionOption__label": "Automatyczne sugestie",
+ "calloutInfoLabel": "Zmiana trybu zapisu dotyczy tylko nowych użytkowników.",
+ "calloutInvitationCountLabel": "Liczba zaproszeń wysłanych do użytkowników: {{count}}",
+ "calloutSuggestionCountLabel": "Liczba sugestii wysłanych do użytkowników: {{count}}",
+ "manualInvitationOption__description": "Użytkownicy mogą być zapraszani ręcznie do organizacji.",
+ "manualInvitationOption__label": "Brak automatycznego zapisu",
+ "subtitle": "Wybierz, w jaki sposób użytkownicy z tej domeny mogą dołączyć do organizacji."
+ },
+ "start": {
+ "headerTitle__danger": "Zagrożenie",
+ "headerTitle__enrollment": "Opcje zapisu"
+ },
+ "subtitle": "Domena {{domain}} jest teraz zweryfikowana. Kontynuuj, wybierając tryb zapisu.",
+ "title": "Aktualizuj {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Wprowadź kod weryfikacyjny wysłany na Twój adres e-mail",
+ "formTitle": "Kod weryfikacyjny",
+ "resendButton": "Nie otrzymałeś kodu? Wyślij ponownie",
+ "subtitle": "Domena {{domainName}} musi zostać zweryfikowana za pomocą e-maila.",
+ "subtitleVerificationCodeScreen": "Kod weryfikacyjny został wysłany na {{emailAddress}}. Wprowadź kod, aby kontynuować.",
+ "title": "Zweryfikuj domenę"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Utwórz organizację",
+ "action__invitationAccept": "Dołącz",
+ "action__manageOrganization": "Zarządzaj",
+ "action__suggestionsAccept": "Poproś o dołączenie",
+ "notSelected": "Nie wybrano organizacji",
+ "personalWorkspace": "Konto osobiste",
+ "suggestionsAcceptedLabel": "Oczekuje na zatwierdzenie"
+ },
+ "paginationButton__next": "Następny",
+ "paginationButton__previous": "Poprzedni",
+ "paginationRowText__displaying": "Wyświetlanie",
+ "paginationRowText__of": "z",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Dodaj konto",
+ "action__signOutAll": "Wyloguj ze wszystkich kont",
+ "subtitle": "Wybierz konto, z którym chcesz kontynuować.",
+ "title": "Wybierz konto"
+ },
+ "alternativeMethods": {
+ "actionLink": "Uzyskaj pomoc",
+ "actionText": "Nie masz żadnego z tych?",
+ "blockButton__backupCode": "Użyj kodu zapasowego",
+ "blockButton__emailCode": "Wyślij kod e-mailem na {{identifier}}",
+ "blockButton__emailLink": "Wyślij link e-mailem na {{identifier}}",
+ "blockButton__passkey": "Zaloguj się za pomocą klucza",
+ "blockButton__password": "Zaloguj się za pomocą hasła",
+ "blockButton__phoneCode": "Wyślij kod SMS na {{identifier}}",
+ "blockButton__totp": "Użyj aplikacji uwierzytelniającej",
+ "getHelp": {
+ "blockButton__emailSupport": "Wsparcie e-mailowe",
+ "content": "Jeśli masz problemy z zalogowaniem się do swojego konta, prześlij do nas e-mail, a postaramy się przywrócić dostęp tak szybko, jak to możliwe.",
+ "title": "Uzyskaj pomoc"
+ },
+ "subtitle": "Masz problemy? Możesz skorzystać z dowolnej z tych metod logowania.",
+ "title": "Użyj innej metody"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Twój kod zapasowy to ten, który otrzymałeś podczas konfigurowania uwierzytelniania dwuetapowego.",
+ "title": "Wprowadź kod zapasowy"
+ },
+ "emailCode": {
+ "formTitle": "Kod weryfikacyjny",
+ "resendButton": "Nie otrzymałeś kodu? Wyślij ponownie",
+ "subtitle": "aby kontynuować do {{applicationName}}",
+ "title": "Sprawdź swój e-mail"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Wróć do oryginalnej karty, aby kontynuować.",
+ "title": "Ten link weryfikacyjny wygasł"
+ },
+ "failed": {
+ "subtitle": "Wróć do oryginalnej karty, aby kontynuować.",
+ "title": "Ten link weryfikacyjny jest nieprawidłowy"
+ },
+ "formSubtitle": "Użyj linku weryfikacyjnego wysłanego na Twój e-mail",
+ "formTitle": "Link weryfikacyjny",
+ "loading": {
+ "subtitle": "Zostaniesz przekierowany wkrótce",
+ "title": "Logowanie..."
+ },
+ "resendButton": "Nie otrzymałeś linku? Wyślij ponownie",
+ "subtitle": "aby kontynuować do {{applicationName}}",
+ "title": "Sprawdź swój e-mail",
+ "unusedTab": {
+ "title": "Możesz zamknąć tę kartę"
+ },
+ "verified": {
+ "subtitle": "Zostaniesz przekierowany wkrótce",
+ "title": "Pomyślnie zalogowano"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Wróć do oryginalnej karty, aby kontynuować",
+ "subtitleNewTab": "Wróć do nowo otwartej karty, aby kontynuować",
+ "titleNewTab": "Zalogowano na innej karcie"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Kod resetowania hasła",
+ "resendButton": "Nie otrzymałeś kodu? Wyślij ponownie",
+ "subtitle": "aby zresetować hasło",
+ "subtitle_email": "Najpierw wprowadź kod wysłany na Twój adres e-mail",
+ "subtitle_phone": "Najpierw wprowadź kod wysłany na Twój telefon",
+ "title": "Zresetuj hasło"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Zresetuj swoje hasło",
+ "label__alternativeMethods": "Albo zaloguj się za pomocą innej metody",
+ "title": "Zapomniałeś hasła?"
+ },
+ "noAvailableMethods": {
+ "message": "Nie można kontynuować logowania. Brak dostępnych czynników uwierzytelniających.",
+ "subtitle": "Wystąpił błąd",
+ "title": "Nie można się zalogować"
+ },
+ "passkey": {
+ "subtitle": "Użycie klucza potwierdza Twoją tożsamość. Twóje urządzenie może poprosić o odcisk palca, rozpoznanie twarzy lub blokadę ekranu.",
+ "title": "Użyj swojego klucza"
+ },
+ "password": {
+ "actionLink": "Użyj innej metody",
+ "subtitle": "Wprowadź hasło powiązane z Twoim kontem",
+ "title": "Wprowadź swoje hasło"
+ },
+ "passwordPwned": {
+ "title": "Hasło skompromitowane"
+ },
+ "phoneCode": {
+ "formTitle": "Kod weryfikacyjny",
+ "resendButton": "Nie otrzymałeś kodu? Wyślij ponownie",
+ "subtitle": "aby kontynuować do {{applicationName}}",
+ "title": "Sprawdź swój telefon"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Kod weryfikacyjny",
+ "resendButton": "Nie otrzymałeś kodu? Wyślij ponownie",
+ "subtitle": "Aby kontynuować, wprowadź kod weryfikacyjny wysłany na Twój telefon",
+ "title": "Sprawdź swój telefon"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Zresetuj hasło",
+ "requiredMessage": "Z powodów bezpieczeństwa konieczne jest zresetowanie hasła.",
+ "successMessage": "Twoje hasło zostało pomyślnie zmienione. Logowanie, proszę czekać chwilę.",
+ "title": "Ustaw nowe hasło"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "Musimy zweryfikować Twoją tożsamość przed zresetowaniem hasła."
+ },
+ "start": {
+ "actionLink": "Zarejestruj się",
+ "actionLink__use_email": "Użyj e-maila",
+ "actionLink__use_email_username": "Użyj e-maila lub nazwy użytkownika",
+ "actionLink__use_passkey": "Użyj klucza",
+ "actionLink__use_phone": "Użyj telefonu",
+ "actionLink__use_username": "Użyj nazwy użytkownika",
+ "actionText": "Nie masz konta?",
+ "subtitle": "Witaj ponownie! Proszę zaloguj się, aby kontynuować",
+ "title": "Zaloguj się do {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "Kod weryfikacyjny",
+ "subtitle": "Aby kontynuować, wprowadź kod weryfikacyjny wygenerowany przez swoją aplikację uwierzytelniającą",
+ "title": "Weryfikacja dwuetapowa"
+ }
+ },
+ "signInEnterPasswordTitle": "Wprowadź swoje hasło",
+ "signUp": {
+ "continue": {
+ "actionLink": "Zaloguj się",
+ "actionText": "Masz już konto?",
+ "subtitle": "Proszę uzupełnij pozostałe dane, aby kontynuować.",
+ "title": "Uzupełnij brakujące pola"
+ },
+ "emailCode": {
+ "formSubtitle": "Wprowadź kod weryfikacyjny wysłany na Twój adres e-mail",
+ "formTitle": "Kod weryfikacyjny",
+ "resendButton": "Nie otrzymałeś kodu? Wyślij ponownie",
+ "subtitle": "Wprowadź kod weryfikacyjny wysłany na Twój adres e-mail",
+ "title": "Zweryfikuj swój e-mail"
+ },
+ "emailLink": {
+ "formSubtitle": "Użyj linku weryfikacyjnego wysłanego na Twój adres e-mail",
+ "formTitle": "Link weryfikacyjny",
+ "loading": {
+ "title": "Rejestrowanie..."
+ },
+ "resendButton": "Nie otrzymałeś linku? Wyślij ponownie",
+ "subtitle": "Aby kontynuować do {{applicationName}}",
+ "title": "Zweryfikuj swój e-mail",
+ "verified": {
+ "title": "Pomyślnie zarejestrowano"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Wróć do nowo otwartej karty, aby kontynuować",
+ "subtitleNewTab": "Wróć do poprzedniej karty, aby kontynuować",
+ "title": "Pomyślnie zweryfikowano e-mail"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Wprowadź kod weryfikacyjny wysłany na Twój numer telefonu",
+ "formTitle": "Kod weryfikacyjny",
+ "resendButton": "Nie otrzymałeś kodu? Wyślij ponownie",
+ "subtitle": "Wprowadź kod weryfikacyjny wysłany na Twój numer telefonu",
+ "title": "Zweryfikuj swój telefon"
+ },
+ "start": {
+ "actionLink": "Zaloguj się",
+ "actionText": "Masz już konto?",
+ "subtitle": "Witaj! Proszę wypełnij szczegóły, aby rozpocząć.",
+ "title": "Utwórz swoje konto"
+ }
+ },
+ "socialButtonsBlockButton": "Kontynuuj z {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "Rejestracja nieudana z powodu nieprawidłowych walidacji zabezpieczeń. Proszę odświeżyć stronę i spróbować ponownie lub skontaktować się z pomocą techniczną.",
+ "captcha_unavailable": "Rejestracja nieudana z powodu nieprawidłowej weryfikacji botów. Proszę odświeżyć stronę i spróbować ponownie lub skontaktować się z pomocą techniczną.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Ten adres e-mail jest zajęty. Proszę spróbować innego.",
+ "form_identifier_exists__phone_number": "Ten numer telefonu jest zajęty. Proszę spróbować innego.",
+ "form_identifier_exists__username": "Ta nazwa użytkownika jest zajęta. Proszę spróbować innego.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "Adres e-mail musi być poprawny.",
+ "form_param_format_invalid__phone_number": "Numer telefonu musi być w poprawnym formacie międzynarodowym.",
+ "form_param_max_length_exceeded__first_name": "Imię nie powinno przekraczać 256 znaków.",
+ "form_param_max_length_exceeded__last_name": "Nazwisko nie powinno przekraczać 256 znaków.",
+ "form_param_max_length_exceeded__name": "Nazwa nie powinna przekraczać 256 znaków.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Twoje hasło nie jest wystarczająco silne.",
+ "form_password_pwned": "To hasło zostało znalezione w wyniku naruszenia i nie może być używane, proszę spróbować innego hasła.",
+ "form_password_pwned__sign_in": "To hasło zostało znalezione w wyniku naruszenia i nie może być używane, proszę zresetować hasło.",
+ "form_password_size_in_bytes_exceeded": "Twoje hasło przekroczyło maksymalną liczbę dozwolonych bajtów, proszę skrócić je lub usunąć niektóre znaki specjalne.",
+ "form_password_validation_failed": "Nieprawidłowe hasło.",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "Nie możesz usunąć swojej ostatniej identyfikacji.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Klucz dostępu jest już zarejestrowany na tym urządzeniu.",
+ "passkey_not_supported": "Klucze dostępu nie są obsługiwane na tym urządzeniu.",
+ "passkey_pa_not_supported": "Rejestracja wymaga autentykatora platformy, ale urządzenie go nie obsługuje.",
+ "passkey_registration_cancelled": "Rejestracja klucza dostępu została anulowana lub przekroczyła limit czasu.",
+ "passkey_retrieval_cancelled": "Weryfikacja klucza dostępu została anulowana lub przekroczyła limit czasu.",
+ "passwordComplexity": {
+ "maximumLength": "mniej niż {{length}} znaków",
+ "minimumLength": "{{length}} lub więcej znaków",
+ "requireLowercase": "małą literę",
+ "requireNumbers": "cyfrę",
+ "requireSpecialCharacter": "znak specjalny",
+ "requireUppercase": "wielką literę",
+ "sentencePrefix": "Twoje hasło musi zawierać"
+ },
+ "phone_number_exists": "Ten numer telefonu jest zajęty. Proszę spróbować innego.",
+ "zxcvbn": {
+ "couldBeStronger": "Twoje hasło działa, ale mogłoby być silniejsze. Spróbuj dodać więcej znaków.",
+ "goodPassword": "Twoje hasło spełnia wszystkie wymagane kryteria.",
+ "notEnough": "Twoje hasło nie jest wystarczająco silne.",
+ "suggestions": {
+ "allUppercase": "Zastosuj wielką literę w niektórych, ale nie we wszystkich literach.",
+ "anotherWord": "Dodaj więcej mniej popularnych słów.",
+ "associatedYears": "Unikaj lat związanych z Tobą.",
+ "capitalization": "Zastosuj wielką literę nie tylko na początku.",
+ "dates": "Unikaj dat związanych z Tobą.",
+ "l33t": "Unikaj przewidywalnych zastąpień liter, np. '@' zamiast 'a'.",
+ "longerKeyboardPattern": "Użyj dłuższych wzorców klawiatury i zmieniaj kierunek pisania kilka razy.",
+ "noNeed": "Możesz tworzyć silne hasła bez użycia symboli, cyfr ani wielkich liter.",
+ "pwned": "Jeśli używasz tego hasła gdzie indziej, powinieneś je zmienić.",
+ "recentYears": "Unikaj ostatnich lat.",
+ "repeated": "Unikaj powtórzonych słów i znaków.",
+ "reverseWords": "Unikaj odwróconych zapisów powszechnych słów.",
+ "sequences": "Unikaj powszechnych sekwencji znaków.",
+ "useWords": "Użyj kilku słów, ale unikaj powszechnych fraz."
+ },
+ "warnings": {
+ "common": "To jest powszechnie używane hasło.",
+ "commonNames": "Powszechne imiona i nazwiska są łatwe do odgadnięcia.",
+ "dates": "Daty są łatwe do odgadnięcia.",
+ "extendedRepeat": "Powtarzające się wzorce znaków, np. \"abcabcabc\", są łatwe do odgadnięcia.",
+ "keyPattern": "Krótkie wzorce klawiatury są łatwe do odgadnięcia.",
+ "namesByThemselves": "Same imiona lub nazwiska są łatwe do odgadnięcia.",
+ "pwned": "Twoje hasło zostało ujawnione w wyniku naruszenia danych w Internecie.",
+ "recentYears": "Ostatnie lata są łatwe do odgadnięcia.",
+ "sequences": "Powszechne sekwencje znaków, np. \"abc\", są łatwe do odgadnięcia.",
+ "similarToCommon": "To jest podobne do powszechnie używanego hasła.",
+ "simpleRepeat": "Powtarzające się znaki, np. \"aaa\", są łatwe do odgadnięcia.",
+ "straightRow": "Proste rzędy klawiszy na klawiaturze są łatwe do odgadnięcia.",
+ "topHundred": "To jest często używane hasło.",
+ "topTen": "To jest bardzo popularne hasło.",
+ "userInputs": "Nie powinno zawierać danych osobistych ani związanych z stroną.",
+ "wordByItself": "Same słowa są łatwe do odgadnięcia."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Dodaj konto",
+ "action__manageAccount": "Zarządzaj kontem",
+ "action__signOut": "Wyloguj",
+ "action__signOutAll": "Wyloguj ze wszystkich kont"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Skopiowano!",
+ "actionLabel__copy": "Skopiuj wszystko",
+ "actionLabel__download": "Pobierz .txt",
+ "actionLabel__print": "Drukuj",
+ "infoText1": "Kody zapasowe zostaną włączone dla tego konta.",
+ "infoText2": "Trzymaj kody zapasowe w tajemnicy i przechowuj je bezpiecznie. Możesz wygenerować nowe kody zapasowe, jeśli podejrzewasz, że zostały skompromitowane.",
+ "subtitle__codelist": "Przechowuj je bezpiecznie i trzymaj je w tajemnicy.",
+ "successMessage": "Kody zapasowe są teraz włączone. Możesz użyć jednego z nich, aby zalogować się do swojego konta, jeśli stracisz dostęp do swojego urządzenia uwierzytelniającego. Każdy kod można użyć tylko raz.",
+ "successSubtitle": "Możesz użyć jednego z tych kodów, aby zalogować się do swojego konta, jeśli stracisz dostęp do swojego urządzenia uwierzytelniającego.",
+ "title": "Dodaj weryfikację kodu zapasowego",
+ "title__codelist": "Kody zapasowe"
+ },
+ "connectedAccountPage": {
+ "formHint": "Wybierz dostawcę, aby połączyć swoje konto.",
+ "formHint__noAccounts": "Brak dostępnych zewnętrznych dostawców kont.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} zostanie usunięty z tego konta.",
+ "messageLine2": "Nie będziesz już mógł używać tego połączonego konta, a wszelkie zależne funkcje przestaną działać.",
+ "successMessage": "{{connectedAccount}} został usunięty z Twojego konta.",
+ "title": "Usuń połączone konto"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "Dostawca został dodany do Twojego konta",
+ "title": "Dodaj połączone konto"
+ },
+ "deletePage": {
+ "actionDescription": "Wpisz \"Usuń konto\" poniżej, aby kontynuować.",
+ "confirm": "Usuń konto",
+ "messageLine1": "Czy na pewno chcesz usunąć swoje konto?",
+ "messageLine2": "Ta czynność jest trwała i nieodwracalna.",
+ "title": "Usuń konto"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "Na ten adres e-mail zostanie wysłany e-mail z kodem weryfikacyjnym.",
+ "formSubtitle": "Wprowadź kod weryfikacyjny wysłany na {{identifier}}",
+ "formTitle": "Kod weryfikacyjny",
+ "resendButton": "Nie otrzymałeś kodu? Wyślij ponownie",
+ "successMessage": "E-mail {{identifier}} został dodany do Twojego konta."
+ },
+ "emailLink": {
+ "formHint": "Na ten adres e-mail zostanie wysłany e-mail z linkiem weryfikacyjnym.",
+ "formSubtitle": "Kliknij w link weryfikacyjny w e-mailu wysłanym na {{identifier}}",
+ "formTitle": "Link weryfikacyjny",
+ "resendButton": "Nie otrzymałeś linku? Wyślij ponownie",
+ "successMessage": "E-mail {{identifier}} został dodany do Twojego konta."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} zostanie usunięty z tego konta.",
+ "messageLine2": "Nie będziesz już mógł się zalogować używając tego adresu e-mail.",
+ "successMessage": "{{emailAddress}} został usunięty z Twojego konta.",
+ "title": "Usuń adres e-mail"
+ },
+ "title": "Dodaj adres e-mail",
+ "verifyTitle": "Zweryfikuj adres e-mail"
+ },
+ "formButtonPrimary__add": "Dodaj",
+ "formButtonPrimary__continue": "Kontynuuj",
+ "formButtonPrimary__finish": "Zakończ",
+ "formButtonPrimary__remove": "Usuń",
+ "formButtonPrimary__save": "Zapisz",
+ "formButtonReset": "Anuluj",
+ "mfaPage": {
+ "formHint": "Wybierz metodę do dodania.",
+ "title": "Dodaj weryfikację dwuetapową"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Użyj istniejącego numeru",
+ "primaryButton__addPhoneNumber": "Dodaj numer telefonu",
+ "removeResource": {
+ "messageLine1": "{{identifier}} nie będzie już otrzymywać kodów weryfikacyjnych podczas logowania.",
+ "messageLine2": "Twoje konto może być mniej bezpieczne. Czy na pewno chcesz kontynuować?",
+ "successMessage": "Weryfikacja dwuetapowa za pomocą kodów SMS została usunięta dla {{mfaPhoneCode}}",
+ "title": "Usuń weryfikację dwuetapową"
+ },
+ "subtitle__availablePhoneNumbers": "Wybierz istniejący numer telefonu, aby zarejestrować się do weryfikacji dwuetapowej za pomocą kodów SMS lub dodaj nowy.",
+ "subtitle__unavailablePhoneNumbers": "Brak dostępnych numerów telefonów do rejestracji weryfikacji dwuetapowej za pomocą kodów SMS, proszę dodać nowy.",
+ "successMessage1": "Podczas logowania będziesz musiał wprowadzić kod weryfikacyjny wysłany na ten numer telefonu jako dodatkowy krok.",
+ "successMessage2": "Zapisz te kody zapasowe i przechowuj je w bezpiecznym miejscu. Jeśli stracisz dostęp do swojego urządzenia uwierzytelniającego, możesz użyć kodów zapasowych do zalogowania się.",
+ "successTitle": "Weryfikacja kodów SMS włączona",
+ "title": "Dodaj weryfikację kodów SMS"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "Zeskanuj kod QR",
+ "buttonUnableToScan__nonPrimary": "Nie możesz zeskanować kodu QR?",
+ "infoText__ableToScan": "Skonfiguruj nową metodę logowania w swojej aplikacji uwierzytelniającej i zeskanuj poniższy kod QR, aby połączyć go z Twoim kontem.",
+ "infoText__unableToScan": "Skonfiguruj nową metodę logowania w swojej aplikacji uwierzytelniającej i wprowadź poniższy klucz.",
+ "inputLabel__unableToScan1": "Upewnij się, że hasła oparte na czasie lub jednorazowe są włączone, a następnie zakończ łączenie konta.",
+ "inputLabel__unableToScan2": "Alternatywnie, jeśli Twój uwierzytelniacz obsługuje adresy URL TOTP, możesz również skopiować pełny adres URL."
+ },
+ "removeResource": {
+ "messageLine1": "Kody weryfikacyjne z tego uwierzytelniacza nie będą już wymagane podczas logowania.",
+ "messageLine2": "Twoje konto może być mniej bezpieczne. Czy na pewno chcesz kontynuować?",
+ "successMessage": "Weryfikacja dwuetapowa za pomocą aplikacji uwierzytelniającej została usunięta.",
+ "title": "Usuń weryfikację dwuetapową"
+ },
+ "successMessage": "Weryfikacja dwuetapowa jest teraz włączona. Podczas logowania będziesz musiał wprowadzić kod weryfikacyjny z tego uwierzytelniacza jako dodatkowy krok.",
+ "title": "Dodaj aplikację uwierzytelniającą",
+ "verifySubtitle": "Wprowadź kod weryfikacyjny wygenerowany przez swoją aplikację uwierzytelniającą",
+ "verifyTitle": "Kod weryfikacyjny"
+ },
+ "mobileButton__menu": "Menu",
+ "navbar": {
+ "account": "Profil",
+ "description": "Zarządzaj informacjami o koncie.",
+ "security": "Bezpieczeństwo",
+ "title": "Konto"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} zostanie usunięty z tego konta.",
+ "title": "Usuń klucz dostępu"
+ },
+ "subtitle__rename": "Możesz zmienić nazwę klucza dostępu, aby łatwiej go znaleźć.",
+ "title__rename": "Zmień nazwę klucza dostępu"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "Zaleca się wylogowanie ze wszystkich innych urządzeń, które mogły używać twojego starego hasła.",
+ "readonly": "Twoje hasło obecnie nie może być edytowane, ponieważ możesz się zalogować tylko za pośrednictwem połączenia korporacyjnego.",
+ "successMessage__set": "Twoje hasło zostało ustawione.",
+ "successMessage__signOutOfOtherSessions": "Wszystkie inne urządzenia zostały wylogowane.",
+ "successMessage__update": "Twoje hasło zostało zaktualizowane.",
+ "title__set": "Ustaw hasło",
+ "title__update": "Zaktualizuj hasło"
+ },
+ "phoneNumberPage": {
+ "infoText": "Na ten numer telefonu zostanie wysłana wiadomość tekstowa z kodem weryfikacyjnym. Mogą obowiązywać opłaty za wiadomości i dane.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} zostanie usunięty z tego konta.",
+ "messageLine2": "Nie będziesz już mógł się zalogować za pomocą tego numeru telefonu.",
+ "successMessage": "{{phoneNumber}} został usunięty z twojego konta.",
+ "title": "Usuń numer telefonu"
+ },
+ "successMessage": "{{identifier}} został dodany do twojego konta.",
+ "title": "Dodaj numer telefonu",
+ "verifySubtitle": "Wprowadź kod weryfikacyjny wysłany na {{identifier}}",
+ "verifyTitle": "Zweryfikuj numer telefonu"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Zalecany rozmiar 1:1, do 10 MB.",
+ "imageFormDestructiveActionSubtitle": "Usuń",
+ "imageFormSubtitle": "Prześlij",
+ "imageFormTitle": "Zdjęcie profilowe",
+ "readonly": "Twoje informacje profilowe zostały dostarczone przez połączenie korporacyjne i nie mogą być edytowane.",
+ "successMessage": "Twój profil został zaktualizowany.",
+ "title": "Zaktualizuj profil"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Wyloguj z urządzenia",
+ "title": "Aktywne urządzenia"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Spróbuj ponownie",
+ "actionLabel__reauthorize": "Autoryzuj teraz",
+ "destructiveActionTitle": "Usuń",
+ "primaryButton": "Połącz konto",
+ "subtitle__reauthorize": "Wymagane zakresy zostały zaktualizowane, a możesz doświadczać ograniczonej funkcjonalności. Proszę ponownie autoryzować tę aplikację, aby uniknąć problemów",
+ "title": "Połączone konta"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Usuń konto",
+ "title": "Usuń konto"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "Usuń e-mail",
+ "detailsAction__nonPrimary": "Ustaw jako główny",
+ "detailsAction__primary": "Zakończ weryfikację",
+ "detailsAction__unverified": "Zweryfikuj",
+ "primaryButton": "Dodaj adres e-mail",
+ "title": "Adresy e-mail"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Konta firmowe"
+ },
+ "headerTitle__account": "Szczegóły profilu",
+ "headerTitle__security": "Bezpieczeństwo",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Wygeneruj ponownie",
+ "headerTitle": "Kody zapasowe",
+ "subtitle__regenerate": "Uzyskaj nowy zestaw bezpiecznych kodów zapasowych. Poprzednie kody zapasowe zostaną usunięte i nie będą mogły być użyte.",
+ "title__regenerate": "Wygeneruj ponownie kody zapasowe"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Ustaw jako domyślny",
+ "destructiveActionLabel": "Usuń"
+ },
+ "primaryButton": "Dodaj weryfikację dwuetapową",
+ "title": "Weryfikacja dwuetapowa",
+ "totp": {
+ "destructiveActionTitle": "Usuń",
+ "headerTitle": "Aplikacja autoryzacyjna"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Usuń",
+ "menuAction__rename": "Zmień nazwę",
+ "title": "Klucze dostępu"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Ustaw hasło",
+ "primaryButton__updatePassword": "Zaktualizuj hasło",
+ "title": "Hasło"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Usuń numer telefonu",
+ "detailsAction__nonPrimary": "Ustaw jako główny",
+ "detailsAction__primary": "Zakończ weryfikację",
+ "detailsAction__unverified": "Zweryfikuj numer telefonu",
+ "primaryButton": "Dodaj numer telefonu",
+ "title": "Numery telefonów"
+ },
+ "profileSection": {
+ "primaryButton": "Zaktualizuj profil",
+ "title": "Profil"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Ustaw nazwę użytkownika",
+ "primaryButton__updateUsername": "Zaktualizuj nazwę użytkownika",
+ "title": "Nazwa użytkownika"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Usuń portfel",
+ "primaryButton": "Portfele Web3",
+ "title": "Portfele Web3"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Twoja nazwa użytkownika została zaktualizowana.",
+ "title__set": "Ustaw nazwę użytkownika",
+ "title__update": "Zaktualizuj nazwę użytkownika"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} zostanie usunięty z tego konta.",
+ "messageLine2": "Nie będziesz już mógł się zalogować za pomocą tej portfela web3.",
+ "successMessage": "{{web3Wallet}} został usunięty z twojego konta.",
+ "title": "Usuń portfel web3"
+ },
+ "subtitle__availableWallets": "Wybierz portfel web3, aby połączyć go z twoim kontem.",
+ "subtitle__unavailableWallets": "Brak dostępnych portfeli web3.",
+ "successMessage": "Portfel został dodany do twojego konta.",
+ "title": "Dodaj portfel web3"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/common.json b/DigitalHumanWeb/locales/pl-PL/common.json
new file mode 100644
index 0000000..352189c
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "O nas",
+ "advanceSettings": "Zaawansowane ustawienia",
+ "alert": {
+ "cloud": {
+ "action": "Darmowa wersja próbna",
+ "desc": "Dla wszystkich zarejestrowanych użytkowników mamy {{credit}} bezpłatnych punktów obliczeniowych, gotowych do użycia bez konieczności skomplikowanej konfiguracji. Wsparcie dla nieograniczonej historii rozmów i globalnej synchronizacji w chmurze. Czekają na Ciebie zaawansowane funkcje do odkrycia.",
+ "descOnMobile": "Dla wszystkich zarejestrowanych użytkowników oferujemy {{credit}} bezpłatnych punktów obliczeniowych, gotowych do użycia bez konieczności skomplikowanej konfiguracji.",
+ "title": "Witaj w {{name}}"
+ }
+ },
+ "appInitializing": "Aplikacja uruchamia się...",
+ "autoGenerate": "Automatyczne generowanie",
+ "autoGenerateTooltip": "Automatyczne uzupełnianie opisu asystenta na podstawie sugestii",
+ "autoGenerateTooltipDisabled": "Proszę wprowadzić słowo kluczowe przed użyciem funkcji automatycznego uzupełniania",
+ "back": "Powrót",
+ "batchDelete": "Usuń wiele",
+ "blog": "Blog produktowy",
+ "cancel": "Anuluj",
+ "changelog": "Dziennik zmian",
+ "close": "Zamknij",
+ "contact": "Skontaktuj się z nami",
+ "copy": "Kopiuj",
+ "copyFail": "Nie udało się skopiować",
+ "copySuccess": "Skopiowano pomyślnie",
+ "dataStatistics": {
+ "messages": "Wiadomości",
+ "sessions": "Sesje",
+ "today": "Dzisiaj",
+ "topics": "Tematy"
+ },
+ "defaultAgent": "Domyślny asystent",
+ "defaultSession": "Domyślna sesja",
+ "delete": "Usuń",
+ "document": "Dokumentacja",
+ "download": "Pobierz",
+ "duplicate": "Utwórz kopię",
+ "edit": "Edytuj",
+ "export": "Eksportuj ustawienia",
+ "exportType": {
+ "agent": "Eksportuj ustawienia asystenta",
+ "agentWithMessage": "Eksportuj ustawienia asystenta i wiadomości",
+ "all": "Eksportuj ustawienia globalne i wszystkie dane asystentów",
+ "allAgent": "Eksportuj wszystkie ustawienia asystentów",
+ "allAgentWithMessage": "Eksportuj wszystkie ustawienia asystentów i wiadomości",
+ "globalSetting": "Eksportuj ustawienia globalne"
+ },
+ "feedback": "Opinie i sugestie",
+ "follow": "Zaobserwuj nas na {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Podziel się swoją cenną opinią",
+ "star": "Dodaj gwiazdkę na GitHubie"
+ },
+ "and": "i",
+ "feedback": {
+ "action": "Podziel się opinią",
+ "desc": "Każdy twój pomysł i sugestia są dla nas bezcenne, nie możemy się doczekać, aby poznać twoją opinię! Skontaktuj się z nami, aby podzielić się opinią na temat funkcji produktu i doświadczeń z jego użytkowania, pomóż nam ulepszyć LobeChat.",
+ "title": "Podziel się swoją cenną opinią na GitHubie"
+ },
+ "later": "Później",
+ "star": {
+ "action": "Dodaj gwiazdkę",
+ "desc": "Jeśli podoba ci się nasz produkt i chcesz nas wesprzeć, czy mógłbyś dodać gwiazdkę na GitHubie? To małe działanie ma ogromne znaczenie dla nas i motywuje nas do ciągłego zapewniania ci wyjątkowego doświadczenia.",
+ "title": "Dodaj gwiazdkę na GitHubie"
+ },
+ "title": "Podoba ci się nasz produkt?"
+ },
+ "fullscreen": "Tryb pełnoekranowy",
+ "historyRange": "Zakres historii",
+ "import": "Importuj ustawienia",
+ "importModal": {
+ "error": {
+ "desc": "Przepraszamy, wystąpił błąd podczas importowania danych. Spróbuj ponownie zaimportować, lub <1>zgłoś problem1>, a my postaramy się jak najszybciej rozwiązać problem.",
+ "title": "Import danych nie powiódł się"
+ },
+ "finish": {
+ "onlySettings": "Pomyślnie zaimportowano ustawienia systemowe",
+ "start": "Rozpocznij korzystanie",
+ "subTitle": "Dane zaimportowano pomyślnie. Czas trwania: {{duration}} sekund. Szczegóły importu:",
+ "title": "Zakończono import danych"
+ },
+ "loading": "Trwa import danych, proszę czekać...",
+ "preparing": "Przygotowywanie modułu importu danych...",
+ "result": {
+ "added": "Pomyślnie zaimportowano",
+ "errors": "Błędy importu",
+ "messages": "Wiadomości",
+ "sessionGroups": "Grupy sesji",
+ "sessions": "Sesje",
+ "skips": "Pominięcia duplikatów",
+ "topics": "Tematy",
+ "type": "Typ danych"
+ },
+ "title": "Import danych",
+ "uploading": {
+ "desc": "Obecny plik jest duży, trwa wysyłanie...",
+ "restTime": "Pozostały czas",
+ "speed": "Prędkość wysyłania"
+ }
+ },
+ "information": "Społeczność i informacje",
+ "installPWA": "Zainstaluj aplikację przeglądarki",
+ "lang": {
+ "ar": "arabski",
+ "bg-BG": "bułgarski",
+ "bn": "Bengalski",
+ "cs-CZ": "Czeski",
+ "da-DK": "Duński",
+ "de-DE": "Niemiecki",
+ "el-GR": "Grecki",
+ "en": "Angielski",
+ "en-US": "Angielski (USA)",
+ "es-ES": "Hiszpański",
+ "fi-FI": "Fiński",
+ "fr-FR": "Francuski",
+ "hi-IN": "Hindi",
+ "hu-HU": "Węgierski",
+ "id-ID": "Indonezyjski",
+ "it-IT": "Włoski",
+ "ja-JP": "Japoński",
+ "ko-KR": "Koreański",
+ "nl-NL": "Holenderski",
+ "no-NO": "Norweski",
+ "pl-PL": "Polski",
+ "pt-BR": "Portugalski (Brazylia)",
+ "pt-PT": "Portugalski (Portugalia)",
+ "ro-RO": "Rumuński",
+ "ru-RU": "Rosyjski",
+ "sk-SK": "Słowacki",
+ "sr-RS": "Serbski",
+ "sv-SE": "Szwedzki",
+ "th-TH": "Tajski",
+ "tr-TR": "Turecki",
+ "uk-UA": "Ukraiński",
+ "vi-VN": "Wietnamski",
+ "zh": "Chiński uproszczony",
+ "zh-CN": "Chiński uproszczony",
+ "zh-TW": "Chiński tradycyjny"
+ },
+ "layoutInitializing": "Inicjowanie układu...",
+ "legal": "Oświadczenie prawne",
+ "loading": "Ładowanie...",
+ "mail": {
+ "business": "Współpraca biznesowa",
+ "support": "Wsparcie mailowe"
+ },
+ "oauth": "Logowanie SSO",
+ "officialSite": "Oficjalna strona internetowa",
+ "ok": "OK",
+ "password": "Hasło",
+ "pin": "Przypnij",
+ "pinOff": "Odepnij",
+ "privacy": "Polityka prywatności",
+ "regenerate": "Regeneruj",
+ "rename": "Zmień nazwę",
+ "reset": "Resetuj",
+ "retry": "Ponów",
+ "send": "Wyślij",
+ "setting": "Ustawienia",
+ "share": "Udostępnij",
+ "stop": "Zatrzymaj",
+ "sync": {
+ "actions": {
+ "settings": "Ustawienia synchronizacji",
+ "sync": "Synchronizuj teraz"
+ },
+ "awareness": {
+ "current": "Bieżące urządzenie"
+ },
+ "channel": "Kanał",
+ "disabled": {
+ "actions": {
+ "enable": "Włącz synchronizację chmurową",
+ "settings": "Konfiguruj parametry synchronizacji"
+ },
+ "desc": "Dane bieżącej sesji są przechowywane tylko w tej przeglądarce. Jeśli chcesz synchronizować dane między wieloma urządzeniami, skonfiguruj i włącz synchronizację chmurową.",
+ "title": "Synchronizacja danych wyłączona"
+ },
+ "enabled": {
+ "title": "Synchronizacja danych"
+ },
+ "status": {
+ "connecting": "Łączenie",
+ "disabled": "Synchronizacja wyłączona",
+ "ready": "Gotowy",
+ "synced": "Synchronizacja zakończona",
+ "syncing": "Synchronizacja w toku",
+ "unconnected": "Brak połączenia"
+ },
+ "title": "Stan synchronizacji",
+ "unconnected": {
+ "tip": "Błąd połączenia z serwerem sygnalizacyjnym, nie można nawiązać kanału komunikacyjnego punkt-punkt, sprawdź sieć i spróbuj ponownie"
+ }
+ },
+ "tab": {
+ "chat": "Czat",
+ "discover": "Odkryj",
+ "files": "Pliki",
+ "me": "ja",
+ "setting": "Ustawienia"
+ },
+ "telemetry": {
+ "allow": "Zezwalaj",
+ "deny": "Odmów",
+ "desc": "Chcemy anonimowo zbierać informacje o twoim użytkowaniu, aby pomóc nam ulepszyć LobeChat i zapewnić ci lepsze doświadczenia z naszym produktem. Możesz wyłączyć to w każdej chwili w „Ustawienia” - „O nas”.",
+ "learnMore": "Dowiedz się więcej",
+ "title": "Pomóż LobeChat stawać się lepszym"
+ },
+ "temp": "Tymczasowy",
+ "terms": "Warunki korzystania",
+ "updateAgent": "Zaktualizuj informacje o agencie",
+ "upgradeVersion": {
+ "action": "Aktualizuj",
+ "hasNew": "Dostępna jest nowa aktualizacja",
+ "newVersion": "Dostępna jest nowa wersja: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Użytkownik Anonimowy",
+ "billing": "Zarządzanie rachunkami",
+ "cloud": "Wypróbuj {{name}}",
+ "data": "Przechowywanie danych",
+ "defaultNickname": "Użytkownik Wersji Społecznościowej",
+ "discord": "Wsparcie społeczności",
+ "docs": "Dokumentacja",
+ "email": "Wsparcie mailowe",
+ "feedback": "Opinie i sugestie",
+ "help": "Centrum pomocy",
+ "moveGuide": "Przenieś przycisk ustawień tutaj",
+ "plans": "Plan abonamentu",
+ "preview": "Podgląd",
+ "profile": "Zarządzanie kontem",
+ "setting": "Ustawienia aplikacji",
+ "usages": "Statystyki użycia"
+ },
+ "version": "Wersja"
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/components.json b/DigitalHumanWeb/locales/pl-PL/components.json
new file mode 100644
index 0000000..a91bed0
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Przeciągnij pliki tutaj, aby przesłać wiele obrazów.",
+ "dragFileDesc": "Przeciągnij obrazy i pliki tutaj, aby przesłać wiele obrazów i plików.",
+ "dragFileTitle": "Prześlij plik",
+ "dragTitle": "Prześlij obraz"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Dodaj do bazy wiedzy",
+ "addToOtherKnowledgeBase": "Dodaj do innej bazy wiedzy",
+ "batchChunking": "Partycjonowanie wsadowe",
+ "chunking": "Partycjonowanie",
+ "chunkingTooltip": "Podziel plik na wiele bloków tekstowych i wektoryzuj, aby umożliwić wyszukiwanie semantyczne i rozmowy o plikach",
+ "confirmDelete": "Zaraz usuniesz ten plik. Po usunięciu nie będzie można go odzyskać, proszę potwierdź swoje działanie",
+ "confirmDeleteMultiFiles": "Zaraz usuniesz wybrane {{count}} plików. Po usunięciu nie będzie można ich odzyskać, proszę potwierdź swoje działanie",
+ "confirmRemoveFromKnowledgeBase": "Zaraz usuniesz wybrane {{count}} plików z bazy wiedzy. Po usunięciu pliki będą nadal widoczne wśród wszystkich plików, proszę potwierdź swoje działanie",
+ "copyUrl": "Kopiuj link",
+ "copyUrlSuccess": "Adres pliku skopiowany pomyślnie",
+ "createChunkingTask": "Przygotowywanie...",
+ "deleteSuccess": "Plik usunięty pomyślnie",
+ "downloading": "Pobieranie pliku...",
+ "removeFromKnowledgeBase": "Usuń z bazy wiedzy",
+ "removeFromKnowledgeBaseSuccess": "Plik usunięty pomyślnie"
+ },
+ "bottom": "To już wszystko",
+ "config": {
+ "showFilesInKnowledgeBase": "Pokaż zawartość w bazie wiedzy"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Prześlij plik",
+ "folder": "Prześlij folder",
+ "knowledgeBase": "Utwórz nową bazę wiedzy"
+ },
+ "or": "lub",
+ "title": "Przeciągnij plik lub folder tutaj"
+ },
+ "title": {
+ "createdAt": "Data utworzenia",
+ "size": "Rozmiar",
+ "title": "Plik"
+ },
+ "total": {
+ "fileCount": "Łącznie {{count}} pozycji",
+ "selectedCount": "Wybrano {{count}} pozycji"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Bloki tekstowe nie zostały w pełni wektoryzowane, co spowoduje, że funkcja wyszukiwania semantycznego będzie niedostępna. Aby poprawić jakość wyszukiwania, proszę wektoryzować bloki tekstowe",
+ "error": "Błąd wektoryzacji",
+ "errorResult": "Błąd wektoryzacji, spróbuj ponownie po sprawdzeniu. Powód błędu:",
+ "processing": "Bloki tekstowe są wektoryzowane, proszę czekać",
+ "success": "Obecne bloki tekstowe zostały w pełni wektoryzowane"
+ },
+ "embeddings": "Wektoryzacja",
+ "status": {
+ "error": "Partycjonowanie nie powiodło się",
+ "errorResult": "Partycjonowanie nie powiodło się, proszę sprawdzić i spróbować ponownie. Powód niepowodzenia:",
+ "processing": "Partycjonowanie w toku",
+ "processingTip": "Serwer jest w trakcie dzielenia bloków tekstowych, zamknięcie strony nie wpłynie na postęp partycjonowania"
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Wróć"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Niestandardowy model, domyślnie obsługujący zarówno wywołania funkcji, jak i rozpoznawanie wizualne. Proszę zweryfikować możliwość użycia tych funkcji w praktyce.",
+ "file": "Ten model obsługuje wczytywanie plików i rozpoznawanie",
+ "functionCall": "Ten model obsługuje wywołania funkcji (Function Call).",
+ "tokens": "Ten model obsługuje maksymalnie {{tokens}} tokenów w pojedynczej sesji.",
+ "vision": "Ten model obsługuje rozpoznawanie wizualne."
+ },
+ "removed": "Ten model nie znajduje się na liście, jeśli zostanie odznaczony, zostanie automatycznie usunięty"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "Brak włączonych modeli, przejdź do ustawień i włącz je",
+ "provider": "Dostawca"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/discover.json b/DigitalHumanWeb/locales/pl-PL/discover.json
new file mode 100644
index 0000000..fb78a54
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Dodaj asystenta",
+ "addAgentAndConverse": "Dodaj asystenta i rozpocznij rozmowę",
+ "addAgentSuccess": "Dodano pomyślnie",
+ "conversation": {
+ "l1": "Cześć, jestem **{{name}}**, możesz zadać mi dowolne pytanie, postaram się odpowiedzieć ~",
+ "l2": "Oto moje umiejętności: ",
+ "l3": "Zacznijmy rozmowę!"
+ },
+ "description": "Opis asystenta",
+ "detail": "Szczegóły",
+ "list": "Lista asystentów",
+ "more": "Więcej",
+ "plugins": "Zintegrowane wtyczki",
+ "recentSubmits": "Ostatnie aktualizacje",
+ "suggestions": "Podobne rekomendacje",
+ "systemRole": "Ustawienia asystenta",
+ "try": "Spróbuj"
+ },
+ "back": "Powrót do odkryć",
+ "category": {
+ "assistant": {
+ "academic": "Akademicki",
+ "all": "Wszystko",
+ "career": "Kariera",
+ "copywriting": "Copywriting",
+ "design": "Projektowanie",
+ "education": "Edukacja",
+ "emotions": "Emocje",
+ "entertainment": "Rozrywka",
+ "games": "Gry",
+ "general": "Ogólne",
+ "life": "Życie",
+ "marketing": "Marketing",
+ "office": "Biuro",
+ "programming": "Programowanie",
+ "translation": "Tłumaczenie"
+ },
+ "plugin": {
+ "all": "Wszystko",
+ "gaming-entertainment": "Gry i rozrywka",
+ "life-style": "Styl życia",
+ "media-generate": "Generowanie mediów",
+ "science-education": "Nauka i edukacja",
+ "social": "Media społecznościowe",
+ "stocks-finance": "Akcje i finanse",
+ "tools": "Narzędzia",
+ "web-search": "Wyszukiwanie w sieci"
+ }
+ },
+ "cleanFilter": "Wyczyść filtr",
+ "create": "Utwórz",
+ "createGuide": {
+ "func1": {
+ "desc1": "W oknie rozmowy przejdź do ustawień w prawym górnym rogu, aby wejść na stronę ustawień asystenta, który chcesz dodać;",
+ "desc2": "Kliknij przycisk 'Wyślij do rynku asystentów' w prawym górnym rogu.",
+ "tag": "Metoda pierwsza",
+ "title": "Zgłoś przez LobeChat"
+ },
+ "func2": {
+ "button": "Przejdź do repozytorium asystentów na Githubie",
+ "desc": "Jeśli chcesz dodać asystenta do indeksu, użyj agent-template.json lub agent-template-full.json, aby utworzyć wpis w katalogu plugins, napisz krótki opis i odpowiednio oznacz, a następnie utwórz prośbę o ściągnięcie.",
+ "tag": "Metoda druga",
+ "title": "Zgłoś przez Github"
+ }
+ },
+ "dislike": "Nie lubię",
+ "filter": "Filtr",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Wszyscy autorzy",
+ "followed": "Obserwowani autorzy",
+ "title": "Zakres autorów"
+ },
+ "contentLength": "Minimalna długość kontekstu",
+ "maxToken": {
+ "title": "Ustaw maksymalną długość (Token)",
+ "unlimited": "Bez ograniczeń"
+ },
+ "other": {
+ "functionCall": "Obsługuje wywołania funkcji",
+ "title": "Inne",
+ "vision": "Obsługuje rozpoznawanie wizualne",
+ "withKnowledge": "Z dołączoną bazą wiedzy",
+ "withTool": "Z dołączoną wtyczką"
+ },
+ "pricing": "Cena modelu",
+ "timePeriod": {
+ "all": "Wszystkie czasy",
+ "day": "Ostatnie 24 godziny",
+ "month": "Ostatnie 30 dni",
+ "title": "Zakres czasowy",
+ "week": "Ostatnie 7 dni",
+ "year": "Ostatni rok"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Polecani asystenci",
+ "featuredModels": "Polecane modele",
+ "featuredProviders": "Polecani dostawcy modeli",
+ "featuredTools": "Polecane wtyczki",
+ "more": "Odkryj więcej"
+ },
+ "like": "Lubię",
+ "models": {
+ "chat": "Rozpocznij rozmowę",
+ "contentLength": "Maksymalna długość kontekstu",
+ "free": "Darmowe",
+ "guide": "Przewodnik konfiguracyjny",
+ "list": "Lista modeli",
+ "more": "Więcej",
+ "parameterList": {
+ "defaultValue": "Wartość domyślna",
+ "docs": "Zobacz dokumentację",
+ "frequency_penalty": {
+ "desc": "To ustawienie dostosowuje częstotliwość powtarzania określonych słów, które już pojawiły się w wejściu. Wyższa wartość zmniejsza prawdopodobieństwo powtórzeń, podczas gdy wartość ujemna ma odwrotny efekt. Kara za słownictwo nie wzrasta wraz z liczbą wystąpień. Wartości ujemne zachęcają do powtarzania słownictwa.",
+ "title": "Kara za częstotliwość"
+ },
+ "max_tokens": {
+ "desc": "To ustawienie definiuje maksymalną długość, jaką model może wygenerować w jednej odpowiedzi. Ustawienie wyższej wartości pozwala modelowi na generowanie dłuższych odpowiedzi, podczas gdy niższa wartość ogranicza długość odpowiedzi, czyniąc ją bardziej zwięzłą. W zależności od różnych scenariuszy zastosowania, odpowiednie dostosowanie tej wartości może pomóc osiągnąć oczekiwaną długość i szczegółowość odpowiedzi.",
+ "title": "Limit odpowiedzi na raz"
+ },
+ "presence_penalty": {
+ "desc": "To ustawienie ma na celu kontrolowanie powtarzania słownictwa w zależności od częstotliwości jego występowania w wejściu. Stara się rzadziej używać słów, które pojawiają się w wejściu, proporcjonalnie do ich częstotliwości. Kara za słownictwo wzrasta wraz z liczbą wystąpień. Wartości ujemne zachęcają do powtarzania słownictwa.",
+ "title": "Świeżość tematu"
+ },
+ "range": "Zakres",
+ "temperature": {
+ "desc": "To ustawienie wpływa na różnorodność odpowiedzi modelu. Niższe wartości prowadzą do bardziej przewidywalnych i typowych odpowiedzi, podczas gdy wyższe wartości zachęcają do bardziej zróżnicowanych i rzadziej spotykanych odpowiedzi. Gdy wartość wynosi 0, model zawsze daje tę samą odpowiedź na dane wejście.",
+ "title": "Losowość"
+ },
+ "title": "Parametry modelu",
+ "top_p": {
+ "desc": "To ustawienie ogranicza wybór modelu do słów o najwyższym prawdopodobieństwie: wybiera tylko te słowa, których skumulowane prawdopodobieństwo osiąga P. Niższe wartości sprawiają, że odpowiedzi modelu są bardziej przewidywalne, podczas gdy ustawienie domyślne pozwala modelowi wybierać z pełnego zakresu słownictwa.",
+ "title": "Próbkowanie jądra"
+ },
+ "type": "Typ"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat obsługuje użycie niestandardowego klucza API dla tego dostawcy.",
+ "input": "Cena wejściowa",
+ "inputTooltip": "Koszt za milion tokenów",
+ "latency": "Opóźnienie",
+ "latencyTooltip": "Średni czas odpowiedzi dostawcy na pierwszy token",
+ "maxOutput": "Maksymalna długość wyjścia",
+ "maxOutputTooltip": "Maksymalna liczba tokenów, które ten punkt końcowy może wygenerować",
+ "officialTooltip": "Oficjalna usługa LobeHub",
+ "output": "Cena wyjściowa",
+ "outputTooltip": "Koszt za milion tokenów",
+ "streamCancellationTooltip": "Ten dostawca obsługuje funkcję anulowania strumienia.",
+ "throughput": "Przepustowość",
+ "throughputTooltip": "Średnia liczba tokenów przesyłanych na sekundę w żądaniach strumieniowych"
+ },
+ "suggestions": "Podobne modele",
+ "supportedProviders": "Dostawcy obsługujący ten model"
+ },
+ "plugins": {
+ "community": "Wtyczki społecznościowe",
+ "install": "Zainstaluj wtyczkę",
+ "installed": "Zainstalowane",
+ "list": "Lista wtyczek",
+ "meta": {
+ "description": "Opis",
+ "parameter": "Parametr",
+ "title": "Parametry narzędzia",
+ "type": "Typ"
+ },
+ "more": "Więcej",
+ "official": "Oficjalne wtyczki",
+ "recentSubmits": "Ostatnie aktualizacje",
+ "suggestions": "Podobne rekomendacje"
+ },
+ "providers": {
+ "config": "Konfiguracja dostawcy",
+ "list": "Lista dostawców modeli",
+ "modelCount": "{{count}} modeli",
+ "modelSite": "Dokumentacja modeli",
+ "more": "Więcej",
+ "officialSite": "Oficjalna strona",
+ "showAllModels": "Pokaż wszystkie modele",
+ "suggestions": "Powiązani dostawcy",
+ "supportedModels": "Obsługiwane modele"
+ },
+ "search": {
+ "placeholder": "Szukaj nazwy, opisu lub słowa kluczowego...",
+ "result": "{{count}} wyników wyszukiwania dotyczących {{keyword}}",
+ "searching": "Wyszukiwanie..."
+ },
+ "sort": {
+ "mostLiked": "Najbardziej lubiane",
+ "mostUsed": "Najczęściej używane",
+ "newest": "Od najnowszych",
+ "oldest": "Od najstarszych",
+ "recommended": "Polecane"
+ },
+ "tab": {
+ "assistants": "Asystenci",
+ "home": "Strona główna",
+ "models": "Modele",
+ "plugins": "Wtyczki",
+ "providers": "Dostawcy modeli"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/error.json b/DigitalHumanWeb/locales/pl-PL/error.json
new file mode 100644
index 0000000..a2be5ff
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Kontynuuj sesję",
+ "desc": "{{greeting}}, miło, że możemy nadal Ci służyć. Kontynuujmy naszą poprzednią rozmowę.",
+ "title": "Witaj ponownie, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Powrót do strony głównej",
+ "desc": "Spróbuj ponownie później lub wróć do znanego świata",
+ "retry": "Ponów próbę",
+ "title": "Napotkano problem na stronie..."
+ },
+ "fetchError": "Błąd żądania",
+ "fetchErrorDetail": "Szczegóły błędu",
+ "notFound": {
+ "backHome": "Powrót do strony głównej",
+ "check": "Proszę sprawdzić, czy Twój adres URL jest poprawny",
+ "desc": "Nie możemy znaleźć strony, której szukasz",
+ "title": "Wkraczasz w nieznane terytorium?"
+ },
+ "pluginSettings": {
+ "desc": "Wykonaj poniższą konfigurację, aby rozpocząć korzystanie z tej wtyczki",
+ "title": "Konfiguracja wtyczki {{name}}"
+ },
+ "response": {
+ "400": "Przepraszamy, serwer nie rozumie Twojego żądania. Proszę sprawdź, czy parametry żądania są poprawne",
+ "401": "Przepraszamy, serwer odrzucił Twoje żądanie, prawdopodobnie z powodu niewystarczających uprawnień lub braku ważnej autoryzacji",
+ "403": "Przepraszamy, serwer odrzucił Twoje żądanie, nie masz uprawnień dostępu do tego zasobu",
+ "404": "Przepraszamy, serwer nie może odnaleźć żądanej strony lub zasobu. Proszę sprawdź, czy URL jest poprawny",
+ "405": "Przepraszamy, serwer nie obsługuje używanej metody żądania. Proszę sprawdź, czy metoda żądania jest poprawna",
+ "406": "Przepraszamy, serwer nie może zrealizować żądania zgodnie z żądanymi właściwościami zasobu",
+ "407": "Przepraszamy, aby kontynuować to żądanie, musisz najpierw uwierzytelnić się jako proxy",
+ "408": "Przepraszamy, serwer przekroczył limit czasu oczekiwania na żądanie, sprawdź swoje połączenie sieciowe i spróbuj ponownie",
+ "409": "Przepraszamy, żądanie nie może zostać zrealizowane z powodu konfliktu, być może zasób jest w niezgodnym stanie z żądaniem",
+ "410": "Przepraszamy, żądany zasób został trwale usunięty i nie można go odnaleźć",
+ "411": "Przepraszamy, serwer nie może przetworzyć żądania, które nie zawiera poprawnej długości treści",
+ "412": "Przepraszamy, Twoje żądanie nie spełnia warunków serwera i nie może zostać zrealizowane",
+ "413": "Przepraszamy, Twoje dane żądania są zbyt duże, serwer nie może ich przetworzyć",
+ "414": "Przepraszamy, URI żądania jest zbyt długie, serwer nie może go przetworzyć",
+ "415": "Przepraszamy, serwer nie może przetworzyć formatu multimediów użytego w żądaniu",
+ "416": "Przepraszamy, serwer nie może zrealizować zakresu żądania",
+ "417": "Przepraszamy, serwer nie może spełnić Twoich oczekiwań",
+ "422": "Przepraszamy, Twoje żądanie jest poprawne, ale z powodu błędów semantycznych nie może zostać zrealizowane",
+ "423": "Przepraszamy, żądany zasób jest zablokowany",
+ "424": "Przepraszamy, poprzednie nieudane żądanie uniemożliwia zrealizowanie bieżącego żądania",
+ "426": "Przepraszamy, serwer wymaga aktualizacji Twojego klienta do nowszej wersji protokołu",
+ "428": "Przepraszamy, serwer wymaga warunków wstępnych, żądanie musi zawierać poprawne nagłówki warunkowe",
+ "429": "Przepraszamy, Twoje żądania są zbyt liczne, serwer jest trochę przeciążony, spróbuj ponownie później",
+ "431": "Przepraszamy, nagłówek żądania jest zbyt duży, serwer nie może go przetworzyć",
+ "451": "Przepraszamy, z powodów prawnych serwer odmawia dostarczenia tego zasobu",
+ "500": "Przepraszamy, serwer napotkał pewne trudności i tymczasowo nie może zrealizować Twojego żądania. Proszę spróbuj ponownie później",
+ "502": "Przepraszamy, serwer wydaje się zgubić kierunek i tymczasowo nie może świadczyć usług. Proszę spróbuj ponownie później",
+ "503": "Przepraszamy, serwer tymczasowo nie może przetworzyć Twojego żądania, prawdopodobnie z powodu przeciążenia lub konserwacji. Proszę spróbuj ponownie później",
+ "504": "Przepraszamy, serwer nie otrzymał odpowiedzi od serwera nadrzędnego. Proszę spróbuj ponownie później",
+ "AgentRuntimeError": "Wystąpił błąd wykonania modelu językowego Lobe, prosimy o sprawdzenie poniższych informacji lub ponowne próbowanie.",
+ "FreePlanLimit": "Jesteś obecnie użytkownikiem darmowej wersji, nie możesz korzystać z tej funkcji. Proszę uaktualnić do planu płatnego, aby kontynuować korzystanie.",
+ "InvalidAccessCode": "Nieprawidłowy kod dostępu: Hasło jest nieprawidłowe lub puste. Proszę wprowadzić poprawne hasło dostępu lub dodać niestandardowy klucz API.",
+ "InvalidBedrockCredentials": "Uwierzytelnienie Bedrock nie powiodło się, prosimy sprawdzić AccessKeyId/SecretAccessKey i spróbować ponownie.",
+ "InvalidClerkUser": "Przepraszamy, nie jesteś obecnie zalogowany. Proszę najpierw zalogować się lub zarejestrować, aby kontynuować.",
+ "InvalidGithubToken": "Token dostępu osobistego do GitHub jest niewłaściwy lub pusty. Proszę sprawdzić Token dostępu osobistego do GitHub i spróbować ponownie.",
+ "InvalidOllamaArgs": "Nieprawidłowa konfiguracja Ollama, sprawdź konfigurację Ollama i spróbuj ponownie",
+ "InvalidProviderAPIKey": "{{provider}} Klucz API jest nieprawidłowy lub pusty. Sprawdź Klucz API {{provider}} i spróbuj ponownie.",
+ "LocationNotSupportError": "Przepraszamy, Twoja lokalizacja nie obsługuje tej usługi modelu, być może ze względu na ograniczenia regionalne lub brak dostępności usługi. Proszę sprawdź, czy bieżąca lokalizacja obsługuje tę usługę, lub spróbuj użyć innych informacji o lokalizacji.",
+ "NoOpenAIAPIKey": "Klucz API OpenAI jest pusty. Proszę dodać niestandardowy klucz API OpenAI",
+ "OllamaBizError": "Błąd usługi Ollama, sprawdź poniższe informacje lub spróbuj ponownie",
+ "OllamaServiceUnavailable": "Usługa Ollama jest niedostępna. Sprawdź, czy Ollama działa poprawnie, lub czy poprawnie skonfigurowano ustawienia przekraczania domeny Ollama",
+ "OpenAIBizError": "Wystąpił błąd usługi OpenAI, proszę sprawdzić poniższe informacje lub spróbować ponownie",
+ "PluginApiNotFound": "Przepraszamy, w manifestach wtyczki nie istnieje to API. Proszę sprawdź, czy metoda żądania jest zgodna z API w manifestach wtyczki",
+ "PluginApiParamsError": "Przepraszamy, walidacja parametrów wejściowych żądanej wtyczki nie powiodła się. Proszę sprawdź, czy parametry wejściowe są zgodne z informacjami opisującymi API",
+ "PluginFailToTransformArguments": "Przepraszamy, nie udało się przekształcić argumentów wywołania wtyczki. Spróbuj ponownie wygenerować wiadomość pomocnika lub zmień model AI o większej zdolności do wywoływania narzędzi i spróbuj ponownie",
+ "PluginGatewayError": "Przepraszamy, wystąpił błąd bramy wtyczki. Proszę sprawdź, czy konfiguracja bramy wtyczki jest poprawna",
+ "PluginManifestInvalid": "Przepraszamy, walidacja manifestu opisowego wtyczki nie powiodła się. Proszę sprawdź, czy format pliku opisowego wtyczki jest zgodny z normami",
+ "PluginManifestNotFound": "Przepraszamy, serwer nie odnalazł manifestu opisowego wtyczki (manifest.json). Proszę sprawdź, czy adres pliku opisowego wtyczki jest poprawny",
+ "PluginMarketIndexInvalid": "Przepraszamy, walidacja indeksu wtyczek nie powiodła się. Proszę sprawdź, czy format pliku indeksu jest zgodny z normami",
+ "PluginMarketIndexNotFound": "Przepraszamy, serwer nie odnalazł indeksu wtyczek. Proszę sprawdź, czy adres indeksu jest poprawny",
+ "PluginMetaInvalid": "Przepraszamy, walidacja metadanych wtyczki nie powiodła się. Proszę sprawdź, czy format metadanych wtyczki jest zgodny z normami",
+ "PluginMetaNotFound": "Przepraszamy, nie znaleziono metadanych wtyczki w indeksie. Sprawdź, czy informacje konfiguracyjne wtyczki są obecne w indeksie",
+ "PluginOpenApiInitError": "Przepraszamy, inicjalizacja klienta OpenAPI nie powiodła się. Proszę sprawdź, czy informacje konfiguracyjne OpenAPI są poprawne",
+ "PluginServerError": "Błąd zwrócony przez serwer wtyczki. Proszę sprawdź plik opisowy wtyczki, konfigurację wtyczki lub implementację serwera zgodnie z poniższymi informacjami o błędzie",
+ "PluginSettingsInvalid": "Ta wtyczka wymaga poprawnej konfiguracji przed użyciem. Proszę sprawdź, czy Twoja konfiguracja jest poprawna",
+ "ProviderBizError": "Wystąpił błąd usługi {{provider}}, proszę sprawdzić poniższe informacje lub spróbować ponownie",
+ "StreamChunkError": "Błąd analizy bloku wiadomości w żądaniu strumieniowym. Proszę sprawdzić, czy aktualny interfejs API jest zgodny z normami, lub skontaktować się z dostawcą API w celu uzyskania informacji.",
+ "SubscriptionPlanLimit": "Wykorzystałeś limit swojego abonamentu i nie możesz korzystać z tej funkcji. Proszę uaktualnić do wyższego planu lub zakupić dodatkowy pakiet zasobów, aby kontynuować korzystanie.",
+ "UnknownChatFetchError": "Przykro nam, wystąpił nieznany błąd żądania. Proszę sprawdzić poniższe informacje lub spróbować ponownie."
+ },
+ "stt": {
+ "responseError": "Błąd żądania usługi. Proszę sprawdź konfigurację i spróbuj ponownie"
+ },
+ "tts": {
+ "responseError": "Błąd żądania usługi. Proszę sprawdź konfigurację i spróbuj ponownie"
+ },
+ "unlock": {
+ "addProxyUrl": "Dodaj adres proxy OpenAI (opcjonalnie)",
+ "apiKey": {
+ "description": "Wprowadź swój Klucz API {{name}}, aby rozpocząć sesję.",
+ "title": "Użyj niestandardowego Klucza API {{name}}"
+ },
+ "closeMessage": "Zamknij komunikat",
+ "confirm": "Potwierdź i spróbuj ponownie",
+ "oauth": {
+ "description": "Administrator włączył jednolite uwierzytelnianie logowania. Kliknij poniższy przycisk, aby się zalogować i odblokować aplikację.",
+ "success": "Zalogowano pomyślnie",
+ "title": "Zaloguj się",
+ "welcome": "Witaj!"
+ },
+ "password": {
+ "description": "Administrator włączył szyfrowanie aplikacji. Po wprowadzeniu hasła aplikacja zostanie odblokowana. Hasło należy wprowadzić tylko raz.",
+ "placeholder": "Wprowadź hasło",
+ "title": "Wprowadź hasło, aby odblokować aplikację"
+ },
+ "tabs": {
+ "apiKey": "Niestandardowy klucz API",
+ "password": "Hasło"
+ }
+ },
+ "upload": {
+ "desc": "Szczegóły: {{detail}}",
+ "fileOnlySupportInServerMode": "Aktualny tryb wdrożenia nie obsługuje przesyłania plików, które nie są obrazami. Aby przesłać pliki w formacie {{ext}}, przełącz się na wdrożenie bazy danych na serwerze lub skorzystaj z usługi {{cloud}}.",
+ "networkError": "Proszę upewnić się, że Twoja sieć działa poprawnie oraz sprawdzić, czy konfiguracja CORS dla usługi przechowywania plików jest prawidłowa.",
+ "title": "Nie udało się przesłać pliku. Sprawdź połączenie sieciowe lub spróbuj ponownie później",
+ "unknownError": "Przyczyna błędu: {{reason}}",
+ "uploadFailed": "Wysyłanie pliku nie powiodło się."
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/file.json b/DigitalHumanWeb/locales/pl-PL/file.json
new file mode 100644
index 0000000..35c3e0c
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Zarządzaj swoimi plikami i bazą wiedzy",
+ "detail": {
+ "basic": {
+ "createdAt": "Data utworzenia",
+ "filename": "Nazwa pliku",
+ "size": "Rozmiar pliku",
+ "title": "Informacje podstawowe",
+ "type": "Format",
+ "updatedAt": "Data aktualizacji"
+ },
+ "data": {
+ "chunkCount": "Liczba fragmentów",
+ "embedding": {
+ "default": "Nie zindeksowano",
+ "error": "Błąd",
+ "pending": "Oczekiwanie na uruchomienie",
+ "processing": "W trakcie przetwarzania",
+ "success": "Zakończono"
+ },
+ "embeddingStatus": "Indeksowanie"
+ }
+ },
+ "empty": "Brak przesłanych plików/folderów",
+ "header": {
+ "actions": {
+ "newFolder": "Nowy folder",
+ "uploadFile": "Prześlij plik",
+ "uploadFolder": "Prześlij folder"
+ },
+ "uploadButton": "Prześlij"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Zaraz usuniesz tę bazę wiedzy, pliki w niej zawarte nie zostaną usunięte, a zostaną przeniesione do folderu wszystkich plików. Po usunięciu bazy wiedzy nie będzie można jej przywrócić, proszę działać ostrożnie.",
+ "empty": "Kliknij <1>+</1>, aby rozpocząć tworzenie bazy wiedzy"
+ },
+ "new": "Nowa baza wiedzy",
+ "title": "Baza wiedzy"
+ },
+ "networkError": "Nie udało się uzyskać dostępu do bazy wiedzy, proszę sprawdzić połączenie sieciowe i spróbować ponownie",
+ "notSupportGuide": {
+ "desc": "Obecna instancja wdrożeniowa jest w trybie bazy danych klienta, co uniemożliwia korzystanie z funkcji zarządzania plikami. Proszę przełączyć się na <1>tryb wdrożenia bazy danych serwera</1> lub bezpośrednio korzystać z <3>LobeChat Cloud</3>",
+ "features": {
+ "allKind": {
+ "desc": "Obsługuje popularne typy plików, w tym formaty dokumentów takie jak Word, PPT, Excel, PDF, TXT oraz popularne pliki kodu, takie jak JS, Python",
+ "title": "Analiza różnych typów plików"
+ },
+ "embeddings": {
+ "desc": "Wykorzystuje wysokowydajne modele wektorowe do indeksowania fragmentów tekstu, umożliwiając semantyczne przeszukiwanie treści plików",
+ "title": "Semantyzacja wektorów"
+ },
+ "repos": {
+ "desc": "Obsługuje tworzenie bazy wiedzy i pozwala na dodawanie różnych typów plików, budując swoją własną wiedzę w danej dziedzinie",
+ "title": "Baza wiedzy"
+ }
+ },
+ "title": "Obecny tryb wdrożenia nie obsługuje zarządzania plikami"
+ },
+ "preview": {
+ "downloadFile": "Pobierz plik",
+ "unsupportedFileAndContact": "Ten format pliku nie jest obecnie obsługiwany w podglądzie online. Jeśli chcesz uzyskać podgląd, zachęcamy do <1>skontaktowania się z nami</1>."
+ },
+ "searchFilePlaceholder": "Szukaj pliku",
+ "tab": {
+ "all": "Wszystkie pliki",
+ "audios": "Audio",
+ "documents": "Dokumenty",
+ "images": "Obrazy",
+ "videos": "Wideo",
+ "websites": "Strony internetowe"
+ },
+ "title": "Pliki",
+ "uploadDock": {
+ "body": {
+ "collapse": "Zwiń",
+ "item": {
+ "done": "Przesłano",
+ "error": "Błąd przesyłania, spróbuj ponownie",
+ "pending": "Przygotowanie do przesłania...",
+ "processing": "Przetwarzanie pliku...",
+ "restTime": "Pozostały czas {{time}}"
+ }
+ },
+ "totalCount": "Łącznie {{count}} pozycji",
+ "uploadStatus": {
+ "error": "Błąd przesyłania",
+ "pending": "Oczekiwanie na przesłanie",
+ "processing": "Przesyłanie",
+ "success": "Przesyłanie zakończone",
+ "uploading": "Przesyłanie w toku"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/knowledgeBase.json b/DigitalHumanWeb/locales/pl-PL/knowledgeBase.json
new file mode 100644
index 0000000..d179f31
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "Plik dodany pomyślnie, <1>zobacz teraz</1>",
+ "confirm": "Dodaj",
+ "id": {
+ "placeholder": "Wybierz bazę wiedzy do dodania",
+ "required": "Wybierz bazę wiedzy",
+ "title": "Docelowa baza wiedzy"
+ },
+ "title": "Dodaj do bazy wiedzy",
+ "totalFiles": "Wybrano {{count}} plików"
+ },
+ "createNew": {
+ "confirm": "Utwórz",
+ "description": {
+ "placeholder": "Opis bazy wiedzy (opcjonalnie)"
+ },
+ "formTitle": "Podstawowe informacje",
+ "name": {
+ "placeholder": "Nazwa bazy wiedzy",
+ "required": "Proszę wpisać nazwę bazy wiedzy"
+ },
+ "title": "Utwórz nową bazę wiedzy"
+ },
+ "tab": {
+ "evals": "Oceny",
+ "files": "Dokumenty",
+ "settings": "Ustawienia",
+ "testing": "Testowanie przywołania"
+ },
+ "title": "Baza wiedzy"
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/market.json b/DigitalHumanWeb/locales/pl-PL/market.json
new file mode 100644
index 0000000..cc9f74f
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Dodaj asystenta",
+ "addAgentAndConverse": "Dodaj asystenta i rozpocznij rozmowę",
+ "addAgentSuccess": "Dodano pomyślnie",
+ "guide": {
+ "func1": {
+ "desc1": "Klikając w prawym górnym rogu okna rozmowy, przejdź do strony ustawień asystenta, którego chcesz zgłosić.",
+ "desc2": "Kliknij przycisk zgłoszenia na rynku asystentów w prawym górnym rogu.",
+ "tag": "Metoda pierwsza",
+ "title": "Zgłoszenie przez LobeChat"
+ },
+ "func2": {
+ "button": "Przejdź do repozytorium asystentów na Githubie",
+ "desc": "Jeśli chcesz dodać asystenta do indeksu, użyj pliku agent-template.json lub agent-template-full.json, aby utworzyć wpis w katalogu wtyczek, napisz krótki opis i odpowiednio oznacz, a następnie utwórz żądanie ściągnięcia.",
+ "tag": "Metoda druga",
+ "title": "Zgłoszenie przez GitHub"
+ }
+ },
+ "search": {
+ "placeholder": "Wyszukaj nazwę asystenta, opis lub słowa kluczowe..."
+ },
+ "sidebar": {
+ "comment": "Komentarze",
+ "prompt": "Podpowiedź",
+ "title": "Szczegóły asystenta"
+ },
+ "submitAgent": "Zgłoś asystenta",
+ "title": {
+ "allAgents": "Wszyscy asystenci",
+ "recentSubmits": "Ostatnie dodane"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/metadata.json b/DigitalHumanWeb/locales/pl-PL/metadata.json
new file mode 100644
index 0000000..abfb8db
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} oferuje najlepsze doświadczenia z ChatGPT, Claude, Gemini, OLLaMA WebUI",
+ "title": "{{appName}}: osobiste narzędzie AI, które daje ci mądrzejszy umysł"
+ },
+ "discover": {
+ "assistants": {
+ "description": "Tworzenie treści, copywriting, pytania i odpowiedzi, generowanie obrazów, generowanie wideo, generowanie głosu, inteligentny agent, automatyzacja przepływów pracy, dostosuj swojego osobistego asystenta AI / GPTs / OLLaMA",
+ "title": "Asystenci AI"
+ },
+ "description": "Tworzenie treści, copywriting, pytania i odpowiedzi, generowanie obrazów, generowanie wideo, generowanie głosu, inteligentny agent, automatyzacja przepływów pracy, dostosowane aplikacje AI, stwórz swoje osobiste stanowisko pracy AI",
+ "models": {
+ "description": "Odkryj popularne modele AI OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "Modele AI"
+ },
+ "plugins": {
+ "description": "Odkrywaj generatory wykresów, akademickie, generatory obrazów, generatory wideo, generatory głosu oraz zautomatyzowane przepływy pracy, aby wzbogacić możliwości swojego asystenta dzięki różnorodnym wtyczkom.",
+ "title": "Wtyczki AI"
+ },
+ "providers": {
+ "description": "Odkryj głównych dostawców modeli OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "Dostawcy usług modeli AI"
+ },
+ "search": "Szukaj",
+ "title": "Odkryj"
+ },
+ "plugins": {
+ "description": "Wyszukiwanie, generowanie wykresów, akademickie, generowanie obrazów, generowanie wideo, generowanie głosu, automatyzacja przepływów pracy, dostosuj możliwości wtyczek ToolCall dla ChatGPT / Claude",
+ "title": "Rynek wtyczek"
+ },
+ "welcome": {
+ "description": "{{appName}} oferuje najlepsze doświadczenia z ChatGPT, Claude, Gemini, OLLaMA WebUI",
+ "title": "Witamy w {{appName}}: osobistym narzędziu AI, które daje ci mądrzejszy umysł"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/migration.json b/DigitalHumanWeb/locales/pl-PL/migration.json
new file mode 100644
index 0000000..7b74404
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Wyczyść lokalne dane",
+ "downloadBackup": "Pobierz kopię zapasową",
+ "reUpgrade": "Ponowne uaktualnienie",
+ "start": "Rozpocznij korzystanie",
+ "upgrade": "Uaktualnij"
+ },
+ "clear": {
+ "confirm": "Czy na pewno chcesz wyczyścić lokalne dane (ustawienia globalne nie zostaną dotknięte)? Upewnij się, że masz pobraną kopię zapasową danych."
+ },
+ "description": "W nowej wersji przechowywanie danych w {{appName}} dokonało ogromnego skoku naprzód. Dlatego musimy zaktualizować dane z poprzedniej wersji, aby zapewnić Ci lepsze doświadczenia użytkownika.",
+ "features": {
+ "capability": {
+ "desc": "Oparte na technologii IndexedDB, wystarczającej, aby pomieścić wszystkie Twoje wiadomości z życia.",
+ "title": "Duża pojemność"
+ },
+ "performance": {
+ "desc": "Automatyczne indeksowanie milionów wiadomości, z czasem odpowiedzi na zapytania w milisekundach.",
+ "title": "Wysoka wydajność"
+ },
+ "use": {
+ "desc": "Obsługuje wyszukiwanie według tytułów, opisów, tagów, treści wiadomości, a nawet tekstów tłumaczeń, znacznie zwiększając efektywność codziennego wyszukiwania.",
+ "title": "Łatwiejsze w użyciu"
+ }
+ },
+ "title": "Ewolucja danych {{appName}}",
+ "upgrade": {
+ "error": {
+ "subTitle": "Przykro nam, wystąpił błąd podczas procesu aktualizacji bazy danych. Proszę spróbować następujących rozwiązań: A. Wyczyść lokalne dane, a następnie ponownie zaimportuj dane z kopii zapasowej; B. Kliknij przycisk „Ponownie zaktualizuj”.\nJeśli problem nadal występuje, proszę <1>zgłosić problem</1>, a my jak najszybciej pomożemy Ci go rozwiązać.",
+ "title": "Aktualizacja bazy danych nie powiodła się"
+ },
+ "success": {
+ "subTitle": "Baza danych {{appName}} została zaktualizowana do najnowszej wersji, zacznij korzystać już teraz.",
+ "title": "Aktualizacja bazy danych powiodła się"
+ }
+ },
+ "upgradeTip": "Aktualizacja zajmuje zazwyczaj 10-20 sekund, w trakcie aktualizacji nie zamykaj {{appName}}."
+ },
+ "migrateError": {
+ "missVersion": "Importowane dane nie zawierają numeru wersji. Prosimy sprawdzić plik i spróbować ponownie.",
+ "noMigration": "Nie znaleziono planu migracji dla bieżącej wersji. Prosimy sprawdzić numer wersji i spróbować ponownie. Jeśli problem nadal występuje, prosimy zgłosić problem."
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/modelProvider.json b/DigitalHumanWeb/locales/pl-PL/modelProvider.json
new file mode 100644
index 0000000..75f611b
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Wersja API Azure, stosuj format YYYY-MM-DD, zobacz [najnowszą wersję](https://learn.microsoft.com/pl-pl/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Pobierz listę",
+ "title": "Wersja Azure API"
+ },
+ "empty": "Wprowadź identyfikator modelu, aby dodać pierwszy model",
+ "endpoint": {
+ "desc": "Wartość można znaleźć w sekcji 'Klucze i punkty końcowe' podczas sprawdzania zasobu w portalu Azure",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Adres API Azure"
+ },
+ "modelListPlaceholder": "Wybierz lub dodaj model OpenAI, który wdrożyłeś",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Wartość można znaleźć w sekcji 'Klucze i punkty końcowe' podczas sprawdzania zasobu w portalu Azure. Możesz użyć KEY1 lub KEY2",
+ "placeholder": "Azure API Key",
+ "title": "Klucz API"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Wprowadź AWS Access Key Id",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "Test, czy AWS AccessKeyId / SecretAccessKey są poprawnie wypełnione"
+ },
+ "region": {
+ "desc": "Wprowadź AWS Region",
+ "placeholder": "AWS Region",
+ "title": "AWS Region"
+ },
+ "secretAccessKey": {
+ "desc": "Wprowadź AWS Secret Access Key",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "Jeśli korzystasz z AWS SSO/STS, wprowadź swój token sesji AWS",
+ "placeholder": "Token sesji AWS",
+ "title": "Token sesji AWS (opcjonalnie)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Niestandardowy region usługi",
+ "customSessionToken": "Niestandardowy token sesji",
+ "description": "Wprowadź swój AWS AccessKeyId / SecretAccessKey, aby rozpocząć sesję. Aplikacja nie będzie przechowywać Twojej konfiguracji uwierzytelniania",
+ "title": "Użyj niestandardowych informacji uwierzytelniających Bedrock"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Wprowadź swój osobisty token dostępu GitHub (PAT), kliknij [tutaj](https://github.com/settings/tokens), aby go utworzyć",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Test, czy adres proxy jest poprawnie wypełniony",
+ "title": "Sprawdzanie łączności"
+ },
+ "customModelName": {
+ "desc": "Dodaj własny model, oddzielaj modele przecinkiem (,)",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Nazwa własnego modelu"
+ },
+ "download": {
+ "desc": "Ollama pobiera obecnie ten model. Postaraj się nie zamykać tej strony. Po przerwaniu pobieranie zostanie wznowione od miejsca, w którym zostało zatrzymane.",
+ "remainingTime": "Pozostały czas",
+ "speed": "Prędkość pobierania",
+ "title": "Pobieranie modelu {{model}}"
+ },
+ "endpoint": {
+ "desc": "Wprowadź adres interfejsu REST API Ollama; jeśli nie określono go lokalnie, pozostaw pole puste",
+ "title": "Adres proxy API"
+ },
+ "setup": {
+ "cors": {
+ "description": "Ze względu na ograniczenia bezpieczeństwa przeglądarki musisz skonfigurować dostęp między domenami (CORS), aby Ollama działała poprawnie.",
+ "linux": {
+ "env": "W sekcji [Service] dodaj `Environment` i ustaw zmienną środowiskową OLLAMA_ORIGINS:",
+ "reboot": "Przeładuj systemd i uruchom ponownie Ollamę.",
+ "systemd": "Użyj systemd, aby edytować usługę ollama:"
+ },
+ "macos": "Otwórz aplikację Terminal, wklej poniższe polecenie i naciśnij Enter, aby je uruchomić.",
+ "reboot": "Po zakończeniu wykonywania uruchom ponownie usługę Ollama.",
+ "title": "Konfiguracja dostępu między domenami dla Ollamy",
+ "windows": "W systemie Windows przejdź do „Panelu sterowania” i edytuj zmienne środowiskowe systemu. Utwórz zmienną środowiskową o nazwie „OLLAMA_ORIGINS” dla swojego konta użytkownika, ustaw jej wartość na „*” i kliknij „OK/Zastosuj”, aby zapisać."
+ },
+ "install": {
+ "description": "Upewnij się, że masz zainstalowaną Ollamę. Jeśli nie, pobierz ją ze strony internetowej <1>tutaj</1>",
+ "docker": "Jeśli wolisz używać Dockera, Ollama udostępnia również oficjalny obraz Dockera, który możesz pobrać za pomocą poniższego polecenia:",
+ "linux": {
+ "command": "Zainstaluj za pomocą poniższego polecenia:",
+ "manual": "Możesz też skorzystać z <1>instrukcji ręcznej instalacji dla systemu Linux</1>, aby przeprowadzić instalację samodzielnie."
+ },
+ "title": "Zainstaluj i uruchom Ollamę lokalnie",
+ "windowsTab": "Windows (wersja zapoznawcza)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Anuluj pobieranie",
+ "confirm": "Pobierz",
+ "description": "Wprowadź etykietę modelu Ollama, aby kontynuować sesję",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Rozpoczynanie pobierania...",
+ "title": "Pobierz wskazany model Ollama"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Zero Jeden Wszystko"
+ },
+ "zhipu": {
+ "title": "Zhipu"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/models.json b/DigitalHumanWeb/locales/pl-PL/models.json
new file mode 100644
index 0000000..e0b3870
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B, dzięki bogatym próbom treningowym, oferuje doskonałe wyniki w zastosowaniach branżowych."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B obsługuje 16K tokenów, oferując wydajne i płynne zdolności generowania języka."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro, jako ważny członek serii modeli AI 360, zaspokaja różnorodne potrzeby aplikacji przetwarzania języka naturalnego dzięki wydajnym zdolnościom przetwarzania tekstu, obsługując zrozumienie długich tekstów i wielokrotne dialogi."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo oferuje potężne zdolności obliczeniowe i dialogowe, charakteryzując się doskonałym rozumieniem semantycznym i wydajnością generacyjną, stanowiąc idealne rozwiązanie dla firm i deweloperów jako inteligentny asystent."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K kładzie nacisk na bezpieczeństwo semantyczne i odpowiedzialność, zaprojektowany specjalnie dla aplikacji o wysokich wymaganiach dotyczących bezpieczeństwa treści, zapewniając dokładność i stabilność doświadczeń użytkowników."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro to zaawansowany model przetwarzania języka naturalnego wydany przez firmę 360, charakteryzujący się doskonałymi zdolnościami generowania i rozumienia tekstu, szczególnie w obszarze generowania i tworzenia treści, zdolny do obsługi skomplikowanych zadań związanych z konwersją językową i odgrywaniem ról."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra to najsilniejsza wersja w serii modeli Spark, która, oprócz ulepszonego łącza wyszukiwania w sieci, zwiększa zdolność rozumienia i podsumowywania treści tekstowych. Jest to kompleksowe rozwiązanie mające na celu zwiększenie wydajności biurowej i dokładne odpowiadanie na potrzeby, stanowiące inteligentny produkt wiodący w branży."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Wykorzystuje technologię wzmacniania wyszukiwania, aby połączyć duży model z wiedzą branżową i wiedzą z całej sieci. Obsługuje przesyłanie różnych dokumentów, takich jak PDF, Word, oraz wprowadzanie adresów URL, zapewniając szybki i kompleksowy dostęp do informacji oraz dokładne i profesjonalne wyniki."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Optymalizowany pod kątem częstych scenariuszy biznesowych, znacznie poprawiający efektywność i oferujący korzystny stosunek jakości do ceny. W porównaniu do modelu Baichuan2, generowanie treści wzrosło o 20%, pytania o wiedzę o 17%, a zdolności odgrywania ról o 40%. Ogólna wydajność jest lepsza niż GPT3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "Oferuje 128K ultra długi kontekst, zoptymalizowany pod kątem częstych scenariuszy biznesowych, znacznie poprawiający efektywność i oferujący korzystny stosunek jakości do ceny. W porównaniu do modelu Baichuan2, generowanie treści wzrosło o 20%, pytania o wiedzę o 17%, a zdolności odgrywania ról o 40%. Ogólna wydajność jest lepsza niż GPT3.5."
+ },
+ "Baichuan4": {
+ "description": "Model o najwyższej wydajności w kraju, przewyższający zagraniczne modele w zadaniach związanych z encyklopedią, długimi tekstami i generowaniem treści w języku chińskim. Posiada również wiodące w branży zdolności multimodalne, osiągając doskonałe wyniki w wielu autorytatywnych testach."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) to innowacyjny model, idealny do zastosowań w wielu dziedzinach i złożonych zadań."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K ma dużą zdolność przetwarzania kontekstu, lepsze zrozumienie kontekstu i zdolności logicznego rozumowania, obsługując teksty o długości 32K tokenów, odpowiednie do czytania długich dokumentów, prywatnych pytań o wiedzę i innych scenariuszy."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO to wysoce elastyczna fuzja wielu modeli, mająca na celu zapewnienie doskonałego doświadczenia twórczego."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) to model poleceń o wysokiej precyzji, idealny do złożonych obliczeń."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) oferuje zoptymalizowane wyjście językowe i różnorodne możliwości zastosowania."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Odświeżona wersja modelu Phi-3-mini."
+ },
+ "Phi-3-medium-128k-instruct": {
+    "description": "Ten sam model Phi-3-medium, ale z większym rozmiarem kontekstu, przeznaczony do RAG lub podpowiedzi typu few-shot."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "Model z 14 miliardami parametrów, oferujący lepszą jakość niż Phi-3-mini, z naciskiem na dane o wysokiej jakości i gęstości rozumowania."
+ },
+ "Phi-3-mini-128k-instruct": {
+    "description": "Ten sam model Phi-3-mini, ale z większym rozmiarem kontekstu, przeznaczony do RAG lub podpowiedzi typu few-shot."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "Najmniejszy członek rodziny Phi-3. Zoptymalizowany zarówno pod kątem jakości, jak i niskiej latencji."
+ },
+ "Phi-3-small-128k-instruct": {
+    "description": "Ten sam model Phi-3-small, ale z większym rozmiarem kontekstu, przeznaczony do RAG lub podpowiedzi typu few-shot."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "Model z 7 miliardami parametrów, oferujący lepszą jakość niż Phi-3-mini, z naciskiem na dane o wysokiej jakości i gęstości rozumowania."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K ma wyjątkową zdolność przetwarzania kontekstu, mogąc obsługiwać do 128K informacji kontekstowych, szczególnie odpowiedni do analizy całościowej i długoterminowego przetwarzania logicznego w długich tekstach, zapewniając płynne i spójne logicznie komunikowanie się oraz różnorodne wsparcie cytatów."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Jako wersja testowa Qwen2, Qwen1.5 wykorzystuje dużą ilość danych do osiągnięcia bardziej precyzyjnych funkcji dialogowych."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) oferuje szybkie odpowiedzi i naturalne umiejętności dialogowe, idealne do środowisk wielojęzycznych."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 to zaawansowany uniwersalny model językowy, wspierający różne typy poleceń."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 to nowa seria dużych modeli językowych, zaprojektowana w celu optymalizacji przetwarzania zadań instrukcyjnych."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 to nowa seria dużych modeli językowych, zaprojektowana w celu optymalizacji przetwarzania zadań instrukcyjnych."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 to nowa seria dużych modeli językowych, z silniejszymi zdolnościami rozumienia i generacji."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 to nowa seria dużych modeli językowych, zaprojektowana w celu optymalizacji przetwarzania zadań instrukcyjnych."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder koncentruje się na pisaniu kodu."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math koncentruje się na rozwiązywaniu problemów w dziedzinie matematyki, oferując profesjonalne odpowiedzi na trudne pytania."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B to otwarta wersja, oferująca zoptymalizowane doświadczenie dialogowe dla aplikacji konwersacyjnych."
+ },
+ "abab5.5-chat": {
+ "description": "Skierowany do scenariuszy produkcyjnych, wspierający przetwarzanie złożonych zadań i efektywne generowanie tekstu, odpowiedni do zastosowań w profesjonalnych dziedzinach."
+ },
+ "abab5.5s-chat": {
+ "description": "Zaprojektowany specjalnie do scenariuszy dialogowych w języku chińskim, oferujący wysokiej jakości generowanie dialogów w języku chińskim, odpowiedni do różnych zastosowań."
+ },
+ "abab6.5g-chat": {
+ "description": "Zaprojektowany specjalnie do dialogów z wielojęzycznymi postaciami, wspierający wysokiej jakości generowanie dialogów w języku angielskim i innych językach."
+ },
+ "abab6.5s-chat": {
+ "description": "Odpowiedni do szerokiego zakresu zadań przetwarzania języka naturalnego, w tym generowania tekstu, systemów dialogowych itp."
+ },
+ "abab6.5t-chat": {
+ "description": "Optymalizowany do scenariuszy dialogowych w języku chińskim, oferujący płynne i zgodne z chińskimi zwyczajami generowanie dialogów."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+    "description": "Otwartoźródłowy model wywoływania funkcji od Fireworks, oferujący doskonałe możliwości wykonywania poleceń i otwarte, konfigurowalne cechy."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Firefunction-v2, najnowszy model firmy Fireworks, to wydajny model wywołań funkcji, opracowany na bazie Llama-3, zoptymalizowany do wywołań funkcji, dialogów i śledzenia poleceń."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b to model językowy wizualny, który może jednocześnie przyjmować obrazy i tekst, przeszkolony na wysokiej jakości danych, idealny do zadań multimodalnych."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Model Gemma 2 9B Instruct, oparty na wcześniejszej technologii Google, idealny do zadań generowania tekstu, takich jak odpowiadanie na pytania, podsumowywanie i wnioskowanie."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Model Llama 3 70B Instruct, zaprojektowany do wielojęzycznych dialogów i rozumienia języka naturalnego, przewyższa większość konkurencyjnych modeli."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Model Llama 3 70B Instruct (wersja HF), zgodny z wynikami oficjalnej implementacji, idealny do wysokiej jakości zadań śledzenia poleceń."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Model Llama 3 8B Instruct, zoptymalizowany do dialogów i zadań wielojęzycznych, oferuje doskonałe i efektywne osiągi."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Model Llama 3 8B Instruct (wersja HF), zgodny z wynikami oficjalnej implementacji, zapewnia wysoką spójność i kompatybilność międzyplatformową."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Model Llama 3.1 405B Instruct, z ogromną liczbą parametrów, idealny do złożonych zadań i śledzenia poleceń w scenariuszach o dużym obciążeniu."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Model Llama 3.1 70B Instruct oferuje doskonałe możliwości rozumienia i generowania języka, idealny do zadań dialogowych i analitycznych."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Model Llama 3.1 8B Instruct, zoptymalizowany do wielojęzycznych dialogów, potrafi przewyższyć większość modeli open source i closed source w powszechnych standardach branżowych."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Model Mixtral MoE 8x22B Instruct, z dużą liczbą parametrów i architekturą wielu ekspertów, kompleksowo wspierający efektywne przetwarzanie złożonych zadań."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Model Mixtral MoE 8x7B Instruct, architektura wielu ekspertów, oferująca efektywne śledzenie i wykonanie poleceń."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Model Mixtral MoE 8x7B Instruct (wersja HF), wydajność zgodna z oficjalną implementacją, idealny do różnych scenariuszy efektywnych zadań."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "Model MythoMax L2 13B, łączący nowatorskie techniki łączenia, doskonały w narracji i odgrywaniu ról."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Model Phi 3 Vision Instruct, lekki model multimodalny, zdolny do przetwarzania złożonych informacji wizualnych i tekstowych, z silnymi zdolnościami wnioskowania."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "Model StarCoder 15.5B, wspierający zaawansowane zadania programistyczne, z wzmocnionymi możliwościami wielojęzycznymi, idealny do złożonego generowania i rozumienia kodu."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "Model StarCoder 7B, przeszkolony w ponad 80 językach programowania, oferujący doskonałe możliwości uzupełniania kodu i rozumienia kontekstu."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Model Yi-Large, oferujący doskonałe możliwości przetwarzania wielojęzycznego, nadający się do różnych zadań generowania i rozumienia języka."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "Model wielojęzyczny z 398 miliardami parametrów (94 miliardy aktywnych), oferujący okno kontekstowe o długości 256K, wywoływanie funkcji, strukturalne wyjście i generację opartą na kontekście."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "Model wielojęzyczny z 52 miliardami parametrów (12 miliardów aktywnych), oferujący okno kontekstowe o długości 256K, wywoływanie funkcji, strukturalne wyjście i generację opartą na kontekście."
+ },
+ "ai21-jamba-instruct": {
+ "description": "Model LLM oparty na Mamba, zaprojektowany do osiągania najlepszej wydajności, jakości i efektywności kosztowej."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet podnosi standardy branżowe, przewyższając modele konkurencji oraz Claude 3 Opus, osiągając doskonałe wyniki w szerokim zakresie ocen, jednocześnie oferując szybkość i koszty na poziomie naszych modeli średniej klasy."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku to najszybszy i najbardziej kompaktowy model od Anthropic, oferujący niemal natychmiastową szybkość odpowiedzi. Może szybko odpowiadać na proste zapytania i prośby. Klienci będą mogli budować płynne doświadczenia AI, które naśladują interakcje międzyludzkie. Claude 3 Haiku może przetwarzać obrazy i zwracać wyjścia tekstowe, z oknem kontekstowym wynoszącym 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus to najpotężniejszy model AI od Anthropic, z najnowocześniejszymi osiągami w wysoko złożonych zadaniach. Może obsługiwać otwarte podpowiedzi i nieznane scenariusze, oferując doskonałą płynność i ludzkie zdolności rozumienia. Claude 3 Opus pokazuje granice możliwości generatywnej AI. Claude 3 Opus może przetwarzać obrazy i zwracać wyjścia tekstowe, z oknem kontekstowym wynoszącym 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Claude 3 Sonnet od Anthropic osiąga idealną równowagę między inteligencją a szybkością — szczególnie odpowiedni do obciążeń roboczych w przedsiębiorstwach. Oferuje maksymalną użyteczność po niższej cenie niż konkurencja i został zaprojektowany jako niezawodny, wytrzymały model główny, odpowiedni do skalowalnych wdrożeń AI. Claude 3 Sonnet może przetwarzać obrazy i zwracać wyjścia tekstowe, z oknem kontekstowym wynoszącym 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Szybki, ekonomiczny model, który wciąż jest bardzo zdolny, może obsługiwać szereg zadań, w tym codzienne rozmowy, analizę tekstu, podsumowania i pytania dotyczące dokumentów."
+ },
+ "anthropic.claude-v2": {
+ "description": "Model Anthropic wykazuje wysokie zdolności w szerokim zakresie zadań, od złożonych rozmów i generowania treści kreatywnych po szczegółowe przestrzeganie instrukcji."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "Zaktualizowana wersja Claude 2, z podwójnym oknem kontekstowym oraz poprawioną niezawodnością, wskaźnikiem halucynacji i dokładnością opartą na dowodach w kontekście długich dokumentów i RAG."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku to najszybszy i najbardziej kompaktowy model Anthropic, zaprojektowany do niemal natychmiastowych odpowiedzi. Oferuje szybkie i dokładne wyniki w ukierunkowanych zadaniach."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus to najpotężniejszy model Anthropic do obsługi wysoce złożonych zadań. Wyróżnia się doskonałymi osiągami, inteligencją, płynnością i zdolnością rozumienia."
+ },
+ "anthropic/claude-3.5-sonnet": {
+    "description": "Claude 3.5 Sonnet oferuje możliwości przewyższające Opus oraz szybsze działanie niż Claude 3 Sonnet, zachowując tę samą cenę. Sonnet szczególnie dobrze radzi sobie z programowaniem, nauką o danych, przetwarzaniem wizualnym i zadaniami agenta."
+ },
+ "aya": {
+ "description": "Aya 23 to model wielojęzyczny wydany przez Cohere, wspierający 23 języki, ułatwiający różnorodne zastosowania językowe."
+ },
+ "aya:35b": {
+ "description": "Aya 23 to model wielojęzyczny wydany przez Cohere, wspierający 23 języki, ułatwiający różnorodne zastosowania językowe."
+ },
+ "charglm-3": {
+    "description": "CharGLM-3 został zaprojektowany z myślą o odgrywaniu ról i emocjonalnym towarzyszeniu; obsługuje ultra-długą pamięć wielu rund dialogu oraz spersonalizowane rozmowy, znajdując szerokie zastosowanie."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o to dynamiczny model, który jest na bieżąco aktualizowany, aby utrzymać najnowszą wersję. Łączy potężne zdolności rozumienia i generowania języka, co czyni go odpowiednim do zastosowań na dużą skalę, w tym obsługi klienta, edukacji i wsparcia technicznego."
+ },
+ "claude-2.0": {
+    "description": "Claude 2 oferuje postępy w kluczowych możliwościach dla przedsiębiorstw, w tym wiodący w branży kontekst 200K tokenów, znacznie zmniejszoną częstość występowania halucynacji modelu, podpowiedzi systemowe oraz nową funkcję testową: wywoływanie narzędzi."
+ },
+ "claude-2.1": {
+    "description": "Claude 2 oferuje postępy w kluczowych możliwościach dla przedsiębiorstw, w tym wiodący w branży kontekst 200K tokenów, znacznie zmniejszoną częstość występowania halucynacji modelu, podpowiedzi systemowe oraz nową funkcję testową: wywoływanie narzędzi."
+ },
+ "claude-3-5-sonnet-20240620": {
+    "description": "Claude 3.5 Sonnet oferuje możliwości przewyższające Opus oraz szybsze działanie niż Claude 3 Sonnet, przy zachowaniu tej samej ceny. Sonnet szczególnie dobrze radzi sobie z programowaniem, nauką o danych, przetwarzaniem wizualnym i zadaniami agenta."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku to najszybszy i najbardziej kompaktowy model Anthropic, zaprojektowany do osiągania niemal natychmiastowych odpowiedzi. Oferuje szybkie i dokładne wyniki w ukierunkowanych zadaniach."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus to najpotężniejszy model Anthropic do przetwarzania wysoce złożonych zadań. Wykazuje doskonałe osiągi w zakresie wydajności, inteligencji, płynności i zrozumienia."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet zapewnia idealną równowagę między inteligencją a szybkością dla obciążeń roboczych w przedsiębiorstwach. Oferuje maksymalną użyteczność przy niższej cenie, jest niezawodny i odpowiedni do dużych wdrożeń."
+ },
+ "claude-instant-1.2": {
+ "description": "Model Anthropic przeznaczony do generowania tekstu o niskim opóźnieniu i wysokiej przepustowości, wspierający generowanie setek stron tekstu."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 to potężny asystent programowania AI, obsługujący inteligentne pytania i odpowiedzi oraz uzupełnianie kodu w różnych językach programowania, zwiększając wydajność programistów."
+ },
+ "codegemma": {
+ "description": "CodeGemma to lekki model językowy, specjalizujący się w różnych zadaniach programistycznych, wspierający szybkie iteracje i integrację."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma to lekki model językowy, specjalizujący się w różnych zadaniach programistycznych, wspierający szybkie iteracje i integrację."
+ },
+ "codellama": {
+ "description": "Code Llama to model LLM skoncentrowany na generowaniu i dyskusji kodu, łączący wsparcie dla szerokiego zakresu języków programowania, odpowiedni do środowisk deweloperskich."
+ },
+ "codellama:13b": {
+ "description": "Code Llama to model LLM skoncentrowany na generowaniu i dyskusji kodu, łączący wsparcie dla szerokiego zakresu języków programowania, odpowiedni do środowisk deweloperskich."
+ },
+ "codellama:34b": {
+ "description": "Code Llama to model LLM skoncentrowany na generowaniu i dyskusji kodu, łączący wsparcie dla szerokiego zakresu języków programowania, odpowiedni do środowisk deweloperskich."
+ },
+ "codellama:70b": {
+ "description": "Code Llama to model LLM skoncentrowany na generowaniu i dyskusji kodu, łączący wsparcie dla szerokiego zakresu języków programowania, odpowiedni do środowisk deweloperskich."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 to duży model językowy wytrenowany na dużej ilości danych kodowych, zaprojektowany do rozwiązywania złożonych zadań programistycznych."
+ },
+ "codestral": {
+ "description": "Codestral to pierwszy model kodowy Mistral AI, oferujący doskonałe wsparcie dla zadań generowania kodu."
+ },
+ "codestral-latest": {
+ "description": "Codestral to nowoczesny model generacyjny skoncentrowany na generowaniu kodu, zoptymalizowany do zadań wypełniania i uzupełniania kodu."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B to model zaprojektowany do przestrzegania instrukcji, dialogów i programowania."
+ },
+ "cohere-command-r": {
+ "description": "Command R to skalowalny model generatywny, który koncentruje się na RAG i użyciu narzędzi, aby umożliwić AI na skalę produkcyjną dla przedsiębiorstw."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ to model zoptymalizowany pod kątem RAG, zaprojektowany do obsługi obciążeń roboczych na poziomie przedsiębiorstwa."
+ },
+ "command-r": {
+ "description": "Command R to LLM zoptymalizowany do dialogów i zadań z długim kontekstem, szczególnie odpowiedni do dynamicznej interakcji i zarządzania wiedzą."
+ },
+ "command-r-plus": {
+ "description": "Command R+ to model językowy o wysokiej wydajności, zaprojektowany z myślą o rzeczywistych scenariuszach biznesowych i złożonych zastosowaniach."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct oferuje wysoką niezawodność w przetwarzaniu poleceń, wspierając różne branże."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 łączy doskonałe cechy wcześniejszych wersji, wzmacniając zdolności ogólne i kodowania."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B to zaawansowany model przeszkolony do złożonych dialogów."
+ },
+ "deepseek-chat": {
+ "description": "Nowy otwarty model łączący zdolności ogólne i kodowe, który nie tylko zachowuje ogólne zdolności dialogowe oryginalnego modelu czatu i potężne zdolności przetwarzania kodu modelu Coder, ale także lepiej dostosowuje się do ludzkich preferencji. Ponadto, DeepSeek-V2.5 osiągnął znaczne poprawy w zadaniach pisarskich, przestrzeganiu instrukcji i innych obszarach."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 to otwarty model kodowy Mixture-of-Experts, który doskonale radzi sobie z zadaniami kodowymi, porównywalny z GPT4-Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 to otwarty model kodowy Mixture-of-Experts, który doskonale radzi sobie z zadaniami kodowymi, porównywalny z GPT4-Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 to wydajny model językowy Mixture-of-Experts, odpowiedni do ekonomicznych potrzeb przetwarzania."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B to model kodowy zaprojektowany przez DeepSeek, oferujący potężne możliwości generowania kodu."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Nowy, otwarty model łączący zdolności ogólne i kodowe, który nie tylko zachowuje ogólne zdolności dialogowe oryginalnego modelu Chat, ale także potężne zdolności przetwarzania kodu modelu Coder, lepiej dostosowując się do ludzkich preferencji. Ponadto, DeepSeek-V2.5 osiągnął znaczne poprawy w zadaniach pisarskich, przestrzeganiu instrukcji i wielu innych obszarach."
+ },
+ "emohaa": {
+ "description": "Emohaa to model psychologiczny, posiadający profesjonalne umiejętności doradcze, pomagający użytkownikom zrozumieć problemy emocjonalne."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Tuning) oferuje stabilną i dostosowywalną wydajność, co czyni go idealnym wyborem dla rozwiązań złożonych zadań."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Tuning) oferuje doskonałe wsparcie multimodalne, koncentrując się na efektywnym rozwiązywaniu złożonych zadań."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro to model AI o wysokiej wydajności od Google, zaprojektowany do szerokiego rozszerzania zadań."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 to wydajny model multimodalny, wspierający szerokie zastosowania."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002 to wydajny model multimodalny, który wspiera szeroką gamę zastosowań."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 został zaprojektowany do obsługi dużych zadań, oferując niezrównaną prędkość przetwarzania."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924 to najnowszy eksperymentalny model, który wykazuje znaczące poprawy wydajności w zastosowaniach tekstowych i multimodalnych."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 oferuje zoptymalizowane możliwości przetwarzania multimodalnego, odpowiednie do różnych złożonych scenariuszy zadań."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash to najnowszy model AI Google o wielu modalnościach, który charakteryzuje się szybkim przetwarzaniem i obsługuje wejścia tekstowe, obrazowe i wideo, co czyni go odpowiednim do efektywnego rozszerzania w różnych zadaniach."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 to skalowalne rozwiązanie AI multimodalnego, wspierające szeroki zakres złożonych zadań."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002 to najnowszy model gotowy do produkcji, oferujący wyższą jakość wyników, ze szczególnym uwzględnieniem zadań matematycznych, długich kontekstów i zadań wizualnych."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 oferuje doskonałe możliwości przetwarzania multimodalnego, zapewniając większą elastyczność w rozwoju aplikacji."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 łączy najnowsze technologie optymalizacji, oferując bardziej efektywne możliwości przetwarzania danych multimodalnych."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro obsługuje do 2 milionów tokenów, co czyni go idealnym wyborem dla średniej wielkości modeli multimodalnych, odpowiednim do wszechstronnej obsługi złożonych zadań."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B nadaje się do przetwarzania zadań średniej i małej skali, łącząc efektywność kosztową."
+ },
+ "gemma2": {
+ "description": "Gemma 2 to wydajny model wydany przez Google, obejmujący różnorodne zastosowania, od małych aplikacji po złożone przetwarzanie danych."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B to model zoptymalizowany do specyficznych zadań i integracji narzędzi."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 to wydajny model wydany przez Google, obejmujący różnorodne zastosowania, od małych aplikacji po złożone przetwarzanie danych."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 to wydajny model wydany przez Google, obejmujący różnorodne zastosowania, od małych aplikacji po złożone przetwarzanie danych."
+ },
+ "general": {
+    "description": "Spark Lite to lekki duży model językowy, charakteryzujący się bardzo niskim opóźnieniem i wysoką wydajnością przetwarzania, całkowicie darmowy i otwarty, wspierający funkcję wyszukiwania w czasie rzeczywistym. Jego szybka reakcja sprawia, że doskonale sprawdza się w zastosowaniach inferencyjnych i dostrajaniu modeli na urządzeniach o niskiej mocy obliczeniowej, oferując użytkownikom doskonały stosunek kosztów do korzyści oraz inteligentne doświadczenie, szczególnie w zadaniach związanych z pytaniami o wiedzę, generowaniem treści i wyszukiwaniem."
+ },
+ "generalv3": {
+    "description": "Spark Pro to duży model językowy o wysokiej wydajności, zoptymalizowany do profesjonalnych dziedzin, takich jak matematyka, programowanie, medycyna i edukacja, wspierający wyszukiwanie w sieci oraz wbudowane wtyczki, takie jak pogoda i daty. Jego zoptymalizowany model wykazuje doskonałe wyniki i wysoką wydajność w skomplikowanych pytaniach o wiedzę, rozumieniu języka oraz tworzeniu zaawansowanych tekstów, co czyni go idealnym wyborem do profesjonalnych zastosowań."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max to najbardziej wszechstronna wersja, wspierająca wyszukiwanie w sieci oraz wiele wbudowanych wtyczek. Jego kompleksowo zoptymalizowane zdolności rdzeniowe oraz funkcje ustawiania ról systemowych i wywoływania funkcji sprawiają, że wykazuje się wyjątkową wydajnością w różnych skomplikowanych zastosowaniach."
+ },
+ "glm-4": {
+    "description": "GLM-4 to poprzedni flagowy model, wydany w styczniu 2024 roku, obecnie zastąpiony przez silniejszy model GLM-4-0520."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 to najnowsza wersja modelu, zaprojektowana do wysoko złożonych i zróżnicowanych zadań, z doskonałymi wynikami."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air to opłacalna wersja, której wydajność jest zbliżona do GLM-4, oferująca szybkie działanie i przystępną cenę."
+ },
+ "glm-4-airx": {
+    "description": "GLM-4-AirX to wydajna wersja GLM-4-Air, oferująca wnioskowanie do 2,6 razy szybsze."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools to model inteligentny o wielu funkcjach, zoptymalizowany do wsparcia złożonego planowania instrukcji i wywołań narzędzi, takich jak przeglądanie sieci, interpretacja kodu i generowanie tekstu, odpowiedni do wykonywania wielu zadań."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash to idealny wybór do przetwarzania prostych zadań, najszybszy i najtańszy."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long obsługuje ultra-długie wejścia tekstowe, odpowiednie do zadań pamięciowych i przetwarzania dużych dokumentów."
+ },
+ "glm-4-plus": {
+    "description": "GLM-4-Plus, jako flagowy model o wysokiej inteligencji, posiada potężne zdolności przetwarzania długich tekstów i złożonych zadań, z ogólnym wzrostem wydajności."
+ },
+ "glm-4v": {
+ "description": "GLM-4V oferuje potężne zdolności rozumienia i wnioskowania obrazów, obsługując różne zadania wizualne."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus ma zdolność rozumienia treści wideo oraz wielu obrazów, odpowiedni do zadań multimodalnych."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 oferuje zoptymalizowane możliwości przetwarzania multimodalnego, odpowiednie do różnych złożonych zadań."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 łączy najnowsze technologie optymalizacji, oferując bardziej efektywne przetwarzanie danych multimodalnych."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 kontynuuje ideę lekkiego i wydajnego projektowania."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 to lekka seria modeli tekstowych open source od Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 to odchudzona seria otwartych modeli tekstowych Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) oferuje podstawowe możliwości przetwarzania poleceń, idealne do lekkich aplikacji."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo, odpowiedni do różnych zadań generowania i rozumienia tekstu, obecnie wskazuje na gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo, odpowiedni do różnych zadań generowania i rozumienia tekstu, obecnie wskazuje na gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+    "description": "GPT 3.5 Turbo w przypiętej wersji 1106, odpowiedni do różnych zadań generowania i rozumienia tekstu."
+ },
+ "gpt-3.5-turbo-instruct": {
+    "description": "GPT 3.5 Turbo Instruct, wariant zoptymalizowany do wykonywania poleceń, odpowiedni do różnych zadań generowania i rozumienia tekstu."
+ },
+ "gpt-4": {
+ "description": "GPT-4 oferuje większe okno kontekstowe, zdolne do przetwarzania dłuższych wejść tekstowych, co czyni go odpowiednim do scenariuszy wymagających szerokiej integracji informacji i analizy danych."
+ },
+ "gpt-4-0125-preview": {
+ "description": "Najnowszy model GPT-4 Turbo posiada funkcje wizualne. Teraz zapytania wizualne mogą być obsługiwane za pomocą formatu JSON i wywołań funkcji. GPT-4 Turbo to ulepszona wersja, która oferuje opłacalne wsparcie dla zadań multimodalnych. Znajduje równowagę między dokładnością a wydajnością, co czyni go odpowiednim do aplikacji wymagających interakcji w czasie rzeczywistym."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4 oferuje większe okno kontekstowe, zdolne do przetwarzania dłuższych wejść tekstowych, co czyni go odpowiednim do scenariuszy wymagających szerokiej integracji informacji i analizy danych."
+ },
+ "gpt-4-1106-preview": {
+ "description": "Najnowszy model GPT-4 Turbo posiada funkcje wizualne. Teraz zapytania wizualne mogą być obsługiwane za pomocą formatu JSON i wywołań funkcji. GPT-4 Turbo to ulepszona wersja, która oferuje opłacalne wsparcie dla zadań multimodalnych. Znajduje równowagę między dokładnością a wydajnością, co czyni go odpowiednim do aplikacji wymagających interakcji w czasie rzeczywistym."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "Najnowszy model GPT-4 Turbo posiada funkcje wizualne. Teraz zapytania wizualne mogą być obsługiwane za pomocą formatu JSON i wywołań funkcji. GPT-4 Turbo to ulepszona wersja, która oferuje opłacalne wsparcie dla zadań multimodalnych. Znajduje równowagę między dokładnością a wydajnością, co czyni go odpowiednim do aplikacji wymagających interakcji w czasie rzeczywistym."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4 oferuje większe okno kontekstowe, zdolne do przetwarzania dłuższych wejść tekstowych, co czyni go odpowiednim do scenariuszy wymagających szerokiej integracji informacji i analizy danych."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4 oferuje większe okno kontekstowe, zdolne do przetwarzania dłuższych wejść tekstowych, co czyni go odpowiednim do scenariuszy wymagających szerokiej integracji informacji i analizy danych."
+ },
+ "gpt-4-turbo": {
+ "description": "Najnowszy model GPT-4 Turbo posiada funkcje wizualne. Teraz zapytania wizualne mogą być obsługiwane za pomocą formatu JSON i wywołań funkcji. GPT-4 Turbo to ulepszona wersja, która oferuje opłacalne wsparcie dla zadań multimodalnych. Znajduje równowagę między dokładnością a wydajnością, co czyni go odpowiednim do aplikacji wymagających interakcji w czasie rzeczywistym."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "Najnowszy model GPT-4 Turbo posiada funkcje wizualne. Teraz zapytania wizualne mogą być obsługiwane za pomocą formatu JSON i wywołań funkcji. GPT-4 Turbo to ulepszona wersja, która oferuje opłacalne wsparcie dla zadań multimodalnych. Znajduje równowagę między dokładnością a wydajnością, co czyni go odpowiednim do aplikacji wymagających interakcji w czasie rzeczywistym."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "Najnowszy model GPT-4 Turbo posiada funkcje wizualne. Teraz zapytania wizualne mogą być obsługiwane za pomocą formatu JSON i wywołań funkcji. GPT-4 Turbo to ulepszona wersja, która oferuje opłacalne wsparcie dla zadań multimodalnych. Znajduje równowagę między dokładnością a wydajnością, co czyni go odpowiednim do aplikacji wymagających interakcji w czasie rzeczywistym."
+ },
+ "gpt-4-vision-preview": {
+ "description": "Najnowszy model GPT-4 Turbo posiada funkcje wizualne. Teraz zapytania wizualne mogą być obsługiwane za pomocą formatu JSON i wywołań funkcji. GPT-4 Turbo to ulepszona wersja, która oferuje opłacalne wsparcie dla zadań multimodalnych. Znajduje równowagę między dokładnością a wydajnością, co czyni go odpowiednim do aplikacji wymagających interakcji w czasie rzeczywistym."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o to dynamiczny model, który jest na bieżąco aktualizowany, aby utrzymać najnowszą wersję. Łączy potężne zdolności rozumienia i generowania języka, co czyni go odpowiednim do zastosowań na dużą skalę, w tym obsługi klienta, edukacji i wsparcia technicznego."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o to dynamiczny model, który jest na bieżąco aktualizowany, aby utrzymać najnowszą wersję. Łączy potężne zdolności rozumienia i generowania języka, co czyni go odpowiednim do zastosowań na dużą skalę, w tym obsługi klienta, edukacji i wsparcia technicznego."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o to dynamiczny model, który jest na bieżąco aktualizowany, aby utrzymać najnowszą wersję. Łączy potężne zdolności rozumienia i generowania języka, co czyni go odpowiednim do zastosowań na dużą skalę, w tym obsługi klienta, edukacji i wsparcia technicznego."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini to najnowszy model OpenAI, wprowadzony po GPT-4 Omni, obsługujący wejścia tekstowe i wizualne oraz generujący tekst. Jako ich najnowocześniejszy model w małej skali, jest znacznie tańszy niż inne niedawno wprowadzone modele, a jego cena jest o ponad 60% niższa niż GPT-3.5 Turbo. Utrzymuje najnowocześniejszą inteligencję, jednocześnie oferując znaczną wartość za pieniądze. GPT-4o mini uzyskał wynik 82% w teście MMLU i obecnie zajmuje wyższą pozycję w preferencjach czatu niż GPT-4."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B to model językowy łączący kreatywność i inteligencję, zintegrowany z wieloma wiodącymi modelami."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "Innowacyjny model open source InternLM2.5, dzięki dużej liczbie parametrów, zwiększa inteligencję dialogową."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 oferuje inteligentne rozwiązania dialogowe w różnych scenariuszach."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Model Llama 3.1 70B Instruct, z 70B parametrami, oferujący doskonałe osiągi w dużych zadaniach generowania tekstu i poleceń."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B oferuje potężne możliwości wnioskowania AI, odpowiednie do złożonych zastosowań, wspierające ogromne przetwarzanie obliczeniowe przy zachowaniu efektywności i dokładności."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B to model o wysokiej wydajności, oferujący szybkie możliwości generowania tekstu, idealny do zastosowań wymagających dużej efektywności i opłacalności."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Model Llama 3.1 8B Instruct, z 8B parametrami, wspierający efektywne wykonanie zadań wskazujących, oferujący wysoką jakość generowania tekstu."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Model Llama 3.1 Sonar Huge Online, z 405B parametrami, obsługujący kontekst o długości około 127,000 tokenów, zaprojektowany do złożonych aplikacji czatu online."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Model Llama 3.1 Sonar Large Chat, z 70B parametrami, obsługujący kontekst o długości około 127,000 tokenów, idealny do złożonych zadań czatu offline."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Model Llama 3.1 Sonar Large Online, z 70B parametrami, obsługujący kontekst o długości około 127,000 tokenów, idealny do zadań czatu o dużej pojemności i różnorodności."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Model Llama 3.1 Sonar Small Chat, z 8B parametrami, zaprojektowany do czatów offline, obsługujący kontekst o długości około 127,000 tokenów."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Model Llama 3.1 Sonar Small Online, z 8B parametrami, obsługujący kontekst o długości około 127,000 tokenów, zaprojektowany do czatów online, efektywnie przetwarzający różne interakcje tekstowe."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B oferuje niezrównane możliwości przetwarzania złożoności, dostosowane do projektów o wysokich wymaganiach."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B zapewnia wysoką jakość wydajności wnioskowania, odpowiednią do różnych zastosowań."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use oferuje potężne możliwości wywoływania narzędzi, wspierając efektywne przetwarzanie złożonych zadań."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use to model zoptymalizowany do efektywnego korzystania z narzędzi, wspierający szybkie obliczenia równoległe."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 to wiodący model wydany przez Meta, obsługujący do 405B parametrów, mogący być stosowany w złożonych dialogach, tłumaczeniach wielojęzycznych i analizie danych."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 to wiodący model wydany przez Meta, obsługujący do 405B parametrów, mogący być stosowany w złożonych dialogach, tłumaczeniach wielojęzycznych i analizie danych."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 to wiodący model wydany przez Meta, obsługujący do 405B parametrów, mogący być stosowany w złożonych dialogach, tłumaczeniach wielojęzycznych i analizie danych."
+ },
+ "llava": {
+ "description": "LLaVA to multimodalny model łączący kodery wizualne i Vicunę, przeznaczony do silnego rozumienia wizualnego i językowego."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B oferuje zintegrowane możliwości przetwarzania wizualnego, generując złożone wyjścia na podstawie informacji wizualnych."
+ },
+ "llava:13b": {
+ "description": "LLaVA to multimodalny model łączący kodery wizualne i Vicunę, przeznaczony do silnego rozumienia wizualnego i językowego."
+ },
+ "llava:34b": {
+ "description": "LLaVA to multimodalny model łączący kodery wizualne i Vicunę, przeznaczony do silnego rozumienia wizualnego i językowego."
+ },
+ "mathstral": {
+ "description": "MathΣtral zaprojektowany do badań naukowych i wnioskowania matematycznego, oferujący efektywne możliwości obliczeniowe i interpretację wyników."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Potężny model z 70 miliardami parametrów, doskonały w rozumowaniu, kodowaniu i szerokich zastosowaniach językowych."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Wszechstronny model z 8 miliardami parametrów, zoptymalizowany do zadań dialogowych i generacji tekstu."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Modele tekstowe Llama 3.1 dostosowane do instrukcji, zoptymalizowane do wielojęzycznych przypadków użycia dialogowego, przewyższają wiele dostępnych modeli open source i zamkniętych w powszechnych benchmarkach branżowych."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Modele tekstowe Llama 3.1 dostosowane do instrukcji, zoptymalizowane do wielojęzycznych przypadków użycia dialogowego, przewyższają wiele dostępnych modeli open source i zamkniętych w powszechnych benchmarkach branżowych."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Modele tekstowe Llama 3.1 dostosowane do instrukcji, zoptymalizowane do wielojęzycznych przypadków użycia dialogowego, przewyższają wiele dostępnych modeli open source i zamkniętych w powszechnych benchmarkach branżowych."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) oferuje doskonałe możliwości przetwarzania języka i znakomite doświadczenie interakcji."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) to potężny model czatu, wspierający złożone potrzeby dialogowe."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) oferuje wsparcie dla wielu języków, obejmując bogatą wiedzę z różnych dziedzin."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite jest idealny do środowisk wymagających wysokiej wydajności i niskiego opóźnienia."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo oferuje doskonałe możliwości rozumienia i generowania języka, idealny do najbardziej wymagających zadań obliczeniowych."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite jest dostosowany do środowisk z ograniczonymi zasobami, oferując doskonałą równowagę wydajności."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo to wydajny model językowy, wspierający szeroki zakres zastosowań."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B to potężny model do wstępnego uczenia się i dostosowywania instrukcji."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "Model Llama 3.1 Turbo 405B oferuje ogromną pojemność kontekstową dla przetwarzania dużych danych, wyróżniając się w zastosowaniach sztucznej inteligencji o dużej skali."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B oferuje efektywne wsparcie dialogowe w wielu językach."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Model Llama 3.1 70B został starannie dostosowany do aplikacji o dużym obciążeniu, kwantyzowany do FP8, co zapewnia wyższą wydajność obliczeniową i dokładność, gwarantując doskonałe osiągi w złożonych scenariuszach."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 oferuje wsparcie dla wielu języków i jest jednym z wiodących modeli generacyjnych w branży."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Model Llama 3.1 8B wykorzystuje kwantyzację FP8, obsługując do 131,072 kontekstowych tokenów, wyróżniając się wśród modeli open source, idealny do złożonych zadań, przewyższający wiele branżowych standardów."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct zoptymalizowano do wysokiej jakości dialogów, osiągając znakomite wyniki w różnych ocenach ludzkich."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct zoptymalizowano do wysokiej jakości scenariuszy dialogowych, osiągając lepsze wyniki niż wiele modeli zamkniętych."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct to najnowsza wersja wydana przez Meta, zoptymalizowana do generowania wysokiej jakości dialogów, przewyższająca wiele wiodących modeli zamkniętych."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct zaprojektowano z myślą o wysokiej jakości dialogach, osiągając znakomite wyniki w ocenach ludzkich, szczególnie w scenariuszach o wysokiej interakcji."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct to najnowsza wersja wydana przez Meta, zoptymalizowana do wysokiej jakości scenariuszy dialogowych, przewyższająca wiele wiodących modeli zamkniętych."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 oferuje wsparcie dla wielu języków i jest jednym z wiodących modeli generacyjnych w branży."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct to największy i najpotężniejszy model w rodzinie modeli Llama 3.1 Instruct. Jest to wysoko zaawansowany model do dialogów, wnioskowania i generowania danych, który może być również używany jako podstawa do specjalistycznego, ciągłego wstępnego szkolenia lub dostosowywania w określonych dziedzinach. Llama 3.1 oferuje wielojęzyczne duże modele językowe (LLM), które są zestawem wstępnie wytrenowanych, dostosowanych do instrukcji modeli generacyjnych, obejmujących rozmiary 8B, 70B i 405B (wejście/wyjście tekstowe). Modele tekstowe Llama 3.1 dostosowane do instrukcji (8B, 70B, 405B) zostały zoptymalizowane do zastosowań w wielojęzycznych dialogach i przewyższają wiele dostępnych modeli czatu open source w powszechnych testach branżowych. Llama 3.1 jest zaprojektowana do użytku komercyjnego i badawczego w wielu językach. Modele tekstowe dostosowane do instrukcji nadają się do czatu w stylu asystenta, podczas gdy modele wstępnie wytrenowane mogą być dostosowane do różnych zadań generowania języka naturalnego. Modele Llama 3.1 wspierają również wykorzystanie ich wyjść do poprawy innych modeli, w tym generowania danych syntetycznych i udoskonalania. Llama 3.1 jest modelem językowym autoregresywnym opartym na zoptymalizowanej architekturze transformatora. Dostosowane wersje wykorzystują nadzorowane dostosowywanie (SFT) oraz uczenie się ze wzmocnieniem z ludzkim feedbackiem (RLHF), aby odpowiadać ludzkim preferencjom dotyczącym pomocności i bezpieczeństwa."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "Zaktualizowana wersja Meta Llama 3.1 70B Instruct, obejmująca rozszerzone 128K długości kontekstu, wielojęzyczność i poprawione zdolności wnioskowania. Llama 3.1 oferuje wielojęzyczne modele językowe (LLMs) jako zestaw wstępnie wytrenowanych, dostosowanych do instrukcji modeli generacyjnych, w tym rozmiarów 8B, 70B i 405B (wejście/wyjście tekstowe). Modele tekstowe Llama 3.1 dostosowane do instrukcji (8B, 70B, 405B) są zoptymalizowane do zastosowań w dialogach wielojęzycznych i przewyższają wiele dostępnych modeli czatu w powszechnych testach branżowych. Llama 3.1 jest przeznaczona do zastosowań komercyjnych i badawczych w wielu językach. Modele tekstowe dostosowane do instrukcji są odpowiednie do czatu podobnego do asystenta, podczas gdy modele wstępnie wytrenowane mogą być dostosowane do różnych zadań generowania języka naturalnego. Modele Llama 3.1 wspierają również wykorzystanie wyników ich modeli do poprawy innych modeli, w tym generowania danych syntetycznych i rafinacji. Llama 3.1 jest modelem językowym autoregresywnym, wykorzystującym zoptymalizowaną architekturę transformatora. Wersje dostosowane wykorzystują nadzorowane dostrajanie (SFT) i uczenie się ze wzmocnieniem z ludzkim feedbackiem (RLHF), aby dostosować się do ludzkich preferencji dotyczących pomocności i bezpieczeństwa."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "Zaktualizowana wersja Meta Llama 3.1 8B Instruct, obejmująca rozszerzone 128K długości kontekstu, wielojęzyczność i poprawione zdolności wnioskowania. Llama 3.1 oferuje wielojęzyczne modele językowe (LLMs) jako zestaw wstępnie wytrenowanych, dostosowanych do instrukcji modeli generacyjnych, w tym rozmiarów 8B, 70B i 405B (wejście/wyjście tekstowe). Modele tekstowe Llama 3.1 dostosowane do instrukcji (8B, 70B, 405B) są zoptymalizowane do zastosowań w dialogach wielojęzycznych i przewyższają wiele dostępnych modeli czatu w powszechnych testach branżowych. Llama 3.1 jest przeznaczona do zastosowań komercyjnych i badawczych w wielu językach. Modele tekstowe dostosowane do instrukcji są odpowiednie do czatu podobnego do asystenta, podczas gdy modele wstępnie wytrenowane mogą być dostosowane do różnych zadań generowania języka naturalnego. Modele Llama 3.1 wspierają również wykorzystanie wyników ich modeli do poprawy innych modeli, w tym generowania danych syntetycznych i rafinacji. Llama 3.1 jest modelem językowym autoregresywnym, wykorzystującym zoptymalizowaną architekturę transformatora. Wersje dostosowane wykorzystują nadzorowane dostrajanie (SFT) i uczenie się ze wzmocnieniem z ludzkim feedbackiem (RLHF), aby dostosować się do ludzkich preferencji dotyczących pomocności i bezpieczeństwa."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 to otwarty duży model językowy (LLM) skierowany do deweloperów, badaczy i przedsiębiorstw, mający na celu pomoc w budowaniu, eksperymentowaniu i odpowiedzialnym rozwijaniu ich pomysłów na generatywną sztuczną inteligencję. Jako część podstawowego systemu innowacji globalnej społeczności, jest idealny do tworzenia treści, AI do dialogów, rozumienia języka, badań i zastosowań biznesowych."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 to otwarty duży model językowy (LLM) skierowany do deweloperów, badaczy i przedsiębiorstw, mający na celu pomoc w budowaniu, eksperymentowaniu i odpowiedzialnym rozwijaniu ich pomysłów na generatywną sztuczną inteligencję. Jako część podstawowego systemu innowacji globalnej społeczności, jest idealny dla urządzeń o ograniczonej mocy obliczeniowej i zasobach, a także dla szybszego czasu szkolenia."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B to najnowszy szybki i lekki model AI od Microsoftu, osiągający wydajność bliską 10-krotności istniejących wiodących modeli open source."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B to najnowocześniejszy model Wizard od Microsoftu, wykazujący niezwykle konkurencyjne osiągi."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V to nowa generacja multimodalnego dużego modelu wydanego przez OpenBMB, który posiada doskonałe zdolności rozpoznawania OCR oraz zrozumienia multimodalnego, wspierając szeroki zakres zastosowań."
+ },
+ "mistral": {
+ "description": "Mistral to model 7B wydany przez Mistral AI, odpowiedni do zmiennych potrzeb przetwarzania języka."
+ },
+ "mistral-large": {
+ "description": "Mixtral Large to flagowy model Mistral, łączący zdolności generowania kodu, matematyki i wnioskowania, wspierający kontekst o długości 128k."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) to zaawansowany model językowy (LLM) z najnowocześniejszymi zdolnościami rozumowania, wiedzy i kodowania."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large to flagowy model, doskonały w zadaniach wielojęzycznych, złożonym wnioskowaniu i generowaniu kodu, idealny do zaawansowanych zastosowań."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo, opracowany przez Mistral AI i NVIDIA, to model 12B o wysokiej wydajności."
+ },
+ "mistral-small": {
+ "description": "Mistral Small może być używany w każdym zadaniu opartym na języku, które wymaga wysokiej wydajności i niskiej latencji."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small to opcja o wysokiej efektywności kosztowej, szybka i niezawodna, odpowiednia do tłumaczeń, podsumowań i analizy sentymentu."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct jest znany z wysokiej wydajności, idealny do różnorodnych zadań językowych."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B to model dostosowany na żądanie, oferujący zoptymalizowane odpowiedzi na zadania."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 oferuje efektywne możliwości obliczeniowe i rozumienia języka naturalnego, idealne do szerokiego zakresu zastosowań."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) to super duży model językowy, wspierający ekstremalne wymagania przetwarzania."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B to wstępnie wytrenowany model rzadkiego mieszania ekspertów, przeznaczony do ogólnych zadań tekstowych."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct to model o wysokiej wydajności, który łączy optymalizację prędkości z obsługą długiego kontekstu."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo to model z 7,3 miliardami parametrów, wspierający wiele języków i wysoką wydajność programowania."
+ },
+ "mixtral": {
+ "description": "Mixtral to model ekspercki Mistral AI, z otwartymi wagami, oferujący wsparcie w generowaniu kodu i rozumieniu języka."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B oferuje wysoką tolerancję na błędy w obliczeniach równoległych, odpowiednią do złożonych zadań."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral to model ekspercki Mistral AI, z otwartymi wagami, oferujący wsparcie w generowaniu kodu i rozumieniu języka."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K to model o zdolności przetwarzania kontekstu o ultra-długiej długości, odpowiedni do generowania bardzo długich tekstów, spełniający wymagania złożonych zadań generacyjnych, zdolny do przetwarzania treści do 128 000 tokenów, idealny do zastosowań w badaniach, akademickich i generowaniu dużych dokumentów."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K oferuje zdolność przetwarzania kontekstu o średniej długości, zdolną do przetwarzania 32 768 tokenów, szczególnie odpowiednią do generowania różnych długich dokumentów i złożonych dialogów, stosowaną w tworzeniu treści, generowaniu raportów i systemach dialogowych."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K zaprojektowany do generowania krótkich tekstów, charakteryzuje się wydajnością przetwarzania, zdolny do przetwarzania 8 192 tokenów, idealny do krótkich dialogów, notatek i szybkiego generowania treści."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B to ulepszona wersja Nous Hermes 2, zawierająca najnowsze wewnętrznie opracowane zbiory danych."
+ },
+ "o1-mini": {
+ "description": "o1-mini to szybki i ekonomiczny model wnioskowania zaprojektowany z myślą o programowaniu, matematyce i zastosowaniach naukowych. Model ten ma kontekst 128K i datę graniczną wiedzy z października 2023 roku."
+ },
+ "o1-preview": {
+ "description": "o1 to nowy model wnioskowania OpenAI, odpowiedni do złożonych zadań wymagających szerokiej wiedzy ogólnej. Model ten ma kontekst 128K i datę graniczną wiedzy z października 2023 roku."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba to model językowy Mamba 2 skoncentrowany na generowaniu kodu, oferujący silne wsparcie dla zaawansowanych zadań kodowania i wnioskowania."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B to kompaktowy, ale wydajny model, doskonały do przetwarzania wsadowego i prostych zadań, takich jak klasyfikacja i generowanie tekstu, z dobrą wydajnością wnioskowania."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo to model 12B opracowany we współpracy z Nvidia, oferujący doskonałe możliwości wnioskowania i kodowania, łatwy do integracji i zastąpienia."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B to większy model eksperta, skoncentrowany na złożonych zadaniach, oferujący doskonałe możliwości wnioskowania i wyższą przepustowość."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B to model rzadkiego eksperta, który wykorzystuje wiele parametrów do zwiększenia prędkości wnioskowania, odpowiedni do przetwarzania zadań wielojęzycznych i generowania kodu."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o to dynamiczny model, który jest na bieżąco aktualizowany, aby utrzymać najnowszą wersję. Łączy potężne zdolności rozumienia i generowania języka, odpowiedni do zastosowań na dużą skalę, w tym obsługi klienta, edukacji i wsparcia technicznego."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini to najnowszy model OpenAI, wydany po GPT-4 Omni, obsługujący wejścia tekstowe i wizualne. Jako ich najnowocześniejszy mały model, jest znacznie tańszy od innych niedawnych modeli czołowych i kosztuje o ponad 60% mniej niż GPT-3.5 Turbo. Utrzymuje najnowocześniejszą inteligencję, oferując jednocześnie znaczną wartość za pieniądze. GPT-4o mini uzyskał wynik 82% w teście MMLU i obecnie zajmuje wyższą pozycję w preferencjach czatu niż GPT-4."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini to szybki i ekonomiczny model wnioskowania zaprojektowany z myślą o programowaniu, matematyce i zastosowaniach naukowych. Model ten ma kontekst 128K i datę graniczną wiedzy z października 2023 roku."
+ },
+ "openai/o1-preview": {
+ "description": "o1 to nowy model wnioskowania OpenAI, odpowiedni do złożonych zadań wymagających szerokiej wiedzy ogólnej. Model ten ma kontekst 128K i datę graniczną wiedzy z października 2023 roku."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B to otwarta biblioteka modeli językowych, dostrojona przy użyciu strategii „C-RLFT (warunkowe uczenie ze wzmocnieniem)”."
+ },
+ "openrouter/auto": {
+ "description": "W zależności od długości kontekstu, tematu i złożoności, Twoje zapytanie zostanie wysłane do Llama 3 70B Instruct, Claude 3.5 Sonnet (samoregulacja) lub GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 to lekki model otwarty wydany przez Microsoft, odpowiedni do efektywnej integracji i dużej skali wnioskowania wiedzy."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 to lekki model otwarty wydany przez Microsoft, odpowiedni do efektywnej integracji i dużej skali wnioskowania wiedzy."
+ },
+ "pixtral-12b-2409": {
+ "description": "Model Pixtral wykazuje silne zdolności w zadaniach związanych z analizą wykresów i zrozumieniem obrazów, pytaniami dokumentowymi, wielomodalnym rozumowaniem i przestrzeganiem instrukcji, zdolny do przyjmowania obrazów w naturalnej rozdzielczości i proporcjach, a także do przetwarzania dowolnej liczby obrazów w długim oknie kontekstowym o długości do 128K tokenów."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "Model kodowania Qwen."
+ },
+ "qwen-long": {
+ "description": "Qwen to ultra-duży model językowy, który obsługuje długie konteksty tekstowe oraz funkcje dialogowe oparte na długich dokumentach i wielu dokumentach."
+ },
+ "qwen-math-plus-latest": {
+ "description": "Model matematyczny Qwen, stworzony specjalnie do rozwiązywania problemów matematycznych."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "Model matematyczny Qwen, stworzony specjalnie do rozwiązywania problemów matematycznych."
+ },
+ "qwen-max-latest": {
+ "description": "Model językowy Qwen Max o skali miliardów parametrów, obsługujący różne języki, w tym chiński i angielski, będący API modelu za produktem Qwen 2.5."
+ },
+ "qwen-plus-latest": {
+ "description": "Wzmocniona wersja modelu językowego Qwen Plus, obsługująca różne języki, w tym chiński i angielski."
+ },
+ "qwen-turbo-latest": {
+ "description": "Model językowy Qwen Turbo, obsługujący różne języki, w tym chiński i angielski."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Qwen VL obsługuje elastyczne interakcje, w tym wiele obrazów, wielokrotne pytania i odpowiedzi oraz zdolności twórcze."
+ },
+ "qwen-vl-max": {
+ "description": "Qwen to ultra-duży model językowy wizualny. W porównaniu do wersji ulepszonej, ponownie poprawia zdolności rozumienia wizualnego i przestrzegania instrukcji, oferując wyższy poziom percepcji wizualnej i poznawczej."
+ },
+ "qwen-vl-plus": {
+ "description": "Qwen to ulepszona wersja dużego modelu językowego wizualnego. Znacząco poprawia zdolności rozpoznawania szczegółów i tekstu, obsługując obrazy o rozdzielczości powyżej miliona pikseli i dowolnych proporcjach."
+ },
+ "qwen-vl-v1": {
+ "description": "Model wstępnie wytrenowany, zainicjowany przez model językowy Qwen-7B, dodający model obrazowy, z rozdzielczością wejściową obrazu wynoszącą 448."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 to nowa seria dużych modeli językowych, charakteryzująca się silniejszymi zdolnościami rozumienia i generowania."
+ },
+ "qwen2": {
+ "description": "Qwen2 to nowa generacja dużego modelu językowego Alibaba, wspierająca różnorodne potrzeby aplikacyjne dzięki doskonałej wydajności."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "Model Qwen 2.5 o skali 14B, udostępniony na zasadzie open source."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "Model Qwen 2.5 o skali 32B, udostępniony na zasadzie open source."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "Model Qwen 2.5 o skali 72B, udostępniony na zasadzie open source."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "Model Qwen 2.5 o skali 7B, udostępniony na zasadzie open source."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "Otwarta wersja modelu kodowania Qwen."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "Otwarta wersja modelu kodowania Qwen."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Model Qwen-Math, który ma silne zdolności rozwiązywania problemów matematycznych."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Model Qwen-Math, który ma silne zdolności rozwiązywania problemów matematycznych."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Model Qwen-Math, który ma silne zdolności rozwiązywania problemów matematycznych."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 to nowa generacja dużego modelu językowego Alibaba, wspierająca różnorodne potrzeby aplikacyjne dzięki doskonałej wydajności."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 to nowa generacja dużego modelu językowego Alibaba, wspierająca różnorodne potrzeby aplikacyjne dzięki doskonałej wydajności."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 to nowa generacja dużego modelu językowego Alibaba, wspierająca różnorodne potrzeby aplikacyjne dzięki doskonałej wydajności."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini to kompaktowy LLM, przewyższający GPT-3.5, z silnymi zdolnościami wielojęzycznymi, wspierający język angielski i koreański, oferujący wydajne i małe rozwiązanie."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) rozszerza możliwości Solar Mini, koncentrując się na języku japońskim, jednocześnie zachowując wysoką wydajność i doskonałe wyniki w użyciu języka angielskiego i koreańskiego."
+ },
+ "solar-pro": {
+ "description": "Solar Pro to model LLM o wysokiej inteligencji wydany przez Upstage, koncentrujący się na zdolności do przestrzegania instrukcji na pojedynczym GPU, osiągając wynik IFEval powyżej 80. Obecnie wspiera język angielski, a wersja oficjalna planowana jest na listopad 2024, z rozszerzeniem wsparcia językowego i długości kontekstu."
+ },
+ "step-1-128k": {
+ "description": "Równoważy wydajność i koszty, odpowiedni do ogólnych scenariuszy."
+ },
+ "step-1-256k": {
+ "description": "Posiada zdolność przetwarzania ultra długiego kontekstu, szczególnie odpowiedni do analizy długich dokumentów."
+ },
+ "step-1-32k": {
+ "description": "Obsługuje średniej długości dialogi, odpowiedni do różnych zastosowań."
+ },
+ "step-1-8k": {
+ "description": "Mały model, odpowiedni do lekkich zadań."
+ },
+ "step-1-flash": {
+ "description": "Model o wysokiej prędkości, odpowiedni do dialogów w czasie rzeczywistym."
+ },
+ "step-1v-32k": {
+ "description": "Obsługuje wejścia wizualne, wzmacniając doświadczenie interakcji multimodalnych."
+ },
+ "step-1v-8k": {
+ "description": "Mały model wizualny, odpowiedni do podstawowych zadań związanych z tekstem i obrazem."
+ },
+ "step-2-16k": {
+ "description": "Obsługuje interakcje z dużą ilością kontekstu, idealny do złożonych scenariuszy dialogowych."
+ },
+ "taichu_llm": {
+ "description": "Model językowy TaiChu charakteryzuje się wyjątkową zdolnością rozumienia języka oraz umiejętnościami w zakresie tworzenia tekstów, odpowiadania na pytania, programowania, obliczeń matematycznych, wnioskowania logicznego, analizy emocji i streszczenia tekstu. Innowacyjnie łączy wstępne uczenie się na dużych zbiorach danych z bogatą wiedzą z wielu źródeł, stale doskonaląc technologię algorytmiczną i nieustannie przyswajając nową wiedzę z zakresu słownictwa, struktury, gramatyki i semantyki z ogromnych zbiorów danych tekstowych, co prowadzi do ciągłej ewolucji modelu. Umożliwia użytkownikom łatwiejszy dostęp do informacji i usług oraz bardziej inteligentne doświadczenia."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V łączy zdolności rozumienia obrazów, transferu wiedzy i logicznego wnioskowania, osiągając znakomite wyniki w dziedzinie pytań i odpowiedzi na podstawie tekstu i obrazów."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) oferuje zwiększoną moc obliczeniową dzięki efektywnym strategiom i architekturze modelu."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) jest przeznaczony do precyzyjnych zadań poleceniowych, oferując doskonałe możliwości przetwarzania języka."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 to model językowy dostarczany przez Microsoft AI, który wyróżnia się w złożonych dialogach, wielojęzyczności, wnioskowaniu i inteligentnych asystentach."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 to model językowy dostarczany przez Microsoft AI, który wyróżnia się w złożonych dialogach, wielojęzyczności, wnioskowaniu i inteligentnych asystentach."
+ },
+ "yi-large": {
+ "description": "Nowy model z miliardami parametrów, oferujący niezwykłe możliwości w zakresie pytań i generowania tekstu."
+ },
+ "yi-large-fc": {
+ "description": "Model yi-large z wzmocnioną zdolnością do wywołań narzędzi, odpowiedni do różnych scenariuszy biznesowych wymagających budowy agentów lub workflow."
+ },
+ "yi-large-preview": {
+ "description": "Wersja wstępna, zaleca się korzystanie z yi-large (nowa wersja)."
+ },
+ "yi-large-rag": {
+ "description": "Zaawansowana usługa oparta na modelu yi-large, łącząca techniki wyszukiwania i generowania, oferująca precyzyjne odpowiedzi oraz usługi wyszukiwania informacji w czasie rzeczywistym."
+ },
+ "yi-large-turbo": {
+ "description": "Model o doskonałym stosunku jakości do ceny, z doskonałymi osiągami. Wysokiej precyzji optymalizacja w oparciu o wydajność, szybkość wnioskowania i koszty."
+ },
+ "yi-medium": {
+ "description": "Model średniej wielkości, zrównoważony pod względem możliwości i kosztów. Głęboko zoptymalizowana zdolność do przestrzegania poleceń."
+ },
+ "yi-medium-200k": {
+ "description": "Okno kontekstowe o długości 200K, oferujące głębokie zrozumienie i generowanie długich tekstów."
+ },
+ "yi-spark": {
+ "description": "Mały, ale potężny, lekki model o wysokiej prędkości. Oferuje wzmocnione możliwości obliczeń matematycznych i pisania kodu."
+ },
+ "yi-vision": {
+ "description": "Model do złożonych zadań wizualnych, oferujący wysoką wydajność w zakresie rozumienia i analizy obrazów."
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/plugin.json b/DigitalHumanWeb/locales/pl-PL/plugin.json
new file mode 100644
index 0000000..f04bd80
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Argumenty",
+ "function_call": "Wywołanie funkcji",
+ "off": "Wyłącz debugowanie",
+ "on": "Wyświetl informacje o wywołaniach wtyczki",
+ "payload": "dane wejściowe wtyczki",
+ "response": "Odpowiedź",
+ "tool_call": "żądanie wywołania narzędzia"
+ },
+ "detailModal": {
+ "info": {
+ "description": "Opis interfejsu API",
+ "name": "Nazwa interfejsu API"
+ },
+ "tabs": {
+ "info": "Zdolności wtyczki",
+ "manifest": "Plik instalacyjny",
+ "settings": "Ustawienia"
+ },
+ "title": "Szczegóły wtyczki"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Czy na pewno chcesz usunąć tę lokalną wtyczkę? Po usunięciu nie będzie możliwe jej odzyskanie.",
+ "customParams": {
+ "useProxy": {
+ "label": "Zainstaluj za pośrednictwem serwera proxy (jeśli występują błędy dostępu z innej domeny, spróbuj włączyć tę opcję i ponownie zainstalować)"
+ }
+ },
+ "deleteSuccess": "Wtyczka została pomyślnie usunięta",
+ "manifest": {
+ "identifier": {
+ "desc": "Unikalny identyfikator wtyczki",
+ "label": "Identyfikator"
+ },
+ "mode": {
+ "local": "Konfiguracja wizualna",
+ "local-tooltip": "Konfiguracja wizualna nie jest obecnie obsługiwana",
+ "url": "Link online"
+ },
+ "name": {
+ "desc": "Tytuł wtyczki",
+ "label": "Tytuł",
+ "placeholder": "Wyszukiwarka"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "Autor wtyczki",
+ "label": "Autor"
+ },
+ "avatar": {
+ "desc": "Ikona wtyczki, może być emotikonem lub adresem URL",
+ "label": "Ikona"
+ },
+ "description": {
+ "desc": "Opis wtyczki",
+ "label": "Opis",
+ "placeholder": "Pobierz informacje z wyszukiwarek"
+ },
+ "formFieldRequired": "To pole jest wymagane",
+ "homepage": {
+ "desc": "Strona główna wtyczki",
+ "label": "Strona główna"
+ },
+ "identifier": {
+ "desc": "Unikalny identyfikator wtyczki, obsługuje tylko znaki alfanumeryczne, myślnik - i podkreślenie _",
+ "errorDuplicate": "Identyfikator jest już używany przez inną wtyczkę, proszę zmień identyfikator",
+ "label": "Identyfikator",
+ "pattenErrorMessage": "Dozwolone są tylko znaki alfanumeryczne, myślnik - i podkreślenie _"
+ },
+ "manifest": {
+ "desc": "{{appName}} zainstaluje wtyczkę za pośrednictwem tego linku",
+ "label": "Opis wtyczki (Manifest) URL",
+ "preview": "Podgląd",
+ "refresh": "Odśwież"
+ },
+ "title": {
+ "desc": "Tytuł wtyczki",
+ "label": "Tytuł",
+ "placeholder": "Wyszukiwarka"
+ }
+ },
+ "metaConfig": "Konfiguracja metadanych wtyczki",
+ "modalDesc": "Po dodaniu niestandardowej wtyczki można jej używać do weryfikacji rozwoju wtyczki lub bezpośrednio w sesji. Proszę odnieść się do <1>dokumentacji rozwojowej↗> dotyczącej rozwoju wtyczki.",
+ "openai": {
+ "importUrl": "Importuj z linku URL",
+ "schema": "Schemat"
+ },
+ "preview": {
+ "card": "Podgląd wyświetlania wtyczki",
+ "desc": "Podgląd opisu wtyczki",
+ "title": "Podgląd nazwy wtyczki"
+ },
+ "save": "Zainstaluj wtyczkę",
+ "saveSuccess": "Ustawienia wtyczki zostały pomyślnie zapisane",
+ "tabs": {
+ "manifest": "Opis funkcji manifestu (Manifest)",
+ "meta": "Metadane wtyczki"
+ },
+ "title": {
+ "create": "Dodaj niestandardową wtyczkę",
+ "edit": "Edytuj niestandardową wtyczkę"
+ },
+ "type": {
+ "lobe": "Wtyczka LobeChat",
+ "openai": "Wtyczka OpenAI"
+ },
+ "update": "Aktualizuj",
+ "updateSuccess": "Ustawienia wtyczki zostały pomyślnie zaktualizowane"
+ },
+ "error": {
+ "fetchError": "Nie udało się pobrać linku manifestu. Upewnij się, że link jest poprawny i zezwala na dostęp z innej domeny.",
+ "installError": "Instalacja wtyczki {{name}} nie powiodła się",
+ "manifestInvalid": "Manifest nie spełnia specyfikacji. Wynik walidacji: \n\n {{error}}",
+ "noManifest": "Plik manifestu nie istnieje",
+ "openAPIInvalid": "Analiza OpenAPI nie powiodła się. Błąd: \n\n {{error}}",
+ "reinstallError": "Nie udało się odświeżyć wtyczki {{name}}",
+ "urlError": "Link nie zwrócił treści w formacie JSON. Upewnij się, że jest to poprawny link."
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Usunięte",
+ "local.config": "Konfiguracja",
+ "local.title": "Lokalne"
+ }
+ },
+ "loading": {
+ "content": "Wywoływanie wtyczki...",
+ "plugin": "Wtyczka jest uruchomiona..."
+ },
+ "pluginList": "Lista wtyczek",
+ "setting": "Ustawienia wtyczki",
+ "settings": {
+ "indexUrl": {
+ "title": "Indeks sklepu",
+ "tooltip": "Edycja nie jest obecnie obsługiwana"
+ },
+ "modalDesc": "Po skonfigurowaniu adresu sklepu wtyczek możesz korzystać z niestandardowego sklepu wtyczek",
+ "title": "Skonfiguruj sklep wtyczek"
+ },
+ "showInPortal": "Proszę sprawdzić szczegóły w obszarze roboczym",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Wtyczka zostanie odinstalowana. Po odinstalowaniu konfiguracja wtyczki zostanie wyczyszczona. Potwierdź swoje działanie.",
+ "detail": "Szczegóły",
+ "install": "Instaluj",
+ "manifest": "Edytuj plik instalacyjny",
+ "settings": "Ustawienia",
+ "uninstall": "Odinstaluj"
+ },
+ "communityPlugin": "Wtyczka społecznościowa",
+ "customPlugin": "Niestandardowa wtyczka",
+ "empty": "Brak zainstalowanych wtyczek",
+ "installAllPlugins": "Zainstaluj wszystkie",
+ "networkError": "Nie udało się pobrać sklepu wtyczek. Sprawdź swoje połączenie sieciowe i spróbuj ponownie",
+ "placeholder": "Szukaj nazwy wtyczki, opisu lub słowa kluczowego...",
+ "releasedAt": "Wydane {{createdAt}}",
+ "tabs": {
+ "all": "Wszystkie",
+ "installed": "Zainstalowane"
+ },
+ "title": "Sklep wtyczek"
+ },
+ "unknownPlugin": "nieznana wtyczka"
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/portal.json b/DigitalHumanWeb/locales/pl-PL/portal.json
new file mode 100644
index 0000000..1dbd5fe
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Artefakty",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Część",
+ "file": "Plik"
+ }
+ },
+ "Plugins": "Wtyczki",
+ "actions": {
+ "genAiMessage": "Tworzenie wiadomości AI",
+ "summary": "Podsumowanie",
+ "summaryTooltip": "Podsumowanie bieżącej zawartości"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Kod",
+ "preview": "Podgląd"
+ },
+ "svg": {
+ "copyAsImage": "Skopiuj jako obraz",
+ "copyFail": "Kopiowanie nie powiodło się, powód błędu: {{error}}",
+ "copySuccess": "Obraz skopiowany pomyślnie",
+ "download": {
+ "png": "Pobierz jako PNG",
+ "svg": "Pobierz jako SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "Obecna lista Artefaktów jest pusta. Proszę użyć wtyczek w trakcie sesji, a następnie sprawdzić ponownie.",
+ "emptyKnowledgeList": "Aktualna lista wiedzy jest pusta. Proszę otworzyć bazę wiedzy w trakcie rozmowy, aby ją przeglądać.",
+ "files": "Pliki",
+ "messageDetail": "Szczegóły wiadomości",
+ "title": "Okno rozszerzenia"
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/providers.json b/DigitalHumanWeb/locales/pl-PL/providers.json
new file mode 100644
index 0000000..6be7bb5
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI to platforma modeli i usług AI wprowadzona przez firmę 360, oferująca różnorodne zaawansowane modele przetwarzania języka naturalnego, w tym 360GPT2 Pro, 360GPT Pro, 360GPT Turbo i 360GPT Turbo Responsibility 8K. Modele te łączą dużą liczbę parametrów z multimodalnymi zdolnościami, szeroko stosowanymi w generowaniu tekstu, rozumieniu semantycznym, systemach dialogowych i generowaniu kodu. Dzięki elastycznej strategii cenowej, 360 AI zaspokaja zróżnicowane potrzeby użytkowników, wspierając integrację przez deweloperów, co przyczynia się do innowacji i rozwoju aplikacji inteligentnych."
+ },
+ "anthropic": {
+ "description": "Anthropic to firma skoncentrowana na badaniach i rozwoju sztucznej inteligencji, oferująca szereg zaawansowanych modeli językowych, takich jak Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus i Claude 3 Haiku. Modele te osiągają idealną równowagę między inteligencją, szybkością a kosztami, nadając się do różnych zastosowań, od obciążeń na poziomie przedsiębiorstw po szybkie odpowiedzi. Claude 3.5 Sonnet, jako najnowszy model, wyróżnia się w wielu ocenach, jednocześnie zachowując wysoką opłacalność."
+ },
+ "azure": {
+ "description": "Azure oferuje różnorodne zaawansowane modele AI, w tym GPT-3.5 i najnowszą serię GPT-4, wspierające różne typy danych i złożone zadania, koncentrując się na bezpiecznych, niezawodnych i zrównoważonych rozwiązaniach AI."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligent to firma skoncentrowana na badaniach nad dużymi modelami sztucznej inteligencji, której modele osiągają doskonałe wyniki w krajowych zadaniach związanych z encyklopedią wiedzy, przetwarzaniem długich tekstów i generowaniem treści w języku chińskim, przewyższając zagraniczne modele mainstreamowe. Baichuan Intelligent dysponuje również wiodącymi w branży zdolnościami multimodalnymi, osiągając doskonałe wyniki w wielu autorytatywnych ocenach. Jej modele obejmują Baichuan 4, Baichuan 3 Turbo i Baichuan 3 Turbo 128k, zoptymalizowane pod kątem różnych scenariuszy zastosowań, oferując opłacalne rozwiązania."
+ },
+ "bedrock": {
+ "description": "Bedrock to usługa oferowana przez Amazon AWS, skoncentrowana na dostarczaniu zaawansowanych modeli językowych i wizualnych dla przedsiębiorstw. Jej rodzina modeli obejmuje serię Claude od Anthropic, serię Llama 3.1 od Meta i inne, oferując różnorodne opcje od lekkich do wysokowydajnych, wspierając generowanie tekstu, dialogi, przetwarzanie obrazów i inne zadania, odpowiednie dla różnych skal i potrzeb aplikacji biznesowych."
+ },
+ "deepseek": {
+ "description": "DeepSeek to firma skoncentrowana na badaniach i zastosowaniach technologii sztucznej inteligencji, której najnowszy model DeepSeek-V2.5 łączy zdolności do prowadzenia ogólnych rozmów i przetwarzania kodu, osiągając znaczące postępy w zakresie dostosowywania do preferencji ludzkich, zadań pisarskich i przestrzegania instrukcji."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI to wiodący dostawca zaawansowanych modeli językowych, skoncentrowany na wywołaniach funkcji i przetwarzaniu multimodalnym. Jego najnowszy model Firefunction V2 oparty na Llama-3, zoptymalizowany do wywołań funkcji, dialogów i przestrzegania instrukcji. Model wizualny FireLLaVA-13B wspiera mieszane wejścia obrazów i tekstu. Inne znaczące modele to seria Llama i seria Mixtral, oferujące efektywne wsparcie dla wielojęzycznego przestrzegania instrukcji i generacji."
+ },
+ "github": {
+ "description": "Dzięki modelom GitHub, deweloperzy mogą stać się inżynierami AI i budować z wykorzystaniem wiodących modeli AI w branży."
+ },
+ "google": {
+ "description": "Seria Gemini od Google to najnowocześniejsze, uniwersalne modele AI stworzone przez Google DeepMind, zaprojektowane z myślą o multimodalności, wspierające bezproblemowe rozumienie i przetwarzanie tekstu, kodu, obrazów, dźwięku i wideo. Nadają się do różnych środowisk, od centrów danych po urządzenia mobilne, znacznie zwiększając wydajność i wszechstronność modeli AI."
+ },
+ "groq": {
+ "description": "Silnik inferencyjny LPU firmy Groq wyróżnia się w najnowszych niezależnych testach benchmarkowych dużych modeli językowych (LLM), redefiniując standardy rozwiązań AI dzięki niesamowitej szybkości i wydajności. Groq jest reprezentantem natychmiastowej szybkości inferencji, wykazując dobrą wydajność w wdrożeniach opartych na chmurze."
+ },
+ "minimax": {
+ "description": "MiniMax to firma technologiczna zajmująca się ogólną sztuczną inteligencją, założona w 2021 roku, dążąca do współtworzenia inteligencji z użytkownikami. MiniMax opracowało różne modele dużych modeli o różnych modalnościach, w tym model tekstowy MoE z bilionem parametrów, model głosowy oraz model obrazowy. Wprowadziło również aplikacje takie jak Conch AI."
+ },
+ "mistral": {
+ "description": "Mistral oferuje zaawansowane modele ogólne, specjalistyczne i badawcze, szeroko stosowane w złożonym rozumowaniu, zadaniach wielojęzycznych, generowaniu kodu i innych dziedzinach. Dzięki interfejsowi wywołań funkcji użytkownicy mogą integrować dostosowane funkcje, realizując konkretne zastosowania."
+ },
+ "moonshot": {
+ "description": "Moonshot to otwarta platforma stworzona przez Beijing Dark Side Technology Co., Ltd., oferująca różnorodne modele przetwarzania języka naturalnego, szeroko stosowane w takich dziedzinach jak tworzenie treści, badania akademickie, inteligentne rekomendacje, diagnoza medyczna i inne, wspierająca przetwarzanie długich tekstów i złożone zadania generacyjne."
+ },
+ "novita": {
+ "description": "Novita AI to platforma oferująca API do różnych dużych modeli językowych i generacji obrazów AI, elastyczna, niezawodna i opłacalna. Wspiera najnowsze modele open-source, takie jak Llama3, Mistral, i oferuje kompleksowe, przyjazne dla użytkownika oraz automatycznie skalowalne rozwiązania API dla rozwoju aplikacji generatywnej AI, odpowiednie dla szybkiego rozwoju startupów AI."
+ },
+ "ollama": {
+ "description": "Modele oferowane przez Ollama obejmują szeroki zakres zastosowań, w tym generowanie kodu, obliczenia matematyczne, przetwarzanie wielojęzyczne i interakcje konwersacyjne, wspierając różnorodne potrzeby wdrożeń na poziomie przedsiębiorstw i lokalnych."
+ },
+ "openai": {
+ "description": "OpenAI jest wiodącą na świecie instytucją badawczą w dziedzinie sztucznej inteligencji, której modele, takie jak seria GPT, przesuwają granice przetwarzania języka naturalnego. OpenAI dąży do zmiany wielu branż poprzez innowacyjne i efektywne rozwiązania AI. Ich produkty charakteryzują się znaczną wydajnością i opłacalnością, znajdując szerokie zastosowanie w badaniach, biznesie i innowacyjnych aplikacjach."
+ },
+ "openrouter": {
+ "description": "OpenRouter to platforma usługowa oferująca różnorodne interfejsy do nowoczesnych dużych modeli, wspierająca OpenAI, Anthropic, LLaMA i inne, odpowiednia dla zróżnicowanych potrzeb rozwojowych i aplikacyjnych. Użytkownicy mogą elastycznie wybierać optymalne modele i ceny zgodnie z własnymi potrzebami, co przyczynia się do poprawy doświadczeń związanych z AI."
+ },
+ "perplexity": {
+ "description": "Perplexity to wiodący dostawca modeli generacji dialogów, oferujący różnorodne zaawansowane modele Llama 3.1, wspierające aplikacje online i offline, szczególnie odpowiednie do złożonych zadań przetwarzania języka naturalnego."
+ },
+ "qwen": {
+ "description": "Tongyi Qianwen to samodzielnie opracowany przez Alibaba Cloud model językowy o dużej skali, charakteryzujący się silnymi zdolnościami rozumienia i generowania języka naturalnego. Może odpowiadać na różnorodne pytania, tworzyć treści pisemne, wyrażać opinie, pisać kod i działać w wielu dziedzinach."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow dąży do przyspieszenia AGI, aby przynieść korzyści ludzkości, poprawiając wydajność dużych modeli AI dzięki łatwemu w użyciu i niskokosztowemu stosowi GenAI."
+ },
+ "spark": {
+ "description": "Model Xinghuo od iFlytek oferuje potężne możliwości AI w wielu dziedzinach i językach, wykorzystując zaawansowaną technologię przetwarzania języka naturalnego do budowy innowacyjnych aplikacji odpowiednich dla inteligentnych urządzeń, inteligentnej medycyny, inteligentnych finansów i innych scenariuszy wertykalnych."
+ },
+ "stepfun": {
+ "description": "Model StepFun charakteryzuje się wiodącymi w branży zdolnościami multimodalnymi i złożonym rozumowaniem, wspierając zrozumienie bardzo długich tekstów oraz potężne funkcje samodzielnego wyszukiwania."
+ },
+ "taichu": {
+ "description": "Nowa generacja multimodalnych dużych modeli opracowana przez Instytut Automatyki Chińskiej Akademii Nauk i Wuhan Institute of Artificial Intelligence wspiera wielorundowe pytania i odpowiedzi, tworzenie tekstów, generowanie obrazów, zrozumienie 3D, analizę sygnałów i inne kompleksowe zadania pytaniowe, posiadając silniejsze zdolności poznawcze, rozumienia i tworzenia, oferując nową interaktywną doświadczenie."
+ },
+ "togetherai": {
+ "description": "Together AI dąży do osiągnięcia wiodącej wydajności poprzez innowacyjne modele AI, oferując szerokie możliwości dostosowywania, w tym wsparcie dla szybkiej ekspansji i intuicyjnych procesów wdrożeniowych, aby zaspokoić różnorodne potrzeby przedsiębiorstw."
+ },
+ "upstage": {
+ "description": "Upstage koncentruje się na opracowywaniu modeli AI dla różnych potrzeb biznesowych, w tym Solar LLM i dokumentów AI, mając na celu osiągnięcie sztucznej ogólnej inteligencji (AGI). Umożliwia tworzenie prostych agentów konwersacyjnych za pomocą Chat API oraz wspiera wywołania funkcji, tłumaczenia, osadzenia i zastosowania w określonych dziedzinach."
+ },
+ "zeroone": {
+ "description": "01.AI koncentruje się na technologiach sztucznej inteligencji w erze AI 2.0, intensywnie promując innowacje i zastosowania „człowiek + sztuczna inteligencja”, wykorzystując potężne modele i zaawansowane technologie AI w celu zwiększenia wydajności ludzkiej produkcji i realizacji technologicznego wsparcia."
+ },
+ "zhipu": {
+ "description": "Zhipu AI oferuje otwartą platformę modeli multimodalnych i językowych, wspierającą szeroki zakres zastosowań AI, w tym przetwarzanie tekstu, rozumienie obrazów i pomoc w programowaniu."
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/ragEval.json b/DigitalHumanWeb/locales/pl-PL/ragEval.json
new file mode 100644
index 0000000..f11d031
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Nowy",
+ "description": {
+ "placeholder": "Opis zestawu danych (opcjonalnie)"
+ },
+ "name": {
+ "placeholder": "Nazwa zestawu danych",
+ "required": "Proszę wpisać nazwę zestawu danych"
+ },
+ "title": "Dodaj zestaw danych"
+ },
+ "dataset": {
+ "addNewButton": "Utwórz zestaw danych",
+ "emptyGuide": "Aktualny zestaw danych jest pusty, proszę utworzyć nowy zestaw danych.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Importuj dane"
+ },
+ "columns": {
+ "actions": "Operacje",
+ "ideal": {
+ "title": "Oczekiwana odpowiedź"
+ },
+ "question": {
+ "title": "Pytanie"
+ },
+ "referenceFiles": {
+ "title": "Pliki referencyjne"
+ }
+ },
+ "notSelected": "Proszę wybrać zestaw danych po lewej stronie",
+ "title": "Szczegóły zestawu danych"
+ },
+ "title": "Zestaw danych"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Nowy",
+ "datasetId": {
+ "placeholder": "Wybierz swój zestaw danych do oceny",
+ "required": "Proszę wybrać zestaw danych do oceny"
+ },
+ "description": {
+ "placeholder": "Opis zadania oceny (opcjonalnie)"
+ },
+ "name": {
+ "placeholder": "Nazwa zadania oceny",
+ "required": "Proszę wpisać nazwę zadania oceny"
+ },
+ "title": "Dodaj zadanie oceny"
+ },
+ "addNewButton": "Utwórz ocenę",
+ "emptyGuide": "Aktualne zadania oceny są puste, rozpocznij tworzenie oceny.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Sprawdź status",
+ "confirmDelete": "Czy na pewno chcesz usunąć to zadanie oceny?",
+ "confirmRun": "Czy chcesz rozpocząć wykonanie? Po rozpoczęciu zadanie oceny będzie wykonywane asynchronicznie w tle, zamknięcie strony nie wpłynie na wykonanie asynchroniczne.",
+ "downloadRecords": "Pobierz oceny",
+ "retry": "Spróbuj ponownie",
+ "run": "Uruchom",
+ "title": "Operacje"
+ },
+ "datasetId": {
+ "title": "Zestaw danych"
+ },
+ "name": {
+ "title": "Nazwa zadania oceny"
+ },
+ "records": {
+ "title": "Liczba rekordów oceny"
+ },
+ "referenceFiles": {
+ "title": "Pliki referencyjne"
+ },
+ "status": {
+ "error": "Wystąpił błąd podczas wykonania",
+ "pending": "Oczekuje na wykonanie",
+ "processing": "W trakcie wykonywania",
+ "success": "Wykonanie zakończone sukcesem",
+ "title": "Status"
+ }
+ },
+ "title": "Lista zadań oceny"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/setting.json b/DigitalHumanWeb/locales/pl-PL/setting.json
new file mode 100644
index 0000000..4f8ec49
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "O nas"
+ },
+ "agentTab": {
+ "chat": "Preferencje czatu",
+ "meta": "Informacje o asystencie",
+ "modal": "Ustawienia modalne",
+ "plugin": "Ustawienia wtyczki",
+ "prompt": "Ustawienia roli",
+ "tts": "Usługi głosowe"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "Wybierając wysyłanie danych telemetrycznych, możesz pomóc nam poprawić ogólne doświadczenie użytkowników {{appName}}.",
+ "title": "Wysyłanie anonimowych danych użytkowych"
+ },
+ "title": "Analiza danych"
+ },
+ "danger": {
+ "clear": {
+ "action": "Wyczyść teraz",
+ "confirm": "Potwierdź wyczyszczenie wszystkich danych czatu?",
+ "desc": "Spowoduje to usunięcie wszystkich danych sesji, w tym asystenta, pliki, wiadomości, wtyczki itp.",
+ "success": "Wyczyszczono wszystkie wiadomości sesji",
+ "title": "Wyczyść wszystkie wiadomości sesji"
+ },
+ "reset": {
+ "action": "Zresetuj teraz",
+ "confirm": "Potwierdź zresetowanie wszystkich ustawień?",
+ "currentVersion": "Aktualna wersja",
+ "desc": "Zresetuj wszystkie ustawienia do wartości domyślnych",
+ "success": "Wszystkie ustawienia zostały zresetowane",
+ "title": "Zresetuj wszystkie ustawienia"
+ }
+ },
+ "header": {
+ "desc": "Preferencje i ustawienia modelu.",
+ "global": "Ustawienia globalne",
+ "session": "Ustawienia sesji",
+ "sessionDesc": "Ustawienia postaci i preferencje sesji.",
+ "sessionWithName": "Ustawienia sesji · {{name}}",
+ "title": "Ustawienia"
+ },
+ "llm": {
+ "aesGcm": "Twój klucz, adres proxy i inne będą szyfrowane za pomocą algorytmu szyfrowania <1>AES-GCM1>",
+ "apiKey": {
+ "desc": "Proszę wprowadź swój klucz API {{name}}",
+ "placeholder": "{{name}} klucz API",
+ "title": "Klucz API"
+ },
+ "checker": {
+ "button": "Sprawdź",
+ "desc": "Sprawdź poprawność wypełnienia klucza API i adresu proxy",
+ "pass": "Połączenie udane",
+ "title": "Test połączenia"
+ },
+ "customModelCards": {
+ "addNew": "Utwórz i dodaj model {{id}}",
+ "config": "Konfiguracja modelu",
+ "confirmDelete": "Czy na pewno chcesz usunąć ten niestandardowy model? Po usunięciu nie będzie możliwe przywrócenie. Proszę działać ostrożnie.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Pole faktycznego żądania w Azure OpenAI",
+ "placeholder": "Wprowadź nazwę wdrożenia modelu w Azure",
+ "title": "Nazwa wdrożenia modelu"
+ },
+ "displayName": {
+ "placeholder": "Wprowadź nazwę wyświetlaną modelu, np. ChatGPT, GPT-4 itp.",
+ "title": "Nazwa wyświetlana modelu"
+ },
+ "files": {
+ "extra": "Obecna implementacja przesyłania plików to jedynie rozwiązanie typu hack, przeznaczone wyłącznie do samodzielnego testowania. Pełna funkcjonalność przesyłania plików będzie dostępna w późniejszej wersji.",
+ "title": "Obsługa przesyłania plików"
+ },
+ "functionCall": {
+ "extra": "Ta konfiguracja włączy jedynie możliwość wywoływania funkcji w aplikacji, a to, czy wywołania funkcji będą wspierane, zależy całkowicie od samego modelu. Proszę samodzielnie przetestować dostępność wywołań funkcji w tym modelu.",
+ "title": "Obsługa wywołań funkcji"
+ },
+ "id": {
+ "extra": "Będzie wyświetlane jako etykieta modelu",
+ "placeholder": "Wprowadź identyfikator modelu, np. gpt-4-turbo-preview lub claude-2.1",
+ "title": "Identyfikator modelu"
+ },
+ "modalTitle": "Konfiguracja niestandardowego modelu",
+ "tokens": {
+ "title": "Maksymalna liczba tokenów",
+ "unlimited": "Nieograniczony"
+ },
+ "vision": {
+ "extra": "Ta konfiguracja włączy jedynie możliwość przesyłania obrazów w aplikacji, a to, czy rozpoznawanie będzie wspierane, zależy całkowicie od samego modelu. Proszę samodzielnie przetestować dostępność rozpoznawania wizualnego w tym modelu.",
+ "title": "Obsługa rozpoznawania wizyjnego"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "使用客户端请求模式,浏览器将直接发起会话请求,以提升响应速度",
+ "title": "使用客户端请求模式"
+ },
+ "fetcher": {
+ "fetch": "Pobierz listę modeli",
+ "fetching": "Trwa pobieranie listy modeli...",
+ "latestTime": "Ostatnia aktualizacja: {{time}}",
+ "noLatestTime": "Brak dostępnej listy"
+ },
+ "helpDoc": "Poradnik konfiguracji",
+ "modelList": {
+ "desc": "Wybierz modele do wyświetlenia w sesji. Wybrane modele będą widoczne na liście modeli",
+ "placeholder": "Wybierz model z listy",
+ "title": "Lista modeli",
+ "total": "Razem dostępne są {{count}} modele"
+ },
+ "proxyUrl": {
+ "desc": "Oprócz domyślnego adresu, musi zawierać http(s)://",
+ "title": "Adres proxy API"
+ },
+ "waitingForMore": "Więcej modeli jest obecnie w <1>planach dołączenia1>, prosimy o cierpliwość"
+ },
+ "plugin": {
+ "addTooltip": "Dodaj niestandardowy dodatek",
+ "clearDeprecated": "Usuń przestarzałe dodatki",
+ "empty": "Brak zainstalowanych dodatków, zapraszamy do odwiedzenia <1>sklepu z dodatkami1>",
+ "installStatus": {
+ "deprecated": "Odinstalowany"
+ },
+ "settings": {
+ "hint": "Proszę wypełnić poniższe ustawienia zgodnie z opisem",
+ "title": "Konfiguracja dodatku {{id}}",
+ "tooltip": "Konfiguracja dodatku"
+ },
+ "store": "Sklep z dodatkami"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Awatar"
+ },
+ "backgroundColor": {
+ "title": "Kolor tła"
+ },
+ "description": {
+ "placeholder": "Proszę wprowadzić opis asystenta",
+ "title": "Opis asystenta"
+ },
+ "name": {
+ "placeholder": "Proszę wprowadzić nazwę asystenta",
+ "title": "Nazwa"
+ },
+ "prompt": {
+ "placeholder": "Proszę wprowadzić słowo kluczowe dla roli Prompt",
+ "title": "Ustawienia roli"
+ },
+ "tag": {
+ "placeholder": "Proszę wprowadzić tag",
+ "title": "Tag"
+ },
+ "title": "Informacje o asystencie"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Automatyczne tworzenie tematu po przekroczeniu określonej liczby wiadomości",
+ "title": "Próg automatycznego tworzenia tematu"
+ },
+ "chatStyleType": {
+ "title": "Styl okna czatu",
+ "type": {
+ "chat": "Tryb rozmowy",
+ "docs": "Tryb dokumentów"
+ }
+ },
+ "compressThreshold": {
+ "desc": "Kompresja historii wiadomości, gdy przekroczy określoną wartość",
+ "title": "Próg kompresji historii"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Automatyczne tworzenie tematu podczas rozmowy, działa tylko w przypadku tymczasowych tematów",
+ "title": "Automatyczne tworzenie tematu"
+ },
+ "enableCompressThreshold": {
+ "title": "Włącz próg kompresji historii"
+ },
+ "enableHistoryCount": {
+ "alias": "Bez limitu",
+ "limited": "Zawiera tylko {{number}} wiadomości",
+ "setlimited": "Ustaw limit wiadomości historycznych",
+ "title": "Ograniczenie liczby wiadomości w historii",
+ "unlimited": "Bez limitu wiadomości w historii"
+ },
+ "historyCount": {
+ "desc": "Liczba wiadomości przesyłanych w jednym żądaniu (obejmuje najnowsze pytania i odpowiedzi, gdzie każde pytanie i odpowiedź liczy się jako 1)",
+ "title": "Liczba wiadomości"
+ },
+ "inputTemplate": {
+ "desc": "Ostatnia wiadomość użytkownika zostanie wypełniona w tym szablonie",
+ "placeholder": "Szablon wejściowy {{text}} zostanie zastąpiony rzeczywistą wiadomością",
+ "title": "Szablon wejściowy"
+ },
+ "title": "Ustawienia czatu"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Włącz limit jednorazowej odpowiedzi"
+ },
+ "frequencyPenalty": {
+ "desc": "Im większa wartość, tym większe prawdopodobieństwo zmniejszenia powtarzających się słów",
+ "title": "Kara za częstość"
+ },
+ "maxTokens": {
+ "desc": "Maksymalna liczba tokenów używanych w pojedynczej interakcji",
+ "title": "Limit jednorazowej odpowiedzi"
+ },
+ "model": {
+ "desc": "{{provider}} model",
+ "title": "Model"
+ },
+ "presencePenalty": {
+ "desc": "Im większa wartość, tym większe prawdopodobieństwo rozszerzenia się na nowe tematy",
+ "title": "Świeżość tematu"
+ },
+ "temperature": {
+ "desc": "Im większa wartość, tym odpowiedzi są bardziej losowe",
+ "title": "Losowość",
+ "titleWithValue": "Losowość {{value}}"
+ },
+ "title": "Ustawienia modelu",
+ "topP": {
+ "desc": "Podobne do losowości, ale nie należy zmieniać razem z losowością",
+ "title": "Najlepsze P"
+ }
+ },
+ "settingPlugin": {
+ "title": "Lista wtyczek"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "Administrator włączył szyfrowany dostęp",
+ "placeholder": "Wprowadź hasło dostępu",
+ "title": "Hasło dostępu"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Zalogowano",
+ "title": "Informacje o koncie"
+ },
+ "signin": {
+ "action": "Zaloguj się",
+ "desc": "Zaloguj się za pomocą SSO, aby odblokować aplikację",
+ "title": "Zaloguj się na konto"
+ },
+ "signout": {
+ "action": "Wyloguj się",
+ "confirm": "Czy na pewno chcesz się wylogować?",
+ "success": "Wylogowanie zakończone pomyślnie"
+ }
+ },
+ "title": "Ustawienia systemowe"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "Model rozpoznawania mowy OpenAI",
+ "title": "OpenAI",
+ "ttsModel": "Model syntezy mowy OpenAI"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Jeśli wyłączone, wyświetlane są tylko głosy w bieżącym języku",
+ "title": "Pokaż wszystkie głosy lokalne"
+ },
+ "stt": "Ustawienia rozpoznawania mowy",
+ "sttAutoStop": {
+ "desc": "Po wyłączeniu rozpoznawanie mowy nie zakończy się automatycznie, trzeba ręcznie kliknąć przycisk zakończenia",
+ "title": "Automatyczne zatrzymywanie rozpoznawania mowy"
+ },
+ "sttLocale": {
+ "desc": "Język wejścia mowy, opcja ta może poprawić dokładność rozpoznawania mowy",
+ "title": "Język rozpoznawania mowy"
+ },
+ "sttService": {
+ "desc": "Dla przeglądarki używana jest wbudowana usługa rozpoznawania mowy",
+ "title": "Usługa rozpoznawania mowy"
+ },
+ "title": "Usługi mowy",
+ "tts": "Ustawienia syntezy mowy",
+ "ttsService": {
+ "desc": "Jeśli korzystasz z usługi syntezy mowy OpenAI, upewnij się, że usługa modeli OpenAI jest włączona",
+ "title": "Usługa syntezy mowy"
+ },
+ "voice": {
+ "desc": "Wybierz głos dla bieżącego asystenta, różne usługi TTS obsługują różne głosy",
+ "preview": "Podgląd głosu",
+ "title": "Głos syntezy mowy"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Awatar"
+ },
+ "fontSize": {
+ "desc": "Rozmiar czcionki wiadomości",
+ "marks": {
+ "normal": "Standardowy"
+ },
+ "title": "Rozmiar czcionki"
+ },
+ "lang": {
+ "autoMode": "Automatycznie",
+ "title": "Język"
+ },
+ "neutralColor": {
+ "desc": "Dostosowanie odcieni szarości z różnymi kolorami",
+ "title": "Kolor neutralny"
+ },
+ "primaryColor": {
+ "desc": "Dostosowywanie koloru motywu",
+ "title": "Kolor motywu"
+ },
+ "themeMode": {
+ "auto": "Automatyczny",
+ "dark": "Ciemny",
+ "light": "Jasny",
+ "title": "Motyw"
+ },
+ "title": "Ustawienia motywu"
+ },
+ "submitAgentModal": {
+ "button": "Prześlij asystenta",
+ "identifier": "Identyfikator asystenta",
+ "metaMiss": "Proszę uzupełnić informacje o asystencie przed przesłaniem, należy podać nazwę, opis i tagi",
+ "placeholder": "Wprowadź identyfikator asystenta, musi być unikalny, na przykład web-development",
+ "tooltips": "Udostępnij na rynku asystentów"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Dodaj nazwę, aby ułatwić identyfikację",
+ "placeholder": "Wprowadź nazwę urządzenia",
+ "title": "Nazwa urządzenia"
+ },
+ "title": "Informacje o urządzeniu",
+ "unknownBrowser": "Nieznana przeglądarka",
+ "unknownOS": "Nieznany system"
+ },
+ "warning": {
+ "tip": "Po dłuższym okresie testów społeczności synchronizacja WebRTC może nie spełniać niezawodnie ogólnych potrzeb synchronizacji danych. Przed użyciem <1>wdróż serwer sygnalizacyjny</1>."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC użyje tej nazwy do utworzenia kanału synchronizacji, upewnij się, że nazwa kanału jest unikalna",
+ "placeholder": "Wprowadź nazwę kanału synchronizacji",
+ "shuffle": "Wygeneruj losowo",
+ "title": "Nazwa kanału synchronizacji"
+ },
+ "channelPassword": {
+ "desc": "Dodaj hasło, aby zapewnić prywatność kanału. Tylko urządzenia z poprawnym hasłem mogą dołączyć do kanału",
+ "placeholder": "Wprowadź hasło kanału synchronizacji",
+ "title": "Hasło kanału synchronizacji"
+ },
+ "desc": "Bezpośrednia komunikacja danych peer-to-peer w czasie rzeczywistym; do synchronizacji urządzenia muszą być jednocześnie online",
+ "enabled": {
+ "invalid": "Przed włączeniem uzupełnij adres serwera sygnalizacyjnego oraz nazwę kanału synchronizacji.",
+ "title": "Włącz synchronizację"
+ },
+ "signaling": {
+ "desc": "WebRTC użyje tego adresu do synchronizacji",
+ "placeholder": "Wprowadź adres serwera sygnalizacyjnego",
+ "title": "Serwer sygnalizacyjny"
+ },
+ "title": "Synchronizacja WebRTC"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Model generowania metadanych asystenta",
+ "modelDesc": "Określa model używany do generowania nazwy, opisu, awatara i etykiety asystenta",
+ "title": "Automatyczne generowanie informacji o asystencie"
+ },
+ "queryRewrite": {
+ "label": "Model przekształcania zapytań",
+ "modelDesc": "Model używany do optymalizacji zapytań użytkowników",
+ "title": "Baza wiedzy"
+ },
+ "title": "Asystent Systemowy",
+ "topic": {
+ "label": "Model nazewnictwa tematów",
+ "modelDesc": "Określa model używany do automatycznego zmieniania nazw tematów",
+ "title": "Automatyczne nadawanie nazw tematom"
+ },
+ "translation": {
+ "label": "Model Tłumaczenia",
+ "modelDesc": "Określ model używany do tłumaczenia",
+ "title": "Ustawienia Asystenta Tłumaczenia"
+ }
+ },
+ "tab": {
+ "about": "O nas",
+ "agent": "Domyślny asystent",
+ "common": "Ustawienia ogólne",
+ "experiment": "Eksperyment",
+ "llm": "Model językowy",
+ "sync": "Synchronizacja w chmurze",
+ "system-agent": "Asystent systemowy",
+ "tts": "Usługa głosowa"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Wbudowane"
+ },
+ "disabled": "Aktualny model nie obsługuje wywołań funkcji i nie można użyć wtyczki",
+ "plugins": {
+ "enabled": "Włączone {{num}}",
+ "groupName": "Wtyczki",
+ "noEnabled": "Brak włączonych wtyczek",
+ "store": "Sklep z wtyczkami"
+ },
+ "title": "Narzędzia rozszerzeń"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/tool.json b/DigitalHumanWeb/locales/pl-PL/tool.json
new file mode 100644
index 0000000..104d25e
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Automatyczne generowanie",
+ "downloading": "Linki do obrazów wygenerowanych przez DallE3 są ważne tylko przez 1 godzinę. Trwa pobieranie obrazów do lokalnego bufora...",
+ "generate": "Generuj",
+ "generating": "Generowanie...",
+ "images": "Obrazy:",
+ "prompt": "Słowo kluczowe"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pl-PL/welcome.json b/DigitalHumanWeb/locales/pl-PL/welcome.json
new file mode 100644
index 0000000..5166d58
--- /dev/null
+++ b/DigitalHumanWeb/locales/pl-PL/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Importuj konfigurację",
+ "market": "Przeglądaj rynek",
+ "start": "Rozpocznij teraz"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Zmień",
+ "title": "Nowe zalecenia asystenta:"
+ },
+ "defaultMessage": "Jestem Twoim osobistym inteligentnym asystentem {{appName}}. W czym mogę Ci teraz pomóc?\nJeśli potrzebujesz bardziej profesjonalnego lub dostosowanego asystenta, kliknij `+`, aby utworzyć niestandardowego asystenta.",
+ "defaultMessageWithoutCreate": "Jestem Twoim osobistym inteligentnym asystentem {{appName}}. W czym mogę Ci teraz pomóc?",
+ "qa": {
+ "q01": "Czym jest LobeHub?",
+ "q02": "Czym jest {{appName}}?",
+ "q03": "Czy {{appName}} ma wsparcie społeczności?",
+ "q04": "Jakie funkcje wspiera {{appName}}?",
+ "q05": "Jak wdrożyć i używać {{appName}}?",
+ "q06": "Jakie są ceny {{appName}}?",
+ "q07": "Czy {{appName}} jest darmowy?",
+ "q08": "Czy dostępna jest wersja w chmurze?",
+ "q09": "Czy wspiera lokalne modele językowe?",
+ "q10": "Czy wspiera rozpoznawanie i generowanie obrazów?",
+ "q11": "Czy wspiera syntezę mowy i rozpoznawanie mowy?",
+ "q12": "Czy wspiera system wtyczek?",
+ "q13": "Czy ma własny rynek do pozyskiwania GPT-ów?",
+ "q14": "Czy wspiera wielu dostawców usług AI?",
+ "q15": "Co powinienem zrobić, jeśli napotkam problemy podczas korzystania?"
+ },
+ "questions": {
+ "moreBtn": "Dowiedz się więcej",
+ "title": "Najczęściej zadawane pytania:"
+ },
+ "welcome": {
+ "afternoon": "Dzień dobry",
+ "morning": "Dzień dobry",
+ "night": "Dobry wieczór",
+ "noon": "Dzień dobry"
+ }
+ },
+ "header": "Witaj",
+ "pickAgent": "Wybierz szablon asystenta lub kontynuuj",
+ "skip": "Pomiń tworzenie",
+ "slogan": {
+ "desc1": "Ożyw swoje myślenie poprzez uruchomienie klastra mózgu. Twój inteligentny asystent zawsze jest obecny.",
+ "desc2": "Stwórz swojego pierwszego asystenta. Zaczynamy!",
+ "title": "Daj sobie mądrzejszy mózg"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/auth.json b/DigitalHumanWeb/locales/pt-BR/auth.json
new file mode 100644
index 0000000..ef3b9ce
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Entrar",
+ "loginOrSignup": "Entrar / Registrar",
+ "profile": "Perfil",
+ "security": "Segurança",
+ "signout": "Sair",
+ "signup": "Cadastre-se"
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/chat.json b/DigitalHumanWeb/locales/pt-BR/chat.json
new file mode 100644
index 0000000..6258b73
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Modelo"
+ },
+ "agentDefaultMessage": "Olá, eu sou **{{name}}**, você pode começar a conversar comigo agora ou ir para [Configurações do Assistente]({{url}}) para completar minhas informações.",
+ "agentDefaultMessageWithSystemRole": "Olá, eu sou **{{name}}**, {{systemRole}}, vamos conversar!",
+ "agentDefaultMessageWithoutEdit": "Olá, sou o **{{name}}**, vamos começar a conversa!",
+ "agents": "Assistente",
+ "artifact": {
+ "generating": "Gerando",
+ "thinking": "Pensando",
+ "thought": "Processo de pensamento",
+ "unknownTitle": "Obra sem título"
+ },
+ "backToBottom": "Voltar para o final",
+ "chatList": {
+ "longMessageDetail": "Ver detalhes"
+ },
+ "clearCurrentMessages": "Limpar mensagens atuais",
+ "confirmClearCurrentMessages": "Você está prestes a limpar as mensagens desta sessão. Depois de limpar, não será possível recuperá-las. Por favor, confirme sua ação.",
+ "confirmRemoveSessionItemAlert": "Você está prestes a remover este assistente. Depois de remover, não será possível recuperá-lo. Por favor, confirme sua ação.",
+ "confirmRemoveSessionSuccess": "Sessão removida com sucesso",
+ "defaultAgent": "Assistente Padrão",
+ "defaultList": "Lista padrão",
+ "defaultSession": "Sessão Padrão",
+ "duplicateSession": {
+ "loading": "Copiando...",
+ "success": "Cópia bem-sucedida",
+ "title": "{{title}} Cópia"
+ },
+ "duplicateTitle": "{{title}} Cópia",
+ "emptyAgent": "Sem assistente disponível",
+ "historyRange": "Intervalo de Histórico",
+ "inbox": {
+ "desc": "Ative o cluster cerebral, inspire faíscas de pensamento. Seu assistente inteligente, aqui para conversar sobre tudo.",
+ "title": "Conversa Aleatória"
+ },
+ "input": {
+ "addAi": "Adicionar uma mensagem de IA",
+ "addUser": "Adicionar uma mensagem de usuário",
+ "more": "mais",
+ "send": "Enviar",
+ "sendWithCmdEnter": "Pressione {{meta}} + Enter para enviar",
+ "sendWithEnter": "Pressione Enter para enviar",
+ "stop": "Parar",
+ "warp": "Quebrar linha"
+ },
+ "knowledgeBase": {
+ "all": "Todo conteúdo",
+ "allFiles": "Todos os arquivos",
+ "allKnowledgeBases": "Todos os bancos de conhecimento",
+ "disabled": "O modo de implantação atual não suporta diálogos com a base de conhecimento. Para utilizá-lo, por favor, mude para a implantação do banco de dados no servidor ou utilize o serviço {{cloud}}.",
+ "library": {
+ "action": {
+ "add": "Adicionar",
+ "detail": "Detalhes",
+ "remove": "Remover"
+ },
+ "title": "Arquivo/Banco de Conhecimento"
+ },
+ "relativeFilesOrKnowledgeBases": "Arquivos/Bancos de Conhecimento relacionados",
+ "title": "Banco de Conhecimento",
+ "uploadGuide": "Os arquivos enviados podem ser visualizados em 'Banco de Conhecimento'.",
+ "viewMore": "Ver mais"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Excluir e Regenerar",
+ "regenerate": "Regenerar"
+ },
+ "newAgent": "Novo Assistente",
+ "pin": "Fixar",
+ "pinOff": "Desafixar",
+ "rag": {
+ "referenceChunks": "Referências",
+ "userQuery": {
+ "actions": {
+ "delete": "Excluir reescrita de Query",
+ "regenerate": "Regenerar Query"
+ }
+ }
+ },
+ "regenerate": "Regenerar",
+ "roleAndArchive": "Função e Arquivo",
+ "searchAgentPlaceholder": "Assistente de busca...",
+ "sendPlaceholder": "Digite a mensagem...",
+ "sessionGroup": {
+ "config": "Gerenciar grupos",
+ "confirmRemoveGroupAlert": "Você está prestes a excluir este grupo. Após a exclusão, os assistentes deste grupo serão movidos para a lista padrão. Por favor, confirme sua operação.",
+ "createAgentSuccess": "Assistente criado com sucesso",
+ "createGroup": "Criar novo grupo",
+ "createSuccess": "Criado com sucesso",
+ "creatingAgent": "Criando assistente...",
+ "inputPlaceholder": "Digite o nome do grupo...",
+ "moveGroup": "Mover para o grupo",
+ "newGroup": "Novo grupo",
+ "rename": "Renomear grupo",
+ "renameSuccess": "Renomeado com sucesso",
+ "sortSuccess": "Reordenação bem-sucedida",
+ "sorting": "Atualizando ordenação do grupo...",
+ "tooLong": "O nome do grupo deve ter entre 1 e 20 caracteres"
+ },
+ "shareModal": {
+ "download": "Baixar Captura de Tela",
+ "imageType": "Tipo de Imagem",
+ "screenshot": "Captura de Tela",
+ "settings": "Configurações de Exportação",
+ "shareToShareGPT": "Gerar Link de Compartilhamento ShareGPT",
+ "withBackground": "Com Imagem de Fundo",
+ "withFooter": "Com Rodapé",
+ "withPluginInfo": "Com Informações do Plugin",
+ "withSystemRole": "Com Função do Assistente"
+ },
+ "stt": {
+ "action": "Entrada de Voz",
+ "loading": "Reconhecendo...",
+ "prettifying": "Embelezando..."
+ },
+ "temp": "Temporário",
+ "tokenDetails": {
+ "chats": "Mensagens de bate-papo",
+ "rest": "Restante disponível",
+ "systemRole": "Configuração de papel do sistema",
+ "title": "Detalhes do Token",
+ "tools": "Configuração de plug-ins",
+ "total": "Total disponível",
+ "used": "Total utilizado"
+ },
+ "tokenTag": {
+ "overload": "Limite Excedido",
+ "remained": "Restante",
+ "used": "Usado"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Renomeação Automática",
+ "duplicate": "Criar Cópia",
+ "export": "Exportar Tópico"
+ },
+ "checkOpenNewTopic": "Deseja abrir um novo tópico?",
+ "checkSaveCurrentMessages": "Salvar a conversa atual como tópico?",
+ "confirmRemoveAll": "Você está prestes a remover todos os tópicos. Depois de remover, não será possível recuperá-los. Por favor, confirme sua ação.",
+ "confirmRemoveTopic": "Você está prestes a remover este tópico. Depois de remover, não será possível recuperá-lo. Por favor, confirme sua ação.",
+ "confirmRemoveUnstarred": "Você está prestes a remover os tópicos não favoritados. Depois de remover, não será possível recuperá-los. Por favor, confirme sua ação.",
+ "defaultTitle": "Tópico Padrão",
+ "duplicateLoading": "Tópico sendo duplicado...",
+ "duplicateSuccess": "Tópico duplicado com sucesso",
+ "guide": {
+ "desc": "Clique no botão à esquerda do botão de enviar para salvar a conversa atual como um tópico histórico e iniciar uma nova rodada de conversa",
+ "title": "Lista de Tópicos"
+ },
+ "openNewTopic": "Abrir Novo Tópico",
+ "removeAll": "Remover Todos os Tópicos",
+ "removeUnstarred": "Remover Tópicos Não Favoritados",
+ "saveCurrentMessages": "Salvar Mensagens Atuais como Tópico",
+ "searchPlaceholder": "Pesquisar tópicos...",
+ "title": "Lista de Tópicos"
+ },
+ "translate": {
+ "action": "Traduzir",
+ "clear": "Limpar Tradução"
+ },
+ "tts": {
+ "action": "Leitura de Voz",
+ "clear": "Limpar Leitura"
+ },
+ "updateAgent": "Atualizar Informações do Assistente",
+ "upload": {
+ "action": {
+ "fileUpload": "Enviar arquivo",
+ "folderUpload": "Enviar pasta",
+ "imageDisabled": "O modelo atual não suporta reconhecimento visual, por favor, mude de modelo antes de usar",
+ "imageUpload": "Enviar imagem",
+ "tooltip": "Enviar"
+ },
+ "clientMode": {
+ "actionFiletip": "Enviar arquivo",
+ "actionTooltip": "Enviar",
+ "disabled": "O modelo atual não suporta reconhecimento visual e análise de arquivos, por favor, mude de modelo antes de usar"
+ },
+ "preview": {
+ "prepareTasks": "Preparando partes...",
+ "status": {
+ "pending": "Preparando para upload...",
+ "processing": "Processando arquivo..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/clerk.json b/DigitalHumanWeb/locales/pt-BR/clerk.json
new file mode 100644
index 0000000..6955b90
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Voltar",
+ "badge__default": "Padrão",
+ "badge__otherImpersonatorDevice": "Outro dispositivo de impostor",
+ "badge__primary": "Primário",
+ "badge__requiresAction": "Requer ação",
+ "badge__thisDevice": "Este dispositivo",
+ "badge__unverified": "Não verificado",
+ "badge__userDevice": "Dispositivo do usuário",
+ "badge__you": "Você",
+ "createOrganization": {
+ "formButtonSubmit": "Criar organização",
+ "invitePage": {
+ "formButtonReset": "Pular"
+ },
+ "title": "Criar organização"
+ },
+ "dates": {
+ "lastDay": "Ontem às {{ date | timeString('pt-BR') }}",
+ "next6Days": "{{ date | weekday('pt-BR','long') }} às {{ date | timeString('pt-BR') }}",
+ "nextDay": "Amanhã às {{ date | timeString('pt-BR') }}",
+ "numeric": "{{ date | numeric('pt-BR') }}",
+ "previous6Days": "Último(a) {{ date | weekday('pt-BR','long') }} às {{ date | timeString('pt-BR') }}",
+ "sameDay": "Hoje às {{ date | timeString('pt-BR') }}"
+ },
+ "dividerText": "ou",
+ "footerActionLink__useAnotherMethod": "Usar outro método",
+ "footerPageLink__help": "Ajuda",
+ "footerPageLink__privacy": "Privacidade",
+ "footerPageLink__terms": "Termos",
+ "formButtonPrimary": "Continuar",
+ "formButtonPrimary__verify": "Verificar",
+ "formFieldAction__forgotPassword": "Esqueceu a senha?",
+ "formFieldError__matchingPasswords": "Senhas correspondem.",
+ "formFieldError__notMatchingPasswords": "Senhas não correspondem.",
+ "formFieldError__verificationLinkExpired": "O link de verificação expirou. Por favor, solicite um novo link.",
+ "formFieldHintText__optional": "Opcional",
+ "formFieldHintText__slug": "Um slug é um ID legível por humanos que deve ser único. Muitas vezes é usado em URLs.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Excluir conta",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "exemplo@email.com, exemplo2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "minha-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Ativar convites automáticos para este domínio",
+ "formFieldLabel__backupCode": "Código de backup",
+ "formFieldLabel__confirmDeletion": "Confirmação",
+ "formFieldLabel__confirmPassword": "Confirmar senha",
+ "formFieldLabel__currentPassword": "Senha atual",
+ "formFieldLabel__emailAddress": "Endereço de e-mail",
+ "formFieldLabel__emailAddress_username": "Endereço de e-mail ou nome de usuário",
+ "formFieldLabel__emailAddresses": "Endereços de e-mail",
+ "formFieldLabel__firstName": "Primeiro nome",
+ "formFieldLabel__lastName": "Sobrenome",
+ "formFieldLabel__newPassword": "Nova senha",
+ "formFieldLabel__organizationDomain": "Domínio",
+ "formFieldLabel__organizationDomainDeletePending": "Excluir convites e sugestões pendentes",
+ "formFieldLabel__organizationDomainEmailAddress": "Endereço de e-mail de verificação",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Insira um endereço de e-mail sob este domínio para receber um código e verificar este domínio.",
+ "formFieldLabel__organizationName": "Nome",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Nome do passkey",
+ "formFieldLabel__password": "Senha",
+ "formFieldLabel__phoneNumber": "Número de telefone",
+ "formFieldLabel__role": "Função",
+ "formFieldLabel__signOutOfOtherSessions": "Sair de todas as outras sessões",
+ "formFieldLabel__username": "Nome de usuário",
+ "impersonationFab": {
+ "action__signOut": "Sair",
+ "title": "Logado como {{identifier}}"
+ },
+ "locale": "pt-BR",
+ "maintenanceMode": "Estamos passando por manutenção no momento, mas não se preocupe, não deve levar mais do que alguns minutos.",
+ "membershipRole__admin": "Administrador",
+ "membershipRole__basicMember": "Membro",
+ "membershipRole__guestMember": "Visitante",
+ "organizationList": {
+ "action__createOrganization": "Criar organização",
+ "action__invitationAccept": "Participar",
+ "action__suggestionsAccept": "Solicitar participação",
+ "createOrganization": "Criar Organização",
+ "invitationAcceptedLabel": "Participou",
+ "subtitle": "para continuar no(a) {{applicationName}}",
+ "suggestionsAcceptedLabel": "Aprovação pendente",
+ "title": "Escolha uma conta",
+ "titleWithoutPersonal": "Escolha uma organização"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Convites automáticos",
+ "badge__automaticSuggestion": "Sugestões automáticas",
+ "badge__manualInvitation": "Sem inscrição automática",
+ "badge__unverified": "Não verificado",
+ "createDomainPage": {
+ "subtitle": "Adicione o domínio para verificar. Usuários com endereços de e-mail neste domínio podem se juntar à organização automaticamente ou solicitar participação.",
+ "title": "Adicionar domínio"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "Os convites não puderam ser enviados. Já existem convites pendentes para os seguintes endereços de e-mail: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Enviar convites",
+ "selectDropdown__role": "Selecione a função",
+ "subtitle": "Digite ou cole um ou mais endereços de e-mail, separados por espaços ou vírgulas.",
+ "successMessage": "Convites enviados com sucesso",
+ "title": "Convidar novos membros"
+ },
+ "membersPage": {
+ "action__invite": "Convidar",
+ "activeMembersTab": {
+ "menuAction__remove": "Remover membro",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Participou",
+ "tableHeader__role": "Função",
+ "tableHeader__user": "Usuário"
+ },
+ "detailsTitle__emptyRow": "Nenhum membro para exibir",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Convide usuários conectando um domínio de e-mail à sua organização. Qualquer pessoa que se inscrever com um domínio de e-mail correspondente poderá se juntar à organização a qualquer momento.",
+ "headerTitle": "Convites automáticos",
+ "primaryButton": "Gerenciar domínios verificados"
+ },
+ "table__emptyRow": "Nenhum convite para exibir"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Revogar convite",
+ "tableHeader__invited": "Convidado"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Usuários que se inscreverem com um domínio de e-mail correspondente poderão ver uma sugestão para solicitar participação em sua organização.",
+ "headerTitle": "Sugestões automáticas",
+ "primaryButton": "Gerenciar domínios verificados"
+ },
+ "menuAction__approve": "Aprovar",
+ "menuAction__reject": "Rejeitar",
+ "tableHeader__requested": "Acesso solicitado",
+ "table__emptyRow": "Nenhuma solicitação para exibir"
+ },
+ "start": {
+ "headerTitle__invitations": "Convites",
+ "headerTitle__members": "Membros",
+ "headerTitle__requests": "Solicitações"
+ }
+ },
+ "navbar": {
+ "description": "Gerencie sua organização",
+ "general": "Geral",
+ "members": "Membros",
+ "title": "Organização"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Digite \"{{organizationName}}\" abaixo para continuar.",
+ "messageLine1": "Tem certeza de que deseja excluir esta organização?",
+ "messageLine2": "Esta ação é permanente e irreversível.",
+ "successMessage": "Você excluiu a organização.",
+ "title": "Excluir organização"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Digite \"{{organizationName}}\" abaixo para continuar.",
+ "messageLine1": "Tem certeza de que deseja sair desta organização? Você perderá o acesso a esta organização e seus aplicativos.",
+ "messageLine2": "Esta ação é permanente e irreversível.",
+ "successMessage": "Você saiu da organização.",
+ "title": "Sair da organização"
+ },
+ "title": "Perigo"
+ },
+ "domainSection": {
+ "menuAction__manage": "Gerenciar",
+ "menuAction__remove": "Excluir",
+ "menuAction__verify": "Verificar",
+ "primaryButton": "Adicionar domínio",
+ "subtitle": "Permita que os usuários se juntem à organização automaticamente ou solicitem participação com base em um domínio de e-mail verificado.",
+ "title": "Domínios verificados"
+ },
+ "successMessage": "A organização foi atualizada.",
+ "title": "Atualizar perfil"
+ },
+ "removeDomainPage": {
+ "messageLine1": "O domínio de e-mail {{domain}} será removido.",
+ "messageLine2": "Os usuários não poderão se juntar automaticamente à organização após isso.",
+ "successMessage": "{{domain}} foi removido.",
+ "title": "Remover domínio"
+ },
+ "start": {
+ "headerTitle__general": "Geral",
+ "headerTitle__members": "Membros",
+ "profileSection": {
+ "primaryButton": "Atualizar perfil",
+ "title": "Perfil da Organização",
+ "uploadAction__title": "Logotipo"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "Remover este domínio afetará os usuários convidados.",
+ "removeDomainActionLabel__remove": "Remover domínio",
+ "removeDomainSubtitle": "Remova este domínio de seus domínios verificados",
+ "removeDomainTitle": "Remover domínio"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Os usuários são convidados automaticamente para se juntar à organização quando se inscrevem e podem se juntar a qualquer momento.",
+ "automaticInvitationOption__label": "Convites automáticos",
+ "automaticSuggestionOption__description": "Os usuários recebem uma sugestão para solicitar participação, mas devem ser aprovados por um administrador antes de poderem se juntar à organização.",
+ "automaticSuggestionOption__label": "Sugestões automáticas",
+ "calloutInfoLabel": "Alterar o modo de inscrição afetará apenas novos usuários.",
+ "calloutInvitationCountLabel": "Convites pendentes enviados aos usuários: {{count}}",
+ "calloutSuggestionCountLabel": "Sugestões pendentes enviadas aos usuários: {{count}}",
+ "manualInvitationOption__description": "Os usuários só podem ser convidados manualmente para a organização.",
+ "manualInvitationOption__label": "Sem inscrição automática",
+ "subtitle": "Escolha como os usuários deste domínio podem se juntar à organização."
+ },
+ "start": {
+ "headerTitle__danger": "Perigo",
+ "headerTitle__enrollment": "Opções de inscrição"
+ },
+ "subtitle": "O domínio {{domain}} agora está verificado. Continue selecionando o modo de inscrição.",
+ "title": "Atualizar {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Digite o código de verificação enviado para o seu endereço de e-mail",
+ "formTitle": "Código de verificação",
+ "resendButton": "Não recebeu o código? Reenviar",
+ "subtitle": "O domínio {{domainName}} precisa ser verificado por e-mail.",
+ "subtitleVerificationCodeScreen": "Um código de verificação foi enviado para {{emailAddress}}. Digite o código para continuar.",
+ "title": "Verificar domínio"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Criar organização",
+ "action__invitationAccept": "Participar",
+ "action__manageOrganization": "Gerenciar",
+ "action__suggestionsAccept": "Solicitar participação",
+ "notSelected": "Nenhuma organização selecionada",
+ "personalWorkspace": "Conta pessoal",
+ "suggestionsAcceptedLabel": "Aprovação pendente"
+ },
+ "paginationButton__next": "Próximo",
+ "paginationButton__previous": "Anterior",
+ "paginationRowText__displaying": "Exibindo",
+ "paginationRowText__of": "de",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Adicionar conta",
+ "action__signOutAll": "Sair de todas as contas",
+ "subtitle": "Selecione a conta com a qual deseja continuar.",
+ "title": "Escolha uma conta"
+ },
+ "alternativeMethods": {
+ "actionLink": "Obter ajuda",
+ "actionText": "Não tem nenhuma destas?",
+ "blockButton__backupCode": "Usar um código de backup",
+ "blockButton__emailCode": "Enviar código por e-mail para {{identifier}}",
+ "blockButton__emailLink": "Enviar link por e-mail para {{identifier}}",
+ "blockButton__passkey": "Entrar com sua chave de acesso",
+ "blockButton__password": "Entrar com sua senha",
+ "blockButton__phoneCode": "Enviar código por SMS para {{identifier}}",
+ "blockButton__totp": "Usar seu aplicativo autenticador",
+ "getHelp": {
+ "blockButton__emailSupport": "Suporte por e-mail",
+ "content": "Se estiver com dificuldades para entrar na sua conta, nos envie um e-mail e trabalharemos com você para restaurar o acesso o mais rápido possível.",
+ "title": "Obter ajuda"
+ },
+ "subtitle": "Enfrentando problemas? Você pode usar qualquer um destes métodos para entrar.",
+ "title": "Usar outro método"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Seu código de backup é aquele que você recebeu ao configurar a autenticação em duas etapas.",
+ "title": "Digite um código de backup"
+ },
+ "emailCode": {
+ "formTitle": "Código de verificação",
+ "resendButton": "Não recebeu um código? Reenviar",
+ "subtitle": "para continuar em {{applicationName}}",
+ "title": "Verifique seu e-mail"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Retorne à aba original para continuar.",
+ "title": "Este link de verificação expirou"
+ },
+ "failed": {
+ "subtitle": "Retorne à aba original para continuar.",
+ "title": "Este link de verificação é inválido"
+ },
+ "formSubtitle": "Use o link de verificação enviado para o seu e-mail",
+ "formTitle": "Link de verificação",
+ "loading": {
+ "subtitle": "Você será redirecionado em breve",
+ "title": "Entrando..."
+ },
+ "resendButton": "Não recebeu um link? Reenviar",
+ "subtitle": "para continuar em {{applicationName}}",
+ "title": "Verifique seu e-mail",
+ "unusedTab": {
+ "title": "Você pode fechar esta aba"
+ },
+ "verified": {
+ "subtitle": "Você será redirecionado em breve",
+ "title": "Entrou com sucesso"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Retorne à aba original para continuar",
+ "subtitleNewTab": "Retorne à nova aba aberta para continuar",
+ "titleNewTab": "Entrou em outra aba"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Código de redefinição de senha",
+ "resendButton": "Não recebeu um código? Reenviar",
+ "subtitle": "para redefinir sua senha",
+ "subtitle_email": "Primeiro, insira o código enviado para o seu endereço de e-mail",
+ "subtitle_phone": "Primeiro, insira o código enviado para o seu telefone",
+ "title": "Redefinir senha"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Redefinir sua senha",
+ "label__alternativeMethods": "Ou, entrar com outro método",
+ "title": "Esqueceu a senha?"
+ },
+ "noAvailableMethods": {
+ "message": "Não é possível prosseguir com o login. Não há fator de autenticação disponível.",
+ "subtitle": "Ocorreu um erro",
+ "title": "Não é possível entrar"
+ },
+ "passkey": {
+ "subtitle": "Usar sua chave de acesso confirma que é você. Seu dispositivo pode solicitar sua impressão digital, rosto ou bloqueio de tela.",
+ "title": "Use sua chave de acesso"
+ },
+ "password": {
+ "actionLink": "Usar outro método",
+ "subtitle": "Digite a senha associada à sua conta",
+ "title": "Digite sua senha"
+ },
+ "passwordPwned": {
+ "title": "Senha comprometida"
+ },
+ "phoneCode": {
+ "formTitle": "Código de verificação",
+ "resendButton": "Não recebeu um código? Reenviar",
+ "subtitle": "para continuar em {{applicationName}}",
+ "title": "Verifique seu telefone"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Código de verificação",
+ "resendButton": "Não recebeu um código? Reenviar",
+ "subtitle": "Para continuar, por favor insira o código de verificação enviado para o seu telefone",
+ "title": "Verifique seu telefone"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Redefinir senha",
+ "requiredMessage": "Por motivos de segurança, é necessário redefinir sua senha.",
+ "successMessage": "Sua senha foi alterada com sucesso. Entrando, por favor aguarde um momento.",
+ "title": "Definir nova senha"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "Precisamos verificar sua identidade antes de redefinir sua senha."
+ },
+ "start": {
+ "actionLink": "Registrar",
+ "actionLink__use_email": "Usar e-mail",
+ "actionLink__use_email_username": "Usar e-mail ou nome de usuário",
+ "actionLink__use_passkey": "Usar chave de acesso",
+ "actionLink__use_phone": "Usar telefone",
+ "actionLink__use_username": "Usar nome de usuário",
+ "actionText": "Não tem uma conta?",
+ "subtitle": "Bem-vindo de volta! Por favor, faça login para continuar",
+ "title": "Faça login em {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "Código de verificação",
+ "subtitle": "Para continuar, por favor insira o código de verificação gerado pelo seu aplicativo autenticador",
+ "title": "Verificação em duas etapas"
+ }
+ },
+ "signInEnterPasswordTitle": "Digite sua senha",
+ "signUp": {
+ "continue": {
+ "actionLink": "Login",
+ "actionText": "Já tem uma conta?",
+ "subtitle": "Por favor, preencha os detalhes restantes para continuar",
+ "title": "Preencha os campos em falta"
+ },
+ "emailCode": {
+ "formSubtitle": "Digite o código de verificação enviado para o seu endereço de e-mail",
+ "formTitle": "Código de verificação",
+ "resendButton": "Não recebeu um código? Reenviar",
+ "subtitle": "Digite o código de verificação enviado para o seu e-mail",
+ "title": "Verifique seu e-mail"
+ },
+ "emailLink": {
+ "formSubtitle": "Use o link de verificação enviado para o seu endereço de e-mail",
+ "formTitle": "Link de verificação",
+ "loading": {
+ "title": "Registrando..."
+ },
+ "resendButton": "Não recebeu um link? Reenviar",
+ "subtitle": "para continuar em {{applicationName}}",
+ "title": "Verifique seu e-mail",
+ "verified": {
+ "title": "Registrado com sucesso"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Retorne à nova aba aberta para continuar",
+ "subtitleNewTab": "Retorne à aba anterior para continuar",
+ "title": "E-mail verificado com sucesso"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Digite o código de verificação enviado para o seu número de telefone",
+ "formTitle": "Código de verificação",
+ "resendButton": "Não recebeu um código? Reenviar",
+ "subtitle": "Digite o código de verificação enviado para o seu telefone",
+ "title": "Verifique seu telefone"
+ },
+ "start": {
+ "actionLink": "Login",
+ "actionText": "Já tem uma conta?",
+ "subtitle": "Bem-vindo! Por favor, preencha os detalhes para começar",
+ "title": "Crie sua conta"
+ }
+ },
+ "socialButtonsBlockButton": "Continuar com {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "O cadastro não foi bem-sucedido devido a validações de segurança falhadas. Por favor, atualize a página para tentar novamente ou entre em contato com o suporte para mais assistência.",
+ "captcha_unavailable": "O cadastro não foi bem-sucedido devido à validação de bot falhada. Por favor, atualize a página para tentar novamente ou entre em contato com o suporte para mais assistência.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Este endereço de e-mail já está em uso. Por favor, tente outro.",
+ "form_identifier_exists__phone_number": "Este número de telefone já está em uso. Por favor, tente outro.",
+ "form_identifier_exists__username": "Este nome de usuário já está em uso. Por favor, tente outro.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "O endereço de e-mail deve ser um endereço de e-mail válido.",
+ "form_param_format_invalid__phone_number": "O número de telefone deve estar em um formato internacional válido.",
+ "form_param_max_length_exceeded__first_name": "O primeiro nome não deve exceder 256 caracteres.",
+ "form_param_max_length_exceeded__last_name": "O sobrenome não deve exceder 256 caracteres.",
+ "form_param_max_length_exceeded__name": "O nome não deve exceder 256 caracteres.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Sua senha não é forte o suficiente.",
+ "form_password_pwned": "Esta senha foi encontrada em uma violação e não pode ser usada, por favor, tente outra senha.",
+ "form_password_pwned__sign_in": "Esta senha foi encontrada em uma violação e não pode ser usada, por favor, redefina sua senha.",
+ "form_password_size_in_bytes_exceeded": "Sua senha excedeu o número máximo de bytes permitido, por favor, encurte-a ou remova alguns caracteres especiais.",
+ "form_password_validation_failed": "Senha incorreta",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "Você não pode excluir sua última identificação.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Um passkey já está registrado neste dispositivo.",
+ "passkey_not_supported": "Passkeys não são suportados neste dispositivo.",
+ "passkey_pa_not_supported": "O registro requer um autenticador de plataforma, mas o dispositivo não o suporta.",
+ "passkey_registration_cancelled": "O registro do passkey foi cancelado ou expirou.",
+ "passkey_retrieval_cancelled": "A verificação do passkey foi cancelada ou expirou.",
+ "passwordComplexity": {
+ "maximumLength": "menos de {{length}} caracteres",
+ "minimumLength": "{{length}} ou mais caracteres",
+ "requireLowercase": "uma letra minúscula",
+ "requireNumbers": "um número",
+ "requireSpecialCharacter": "um caractere especial",
+ "requireUppercase": "uma letra maiúscula",
+ "sentencePrefix": "Sua senha deve conter"
+ },
+ "phone_number_exists": "Este número de telefone já está em uso. Por favor, tente outro.",
+ "zxcvbn": {
+ "couldBeStronger": "Sua senha funciona, mas poderia ser mais forte. Tente adicionar mais caracteres.",
+ "goodPassword": "Sua senha atende a todos os requisitos necessários.",
+ "notEnough": "Sua senha não é forte o suficiente.",
+ "suggestions": {
+ "allUppercase": "Coloque letras maiúsculas em algumas, mas não em todas as letras.",
+ "anotherWord": "Adicione mais palavras menos comuns.",
+ "associatedYears": "Evite anos associados a você.",
+ "capitalization": "Coloque mais letras maiúsculas além da primeira letra.",
+ "dates": "Evite datas e anos associados a você.",
+ "l33t": "Evite substituições previsíveis de letras como '@' por 'a'.",
+ "longerKeyboardPattern": "Use padrões de teclado mais longos e mude a direção de digitação várias vezes.",
+ "noNeed": "Você pode criar senhas fortes sem usar símbolos, números ou letras maiúsculas.",
+ "pwned": "Se você usar esta senha em outro lugar, você deve alterá-la.",
+ "recentYears": "Evite anos recentes.",
+ "repeated": "Evite palavras e caracteres repetidos.",
+ "reverseWords": "Evite soletrações invertidas de palavras comuns.",
+ "sequences": "Evite sequências de caracteres comuns.",
+ "useWords": "Use várias palavras, mas evite frases comuns."
+ },
+ "warnings": {
+ "common": "Esta é uma senha comumente usada.",
+ "commonNames": "Nomes comuns são fáceis de adivinhar.",
+ "dates": "Datas são fáceis de adivinhar.",
+ "extendedRepeat": "Padrões de caracteres repetidos como \"abcabcabc\" são fáceis de adivinhar.",
+ "keyPattern": "Padrões de teclado curtos são fáceis de adivinhar.",
+ "namesByThemselves": "Nomes ou sobrenomes isolados são fáceis de adivinhar.",
+ "pwned": "Sua senha foi exposta em uma violação de dados na Internet.",
+ "recentYears": "Anos recentes são fáceis de adivinhar.",
+ "sequences": "Sequências de caracteres comuns como \"abc\" são fáceis de adivinhar.",
+ "similarToCommon": "Isso é semelhante a uma senha comumente usada.",
+ "simpleRepeat": "Caracteres repetidos como \"aaa\" são fáceis de adivinhar.",
+ "straightRow": "Linhas retas de teclas no seu teclado são fáceis de adivinhar.",
+ "topHundred": "Esta é uma senha frequentemente usada.",
+ "topTen": "Esta é uma senha muito usada.",
+ "userInputs": "Não deve haver dados pessoais ou relacionados à página.",
+ "wordByItself": "Palavras isoladas são fáceis de adivinhar."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Adicionar conta",
+ "action__manageAccount": "Gerenciar conta",
+ "action__signOut": "Sair",
+ "action__signOutAll": "Sair de todas as contas"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Copiado!",
+ "actionLabel__copy": "Copiar tudo",
+ "actionLabel__download": "Baixar .txt",
+ "actionLabel__print": "Imprimir",
+ "infoText1": "Os códigos de backup serão habilitados para esta conta.",
+ "infoText2": "Mantenha os códigos de backup em segredo e armazene-os com segurança. Você pode regenerar os códigos de backup se suspeitar que foram comprometidos.",
+ "subtitle__codelist": "Armazene-os com segurança e mantenha-os em segredo.",
+ "successMessage": "Os códigos de backup estão agora habilitados. Você pode usar um deles para entrar em sua conta, caso perca o acesso ao seu dispositivo de autenticação. Cada código só pode ser usado uma vez.",
+ "successSubtitle": "Você pode usar um deles para entrar em sua conta, caso perca o acesso ao seu dispositivo de autenticação.",
+ "title": "Adicionar verificação de código de backup",
+ "title__codelist": "Códigos de backup"
+ },
+ "connectedAccountPage": {
+ "formHint": "Selecione um provedor para conectar sua conta.",
+ "formHint__noAccounts": "Não há provedores de conta externos disponíveis.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} será removido desta conta.",
+ "messageLine2": "Você não poderá mais usar esta conta conectada e quaisquer recursos dependentes deixarão de funcionar.",
+ "successMessage": "{{connectedAccount}} foi removido de sua conta.",
+ "title": "Remover conta conectada"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "O provedor foi adicionado à sua conta",
+ "title": "Adicionar conta conectada"
+ },
+ "deletePage": {
+ "actionDescription": "Digite \"Excluir conta\" abaixo para continuar.",
+ "confirm": "Excluir conta",
+ "messageLine1": "Tem certeza de que deseja excluir sua conta?",
+ "messageLine2": "Esta ação é permanente e irreversível.",
+ "title": "Excluir conta"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "Um e-mail contendo um código de verificação será enviado para este endereço de e-mail.",
+ "formSubtitle": "Digite o código de verificação enviado para {{identifier}}",
+ "formTitle": "Código de verificação",
+ "resendButton": "Não recebeu um código? Reenviar",
+ "successMessage": "O e-mail {{identifier}} foi adicionado à sua conta."
+ },
+ "emailLink": {
+ "formHint": "Um e-mail contendo um link de verificação será enviado para este endereço de e-mail.",
+ "formSubtitle": "Clique no link de verificação no e-mail enviado para {{identifier}}",
+ "formTitle": "Link de verificação",
+ "resendButton": "Não recebeu um link? Reenviar",
+ "successMessage": "O e-mail {{identifier}} foi adicionado à sua conta."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} será removido desta conta.",
+ "messageLine2": "Você não poderá mais entrar usando este endereço de e-mail.",
+ "successMessage": "{{emailAddress}} foi removido de sua conta.",
+ "title": "Remover endereço de e-mail"
+ },
+ "title": "Adicionar endereço de e-mail",
+ "verifyTitle": "Verificar endereço de e-mail"
+ },
+ "formButtonPrimary__add": "Adicionar",
+ "formButtonPrimary__continue": "Continuar",
+ "formButtonPrimary__finish": "Finalizar",
+ "formButtonPrimary__remove": "Remover",
+ "formButtonPrimary__save": "Salvar",
+ "formButtonReset": "Cancelar",
+ "mfaPage": {
+ "formHint": "Selecione um método para adicionar.",
+ "title": "Adicionar verificação em duas etapas"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Usar número existente",
+ "primaryButton__addPhoneNumber": "Adicionar número de telefone",
+ "removeResource": {
+ "messageLine1": "{{identifier}} não receberá mais códigos de verificação ao entrar.",
+ "messageLine2": "Sua conta pode não ser tão segura. Tem certeza de que deseja continuar?",
+ "successMessage": "A verificação em duas etapas por código SMS foi removida para {{mfaPhoneCode}}",
+ "title": "Remover verificação em duas etapas"
+ },
+ "subtitle__availablePhoneNumbers": "Selecione um número de telefone existente para se registrar na verificação em duas etapas por código SMS ou adicione um novo.",
+ "subtitle__unavailablePhoneNumbers": "Não há números de telefone disponíveis para se registrar na verificação em duas etapas por código SMS, por favor adicione um novo.",
+ "successMessage1": "Ao entrar, você precisará inserir um código de verificação enviado para este número de telefone como uma etapa adicional.",
+ "successMessage2": "Salve esses códigos de backup e armazene-os em um local seguro. Se perder o acesso ao seu dispositivo de autenticação, você pode usar os códigos de backup para entrar.",
+ "successTitle": "Verificação por código SMS habilitada",
+ "title": "Adicionar verificação por código SMS"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "Escanear código QR em vez disso",
+ "buttonUnableToScan__nonPrimary": "Não consegue escanear o código QR?",
+ "infoText__ableToScan": "Configure um novo método de entrada em seu aplicativo autenticador e escaneie o código QR a seguir para vinculá-lo à sua conta.",
+ "infoText__unableToScan": "Configure um novo método de entrada em seu autenticador e insira a Chave fornecida abaixo.",
+ "inputLabel__unableToScan1": "Certifique-se de que as senhas baseadas em tempo ou únicas estão habilitadas e, em seguida, termine de vincular sua conta.",
+ "inputLabel__unableToScan2": "Alternativamente, se seu autenticador suportar URIs TOTP, você também pode copiar o URI completo."
+ },
+ "removeResource": {
+ "messageLine1": "Os códigos de verificação deste autenticador não serão mais necessários ao entrar.",
+ "messageLine2": "Sua conta pode não ser tão segura. Tem certeza de que deseja continuar?",
+ "successMessage": "A verificação em duas etapas via aplicativo autenticador foi removida.",
+ "title": "Remover verificação em duas etapas"
+ },
+ "successMessage": "A verificação em duas etapas está agora habilitada. Ao entrar, você precisará inserir um código de verificação deste autenticador como uma etapa adicional.",
+ "title": "Adicionar aplicativo autenticador",
+ "verifySubtitle": "Digite o código de verificação gerado pelo seu autenticador",
+ "verifyTitle": "Código de verificação"
+ },
+ "mobileButton__menu": "Menu",
+ "navbar": {
+ "account": "Perfil",
+ "description": "Gerencie as informações da sua conta.",
+ "security": "Segurança",
+ "title": "Conta"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} será removido desta conta.",
+ "title": "Remover senha"
+ },
+ "subtitle__rename": "Você pode alterar o nome da senha para facilitar a busca.",
+ "title__rename": "Renomear Senha"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "É recomendado sair de todas as outras sessões que possam ter usado sua senha antiga.",
+ "readonly": "Sua senha atualmente não pode ser editada porque você só pode fazer login via conexão empresarial.",
+ "successMessage__set": "Sua senha foi definida.",
+ "successMessage__signOutOfOtherSessions": "Todos os outros dispositivos foram desconectados.",
+ "successMessage__update": "Sua senha foi atualizada.",
+ "title__set": "Definir senha",
+ "title__update": "Atualizar senha"
+ },
+ "phoneNumberPage": {
+ "infoText": "Será enviado um SMS contendo um código de verificação para este número de telefone. Podem ser aplicadas taxas de mensagem e dados.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} será removido desta conta.",
+ "messageLine2": "Você não poderá mais fazer login usando este número de telefone.",
+ "successMessage": "{{phoneNumber}} foi removido da sua conta.",
+ "title": "Remover número de telefone"
+ },
+ "successMessage": "{{identifier}} foi adicionado à sua conta.",
+ "title": "Adicionar número de telefone",
+ "verifySubtitle": "Digite o código de verificação enviado para {{identifier}}",
+ "verifyTitle": "Verificar número de telefone"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Tamanho recomendado 1:1, até 10MB.",
+ "imageFormDestructiveActionSubtitle": "Remover",
+ "imageFormSubtitle": "Enviar",
+ "imageFormTitle": "Imagem de perfil",
+ "readonly": "Suas informações de perfil foram fornecidas pela conexão empresarial e não podem ser editadas.",
+ "successMessage": "Seu perfil foi atualizado.",
+ "title": "Atualizar perfil"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Sair do dispositivo",
+ "title": "Dispositivos ativos"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Tentar novamente",
+ "actionLabel__reauthorize": "Autorizar agora",
+ "destructiveActionTitle": "Remover",
+ "primaryButton": "Conectar conta",
+ "subtitle__reauthorize": "Os escopos necessários foram atualizados e você pode estar experimentando funcionalidades limitadas. Por favor, reautorize este aplicativo para evitar problemas",
+ "title": "Contas conectadas"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Excluir conta",
+ "title": "Excluir conta"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "Remover e-mail",
+ "detailsAction__nonPrimary": "Definir como principal",
+ "detailsAction__primary": "Completar verificação",
+ "detailsAction__unverified": "Verificar",
+ "primaryButton": "Adicionar endereço de e-mail",
+ "title": "Endereços de e-mail"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Contas empresariais"
+ },
+ "headerTitle__account": "Detalhes do perfil",
+ "headerTitle__security": "Segurança",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Regenerar",
+ "headerTitle": "Códigos de backup",
+ "subtitle__regenerate": "Obtenha um novo conjunto de códigos de backup seguros. Os códigos de backup anteriores serão excluídos e não poderão ser usados.",
+ "title__regenerate": "Regenerar códigos de backup"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Definir como padrão",
+ "destructiveActionLabel": "Remover"
+ },
+ "primaryButton": "Adicionar verificação em duas etapas",
+ "title": "Verificação em duas etapas",
+ "totp": {
+ "destructiveActionTitle": "Remover",
+ "headerTitle": "Aplicativo autenticador"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Remover",
+ "menuAction__rename": "Renomear",
+ "title": "Senhas"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Definir senha",
+ "primaryButton__updatePassword": "Atualizar senha",
+ "title": "Senha"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Remover número de telefone",
+ "detailsAction__nonPrimary": "Definir como principal",
+ "detailsAction__primary": "Completar verificação",
+ "detailsAction__unverified": "Verificar número de telefone",
+ "primaryButton": "Adicionar número de telefone",
+ "title": "Números de telefone"
+ },
+ "profileSection": {
+ "primaryButton": "Atualizar perfil",
+ "title": "Perfil"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Definir nome de usuário",
+ "primaryButton__updateUsername": "Atualizar nome de usuário",
+ "title": "Nome de usuário"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Remover carteira",
+ "primaryButton": "Carteiras Web3",
+ "title": "Carteiras Web3"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Seu nome de usuário foi atualizado.",
+ "title__set": "Definir nome de usuário",
+ "title__update": "Atualizar nome de usuário"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} será removido desta conta.",
+ "messageLine2": "Você não poderá mais fazer login usando esta carteira web3.",
+ "successMessage": "{{web3Wallet}} foi removida da sua conta.",
+ "title": "Remover carteira web3"
+ },
+ "subtitle__availableWallets": "Selecione uma carteira web3 para conectar à sua conta.",
+ "subtitle__unavailableWallets": "Não há carteiras web3 disponíveis.",
+ "successMessage": "A carteira foi adicionada à sua conta.",
+ "title": "Adicionar carteira web3"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/common.json b/DigitalHumanWeb/locales/pt-BR/common.json
new file mode 100644
index 0000000..0cff597
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "Sobre",
+ "advanceSettings": "Configurações avançadas",
+ "alert": {
+ "cloud": {
+ "action": "Experimente de graça",
+ "desc": "Oferecemos {{credit}} pontos de computação gratuitos para todos os usuários registrados, sem necessidade de configuração complicada, pronto para uso, suporta histórico de conversas ilimitado e sincronização em nuvem global. Mais recursos avançados esperam por você para explorar.",
+ "descOnMobile": "Oferecemos {{credit}} créditos de computação gratuitos para todos os usuários registrados, sem necessidade de configurações complicadas, pronto para uso.",
+ "title": "Bem-vindo para experimentar {{name}}"
+ }
+ },
+ "appInitializing": "Aplicativo iniciando...",
+ "autoGenerate": "Auto completar",
+ "autoGenerateTooltip": "Auto completar descrição do assistente com base em sugestões",
+ "autoGenerateTooltipDisabled": "Por favor, preencha a dica antes de usar a função de preenchimento automático",
+ "back": "Voltar",
+ "batchDelete": "Excluir em massa",
+ "blog": "Blog de Produtos",
+ "cancel": "Cancelar",
+ "changelog": "Registro de alterações",
+ "close": "Fechar",
+ "contact": "Entre em contato",
+ "copy": "Copiar",
+ "copyFail": "Falha ao copiar",
+ "copySuccess": "Cópia bem-sucedida",
+ "dataStatistics": {
+ "messages": "Mensagens",
+ "sessions": "Sessões",
+ "today": "Hoje",
+ "topics": "Tópicos"
+ },
+ "defaultAgent": "Assistente padrão",
+ "defaultSession": "Sessão padrão",
+ "delete": "Excluir",
+ "document": "Documento de Uso",
+ "download": "Baixar",
+ "duplicate": "Duplicar",
+ "edit": "Editar",
+ "export": "Exportar configuração",
+ "exportType": {
+ "agent": "Exportar configuração do assistente",
+ "agentWithMessage": "Exportar assistente e mensagens",
+ "all": "Exportar configurações globais e todos os dados do assistente",
+ "allAgent": "Exportar todas as configurações do assistente",
+ "allAgentWithMessage": "Exportar todos os assistentes e mensagens",
+ "globalSetting": "Exportar configurações globais"
+ },
+ "feedback": "Feedback e sugestões",
+ "follow": "Siga-nos no {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Compartilhe seus valiosos comentários",
+ "star": "Dê uma estrela no GitHub"
+ },
+ "and": "e",
+ "feedback": {
+ "action": "Compartilhar feedback",
+ "desc": "Cada uma de suas ideias e sugestões é extremamente valiosa para nós. Mal podemos esperar para saber o que você pensa! Sinta-se à vontade para entrar em contato conosco para fornecer feedback sobre os recursos do produto e a experiência de uso, ajudando-nos a tornar o LobeChat ainda melhor.",
+ "title": "Compartilhe seu valioso feedback no GitHub"
+ },
+ "later": "Mais tarde",
+ "star": {
+ "action": "Dar uma estrela",
+ "desc": "Se você ama nosso produto e deseja nos apoiar, poderia nos dar uma estrela no GitHub? Esse pequeno gesto é significativo para nós e nos motiva a continuar oferecendo uma experiência de qualidade para você.",
+ "title": "Dê uma estrela para nós no GitHub"
+ },
+ "title": "Está gostando do nosso produto?"
+ },
+ "fullscreen": "Modo de Tela Cheia",
+ "historyRange": "Intervalo de histórico",
+ "import": "Importar configuração",
+ "importModal": {
+ "error": {
+ "desc": "Desculpe, ocorreu um erro durante o processo de importação de dados. Por favor, tente importar novamente ou <1>envie um problema1>, e nós iremos ajudá-lo a resolver o problema o mais rápido possível.",
+ "title": "Falha na importação de dados"
+ },
+ "finish": {
+ "onlySettings": "Configurações do sistema importadas com sucesso",
+ "start": "Começar a usar",
+ "subTitle": "Importação de dados concluída em {{duration}} segundos. Detalhes da importação:",
+ "title": "Importação de dados concluída"
+ },
+ "loading": "Importando dados, por favor aguarde...",
+ "preparing": "Preparando módulo de importação de dados...",
+ "result": {
+ "added": "Importação bem-sucedida",
+ "errors": "Erros na importação",
+ "messages": "Mensagens",
+ "sessionGroups": "Grupos de sessão",
+ "sessions": "Assistentes",
+ "skips": "Ignorados",
+ "topics": "Tópicos",
+ "type": "Tipo de dados"
+ },
+ "title": "Importar dados",
+ "uploading": {
+ "desc": "O arquivo atual é grande, estamos fazendo o upload...",
+ "restTime": "Tempo restante",
+ "speed": "Velocidade de upload"
+ }
+ },
+ "information": "Comunidade e Informações",
+ "installPWA": "Instalar aplicativo de navegador",
+ "lang": {
+ "ar": "árabe",
+ "bg-BG": "Búlgaro",
+ "bn": "Bengali",
+ "cs-CZ": "Tcheco",
+ "da-DK": "Dinamarquês",
+ "de-DE": "Alemão",
+ "el-GR": "Grego",
+ "en": "Inglês",
+ "en-US": "Inglês",
+ "es-ES": "Espanhol",
+ "fi-FI": "Finlandês",
+ "fr-FR": "Francês",
+ "hi-IN": "Hindi",
+ "hu-HU": "Húngaro",
+ "id-ID": "Indonésio",
+ "it-IT": "Italiano",
+ "ja-JP": "Japonês",
+ "ko-KR": "Coreano",
+ "nl-NL": "Holandês",
+ "no-NO": "Norueguês",
+ "pl-PL": "Polonês",
+ "pt-BR": "Português do Brasil",
+ "pt-PT": "Português",
+ "ro-RO": "Romeno",
+ "ru-RU": "Russo",
+ "sk-SK": "Eslovaco",
+ "sr-RS": "Sérvio",
+ "sv-SE": "Sueco",
+ "th-TH": "Tailandês",
+ "tr-TR": "Turco",
+ "uk-UA": "Ucraniano",
+ "vi-VN": "Vietnamita",
+ "zh": "Chinês",
+ "zh-CN": "Chinês simplificado",
+ "zh-TW": "Chinês tradicional"
+ },
+ "layoutInitializing": "Inicializando layout...",
+ "legal": "Aviso Legal",
+ "loading": "Carregando...",
+ "mail": {
+ "business": "Parcerias Comerciais",
+ "support": "Suporte por E-mail"
+ },
+ "oauth": "Login SSO",
+ "officialSite": "Site Oficial",
+ "ok": "OK",
+ "password": "Senha",
+ "pin": "Fixar",
+ "pinOff": "Desafixar",
+ "privacy": "Política de Privacidade",
+ "regenerate": "Regenerar",
+ "rename": "Renomear",
+ "reset": "Redefinir",
+ "retry": "Tentar novamente",
+ "send": "Enviar",
+ "setting": "Configuração",
+ "share": "Compartilhar",
+ "stop": "Parar",
+ "sync": {
+ "actions": {
+ "settings": "Configurações de Sincronização",
+ "sync": "Sincronizar Agora"
+ },
+ "awareness": {
+ "current": "Dispositivo Atual"
+ },
+ "channel": "Canal",
+ "disabled": {
+ "actions": {
+ "enable": "Habilitar Sincronização na Nuvem",
+ "settings": "Configurar Parâmetros de Sincronização"
+ },
+ "desc": "Os dados da sessão atual são armazenados apenas neste navegador. Se você precisa sincronizar os dados entre vários dispositivos, configure e habilite a sincronização na nuvem.",
+ "title": "Sincronização de Dados Desativada"
+ },
+ "enabled": {
+ "title": "Sincronização de Dados"
+ },
+ "status": {
+ "connecting": "Conectando",
+ "disabled": "Sincronização Desativada",
+ "ready": "Conectado",
+ "synced": "Sincronizado",
+ "syncing": "Sincronizando",
+ "unconnected": "Falha na Conexão"
+ },
+ "title": "Status de Sincronização",
+ "unconnected": {
+ "tip": "Falha na conexão com o servidor de sinalização. Não será possível estabelecer um canal de comunicação ponto a ponto. Verifique a rede e tente novamente."
+ }
+ },
+ "tab": {
+ "chat": "Chat",
+ "discover": "Descobrir",
+ "files": "Arquivos",
+ "me": "eu",
+ "setting": "Configuração"
+ },
+ "telemetry": {
+ "allow": "Permitir",
+ "deny": "Negar",
+ "desc": "Queremos coletar anonimamente suas informações de uso para nos ajudar a melhorar o LobeChat e oferecer uma experiência de produto melhor para você. Você pode desativar a qualquer momento em Configurações - Sobre.",
+ "learnMore": "Saiba mais",
+ "title": "Ajude o LobeChat a melhorar"
+ },
+ "temp": "Temporário",
+ "terms": "Termos de Serviço",
+ "updateAgent": "Atualizar informações do assistente",
+ "upgradeVersion": {
+ "action": "Atualizar",
+ "hasNew": "Nova atualização disponível",
+ "newVersion": "Nova versão disponível: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Usuário Anônimo",
+ "billing": "Gerenciamento de faturas",
+ "cloud": "Experimente {{name}}",
+ "data": "Armazenamento de dados",
+ "defaultNickname": "Usuário da Comunidade",
+ "discord": "Suporte da Comunidade",
+ "docs": "Documentação",
+ "email": "Suporte por E-mail",
+ "feedback": "Feedback e Sugestões",
+ "help": "Central de Ajuda",
+ "moveGuide": "O botão de configurações foi movido para cá",
+ "plans": "Planos de Assinatura",
+ "preview": "Versão de visualização",
+ "profile": "Gerenciamento de Conta",
+ "setting": "Configurações do Aplicativo",
+ "usages": "Estatísticas de Uso"
+ },
+ "version": "Versão"
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/components.json b/DigitalHumanWeb/locales/pt-BR/components.json
new file mode 100644
index 0000000..f2d8cb0
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Arraste os arquivos para cá, suportando o upload de várias imagens.",
+ "dragFileDesc": "Arraste imagens e arquivos para cá, suportando o upload de várias imagens e arquivos.",
+ "dragFileTitle": "Enviar arquivo",
+ "dragTitle": "Enviar imagem"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Adicionar ao banco de conhecimento",
+ "addToOtherKnowledgeBase": "Adicionar a outro banco de conhecimento",
+ "batchChunking": "Divisão em lotes",
+ "chunking": "Divisão",
+ "chunkingTooltip": "Divida o arquivo em vários blocos de texto e vetorize, podendo ser usado para busca semântica e diálogo sobre o arquivo",
+ "confirmDelete": "Você está prestes a excluir este arquivo. Após a exclusão, ele não poderá ser recuperado. Por favor, confirme sua ação.",
+ "confirmDeleteMultiFiles": "Você está prestes a excluir {{count}} arquivos selecionados. Após a exclusão, eles não poderão ser recuperados. Por favor, confirme sua ação.",
+ "confirmRemoveFromKnowledgeBase": "Você está prestes a remover {{count}} arquivos selecionados do banco de conhecimento. Após a remoção, os arquivos ainda poderão ser visualizados em todos os arquivos. Por favor, confirme sua ação.",
+ "copyUrl": "Copiar link",
+ "copyUrlSuccess": "Endereço do arquivo copiado com sucesso",
+ "createChunkingTask": "Preparando...",
+ "deleteSuccess": "Arquivo excluído com sucesso",
+ "downloading": "Baixando arquivo...",
+ "removeFromKnowledgeBase": "Remover do banco de conhecimento",
+ "removeFromKnowledgeBaseSuccess": "Arquivo removido com sucesso"
+ },
+ "bottom": "Você chegou ao final",
+ "config": {
+ "showFilesInKnowledgeBase": "Mostrar conteúdo no banco de conhecimento"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Carregar arquivo",
+ "folder": "Carregar pasta",
+ "knowledgeBase": "Criar novo banco de conhecimento"
+ },
+ "or": "ou",
+ "title": "Arraste arquivos ou pastas para cá"
+ },
+ "title": {
+ "createdAt": "Data de criação",
+ "size": "Tamanho",
+ "title": "Arquivo"
+ },
+ "total": {
+ "fileCount": "Total de {{count}} itens",
+ "selectedCount": "Selecionados {{count}} itens"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Os blocos de texto ainda não foram completamente vetorizados, o que resultará na funcionalidade de busca semântica indisponível. Para melhorar a qualidade da busca, por favor, vetorize os blocos de texto.",
+ "error": "Falha na vetorização",
+ "errorResult": "Falha na vetorização, por favor verifique e tente novamente. Motivo da falha:",
+ "processing": "Os blocos de texto estão sendo vetorizados, por favor, aguarde.",
+ "success": "Atualmente, todos os blocos de texto foram vetorizados."
+ },
+ "embeddings": "Vetorizar",
+ "status": {
+ "error": "Falha na divisão",
+ "errorResult": "Falha na divisão, por favor, verifique e tente novamente. Motivo da falha:",
+ "processing": "Dividindo",
+ "processingTip": "O servidor está dividindo os blocos de texto, fechar a página não afetará o progresso da divisão."
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Voltar"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Modelo personalizado, por padrão, suporta chamadas de função e reconhecimento visual. Por favor, verifique a disponibilidade dessas capacidades de acordo com a situação real.",
+ "file": "Este modelo suporta leitura e reconhecimento de arquivos enviados.",
+ "functionCall": "Este modelo suporta chamadas de função.",
+ "tokens": "Este modelo suporta no máximo {{tokens}} tokens por sessão.",
+ "vision": "Este modelo suporta reconhecimento visual."
+ },
+ "removed": "Este modelo não está na lista, se for desmarcado, será removido automaticamente"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "Nenhum modelo habilitado. Por favor, vá para as configurações e habilite um.",
+ "provider": "Fornecedor"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/discover.json b/DigitalHumanWeb/locales/pt-BR/discover.json
new file mode 100644
index 0000000..4b88bc8
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Adicionar Assistente",
+ "addAgentAndConverse": "Adicionar Assistente e Conversar",
+ "addAgentSuccess": "Adição bem-sucedida",
+ "conversation": {
+ "l1": "Olá, eu sou **{{name}}**, você pode me fazer qualquer pergunta e eu farei o meu melhor para responder ~",
+ "l2": "Aqui estão minhas capacidades: ",
+ "l3": "Vamos começar a conversa!"
+ },
+ "description": "Introdução ao Assistente",
+ "detail": "Detalhes",
+ "list": "Lista de Assistentes",
+ "more": "Mais",
+ "plugins": "Integrar plugins",
+ "recentSubmits": "Atualizações Recentes",
+ "suggestions": "Sugestões Relacionadas",
+ "systemRole": "Configuração do Assistente",
+ "try": "Experimente"
+ },
+ "back": "Voltar à descoberta",
+ "category": {
+ "assistant": {
+ "academic": "Acadêmico",
+ "all": "Todos",
+ "career": "Carreira",
+ "copywriting": "Redação",
+ "design": "Design",
+ "education": "Educação",
+ "emotions": "Emoções",
+ "entertainment": "Entretenimento",
+ "games": "Jogos",
+ "general": "Geral",
+ "life": "Vida",
+ "marketing": "Marketing",
+ "office": "Escritório",
+ "programming": "Programação",
+ "translation": "Tradução"
+ },
+ "plugin": {
+ "all": "Todos",
+ "gaming-entertainment": "Jogos e Entretenimento",
+ "life-style": "Estilo de Vida",
+ "media-generate": "Geração de Mídia",
+ "science-education": "Ciência e Educação",
+ "social": "Mídias Sociais",
+ "stocks-finance": "Ações e Finanças",
+ "tools": "Ferramentas Úteis",
+ "web-search": "Busca na Web"
+ }
+ },
+ "cleanFilter": "Limpar Filtro",
+ "create": "Criar",
+ "createGuide": {
+ "func1": {
+ "desc1": "No painel de conversa, acesse a página de configurações do assistente que você deseja enviar pelo canto superior direito;",
+ "desc2": "Clique no botão de enviar para o mercado de assistentes no canto superior direito.",
+ "tag": "Método Um",
+ "title": "Enviar através do LobeChat"
+ },
+ "func2": {
+ "button": "Ir para o repositório de Assistentes no Github",
+ "desc": "Se você deseja adicionar o assistente ao índice, crie uma entrada usando agent-template.json ou agent-template-full.json no diretório plugins, escreva uma breve descrição e marque adequadamente, em seguida, crie um pull request.",
+ "tag": "Método Dois",
+ "title": "Enviar através do Github"
+ }
+ },
+ "dislike": "Não Gosto",
+ "filter": "Filtrar",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Todos os Autores",
+ "followed": "Autores Seguidos",
+ "title": "Faixa de Autores"
+ },
+ "contentLength": "Comprimento Mínimo do Contexto",
+ "maxToken": {
+ "title": "Definir Comprimento Máximo (Token)",
+ "unlimited": "Ilimitado"
+ },
+ "other": {
+ "functionCall": "Suporte a Chamadas de Função",
+ "title": "Outros",
+ "vision": "Suporte a Reconhecimento Visual",
+ "withKnowledge": "Com Base de Conhecimento",
+ "withTool": "Com Plugins"
+ },
+ "pricing": "Preço do Modelo",
+ "timePeriod": {
+ "all": "Todo o Tempo",
+ "day": "Últimas 24 Horas",
+ "month": "Últimos 30 Dias",
+ "title": "Faixa de Tempo",
+ "week": "Últimos 7 Dias",
+ "year": "Último Ano"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Assistentes Recomendados",
+ "featuredModels": "Modelos Recomendados",
+ "featuredProviders": "Provedores de Modelos Recomendados",
+ "featuredTools": "Plugins Recomendados",
+ "more": "Descubra Mais"
+ },
+ "like": "Gosto",
+ "models": {
+ "chat": "Iniciar Conversa",
+ "contentLength": "Comprimento Máximo do Contexto",
+ "free": "Gratuito",
+ "guide": "Guia de Configuração",
+ "list": "Lista de Modelos",
+ "more": "Mais",
+ "parameterList": {
+ "defaultValue": "Valor Padrão",
+ "docs": "Ver Documentação",
+ "frequency_penalty": {
+ "desc": "Esta configuração ajusta a frequência com que o modelo reutiliza vocabulário específico que já apareceu na entrada. Valores mais altos reduzem a probabilidade de repetição, enquanto valores negativos têm o efeito oposto. A penalidade de vocabulário não aumenta com o número de ocorrências. Valores negativos incentivam a reutilização de vocabulário.",
+ "title": "Penalidade de Frequência"
+ },
+ "max_tokens": {
+ "desc": "Esta configuração define o comprimento máximo que o modelo pode gerar em uma única resposta. Definir um valor mais alto permite que o modelo produza respostas mais longas, enquanto um valor mais baixo limita o comprimento da resposta, tornando-a mais concisa. Ajustar esse valor de forma razoável de acordo com diferentes cenários de aplicação pode ajudar a alcançar o comprimento e o nível de detalhe desejados na resposta.",
+ "title": "Limite de resposta única"
+ },
+ "presence_penalty": {
+ "desc": "Esta configuração visa controlar a reutilização de vocabulário com base na frequência com que aparece na entrada. Ela tenta usar menos palavras que aparecem com frequência, proporcionalmente à sua frequência de ocorrência. A penalidade de vocabulário aumenta com o número de ocorrências. Valores negativos incentivam a reutilização de vocabulário.",
+ "title": "Novidade do Tópico"
+ },
+ "range": "Faixa",
+ "temperature": {
+ "desc": "Esta configuração afeta a diversidade das respostas do modelo. Valores mais baixos resultam em respostas mais previsíveis e típicas, enquanto valores mais altos incentivam respostas mais variadas e incomuns. Quando o valor é 0, o modelo sempre dá a mesma resposta para uma entrada dada.",
+ "title": "Aleatoriedade"
+ },
+ "title": "Parâmetros do Modelo",
+ "top_p": {
+ "desc": "Esta configuração limita a seleção do modelo a uma certa proporção de vocabulário com maior probabilidade: seleciona apenas aquelas palavras cujo total acumulado de probabilidade atinge P. Valores mais baixos tornam as respostas do modelo mais previsíveis, enquanto a configuração padrão permite que o modelo escolha de todo o vocabulário disponível.",
+ "title": "Amostragem Nuclear"
+ },
+ "type": "Tipo"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat suporta o uso de chaves API personalizadas para este provedor.",
+ "input": "Preço de Entrada",
+ "inputTooltip": "Custo por milhão de Tokens",
+ "latency": "Latência",
+ "latencyTooltip": "Tempo médio de resposta do provedor para enviar o primeiro Token",
+ "maxOutput": "Comprimento Máximo de Saída",
+ "maxOutputTooltip": "Número máximo de Tokens que este endpoint pode gerar",
+ "officialTooltip": "Serviço Oficial do LobeHub",
+ "output": "Preço de Saída",
+ "outputTooltip": "Custo por milhão de Tokens",
+ "streamCancellationTooltip": "Este provedor suporta a funcionalidade de cancelamento de fluxo.",
+ "throughput": "Taxa de Transferência",
+ "throughputTooltip": "Número médio de Tokens transmitidos por segundo em solicitações de fluxo"
+ },
+ "suggestions": "Modelos Relacionados",
+ "supportedProviders": "Provedores que suportam este modelo"
+ },
+ "plugins": {
+ "community": "Plugins da Comunidade",
+ "install": "Instalar Plugin",
+ "installed": "Instalado",
+ "list": "Lista de Plugins",
+ "meta": {
+ "description": "Descrição",
+ "parameter": "Parâmetro",
+ "title": "Parâmetros da Ferramenta",
+ "type": "Tipo"
+ },
+ "more": "Mais",
+ "official": "Plugins Oficiais",
+ "recentSubmits": "Atualizações Recentes",
+ "suggestions": "Sugestões Relacionadas"
+ },
+ "providers": {
+ "config": "Configurar Provedor",
+ "list": "Lista de Provedores de Modelos",
+ "modelCount": "{{count}} modelos",
+ "modelSite": "Documentação do modelo",
+ "more": "Mais",
+ "officialSite": "Site oficial",
+ "showAllModels": "Mostrar todos os modelos",
+ "suggestions": "Provedores Relacionados",
+ "supportedModels": "Modelos Suportados"
+ },
+ "search": {
+ "placeholder": "Pesquisar nome, descrição ou palavras-chave...",
+ "result": "{{count}} resultados de busca sobre {{keyword}}",
+ "searching": "Buscando..."
+ },
+ "sort": {
+ "mostLiked": "Mais Curtido",
+ "mostUsed": "Mais Usado",
+ "newest": "Mais Novo",
+ "oldest": "Mais Antigo",
+ "recommended": "Recomendado"
+ },
+ "tab": {
+ "assistants": "Assistentes",
+ "home": "Início",
+ "models": "Modelos",
+ "plugins": "Plugins",
+ "providers": "Provedores de Modelos"
+ }
+}
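Strings in these locale files, such as `modelCount` ("{{count}} modelos") and `search.result` above, use i18next-style `{{placeholder}}` interpolation. As a minimal illustrative sketch (not the real i18next API, just the placeholder syntax these files rely on):

```python
import re

def interpolate(template: str, values: dict) -> str:
    """Replace i18next-style {{name}} placeholders with the provided values.

    Unknown placeholders are left untouched, mirroring how a missing
    interpolation value would surface in the rendered UI.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(values.get(m.group(1), m.group(0))),
        template,
    )

# Strings taken from the pt-BR discover.json above
print(interpolate("{{count}} modelos", {"count": 12}))
# → 12 modelos
print(interpolate("{{count}} resultados de busca sobre {{keyword}}",
                  {"count": 3, "keyword": "Ollama"}))
# → 3 resultados de busca sobre Ollama
```

In the real application, i18next performs this substitution when a component calls `t('providers.modelCount', { count })`; the sketch only shows why the `{{…}}` markers must survive translation verbatim.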
diff --git a/DigitalHumanWeb/locales/pt-BR/error.json b/DigitalHumanWeb/locales/pt-BR/error.json
new file mode 100644
index 0000000..dc9479d
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Continuar a sessão",
+ "desc": "{{greeting}}, é um prazer poder continuar a te ajudar. Vamos continuar a conversa de onde paramos.",
+ "title": "Bem-vindo de volta, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Voltar para a página inicial",
+ "desc": "Tente novamente mais tarde, ou retorne ao mundo conhecido",
+ "retry": "Tentar novamente",
+ "title": "Ocorreu um problema na página.."
+ },
+ "fetchError": "Falha na solicitação",
+ "fetchErrorDetail": "Detalhes do erro",
+ "notFound": {
+ "backHome": "Voltar para a página inicial",
+ "check": "Por favor, verifique se a sua URL está correta",
+ "desc": "Não conseguimos encontrar a página que você está procurando",
+ "title": "Entrou em um território desconhecido?"
+ },
+ "pluginSettings": {
+ "desc": "Complete a seguinte configuração para começar a usar este plugin",
+ "title": "Configuração do plugin {{name}}"
+ },
+ "response": {
+ "400": "Desculpe, o servidor não entendeu sua solicitação. Verifique se os parâmetros da sua solicitação estão corretos",
+ "401": "Desculpe, o servidor recusou sua solicitação, possivelmente devido à falta de permissão ou autenticação inválida",
+ "403": "Desculpe, o servidor recusou sua solicitação. Você não tem permissão para acessar este conteúdo",
+ "404": "Desculpe, o servidor não encontrou a página ou recurso solicitado. Verifique se a URL está correta",
+ "405": "Desculpe, o servidor não suporta o método de solicitação que você está usando. Verifique se o método de solicitação está correto",
+ "406": "Desculpe, o servidor não pode completar a solicitação devido às características do conteúdo solicitado",
+ "407": "Desculpe, é necessário autenticação de proxy para continuar com esta solicitação",
+ "408": "Desculpe, o servidor excedeu o tempo de espera pela solicitação, verifique sua conexão de rede e tente novamente",
+ "409": "Desculpe, a solicitação não pôde ser processada devido a um conflito, possivelmente devido à incompatibilidade entre o estado do recurso e a solicitação",
+ "410": "Desculpe, o recurso solicitado foi permanentemente removido e não pode ser encontrado",
+ "411": "Desculpe, o servidor não pode processar uma solicitação sem um tamanho de conteúdo válido",
+ "412": "Desculpe, sua solicitação não atende às condições do servidor e não pode ser concluída",
+ "413": "Desculpe, sua solicitação contém uma quantidade de dados muito grande e o servidor não pode processá-la",
+ "414": "Desculpe, o URI da sua solicitação é muito longo e o servidor não pode processá-lo",
+ "415": "Desculpe, o servidor não pode processar o formato de mídia anexado à solicitação",
+ "416": "Desculpe, o servidor não pode atender à faixa solicitada",
+ "417": "Desculpe, o servidor não pode atender às suas expectativas",
+ "422": "Desculpe, sua solicitação está correta em termos de formato, mas contém erros semânticos e não pode ser respondida",
+ "423": "Desculpe, o recurso solicitado está bloqueado",
+ "424": "Desculpe, devido a uma solicitação anterior mal sucedida, a solicitação atual não pode ser concluída",
+ "426": "Desculpe, o servidor exige que seu cliente seja atualizado para uma versão de protocolo mais alta",
+ "428": "Desculpe, o servidor requer pré-condições e solicita que sua solicitação inclua cabeçalhos de condição corretos",
+ "429": "Desculpe, sua solicitação é muito frequente e o servidor está um pouco sobrecarregado, por favor, tente novamente mais tarde",
+ "431": "Desculpe, o campo de cabeçalho da sua solicitação é muito grande e o servidor não pode processá-lo",
+ "451": "Desculpe, por razões legais, o servidor se recusa a fornecer este recurso",
+ "500": "Desculpe, o servidor parece estar enfrentando algumas dificuldades e não pode concluir sua solicitação no momento. Por favor, tente novamente mais tarde",
+ "502": "Desculpe, o servidor parece estar temporariamente indisponível. Por favor, tente novamente mais tarde",
+ "503": "Desculpe, o servidor não pode processar sua solicitação no momento, possivelmente devido a sobrecarga ou manutenção. Por favor, tente novamente mais tarde",
+ "504": "Desculpe, o servidor não recebeu resposta do servidor upstream. Por favor, tente novamente mais tarde",
+ "AgentRuntimeError": "Erro de execução do modelo de linguagem Lobe, por favor, verifique as informações abaixo ou tente novamente",
+ "FreePlanLimit": "Atualmente, você é um usuário gratuito e não pode usar essa função. Por favor, faça upgrade para um plano pago para continuar usando.",
+ "InvalidAccessCode": "Senha de acesso inválida ou em branco. Por favor, insira a senha de acesso correta ou adicione uma Chave de API personalizada.",
+ "InvalidBedrockCredentials": "Credenciais Bedrock inválidas, por favor, verifique AccessKeyId/SecretAccessKey e tente novamente",
+ "InvalidClerkUser": "Desculpe, você ainda não fez login. Por favor, faça login ou registre uma conta antes de continuar.",
+ "InvalidGithubToken": "O Token de Acesso Pessoal do Github está incorreto ou vazio. Por favor, verifique o Token de Acesso Pessoal do Github e tente novamente.",
+ "InvalidOllamaArgs": "Configuração Ollama inválida, verifique a configuração do Ollama e tente novamente",
+ "InvalidProviderAPIKey": "{{provider}} API Key inválido ou em branco, por favor, verifique o {{provider}} API Key e tente novamente",
+ "LocationNotSupportError": "Desculpe, sua localização atual não suporta este serviço de modelo, pode ser devido a restrições geográficas ou serviço não disponível. Por favor, verifique se a localização atual suporta o uso deste serviço ou tente usar outras informações de localização.",
+ "NoOpenAIAPIKey": "A chave de API do OpenAI está em branco. Adicione uma chave de API personalizada do OpenAI",
+ "OllamaBizError": "Erro de negócio ao solicitar o serviço Ollama, verifique as informações a seguir ou tente novamente",
+ "OllamaServiceUnavailable": "O serviço Ollama não está disponível. Verifique se o Ollama está em execução corretamente ou se a configuração de CORS do Ollama está correta",
+ "OpenAIBizError": "Erro no serviço OpenAI solicitado. Por favor, verifique as informações abaixo ou tente novamente.",
+ "PluginApiNotFound": "Desculpe, o API especificado não existe no manifesto do plugin. Verifique se o método de solicitação corresponde ao API do manifesto do plugin",
+ "PluginApiParamsError": "Desculpe, a validação dos parâmetros de entrada da solicitação do plugin falhou. Verifique se os parâmetros de entrada correspondem às informações de descrição do API",
+ "PluginFailToTransformArguments": "Desculpe, falha ao transformar os argumentos da chamada do plugin. Por favor, tente regerar a mensagem do assistente ou tente novamente com um modelo de IA de chamada de ferramentas mais robusto.",
+ "PluginGatewayError": "Desculpe, ocorreu um erro no gateway do plugin. Verifique se a configuração do gateway do plugin está correta",
+ "PluginManifestInvalid": "Desculpe, a validação do manifesto de descrição do plugin falhou. Verifique se o formato do manifesto de descrição está correto",
+ "PluginManifestNotFound": "Desculpe, o servidor não encontrou o manifesto de descrição do plugin (manifest.json). Verifique se o endereço do arquivo de descrição do plugin está correto",
+ "PluginMarketIndexInvalid": "Desculpe, a validação do índice do plugin falhou. Verifique se o formato do arquivo do índice está correto",
+ "PluginMarketIndexNotFound": "Desculpe, o servidor não encontrou o índice do plugin. Verifique se o endereço do índice está correto",
+ "PluginMetaInvalid": "Desculpe, a validação das metainformações do plugin falhou. Verifique se o formato das metainformações do plugin está correto",
+ "PluginMetaNotFound": "Desculpe, o plugin não foi encontrado no índice. Verifique as informações de configuração do plugin no índice",
+ "PluginOpenApiInitError": "Desculpe, a inicialização do cliente OpenAPI falhou. Verifique se as informações de configuração do OpenAPI estão corretas",
+ "PluginServerError": "Erro na resposta do servidor do plugin. Verifique o arquivo de descrição do plugin, a configuração do plugin ou a implementação do servidor de acordo com as informações de erro abaixo",
+ "PluginSettingsInvalid": "Este plugin precisa ser configurado corretamente antes de ser usado. Verifique se sua configuração está correta",
+ "ProviderBizError": "Erro no serviço {{provider}} solicitado. Por favor, verifique as informações abaixo ou tente novamente.",
+ "StreamChunkError": "Erro de análise do bloco de mensagem da solicitação em fluxo. Verifique se a interface da API atual está em conformidade com os padrões ou entre em contato com seu fornecedor de API para mais informações.",
+ "SubscriptionPlanLimit": "Você atingiu o limite de sua assinatura e não pode usar essa função. Por favor, faça upgrade para um plano superior ou compre um pacote de recursos para continuar usando.",
+ "UnknownChatFetchError": "Desculpe, ocorreu um erro desconhecido na solicitação. Por favor, verifique as informações abaixo ou tente novamente."
+ },
+ "stt": {
+ "responseError": "Falha na solicitação de serviço. Verifique a configuração ou tente novamente"
+ },
+ "tts": {
+ "responseError": "Falha na solicitação de serviço. Verifique a configuração ou tente novamente"
+ },
+ "unlock": {
+ "addProxyUrl": "Adicionar URL de proxy OpenAI (opcional)",
+ "apiKey": {
+ "description": "Insira sua chave de API {{name}} para iniciar a sessão",
+ "title": "Usar chave de API personalizada {{name}}"
+ },
+ "closeMessage": "Fechar mensagem",
+ "confirm": "Confirmar e tentar novamente",
+ "oauth": {
+ "description": "O administrador ativou a autenticação de login unificado. Clique no botão abaixo para fazer login e desbloquear o aplicativo.",
+ "success": "Login bem-sucedido",
+ "title": "Faça login na sua conta",
+ "welcome": "Bem-vindo!"
+ },
+ "password": {
+ "description": "O administrador ativou a criptografia do aplicativo. Insira a senha do aplicativo para desbloqueá-lo. A senha só precisa ser inserida uma vez.",
+ "placeholder": "Insira a senha",
+ "title": "Insira a senha para desbloquear o aplicativo"
+ },
+ "tabs": {
+ "apiKey": "Chave de API personalizada",
+ "password": "Senha"
+ }
+ },
+ "upload": {
+ "desc": "Detalhes: {{detail}}",
+ "fileOnlySupportInServerMode": "O modo de implantação atual não suporta o upload de arquivos que não sejam imagens. Para fazer o upload de arquivos no formato {{ext}}, mude para a implantação do banco de dados no servidor ou utilize o serviço {{cloud}}.",
+ "networkError": "Por favor, verifique se sua rede está funcionando corretamente e se a configuração de CORS do serviço de armazenamento de arquivos está correta.",
+ "title": "Falha ao enviar o arquivo, verifique a conexão de rede ou tente novamente mais tarde",
+ "unknownError": "Erro: {{reason}}",
+ "uploadFailed": "Falha ao fazer o upload do arquivo."
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/file.json b/DigitalHumanWeb/locales/pt-BR/file.json
new file mode 100644
index 0000000..db3e1bd
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Gerencie seus arquivos e repositórios de conhecimento",
+ "detail": {
+ "basic": {
+ "createdAt": "Data de criação",
+ "filename": "Nome do arquivo",
+ "size": "Tamanho do arquivo",
+ "title": "Informações básicas",
+ "type": "Formato",
+ "updatedAt": "Data de atualização"
+ },
+ "data": {
+ "chunkCount": "Número de partes",
+ "embedding": {
+ "default": "Ainda não vetorizado",
+ "error": "Falha",
+ "pending": "Aguardando início",
+ "processing": "Processando",
+ "success": "Concluído"
+ },
+ "embeddingStatus": "Vetorização"
+ }
+ },
+ "empty": "Nenhum arquivo/pasta enviado até o momento",
+ "header": {
+ "actions": {
+ "newFolder": "Nova pasta",
+ "uploadFile": "Enviar arquivo",
+ "uploadFolder": "Enviar pasta"
+ },
+ "uploadButton": "Enviar"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Você está prestes a excluir este repositório de conhecimento. Os arquivos não serão excluídos, mas serão movidos para 'Todos os arquivos'. Após a exclusão, o repositório não poderá ser recuperado, por favor, proceda com cautela.",
+ "empty": "Clique em <1>+1> para começar a criar um repositório de conhecimento"
+ },
+ "new": "Novo repositório de conhecimento",
+ "title": "Repositório de conhecimento"
+ },
+ "networkError": "Falha ao acessar a base de conhecimento, por favor verifique a conexão de rede e tente novamente",
+ "notSupportGuide": {
+ "desc": "A instância atual está no modo de banco de dados cliente e não pode usar a funcionalidade de gerenciamento de arquivos. Por favor, mude para <1>modo de banco de dados servidor1>, ou use diretamente <3>LobeChat Cloud3>",
+ "features": {
+ "allKind": {
+ "desc": "Suporta os principais tipos de arquivos, incluindo formatos comuns como Word, PPT, Excel, PDF, TXT, além de arquivos de código como JS, Python",
+ "title": "Análise de vários tipos de arquivos"
+ },
+ "embeddings": {
+ "desc": "Utiliza modelos vetoriais de alto desempenho para vetorização de partes de texto, permitindo a busca semântica do conteúdo dos arquivos",
+ "title": "Semantização vetorial"
+ },
+ "repos": {
+ "desc": "Permite a criação de repositórios de conhecimento e a adição de diferentes tipos de arquivos, construindo seu próprio conhecimento de domínio",
+ "title": "Repositório de conhecimento"
+ }
+ },
+ "title": "O modo de implantação atual não suporta gerenciamento de arquivos"
+ },
+ "preview": {
+ "downloadFile": "Baixar arquivo",
+ "unsupportedFileAndContact": "Este formato de arquivo não é suportado para visualização online. Se você tiver interesse em visualizar, sinta-se à vontade para <1>nos enviar um feedback1>."
+ },
+ "searchFilePlaceholder": "Pesquisar arquivo",
+ "tab": {
+ "all": "Todos os arquivos",
+ "audios": "Áudios",
+ "documents": "Documentos",
+ "images": "Imagens",
+ "videos": "Vídeos",
+ "websites": "Sites"
+ },
+ "title": "Arquivos",
+ "uploadDock": {
+ "body": {
+ "collapse": "Recolher",
+ "item": {
+ "done": "Enviado",
+ "error": "Falha no envio, por favor, tente novamente",
+ "pending": "Preparando para enviar...",
+ "processing": "Processando arquivo...",
+ "restTime": "Restante {{time}}"
+ }
+ },
+ "totalCount": "Total de {{count}} itens",
+ "uploadStatus": {
+ "error": "Erro no envio",
+ "pending": "Aguardando envio",
+ "processing": "Enviando",
+ "success": "Envio concluído",
+ "uploading": "Enviando"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/knowledgeBase.json b/DigitalHumanWeb/locales/pt-BR/knowledgeBase.json
new file mode 100644
index 0000000..e770faf
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "Arquivo adicionado com sucesso, <1>ver agora1>",
+ "confirm": "Adicionar",
+ "id": {
+ "placeholder": "Selecione o conhecimento a ser adicionado",
+ "required": "Selecione o conhecimento",
+ "title": "Conhecimento alvo"
+ },
+ "title": "Adicionar ao conhecimento",
+ "totalFiles": "Foram selecionados {{count}} arquivos"
+ },
+ "createNew": {
+ "confirm": "Criar novo",
+ "description": {
+ "placeholder": "Descrição do conhecimento (opcional)"
+ },
+ "formTitle": "Informações básicas",
+ "name": {
+ "placeholder": "Nome do conhecimento",
+ "required": "Por favor, preencha o nome do conhecimento"
+ },
+ "title": "Criar conhecimento"
+ },
+ "tab": {
+ "evals": "Avaliações",
+ "files": "Documentos",
+ "settings": "Configurações",
+ "testing": "Teste de recuperação"
+ },
+ "title": "Conhecimento"
+}
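Strings like `addToKnowledgeBase.addSuccess` above embed numbered tags (`<1>…</1>`) that react-i18next's Trans component maps onto child elements such as links. A minimal, hypothetical sketch of how such a template splits into plain text and numbered tag slots (not the actual react-i18next implementation):

```python
import re

def split_trans(template: str):
    """Split an i18next Trans-style string into plain text and (index, inner text) parts."""
    parts = []
    # Either a matched numbered tag pair <N>...</N>, or a run of plain text.
    for m in re.finditer(r"<(\d+)>(.*?)</\1>|([^<]+)", template):
        if m.group(1):
            parts.append((int(m.group(1)), m.group(2)))  # (slot index, inner text)
        else:
            parts.append(m.group(3))
    return parts

print(split_trans("Arquivo adicionado com sucesso, <1>ver agora</1>"))
# → ['Arquivo adicionado com sucesso, ', (1, 'ver agora')]
```

The slot index selects which React child renders the inner text, which is why the opening `<1>` and closing `</1>` must both survive in every translated string.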
diff --git a/DigitalHumanWeb/locales/pt-BR/market.json b/DigitalHumanWeb/locales/pt-BR/market.json
new file mode 100644
index 0000000..41e9a88
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Adicionar assistente",
+ "addAgentAndConverse": "Adicionar assistente e conversar",
+ "addAgentSuccess": "Adição bem-sucedida",
+ "guide": {
+ "func1": {
+ "desc1": "Na janela de conversa, acesse a página de configurações que você deseja enviar para o assistente através do ícone no canto superior direito;",
+ "desc2": "Clique no botão de envio para o mercado de assistentes no canto superior direito.",
+ "tag": "Método um",
+ "title": "Enviar através do LobeChat"
+ },
+ "func2": {
+ "button": "Ir para o repositório de assistentes no Github",
+ "desc": "Se deseja adicionar o assistente ao índice, use agent-template.json ou agent-template-full.json para criar uma entrada no diretório de plugins, escreva uma breve descrição e marque adequadamente, em seguida, crie uma solicitação de recebimento.",
+ "tag": "Método dois",
+ "title": "Enviar através do Github"
+ }
+ },
+ "search": {
+ "placeholder": "Buscar por nome, descrição ou palavra-chave do assistente..."
+ },
+ "sidebar": {
+ "comment": "Comentários",
+ "prompt": "Dica",
+ "title": "Detalhes do assistente"
+ },
+ "submitAgent": "Enviar assistente",
+ "title": {
+ "allAgents": "Todos os assistentes",
+ "recentSubmits": "Envios recentes"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/metadata.json b/DigitalHumanWeb/locales/pt-BR/metadata.json
new file mode 100644
index 0000000..ff20720
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} traz a você a melhor experiência de uso do ChatGPT, Claude, Gemini e OLLaMA WebUI",
+ "title": "{{appName}}: Ferramenta de eficiência pessoal em IA, dê a si mesmo um cérebro mais inteligente"
+ },
+ "discover": {
+ "assistants": {
+ "description": "Criação de conteúdo, redação, perguntas e respostas, geração de imagens, geração de vídeos, geração de voz, Agentes Inteligentes, fluxos de trabalho automatizados, personalize seu assistente inteligente AI / GPTs / OLLaMA",
+ "title": "Assistentes de IA"
+ },
+ "description": "Criação de conteúdo, redação, perguntas e respostas, geração de imagens, geração de vídeos, geração de voz, Agentes Inteligentes, fluxos de trabalho automatizados, aplicativos de IA personalizados, personalize sua estação de trabalho de aplicativos AI",
+ "models": {
+ "description": "Explore os principais modelos de IA OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "Modelos de IA"
+ },
+ "plugins": {
+ "description": "Explore gráficos, geração acadêmica, geração de imagens, geração de vídeos, geração de voz e automação de fluxos de trabalho, integrando ricas capacidades de plugins para o seu assistente.",
+ "title": "Plugins de IA"
+ },
+ "providers": {
+ "description": "Explore os principais fornecedores de modelos OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "Provedores de Modelos de IA"
+ },
+ "search": "Pesquisar",
+ "title": "Descobrir"
+ },
+ "plugins": {
+ "description": "Pesquisa, geração de gráficos, acadêmico, geração de imagens, geração de vídeos, geração de voz, fluxos de trabalho automatizados, personalize as capacidades de plugins ToolCall exclusivos do ChatGPT / Claude",
+ "title": "Mercado de Plugins"
+ },
+ "welcome": {
+ "description": "{{appName}} traz a você a melhor experiência de uso do ChatGPT, Claude, Gemini e OLLaMA WebUI",
+ "title": "Bem-vindo ao {{appName}}: Ferramenta de eficiência pessoal em IA, dê a si mesmo um cérebro mais inteligente"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/migration.json b/DigitalHumanWeb/locales/pt-BR/migration.json
new file mode 100644
index 0000000..c17674d
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Limpar banco de dados",
+ "downloadBackup": "Baixar backup",
+ "reUpgrade": "Reinstalar",
+ "start": "Começar",
+ "upgrade": "Atualização"
+ },
+ "clear": {
+ "confirm": "Você está prestes a limpar os dados locais (as configurações globais não serão afetadas). Por favor, confirme se você fez o download do backup dos dados."
+ },
+ "description": "Na nova versão, o armazenamento de dados do {{appName}} teve um grande avanço. Portanto, precisamos atualizar os dados da versão anterior para proporcionar uma melhor experiência de uso.",
+ "features": {
+ "capability": {
+ "desc": "Baseado na tecnologia IndexedDB, é capaz de armazenar todas as suas mensagens de conversa ao longo da vida.",
+ "title": "Alta Capacidade"
+ },
+ "performance": {
+ "desc": "Milhões de mensagens indexadas automaticamente, com respostas de consulta em milissegundos.",
+ "title": "Alto Desempenho"
+ },
+ "use": {
+ "desc": "Suporta a busca por títulos, descrições, etiquetas, conteúdo de mensagens e até textos traduzidos, aumentando significativamente a eficiência das buscas diárias.",
+ "title": "Mais Fácil de Usar"
+ }
+ },
+ "title": "Evolução dos Dados do {{appName}}",
+ "upgrade": {
+ "error": {
+ "subTitle": "Lamentamos, ocorreu uma anomalia durante o processo de atualização do banco de dados. Por favor, tente as seguintes soluções: A. Limpe os dados locais e reimporte os dados de backup; B. Clique no botão 'Reatualizar'.
Se o erro persistir, por favor <1>envie um problema1>, e nós iremos ajudá-lo a resolver o mais rápido possível.",
+ "title": "Falha na Atualização do Banco de Dados"
+ },
+ "success": {
+ "subTitle": "O banco de dados do {{appName}} foi atualizado para a versão mais recente, comece a aproveitar agora!",
+ "title": "Atualização do Banco de Dados Bem-Sucedida"
+ }
+ },
+ "upgradeTip": "A atualização leva aproximadamente 10 a 20 segundos, por favor, não feche o {{appName}} durante o processo."
+ },
+ "migrateError": {
+ "missVersion": "O arquivo de importação está sem o número da versão. Por favor, verifique o arquivo e tente novamente.",
+ "noMigration": "Não foi encontrado um plano de migração correspondente à versão atual. Por favor, verifique o número da versão e tente novamente. Se o problema persistir, por favor, envie um relatório de problema."
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/modelProvider.json b/DigitalHumanWeb/locales/pt-BR/modelProvider.json
new file mode 100644
index 0000000..2fda52a
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "A versão da API da Azure, seguindo o formato AAAA-MM-DD, consulte a [versão mais recente](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Obter lista",
+ "title": "Versão da API da Azure"
+ },
+ "empty": "Por favor, insira o ID do modelo para adicionar o primeiro modelo",
+ "endpoint": {
+ "desc": "Você pode encontrar este valor na seção 'Chaves e Endpoints' ao verificar os recursos no portal Azure",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Endereço da API Azure"
+ },
+ "modelListPlaceholder": "Selecione ou adicione o modelo OpenAI que você implantou",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Você pode encontrar este valor na seção 'Chaves e Endpoints' ao verificar os recursos no portal Azure. Você pode usar KEY1 ou KEY2",
+ "placeholder": "Azure API Key",
+ "title": "API Key"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Insira o AWS Access Key Id",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "Teste se o AccessKeyId / SecretAccessKey foi preenchido corretamente"
+ },
+ "region": {
+ "desc": "Insira o AWS Region",
+ "placeholder": "AWS Region",
+ "title": "AWS Region"
+ },
+ "secretAccessKey": {
+ "desc": "Insira o AWS Secret Access Key",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "Se você estiver usando AWS SSO/STS, insira seu Token de Sessão da AWS",
+ "placeholder": "Token de Sessão da AWS",
+ "title": "Token de Sessão da AWS (opcional)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Região de serviço personalizada",
+ "customSessionToken": "Token de Sessão Personalizado",
+ "description": "Digite sua AWS AccessKeyId / SecretAccessKey para iniciar a sessão. O aplicativo não irá armazenar suas configurações de autenticação",
+ "title": "Usar informações de autenticação Bedrock personalizadas"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Insira seu PAT do Github, clique [aqui](https://github.com/settings/tokens) para criar",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Teste se o endereço do proxy está corretamente preenchido",
+ "title": "Verificação de Conectividade"
+ },
+ "customModelName": {
+ "desc": "Adicione modelos personalizados, separe múltiplos modelos com vírgulas (,)",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Nomes dos Modelos Personalizados"
+ },
+ "download": {
+ "desc": "Ollama is downloading the model. Please do not close this page. It will resume from where it left off if you restart the download.",
+ "remainingTime": "Remaining Time",
+ "speed": "Download Speed",
+ "title": "Downloading model {{model}}"
+ },
+ "endpoint": {
+ "desc": "Insira o endereço do proxy de interface da Ollama, se não foi especificado localmente, pode deixar em branco",
+ "title": "Endereço do Proxy de Interface"
+ },
+ "setup": {
+ "cors": {
+ "description": "Devido às restrições de segurança do navegador, você precisa configurar o Ollama para permitir o acesso entre domínios.",
+ "linux": {
+ "env": "Sob a seção [Service], adicione `Environment` e inclua a variável de ambiente OLLAMA_ORIGINS:",
+ "reboot": "Recarregue o systemd e reinicie o Ollama.",
+ "systemd": "Chame o systemd para editar o serviço ollama:"
+ },
+ "macos": "Abra o aplicativo 'Terminal', cole o comando abaixo e pressione Enter para executar:",
+ "reboot": "Após a conclusão, reinicie o serviço Ollama.",
+ "title": "Configurar o Ollama para permitir acesso entre domínios",
+ "windows": "No Windows, acesse o 'Painel de Controle' e edite as variáveis de ambiente do sistema. Crie uma nova variável de ambiente chamada 'OLLAMA_ORIGINS' para sua conta de usuário, com o valor '*', e clique em 'OK/Aplicar' para salvar."
+ },
+ "install": {
+ "description": "Certifique-se de que você ativou o Ollama. Se ainda não o fez, baixe o Ollama no site oficial <1>aqui1>.",
+ "docker": "Se preferir usar o Docker, o Ollama também oferece uma imagem oficial. Você pode puxá-la com o comando:",
+ "linux": {
+ "command": "Instale com o comando a seguir:",
+ "manual": "Ou, se preferir, consulte o <1>Guia de Instalação Manual do Linux1> para instalar manualmente."
+ },
+ "title": "Instale e inicie o aplicativo Ollama localmente",
+ "windowsTab": "Windows (Versão de Visualização)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Cancel Download",
+ "confirm": "Download",
+ "description": "Enter your Ollama model tag to continue the session",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Starting download...",
+ "title": "Download specified Ollama model"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Zero e Um"
+ },
+ "zhipu": {
+ "title": "Zhipu"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/models.json b/DigitalHumanWeb/locales/pt-BR/models.json
new file mode 100644
index 0000000..a496c8d
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B, com um rico conjunto de amostras de treinamento, oferece desempenho superior em aplicações industriais."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B suporta 16K Tokens, oferecendo capacidade de geração de linguagem eficiente e fluida."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro, como um membro importante da série de modelos de IA da 360, atende a diversas aplicações de linguagem natural com sua capacidade eficiente de processamento de texto, suportando compreensão de longos textos e diálogos em múltiplas rodadas."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo oferece poderosas capacidades de computação e diálogo, com excelente compreensão semântica e eficiência de geração, sendo a solução ideal de assistente inteligente para empresas e desenvolvedores."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K enfatiza segurança semântica e responsabilidade, projetado especificamente para cenários de aplicação com altas exigências de segurança de conteúdo, garantindo precisão e robustez na experiência do usuário."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro é um modelo avançado de processamento de linguagem natural lançado pela 360, com excelente capacidade de geração e compreensão de texto, destacando-se especialmente na geração e criação de conteúdo, capaz de lidar com tarefas complexas de conversão de linguagem e interpretação de papéis."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra é a versão mais poderosa da série de grandes modelos Xinghuo, que, ao atualizar a conexão de busca online, melhora a capacidade de compreensão e resumo de conteúdo textual. É uma solução abrangente para aumentar a produtividade no trabalho e responder com precisão às demandas, sendo um produto inteligente líder na indústria."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Utiliza tecnologia de busca aprimorada para conectar completamente o grande modelo com conhecimento de domínio e conhecimento da web. Suporta upload de vários documentos, como PDF e Word, e entrada de URLs, garantindo acesso a informações de forma rápida e abrangente, com resultados precisos e profissionais."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Otimizado para cenários de alta frequência empresarial, com melhorias significativas de desempenho e excelente custo-benefício. Em comparação com o modelo Baichuan2, a criação de conteúdo aumentou em 20%, a resposta a perguntas de conhecimento em 17% e a capacidade de interpretação de papéis em 40%. O desempenho geral é superior ao do GPT-3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "Possui uma janela de contexto ultra longa de 128K, otimizada para cenários de alta frequência empresarial, com melhorias significativas de desempenho e excelente custo-benefício. Em comparação com o modelo Baichuan2, a criação de conteúdo aumentou em 20%, a resposta a perguntas de conhecimento em 17% e a capacidade de interpretação de papéis em 40%. O desempenho geral é superior ao do GPT-3.5."
+ },
+ "Baichuan4": {
+ "description": "O modelo é o melhor do país, superando modelos estrangeiros em tarefas em chinês, como enciclopédias, textos longos e criação de conteúdo. Também possui capacidades multimodais líderes na indústria, com desempenho excepcional em várias avaliações de referência."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) é um modelo inovador, adequado para aplicações em múltiplas áreas e tarefas complexas."
+ },
+ "Max-32k": {
+ "description": "O Spark Max 32K possui uma grande capacidade de processamento de contexto, com uma compreensão e raciocínio lógico mais robustos, suportando entradas de texto de 32K tokens, adequado para leitura de documentos longos, perguntas e respostas sobre conhecimento privado e outros cenários."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO é uma fusão de múltiplos modelos altamente flexível, projetada para oferecer uma experiência criativa excepcional."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) é um modelo de instrução de alta precisão, adequado para cálculos complexos."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) oferece saídas de linguagem otimizadas e diversas possibilidades de aplicação."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Atualização do modelo Phi-3-mini."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "Mesmo modelo Phi-3-medium, mas com um tamanho de contexto maior para RAG ou prompting de poucos exemplos."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "Um modelo de 14B parâmetros, que apresenta melhor qualidade do que o Phi-3-mini, com foco em dados densos de raciocínio de alta qualidade."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "Mesmo modelo Phi-3-mini, mas com um tamanho de contexto maior para RAG ou prompting de poucos exemplos."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "O menor membro da família Phi-3. Otimizado tanto para qualidade quanto para baixa latência."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "Mesmo modelo Phi-3-small, mas com um tamanho de contexto maior para RAG ou prompting de poucos exemplos."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "Um modelo de 7B parâmetros, que apresenta melhor qualidade do que o Phi-3-mini, com foco em dados densos de raciocínio de alta qualidade."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K possui capacidade de processamento de contexto extremamente grande, capaz de lidar com até 128K de informações de contexto, especialmente adequado para análise completa e processamento de associações lógicas de longo prazo em conteúdos longos, podendo fornecer lógica fluida e consistente e suporte a diversas citações em comunicações textuais complexas."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Como uma versão de teste do Qwen2, Qwen1.5 utiliza dados em larga escala para alcançar funcionalidades de diálogo mais precisas."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) oferece respostas rápidas e capacidade de diálogo natural, adequado para ambientes multilíngues."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 é um modelo de linguagem universal avançado, suportando diversos tipos de instruções."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 é uma nova série de modelos de linguagem em larga escala, projetada para otimizar o processamento de tarefas instrucionais."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 é uma nova série de modelos de linguagem em larga escala, projetada para otimizar o processamento de tarefas instrucionais."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 é uma nova série de modelos de linguagem em larga escala, com maior capacidade de compreensão e geração."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 é uma nova série de modelos de linguagem em larga escala, projetada para otimizar o processamento de tarefas instrucionais."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder foca na escrita de código."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math foca na resolução de problemas na área de matemática, oferecendo respostas especializadas para questões de alta dificuldade."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B é uma versão de código aberto, oferecendo uma experiência de diálogo otimizada para aplicações de conversa."
+ },
+ "abab5.5-chat": {
+ "description": "Voltado para cenários de produtividade, suportando o processamento de tarefas complexas e geração de texto eficiente, adequado para aplicações em áreas profissionais."
+ },
+ "abab5.5s-chat": {
+ "description": "Projetado para cenários de diálogo de personagens em chinês, oferecendo capacidade de geração de diálogos de alta qualidade em chinês, adequado para várias aplicações."
+ },
+ "abab6.5g-chat": {
+ "description": "Projetado para diálogos de personagens multilíngues, suportando geração de diálogos de alta qualidade em inglês e várias outras línguas."
+ },
+ "abab6.5s-chat": {
+ "description": "Adequado para uma ampla gama de tarefas de processamento de linguagem natural, incluindo geração de texto, sistemas de diálogo, etc."
+ },
+ "abab6.5t-chat": {
+ "description": "Otimizado para cenários de diálogo de personagens em chinês, oferecendo capacidade de geração de diálogos fluentes e que respeitam os hábitos de expressão em chinês."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Modelo de chamada de função de código aberto da Fireworks, oferecendo excelente capacidade de execução de instruções e características personalizáveis."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "O Firefunction-v2 da Fireworks é um modelo de chamada de função de alto desempenho, desenvolvido com base no Llama-3 e otimizado para cenários como chamadas de função, diálogos e seguimento de instruções."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b é um modelo de linguagem visual que pode receber entradas de imagem e texto simultaneamente, treinado com dados de alta qualidade, adequado para tarefas multimodais."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "O modelo Gemma 2 9B Instruct, baseado na tecnologia anterior da Google, é adequado para responder perguntas, resumir e realizar inferências em diversas tarefas de geração de texto."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "O modelo Llama 3 70B Instruct é otimizado para diálogos multilíngues e compreensão de linguagem natural, superando a maioria dos modelos concorrentes."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "O modelo Llama 3 70B Instruct (versão HF) mantém consistência com os resultados da implementação oficial, adequado para tarefas de seguimento de instruções de alta qualidade."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "O modelo Llama 3 8B Instruct é otimizado para diálogos e tarefas multilíngues, apresentando desempenho excepcional e eficiência."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "O modelo Llama 3 8B Instruct (versão HF) é consistente com os resultados da implementação oficial, apresentando alta consistência e compatibilidade entre plataformas."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "O modelo Llama 3.1 405B Instruct possui parâmetros em escala extremamente grande, adequado para seguimento de instruções em tarefas complexas e cenários de alta carga."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "O modelo Llama 3.1 70B Instruct oferece excelente compreensão e geração de linguagem natural, sendo a escolha ideal para tarefas de diálogo e análise."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "O modelo Llama 3.1 8B Instruct é otimizado para diálogos multilíngues, superando a maioria dos modelos de código aberto e fechado em benchmarks do setor."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "O modelo Mixtral MoE 8x22B Instruct, com parâmetros em grande escala e arquitetura de múltiplos especialistas, suporta o processamento eficiente de tarefas complexas."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "O modelo Mixtral MoE 8x7B Instruct, com uma arquitetura de múltiplos especialistas, oferece seguimento e execução de instruções de forma eficiente."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "O modelo Mixtral MoE 8x7B Instruct (versão HF) apresenta desempenho consistente com a implementação oficial, adequado para uma variedade de cenários de tarefas eficientes."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "O modelo MythoMax L2 13B combina novas técnicas de fusão, sendo especializado em narrativas e interpretação de personagens."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "O modelo Phi 3 Vision Instruct é um modelo multimodal leve, capaz de processar informações visuais e textuais complexas, com forte capacidade de raciocínio."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "O modelo StarCoder 15.5B suporta tarefas de programação avançadas, com capacidade multilíngue aprimorada, adequado para geração e compreensão de código complexos."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "O modelo StarCoder 7B é treinado para mais de 80 linguagens de programação, apresentando excelente capacidade de preenchimento de código e compreensão de contexto."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "O modelo Yi-Large oferece excelente capacidade de processamento multilíngue, adequado para diversas tarefas de geração e compreensão de linguagem."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "Um modelo multilíngue com 398B de parâmetros (94B ativos), oferecendo uma janela de contexto longa de 256K, chamada de função, saída estruturada e geração fundamentada."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "Um modelo multilíngue com 52B de parâmetros (12B ativos), oferecendo uma janela de contexto longa de 256K, chamada de função, saída estruturada e geração fundamentada."
+ },
+ "ai21-jamba-instruct": {
+ "description": "Um modelo LLM baseado em Mamba de qualidade de produção para alcançar desempenho, qualidade e eficiência de custo de classe mundial."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "O Claude 3.5 Sonnet eleva o padrão da indústria, superando modelos concorrentes e o Claude 3 Opus, apresentando um desempenho excepcional em avaliações amplas, ao mesmo tempo que mantém a velocidade e o custo de nossos modelos de nível médio."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "O Claude 3 Haiku é o modelo mais rápido e compacto da Anthropic, oferecendo uma velocidade de resposta quase instantânea. Ele pode responder rapidamente a consultas e solicitações simples. Os clientes poderão construir uma experiência de IA sem costura que imita a interação humana. O Claude 3 Haiku pode processar imagens e retornar saídas de texto, com uma janela de contexto de 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "O Claude 3 Opus é o modelo de IA mais poderoso da Anthropic, com desempenho de ponta em tarefas altamente complexas. Ele pode lidar com prompts abertos e cenários não vistos, apresentando fluência excepcional e compreensão semelhante à humana. O Claude 3 Opus demonstra as possibilidades de geração de IA na vanguarda. O Claude 3 Opus pode processar imagens e retornar saídas de texto, com uma janela de contexto de 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "O Claude 3 Sonnet da Anthropic alcança um equilíbrio ideal entre inteligência e velocidade — especialmente adequado para cargas de trabalho empresariais. Ele oferece a máxima utilidade a um custo inferior ao dos concorrentes e foi projetado para ser um modelo confiável e durável, adequado para implantações de IA em larga escala. O Claude 3 Sonnet pode processar imagens e retornar saídas de texto, com uma janela de contexto de 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Um modelo rápido, econômico e ainda muito capaz, capaz de lidar com uma variedade de tarefas, incluindo diálogos cotidianos, análise de texto, resumos e perguntas e respostas de documentos."
+ },
+ "anthropic.claude-v2": {
+ "description": "O modelo da Anthropic demonstra alta capacidade em uma ampla gama de tarefas, desde diálogos complexos e geração de conteúdo criativo até o seguimento detalhado de instruções."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "A versão atualizada do Claude 2, com o dobro da janela de contexto, além de melhorias na confiabilidade, taxa de alucinação e precisão baseada em evidências em documentos longos e contextos RAG."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku é o modelo mais rápido e compacto da Anthropic, projetado para oferecer respostas quase instantâneas. Ele possui desempenho direcionado rápido e preciso."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus é o modelo mais poderoso da Anthropic para lidar com tarefas altamente complexas. Ele se destaca em desempenho, inteligência, fluência e compreensão."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet oferece capacidades que vão além do Opus e uma velocidade superior ao Sonnet, mantendo o mesmo preço do Sonnet. O Sonnet é especialmente habilidoso em programação, ciência de dados, processamento visual e tarefas de agente."
+ },
+ "aya": {
+ "description": "Aya 23 é um modelo multilíngue lançado pela Cohere, suportando 23 idiomas, facilitando aplicações linguísticas diversificadas."
+ },
+ "aya:35b": {
+ "description": "Aya 23 é um modelo multilíngue lançado pela Cohere, suportando 23 idiomas, facilitando aplicações linguísticas diversificadas."
+ },
+ "charglm-3": {
+ "description": "O CharGLM-3 é projetado para interpretação de personagens e companhia emocional, suportando memória de múltiplas rodadas e diálogos personalizados, com ampla aplicação."
+ },
+ "chatgpt-4o-latest": {
+ "description": "O ChatGPT-4o é um modelo dinâmico, atualizado em tempo real para manter a versão mais atual. Ele combina uma poderosa capacidade de compreensão e geração de linguagem, adequado para cenários de aplicação em larga escala, incluindo atendimento ao cliente, educação e suporte técnico."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 oferece avanços em capacidades críticas para empresas, incluindo um contexto líder do setor de 200K tokens, uma redução significativa na taxa de alucinação do modelo, prompts de sistema e uma nova funcionalidade de teste: chamadas de ferramentas."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 oferece avanços em capacidades críticas para empresas, incluindo um contexto líder do setor de 200K tokens, uma redução significativa na taxa de alucinação do modelo, prompts de sistema e uma nova funcionalidade de teste: chamadas de ferramentas."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet oferece capacidades que superam o Opus e uma velocidade mais rápida que o Sonnet, mantendo o mesmo preço. O Sonnet é especialmente bom em programação, ciência de dados, processamento visual e tarefas de agente."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku é o modelo mais rápido e compacto da Anthropic, projetado para respostas quase instantâneas. Ele possui desempenho direcionado rápido e preciso."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus é o modelo mais poderoso da Anthropic para lidar com tarefas altamente complexas. Ele se destaca em desempenho, inteligência, fluência e compreensão."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet oferece um equilíbrio ideal entre inteligência e velocidade para cargas de trabalho empresariais. Ele fornece máxima utilidade a um custo mais baixo, sendo confiável e adequado para implantação em larga escala."
+ },
+ "claude-instant-1.2": {
+ "description": "O modelo da Anthropic é utilizado para geração de texto de baixa latência e alta taxa de transferência, suportando a geração de centenas de páginas de texto."
+ },
+ "codegeex-4": {
+ "description": "O CodeGeeX-4 é um poderoso assistente de programação AI, suportando perguntas e respostas inteligentes e autocompletar em várias linguagens de programação, aumentando a eficiência do desenvolvimento."
+ },
+ "codegemma": {
+ "description": "CodeGemma é um modelo de linguagem leve especializado em diferentes tarefas de programação, suportando iterações rápidas e integração."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma é um modelo de linguagem leve especializado em diferentes tarefas de programação, suportando iterações rápidas e integração."
+ },
+ "codellama": {
+ "description": "Code Llama é um LLM focado em geração e discussão de código, combinando suporte a uma ampla gama de linguagens de programação, adequado para ambientes de desenvolvedores."
+ },
+ "codellama:13b": {
+ "description": "Code Llama é um LLM focado em geração e discussão de código, combinando suporte a uma ampla gama de linguagens de programação, adequado para ambientes de desenvolvedores."
+ },
+ "codellama:34b": {
+ "description": "Code Llama é um LLM focado em geração e discussão de código, combinando suporte a uma ampla gama de linguagens de programação, adequado para ambientes de desenvolvedores."
+ },
+ "codellama:70b": {
+ "description": "Code Llama é um LLM focado em geração e discussão de código, combinando suporte a uma ampla gama de linguagens de programação, adequado para ambientes de desenvolvedores."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 é um modelo de linguagem de grande escala treinado com uma vasta quantidade de dados de código, projetado para resolver tarefas de programação complexas."
+ },
+ "codestral": {
+ "description": "Codestral é o primeiro modelo de código da Mistral AI, oferecendo suporte excepcional para tarefas de geração de código."
+ },
+ "codestral-latest": {
+ "description": "Codestral é um modelo gerador de ponta focado em geração de código, otimizado para preenchimento intermediário e tarefas de conclusão de código."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B é um modelo projetado para seguir instruções, diálogos e programação."
+ },
+ "cohere-command-r": {
+ "description": "Command R é um modelo generativo escalável voltado para RAG e uso de ferramentas, permitindo IA em escala de produção para empresas."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ é um modelo otimizado para RAG de última geração, projetado para lidar com cargas de trabalho de nível empresarial."
+ },
+ "command-r": {
+ "description": "Command R é um LLM otimizado para tarefas de diálogo e longos contextos, especialmente adequado para interações dinâmicas e gerenciamento de conhecimento."
+ },
+ "command-r-plus": {
+ "description": "Command R+ é um modelo de linguagem de grande porte de alto desempenho, projetado para cenários empresariais reais e aplicações complexas."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct oferece capacidade de processamento de instruções altamente confiável, suportando aplicações em diversos setores."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 combina as excelentes características das versões anteriores, aprimorando a capacidade geral e de codificação."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B é um modelo avançado treinado para diálogos de alta complexidade."
+ },
+ "deepseek-chat": {
+ "description": "Um novo modelo de código aberto que combina capacidades gerais e de codificação, não apenas preservando a capacidade de diálogo geral do modelo Chat original e a poderosa capacidade de processamento de código do modelo Coder, mas também alinhando-se melhor às preferências humanas. Além disso, o DeepSeek-V2.5 também alcançou melhorias significativas em várias áreas, como tarefas de escrita e seguimento de instruções."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 é um modelo de código de especialistas abertos, destacando-se em tarefas de codificação, comparável ao GPT4-Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 é um modelo de código de especialistas abertos, destacando-se em tarefas de codificação, comparável ao GPT4-Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 é um modelo de linguagem eficiente Mixture-of-Experts, adequado para demandas de processamento econômico."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B é o modelo de código projetado do DeepSeek, oferecendo forte capacidade de geração de código."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Um novo modelo de código aberto que integra capacidades gerais e de codificação, não apenas preservando a capacidade de diálogo geral do modelo Chat original e a poderosa capacidade de processamento de código do modelo Coder, mas também alinhando-se melhor às preferências humanas. Além disso, o DeepSeek-V2.5 também alcançou melhorias significativas em várias áreas, como tarefas de escrita e seguimento de instruções."
+ },
+ "emohaa": {
+ "description": "O Emohaa é um modelo psicológico com capacidade de consultoria profissional, ajudando os usuários a entender questões emocionais."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Ajuste) oferece desempenho estável e ajustável, sendo a escolha ideal para soluções de tarefas complexas."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Ajuste) oferece excelente suporte multimodal, focando na resolução eficaz de tarefas complexas."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro é o modelo de IA de alto desempenho do Google, projetado para expansão em uma ampla gama de tarefas."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 é um modelo multimodal eficiente, suportando a expansão de aplicações amplas."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "O Gemini 1.5 Flash 002 é um modelo multimodal eficiente, que suporta uma ampla gama de aplicações."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 é projetado para lidar com cenários de tarefas em larga escala, oferecendo velocidade de processamento incomparável."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "O Gemini 1.5 Flash 8B 0924 é o mais recente modelo experimental, com melhorias significativas de desempenho em casos de uso de texto e multimídia."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 oferece capacidade de processamento multimodal otimizada, adequada para uma variedade de cenários de tarefas complexas."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash é o mais recente modelo de IA multimodal do Google, com capacidade de processamento rápido, suportando entradas de texto, imagem e vídeo, adequado para uma variedade de tarefas de expansão eficiente."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 é uma solução de IA multimodal escalável, suportando uma ampla gama de tarefas complexas."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "O Gemini 1.5 Pro 002 é o mais recente modelo pronto para produção, oferecendo saídas de maior qualidade, com melhorias significativas em tarefas matemáticas, contextos longos e tarefas visuais."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 oferece excelente capacidade de processamento multimodal, proporcionando maior flexibilidade para o desenvolvimento de aplicações."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 combina as mais recentes tecnologias de otimização, trazendo maior eficiência no processamento de dados multimodais."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro suporta até 2 milhões de tokens, sendo a escolha ideal para modelos multimodais de médio porte, adequados para suporte multifacetado em tarefas complexas."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B é adequado para o processamento de tarefas de pequeno a médio porte, combinando custo e eficiência."
+ },
+ "gemma2": {
+ "description": "Gemma 2 é um modelo eficiente lançado pelo Google, abrangendo uma variedade de cenários de aplicação, desde aplicações pequenas até processamento de dados complexos."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B é um modelo otimizado para integração de tarefas e ferramentas específicas."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 é um modelo eficiente lançado pelo Google, abrangendo uma variedade de cenários de aplicação, desde aplicações pequenas até processamento de dados complexos."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 é um modelo eficiente lançado pelo Google, abrangendo uma variedade de cenários de aplicação, desde aplicações pequenas até processamento de dados complexos."
+ },
+ "general": {
+ "description": "Spark Lite é um modelo de linguagem leve, com latência extremamente baixa e alta capacidade de processamento, totalmente gratuito e aberto, suportando funcionalidade de busca online em tempo real. Sua característica de resposta rápida o torna excelente em aplicações de inferência e ajuste de modelo em dispositivos de baixa potência, proporcionando aos usuários um excelente custo-benefício e experiência inteligente, especialmente em perguntas e respostas, geração de conteúdo e cenários de busca."
+ },
+ "generalv3": {
+ "description": "Spark Pro é um modelo de linguagem de alto desempenho otimizado para áreas profissionais, focando em matemática, programação, medicina, educação e outros campos, e suportando busca online e plugins integrados como clima e data. Seu modelo otimizado apresenta desempenho excepcional e eficiência em perguntas e respostas complexas, compreensão de linguagem e criação de texto de alto nível, sendo a escolha ideal para cenários de aplicação profissional."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max é a versão mais completa, suportando busca online e muitos plugins integrados. Suas capacidades centrais totalmente otimizadas, juntamente com a definição de papéis do sistema e a funcionalidade de chamada de funções, fazem com que seu desempenho em vários cenários de aplicação complexos seja extremamente excepcional."
+ },
+ "glm-4": {
+ "description": "O GLM-4 é a versão antiga lançada em janeiro de 2024, atualmente substituída pelo mais poderoso GLM-4-0520."
+ },
+ "glm-4-0520": {
+ "description": "O GLM-4-0520 é a versão mais recente do modelo, projetada para tarefas altamente complexas e diversificadas, com desempenho excepcional."
+ },
+ "glm-4-air": {
+ "description": "O GLM-4-Air é uma versão econômica, com desempenho próximo ao GLM-4, oferecendo alta velocidade a um preço acessível."
+ },
+ "glm-4-airx": {
+ "description": "O GLM-4-AirX oferece uma versão eficiente do GLM-4-Air, com velocidade de inferência até 2,6 vezes mais rápida."
+ },
+ "glm-4-alltools": {
+ "description": "O GLM-4-AllTools é um modelo de agente multifuncional, otimizado para suportar planejamento de instruções complexas e chamadas de ferramentas, como navegação na web, interpretação de código e geração de texto, adequado para execução de múltiplas tarefas."
+ },
+ "glm-4-flash": {
+ "description": "O GLM-4-Flash é a escolha ideal para tarefas simples, com a maior velocidade e o preço mais acessível."
+ },
+ "glm-4-long": {
+ "description": "O GLM-4-Long suporta entradas de texto superlongas, adequado para tarefas de memória e processamento de documentos em larga escala."
+ },
+ "glm-4-plus": {
+ "description": "O GLM-4-Plus, como um modelo de alta inteligência, possui uma forte capacidade de lidar com textos longos e tarefas complexas, com desempenho amplamente aprimorado."
+ },
+ "glm-4v": {
+ "description": "O GLM-4V oferece uma forte capacidade de compreensão e raciocínio de imagens, suportando várias tarefas visuais."
+ },
+ "glm-4v-plus": {
+ "description": "O GLM-4V-Plus possui a capacidade de entender conteúdo de vídeo e múltiplas imagens, adequado para tarefas multimodais."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 oferece capacidade de processamento multimodal otimizada, adequada para uma variedade de cenários de tarefas complexas."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 combina as mais recentes tecnologias de otimização, proporcionando uma capacidade de processamento de dados multimodal mais eficiente."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 continua a filosofia de design leve e eficiente."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 é uma série de modelos de texto de código aberto leve da Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 é uma série de modelos de texto de código aberto leve da Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) oferece capacidade básica de processamento de instruções, adequada para aplicações leves."
+ },
+ "gpt-3.5-turbo": {
+ "description": "O GPT 3.5 Turbo é adequado para uma variedade de tarefas de geração e compreensão de texto, atualmente apontando para gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "O GPT 3.5 Turbo é adequado para uma variedade de tarefas de geração e compreensão de texto, atualmente apontando para gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "O GPT 3.5 Turbo é adequado para uma variedade de tarefas de geração e compreensão de texto, atualmente apontando para gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "O GPT 3.5 Turbo é adequado para uma variedade de tarefas de geração e compreensão de texto, atualmente apontando para gpt-3.5-turbo-0125."
+ },
+ "gpt-4": {
+ "description": "O GPT-4 oferece uma janela de contexto maior, capaz de lidar com entradas de texto mais longas, adequado para cenários que exigem integração ampla de informações e análise de dados."
+ },
+ "gpt-4-0125-preview": {
+ "description": "O mais recente modelo GPT-4 Turbo possui funcionalidades visuais. Agora, solicitações visuais podem ser feitas usando o modo JSON e chamadas de função. O GPT-4 Turbo é uma versão aprimorada, oferecendo suporte econômico para tarefas multimodais. Ele encontra um equilíbrio entre precisão e eficiência, adequado para aplicações que requerem interação em tempo real."
+ },
+ "gpt-4-0613": {
+ "description": "O GPT-4 oferece uma janela de contexto maior, capaz de lidar com entradas de texto mais longas, adequado para cenários que exigem integração ampla de informações e análise de dados."
+ },
+ "gpt-4-1106-preview": {
+ "description": "O mais recente modelo GPT-4 Turbo possui funcionalidades visuais. Agora, solicitações visuais podem ser feitas usando o modo JSON e chamadas de função. O GPT-4 Turbo é uma versão aprimorada, oferecendo suporte econômico para tarefas multimodais. Ele encontra um equilíbrio entre precisão e eficiência, adequado para aplicações que requerem interação em tempo real."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "O mais recente modelo GPT-4 Turbo possui funcionalidades visuais. Agora, solicitações visuais podem ser feitas usando o modo JSON e chamadas de função. O GPT-4 Turbo é uma versão aprimorada, oferecendo suporte econômico para tarefas multimodais. Ele encontra um equilíbrio entre precisão e eficiência, adequado para aplicações que requerem interação em tempo real."
+ },
+ "gpt-4-32k": {
+ "description": "O GPT-4 oferece uma janela de contexto maior, capaz de lidar com entradas de texto mais longas, adequado para cenários que exigem integração ampla de informações e análise de dados."
+ },
+ "gpt-4-32k-0613": {
+ "description": "O GPT-4 oferece uma janela de contexto maior, capaz de lidar com entradas de texto mais longas, adequado para cenários que exigem integração ampla de informações e análise de dados."
+ },
+ "gpt-4-turbo": {
+ "description": "O mais recente modelo GPT-4 Turbo possui funcionalidades visuais. Agora, solicitações visuais podem ser feitas usando o modo JSON e chamadas de função. O GPT-4 Turbo é uma versão aprimorada, oferecendo suporte econômico para tarefas multimodais. Ele encontra um equilíbrio entre precisão e eficiência, adequado para aplicações que requerem interação em tempo real."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "O mais recente modelo GPT-4 Turbo possui funcionalidades visuais. Agora, solicitações visuais podem ser feitas usando o modo JSON e chamadas de função. O GPT-4 Turbo é uma versão aprimorada, oferecendo suporte econômico para tarefas multimodais. Ele encontra um equilíbrio entre precisão e eficiência, adequado para aplicações que requerem interação em tempo real."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "O mais recente modelo GPT-4 Turbo possui funcionalidades visuais. Agora, solicitações visuais podem ser feitas usando o modo JSON e chamadas de função. O GPT-4 Turbo é uma versão aprimorada, oferecendo suporte econômico para tarefas multimodais. Ele encontra um equilíbrio entre precisão e eficiência, adequado para aplicações que requerem interação em tempo real."
+ },
+ "gpt-4-vision-preview": {
+ "description": "O mais recente modelo GPT-4 Turbo possui funcionalidades visuais. Agora, solicitações visuais podem ser feitas usando o modo JSON e chamadas de função. O GPT-4 Turbo é uma versão aprimorada, oferecendo suporte econômico para tarefas multimodais. Ele encontra um equilíbrio entre precisão e eficiência, adequado para aplicações que requerem interação em tempo real."
+ },
+ "gpt-4o": {
+ "description": "O ChatGPT-4o é um modelo dinâmico, atualizado em tempo real para manter a versão mais atual. Ele combina uma poderosa capacidade de compreensão e geração de linguagem, adequado para cenários de aplicação em larga escala, incluindo atendimento ao cliente, educação e suporte técnico."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "O ChatGPT-4o é um modelo dinâmico, atualizado em tempo real para manter a versão mais atual. Ele combina uma poderosa capacidade de compreensão e geração de linguagem, adequado para cenários de aplicação em larga escala, incluindo atendimento ao cliente, educação e suporte técnico."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "O ChatGPT-4o é um modelo dinâmico, atualizado em tempo real para manter a versão mais atual. Ele combina uma poderosa capacidade de compreensão e geração de linguagem, adequado para cenários de aplicação em larga escala, incluindo atendimento ao cliente, educação e suporte técnico."
+ },
+ "gpt-4o-mini": {
+ "description": "O GPT-4o mini é o mais recente modelo lançado pela OpenAI após o GPT-4 Omni, suportando entrada de texto e imagem e gerando texto como saída. Como seu modelo compacto mais avançado, ele é muito mais acessível do que outros modelos de ponta recentes, custando mais de 60% menos que o GPT-3.5 Turbo. Ele mantém uma inteligência de ponta, ao mesmo tempo que oferece um custo-benefício significativo. O GPT-4o mini obteve uma pontuação de 82% no teste MMLU e atualmente está classificado acima do GPT-4 em preferências de chat."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B é um modelo de linguagem que combina criatividade e inteligência, integrando vários modelos de ponta."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "O modelo de código aberto inovador InternLM2.5, com um grande número de parâmetros, melhora a inteligência do diálogo."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 oferece soluções de diálogo inteligente em múltiplos cenários."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "O modelo Llama 3.1 70B Instruct possui 70B de parâmetros, capaz de oferecer desempenho excepcional em tarefas de geração de texto e instrução em larga escala."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B oferece capacidade de raciocínio AI mais poderosa, adequada para aplicações complexas, suportando um processamento computacional extenso e garantindo eficiência e precisão."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B é um modelo de alto desempenho, oferecendo capacidade de geração de texto rápida, ideal para cenários de aplicação que exigem eficiência em larga escala e custo-benefício."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "O modelo Llama 3.1 8B Instruct possui 8B de parâmetros, suportando a execução eficiente de tarefas de instrução, oferecendo excelente capacidade de geração de texto."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "O modelo Llama 3.1 Sonar Huge Online possui 405B de parâmetros, suportando um comprimento de contexto de aproximadamente 127.000 tokens, projetado para aplicações de chat online complexas."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "O modelo Llama 3.1 Sonar Large Chat possui 70B de parâmetros, suportando um comprimento de contexto de aproximadamente 127.000 tokens, adequado para tarefas de chat offline complexas."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "O modelo Llama 3.1 Sonar Large Online possui 70B de parâmetros, suportando um comprimento de contexto de aproximadamente 127.000 tokens, adequado para tarefas de chat de alta capacidade e diversidade."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "O modelo Llama 3.1 Sonar Small Chat possui 8B de parâmetros, projetado para chats offline, suportando um comprimento de contexto de aproximadamente 127.000 tokens."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "O modelo Llama 3.1 Sonar Small Online possui 8B de parâmetros, suportando um comprimento de contexto de aproximadamente 127.000 tokens, projetado para chats online, capaz de processar eficientemente diversas interações textuais."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B oferece capacidade de processamento incomparável para complexidade, projetado sob medida para projetos de alta demanda."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B oferece desempenho de raciocínio de alta qualidade, adequado para uma variedade de necessidades de aplicação."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use oferece poderosa capacidade de chamada de ferramentas, suportando o processamento eficiente de tarefas complexas."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use é um modelo otimizado para uso eficiente de ferramentas, suportando cálculos paralelos rápidos."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 é um modelo líder lançado pela Meta, suportando até 405B de parâmetros, aplicável em diálogos complexos, tradução multilíngue e análise de dados."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 é um modelo líder lançado pela Meta, suportando até 405B de parâmetros, aplicável em diálogos complexos, tradução multilíngue e análise de dados."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 é um modelo líder lançado pela Meta, suportando até 405B de parâmetros, aplicável em diálogos complexos, tradução multilíngue e análise de dados."
+ },
+ "llava": {
+ "description": "LLaVA é um modelo multimodal que combina um codificador visual e Vicuna, projetado para forte compreensão visual e linguística."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B oferece capacidade de processamento visual integrada, gerando saídas complexas a partir de informações visuais."
+ },
+ "llava:13b": {
+ "description": "LLaVA é um modelo multimodal que combina um codificador visual e Vicuna, projetado para forte compreensão visual e linguística."
+ },
+ "llava:34b": {
+ "description": "LLaVA é um modelo multimodal que combina um codificador visual e Vicuna, projetado para forte compreensão visual e linguística."
+ },
+ "mathstral": {
+ "description": "MathΣtral é projetado para pesquisa científica e raciocínio matemático, oferecendo capacidade de cálculo eficaz e interpretação de resultados."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Um poderoso modelo com 70 bilhões de parâmetros, destacando-se em raciocínio, codificação e amplas aplicações linguísticas."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Um modelo versátil com 8 bilhões de parâmetros, otimizado para tarefas de diálogo e geração de texto."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Os modelos de texto apenas ajustados por instrução Llama 3.1 são otimizados para casos de uso de diálogo multilíngue e superam muitos dos modelos de chat de código aberto e fechado disponíveis em benchmarks comuns da indústria."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Os modelos de texto apenas ajustados por instrução Llama 3.1 são otimizados para casos de uso de diálogo multilíngue e superam muitos dos modelos de chat de código aberto e fechado disponíveis em benchmarks comuns da indústria."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Os modelos de texto apenas ajustados por instrução Llama 3.1 são otimizados para casos de uso de diálogo multilíngue e superam muitos dos modelos de chat de código aberto e fechado disponíveis em benchmarks comuns da indústria."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) oferece excelente capacidade de processamento de linguagem e uma experiência interativa notável."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) é um modelo de chat poderoso, suportando necessidades de diálogo complexas."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) oferece suporte multilíngue, abrangendo um rico conhecimento em diversas áreas."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite é ideal para ambientes que exigem alta eficiência e baixa latência."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo oferece uma capacidade excepcional de compreensão e geração de linguagem, adequado para as tarefas computacionais mais exigentes."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite é adequado para ambientes com recursos limitados, oferecendo um excelente equilíbrio de desempenho."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo é um modelo de linguagem de alto desempenho, suportando uma ampla gama de cenários de aplicação."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B é um modelo poderoso para pré-treinamento e ajuste de instruções."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "O modelo Llama 3.1 Turbo 405B oferece suporte a um contexto de capacidade extremamente grande para processamento de grandes volumes de dados, destacando-se em aplicações de inteligência artificial em larga escala."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B oferece suporte a diálogos multilíngues de forma eficiente."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "O modelo Llama 3.1 70B é ajustado para aplicações de alta carga, quantizado para FP8, oferecendo maior eficiência computacional e precisão, garantindo desempenho excepcional em cenários complexos."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 oferece suporte multilíngue, sendo um dos modelos geradores líderes da indústria."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "O modelo Llama 3.1 8B utiliza quantização FP8, suportando até 131.072 tokens de contexto, destacando-se entre os modelos de código aberto, ideal para tarefas complexas e superando muitos benchmarks do setor."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct é otimizado para cenários de diálogo de alta qualidade, apresentando desempenho excepcional em várias avaliações humanas."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct otimiza cenários de diálogo de alta qualidade, com desempenho superior a muitos modelos fechados."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct é a versão mais recente da Meta, otimizada para gerar diálogos de alta qualidade, superando muitos modelos fechados de liderança."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct é projetado para diálogos de alta qualidade, destacando-se em avaliações humanas, especialmente em cenários de alta interação."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct é a versão mais recente lançada pela Meta, otimizada para cenários de diálogo de alta qualidade, superando muitos modelos fechados de ponta."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 oferece suporte multilíngue e é um dos modelos geradores líderes do setor."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct é o maior e mais poderoso modelo da série Llama 3.1 Instruct, sendo um modelo altamente avançado para raciocínio conversacional e geração de dados sintéticos, que também pode ser usado como base para pré-treinamento ou ajuste fino em domínios específicos. Os modelos de linguagem de grande escala (LLMs) multilíngues oferecidos pelo Llama 3.1 são um conjunto de modelos geradores pré-treinados e ajustados por instruções, incluindo tamanhos de 8B, 70B e 405B (entrada/saída de texto). Os modelos de texto ajustados por instruções do Llama 3.1 (8B, 70B, 405B) são otimizados para casos de uso de diálogo multilíngue e superaram muitos modelos de chat de código aberto disponíveis em benchmarks comuns da indústria. O Llama 3.1 é projetado para uso comercial e de pesquisa em várias línguas. Os modelos de texto ajustados por instruções são adequados para chats semelhantes a assistentes, enquanto os modelos pré-treinados podem se adaptar a várias tarefas de geração de linguagem natural. O modelo Llama 3.1 também suporta a utilização de sua saída para melhorar outros modelos, incluindo geração de dados sintéticos e refinamento. O Llama 3.1 é um modelo de linguagem autoregressivo que utiliza uma arquitetura de transformador otimizada. As versões ajustadas utilizam ajuste fino supervisionado (SFT) e aprendizado por reforço com feedback humano (RLHF) para alinhar-se às preferências humanas em relação à utilidade e segurança."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "A versão atualizada do Meta Llama 3.1 70B Instruct, incluindo um comprimento de contexto expandido de 128K, multilinguismo e capacidades de raciocínio aprimoradas. Os modelos de linguagem de grande porte (LLMs) do Llama 3.1 são um conjunto de modelos geradores pré-treinados e ajustados por instruções, incluindo tamanhos de 8B, 70B e 405B (entrada/saída de texto). Os modelos de texto ajustados por instruções do Llama 3.1 (8B, 70B, 405B) são otimizados para casos de uso de diálogo multilíngue e superaram muitos modelos de chat de código aberto disponíveis em benchmarks de indústria comuns. O Llama 3.1 é projetado para uso comercial e de pesquisa em várias línguas. Os modelos de texto ajustados por instruções são adequados para chats semelhantes a assistentes, enquanto os modelos pré-treinados podem se adaptar a várias tarefas de geração de linguagem natural. O modelo Llama 3.1 também suporta a utilização de suas saídas para melhorar outros modelos, incluindo geração de dados sintéticos e refinamento. O Llama 3.1 é um modelo de linguagem autoregressivo usando uma arquitetura de transformador otimizada. As versões ajustadas utilizam ajuste fino supervisionado (SFT) e aprendizado por reforço com feedback humano (RLHF) para alinhar-se às preferências humanas por ajuda e segurança."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "A versão atualizada do Meta Llama 3.1 8B Instruct, incluindo um comprimento de contexto expandido de 128K, multilinguismo e capacidades de raciocínio aprimoradas. Os modelos de linguagem de grande porte (LLMs) do Llama 3.1 são um conjunto de modelos geradores pré-treinados e ajustados por instruções, incluindo tamanhos de 8B, 70B e 405B (entrada/saída de texto). Os modelos de texto ajustados por instruções do Llama 3.1 (8B, 70B, 405B) são otimizados para casos de uso de diálogo multilíngue e superaram muitos modelos de chat de código aberto disponíveis em benchmarks de indústria comuns. O Llama 3.1 é projetado para uso comercial e de pesquisa em várias línguas. Os modelos de texto ajustados por instruções são adequados para chats semelhantes a assistentes, enquanto os modelos pré-treinados podem se adaptar a várias tarefas de geração de linguagem natural. O modelo Llama 3.1 também suporta a utilização de suas saídas para melhorar outros modelos, incluindo geração de dados sintéticos e refinamento. O Llama 3.1 é um modelo de linguagem autoregressivo usando uma arquitetura de transformador otimizada. As versões ajustadas utilizam ajuste fino supervisionado (SFT) e aprendizado por reforço com feedback humano (RLHF) para alinhar-se às preferências humanas por ajuda e segurança."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 é um modelo de linguagem de grande escala (LLM) aberto voltado para desenvolvedores, pesquisadores e empresas, projetado para ajudá-los a construir, experimentar e expandir suas ideias de IA geradora de forma responsável. Como parte de um sistema de base para inovação da comunidade global, é ideal para criação de conteúdo, IA de diálogo, compreensão de linguagem, P&D e aplicações empresariais."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 é um modelo de linguagem de grande escala (LLM) aberto voltado para desenvolvedores, pesquisadores e empresas, projetado para ajudá-los a construir, experimentar e expandir suas ideias de IA geradora de forma responsável. Como parte de um sistema de base para inovação da comunidade global, é ideal para dispositivos de borda com capacidade de computação e recursos limitados, além de tempos de treinamento mais rápidos."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B é o modelo leve e rápido mais recente da Microsoft AI, com desempenho próximo a 10 vezes o de modelos de código aberto existentes."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B é o modelo Wizard mais avançado da Microsoft, demonstrando um desempenho extremamente competitivo."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V é a nova geração de grandes modelos multimodais lançada pela OpenBMB, com excelente capacidade de reconhecimento de OCR e compreensão multimodal, suportando uma ampla gama de cenários de aplicação."
+ },
+ "mistral": {
+ "description": "Mistral é um modelo de 7B lançado pela Mistral AI, adequado para demandas de processamento de linguagem variáveis."
+ },
+ "mistral-large": {
+ "description": "Mixtral Large é o modelo de destaque da Mistral, combinando capacidades de geração de código, matemática e raciocínio, suportando uma janela de contexto de 128k."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) é um modelo de linguagem avançado (LLM) com capacidades de raciocínio, conhecimento e codificação de última geração."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large é o modelo de destaque, especializado em tarefas multilíngues, raciocínio complexo e geração de código, sendo a escolha ideal para aplicações de alto nível."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo é um modelo de 12B desenvolvido em colaboração entre a Mistral AI e a NVIDIA, oferecendo desempenho eficiente."
+ },
+ "mistral-small": {
+ "description": "Mistral Small pode ser usado em qualquer tarefa baseada em linguagem que exija alta eficiência e baixa latência."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small é uma opção de alto custo-benefício, rápida e confiável, adequada para casos de uso como tradução, resumo e análise de sentimentos."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct é conhecido por seu alto desempenho, adequado para diversas tarefas de linguagem."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B é um modelo ajustado sob demanda, oferecendo respostas otimizadas para tarefas."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 oferece capacidade computacional eficiente e compreensão de linguagem natural, adequada para uma ampla gama de aplicações."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) é um super modelo de linguagem, suportando demandas de processamento extremamente altas."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B é um modelo de especialistas esparsos pré-treinados, utilizado para tarefas de texto de uso geral."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct é um modelo de padrão industrial de alto desempenho, com otimização de velocidade e suporte a longos contextos."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo é um modelo de 7.3B parâmetros com suporte multilíngue e programação de alto desempenho."
+ },
+ "mixtral": {
+ "description": "Mixtral é o modelo de especialistas da Mistral AI, com pesos de código aberto, oferecendo suporte em geração de código e compreensão de linguagem."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B oferece alta capacidade de computação paralela com tolerância a falhas, adequado para tarefas complexas."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral é o modelo de especialistas da Mistral AI, com pesos de código aberto, oferecendo suporte em geração de código e compreensão de linguagem."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K é um modelo com capacidade de processamento de contexto ultra longo, adequado para gerar textos muito longos, atendendo a demandas complexas de geração, capaz de lidar com até 128.000 tokens, ideal para pesquisa, acadêmicos e geração de documentos extensos."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K oferece capacidade de processamento de contexto de comprimento médio, capaz de lidar com 32.768 tokens, especialmente adequado para gerar vários documentos longos e diálogos complexos, aplicável em criação de conteúdo, geração de relatórios e sistemas de diálogo."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K é projetado para tarefas de geração de texto curto, com desempenho de processamento eficiente, capaz de lidar com 8.192 tokens, ideal para diálogos curtos, anotações e geração rápida de conteúdo."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B é uma versão aprimorada do Nous Hermes 2, contendo os conjuntos de dados mais recentes desenvolvidos internamente."
+ },
+ "o1-mini": {
+ "description": "o1-mini é um modelo de raciocínio rápido e econômico, projetado para cenários de programação, matemática e ciências. Este modelo possui um contexto de 128K e uma data limite de conhecimento em outubro de 2023."
+ },
+ "o1-preview": {
+ "description": "o1 é o novo modelo de raciocínio da OpenAI, adequado para tarefas complexas que exigem amplo conhecimento geral. Este modelo possui um contexto de 128K e uma data limite de conhecimento em outubro de 2023."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba é um modelo de linguagem Mamba 2 focado em geração de código, oferecendo forte suporte para tarefas avançadas de codificação e raciocínio."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B é um modelo compacto, mas de alto desempenho, especializado em processamento em lote e tarefas simples, como classificação e geração de texto, com boa capacidade de raciocínio."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo é um modelo de 12B desenvolvido em colaboração com a Nvidia, oferecendo excelente desempenho em raciocínio e codificação, fácil de integrar e substituir."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B é um modelo de especialistas maior, focado em tarefas complexas, oferecendo excelente capacidade de raciocínio e maior taxa de transferência."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B é um modelo de especialistas esparsos, utilizando múltiplos parâmetros para aumentar a velocidade de raciocínio, adequado para tarefas de geração de linguagem e código."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o é um modelo dinâmico, atualizado em tempo real para manter a versão mais atual. Ele combina forte compreensão e capacidade de geração de linguagem, adequado para cenários de aplicação em larga escala, incluindo atendimento ao cliente, educação e suporte técnico."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini é o mais recente modelo da OpenAI, lançado após o GPT-4 Omni, que suporta entrada de texto e imagem e saída de texto. Como seu modelo compacto mais avançado, é muito mais barato do que outros modelos de ponta recentes e custa mais de 60% menos que o GPT-3.5 Turbo. Ele mantém inteligência de ponta, ao mesmo tempo que oferece uma relação custo-benefício significativa. O GPT-4o mini obteve uma pontuação de 82% no teste MMLU e atualmente está classificado acima do GPT-4 em preferências de chat."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini é um modelo de raciocínio rápido e econômico, projetado para cenários de programação, matemática e ciências. Este modelo possui um contexto de 128K e uma data limite de conhecimento em outubro de 2023."
+ },
+ "openai/o1-preview": {
+ "description": "o1 é o novo modelo de raciocínio da OpenAI, adequado para tarefas complexas que exigem amplo conhecimento geral. Este modelo possui um contexto de 128K e uma data limite de conhecimento em outubro de 2023."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B é uma biblioteca de modelos de linguagem de código aberto ajustada com a estratégia de 'C-RLFT (refinamento de aprendizado por reforço condicional)'."
+ },
+ "openrouter/auto": {
+ "description": "Com base no comprimento do contexto, tema e complexidade, sua solicitação será enviada para Llama 3 70B Instruct, Claude 3.5 Sonnet (autoajustável) ou GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 é um modelo leve e aberto lançado pela Microsoft, adequado para integração eficiente e raciocínio de conhecimento em larga escala."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 é um modelo leve e aberto lançado pela Microsoft, adequado para integração eficiente e raciocínio de conhecimento em larga escala."
+ },
+ "pixtral-12b-2409": {
+ "description": "O modelo Pixtral demonstra forte capacidade em tarefas de compreensão de gráficos e imagens, perguntas e respostas de documentos, raciocínio multimodal e seguimento de instruções, podendo ingerir imagens em resolução natural e proporções, além de processar um número arbitrário de imagens em uma janela de contexto longa de até 128K tokens."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "Modelo de código Qwen."
+ },
+ "qwen-long": {
+ "description": "O Qwen é um modelo de linguagem em larga escala que suporta contextos de texto longos e funcionalidades de diálogo baseadas em documentos longos e múltiplos cenários."
+ },
+ "qwen-math-plus-latest": {
+ "description": "O modelo de matemática Qwen é especificamente projetado para resolver problemas matemáticos."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "O modelo de matemática Qwen é especificamente projetado para resolver problemas matemáticos."
+ },
+ "qwen-max-latest": {
+ "description": "O modelo de linguagem em larga escala Qwen Max, com trilhões de parâmetros, que suporta entradas em diferentes idiomas, incluindo chinês e inglês, e é o modelo de API por trás da versão do produto Qwen 2.5."
+ },
+ "qwen-plus-latest": {
+ "description": "A versão aprimorada do modelo de linguagem em larga escala Qwen Plus, que suporta entradas em diferentes idiomas, incluindo chinês e inglês."
+ },
+ "qwen-turbo-latest": {
+ "description": "O modelo de linguagem em larga escala Qwen Turbo, que suporta entradas em diferentes idiomas, incluindo chinês e inglês."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "O Qwen VL suporta uma maneira de interação flexível, incluindo múltiplas imagens, perguntas e respostas em várias rodadas, e capacidades criativas."
+ },
+ "qwen-vl-max": {
+ "description": "O Qwen é um modelo de linguagem visual em larga escala. Em comparação com a versão aprimorada, ele melhora ainda mais a capacidade de raciocínio visual e a adesão a instruções, oferecendo um nível mais alto de percepção e cognição visual."
+ },
+ "qwen-vl-plus": {
+ "description": "O Qwen é uma versão aprimorada do modelo de linguagem visual em larga escala, melhorando significativamente a capacidade de reconhecimento de detalhes e texto, suportando imagens com resolução superior a um milhão de pixels e qualquer proporção de largura e altura."
+ },
+ "qwen-vl-v1": {
+ "description": "Inicializado com o modelo de linguagem Qwen-7B, adicionando um modelo de imagem, um modelo pré-treinado com resolução de entrada de imagem de 448."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 é uma nova série de grandes modelos de linguagem, com capacidades de compreensão e geração mais robustas."
+ },
+ "qwen2": {
+ "description": "Qwen2 é a nova geração de modelo de linguagem em larga escala da Alibaba, oferecendo desempenho excepcional para atender a diversas necessidades de aplicação."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "Modelo de 14B parâmetros do Qwen 2.5, disponível como código aberto."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "Modelo de 32B parâmetros do Qwen 2.5, disponível como código aberto."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "Modelo de 72B parâmetros do Qwen 2.5, disponível como código aberto."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "Modelo de 7B parâmetros do Qwen 2.5, disponível como código aberto."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "Versão de código aberto do modelo de código Qwen."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "Versão de código aberto do modelo de código Qwen."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "O modelo Qwen-Math possui uma forte capacidade de resolução de problemas matemáticos."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "O modelo Qwen-Math possui uma forte capacidade de resolução de problemas matemáticos."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "O modelo Qwen-Math possui uma forte capacidade de resolução de problemas matemáticos."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 é a nova geração de modelo de linguagem em larga escala da Alibaba, oferecendo desempenho excepcional para atender a diversas necessidades de aplicação."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 é a nova geração de modelo de linguagem em larga escala da Alibaba, oferecendo desempenho excepcional para atender a diversas necessidades de aplicação."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 é a nova geração de modelo de linguagem em larga escala da Alibaba, oferecendo desempenho excepcional para atender a diversas necessidades de aplicação."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini é um LLM compacto, com desempenho superior ao GPT-3.5, possuindo forte capacidade multilíngue, suportando inglês e coreano, oferecendo uma solução eficiente e compacta."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) expande as capacidades do Solar Mini, focando no japonês, enquanto mantém eficiência e desempenho excepcional no uso do inglês e coreano."
+ },
+ "solar-pro": {
+ "description": "Solar Pro é um LLM de alta inteligência lançado pela Upstage, focado na capacidade de seguir instruções em um único GPU, com pontuação IFEval acima de 80. Atualmente suporta inglês, com uma versão oficial planejada para lançamento em novembro de 2024, que expandirá o suporte a idiomas e comprimento de contexto."
+ },
+ "step-1-128k": {
+ "description": "Equilibra desempenho e custo, adequado para cenários gerais."
+ },
+ "step-1-256k": {
+ "description": "Possui capacidade de processamento de contexto ultra longo, especialmente adequado para análise de documentos longos."
+ },
+ "step-1-32k": {
+ "description": "Suporta diálogos de comprimento médio, adequado para diversas aplicações."
+ },
+ "step-1-8k": {
+ "description": "Modelo pequeno, adequado para tarefas leves."
+ },
+ "step-1-flash": {
+ "description": "Modelo de alta velocidade, adequado para diálogos em tempo real."
+ },
+ "step-1v-32k": {
+ "description": "Suporta entradas visuais, aprimorando a experiência de interação multimodal."
+ },
+ "step-1v-8k": {
+ "description": "Modelo visual compacto, adequado para tarefas básicas de texto e imagem."
+ },
+ "step-2-16k": {
+ "description": "Suporta interações de contexto em larga escala, adequado para cenários de diálogo complexos."
+ },
+ "taichu_llm": {
+ "description": "O modelo de linguagem Taichu possui uma forte capacidade de compreensão de linguagem, além de habilidades em criação de texto, perguntas e respostas, programação de código, cálculos matemáticos, raciocínio lógico, análise de sentimentos e resumo de texto. Inova ao combinar pré-treinamento com grandes dados e conhecimento rico de múltiplas fontes, aprimorando continuamente a tecnologia de algoritmos e absorvendo novos conhecimentos de vocabulário, estrutura, gramática e semântica de grandes volumes de dados textuais, proporcionando aos usuários informações e serviços mais convenientes e uma experiência mais inteligente."
+ },
+ "taichu_vqa": {
+ "description": "O Taichu 2.0V combina habilidades de compreensão de imagem, transferência de conhecimento e atribuição lógica, destacando-se no campo de perguntas e respostas baseadas em texto e imagem."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) oferece capacidade de computação aprimorada através de estratégias e arquiteturas de modelo eficientes."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) é adequado para tarefas de instrução refinadas, oferecendo excelente capacidade de processamento de linguagem."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 é um modelo de linguagem fornecido pela Microsoft AI, destacando-se em diálogos complexos, multilíngue, raciocínio e assistentes inteligentes."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 é um modelo de linguagem fornecido pela Microsoft AI, destacando-se em diálogos complexos, multilíngue, raciocínio e assistentes inteligentes."
+ },
+ "yi-large": {
+ "description": "Modelo de nova geração com trilhões de parâmetros, oferecendo capacidades excepcionais de perguntas e respostas e geração de texto."
+ },
+ "yi-large-fc": {
+ "description": "Baseado no modelo yi-large, suporta e aprimora a capacidade de chamadas de ferramentas, adequado para diversos cenários de negócios que exigem a construção de agentes ou fluxos de trabalho."
+ },
+ "yi-large-preview": {
+ "description": "Versão inicial, recomenda-se o uso do yi-large (nova versão)."
+ },
+ "yi-large-rag": {
+ "description": "Serviço de alto nível baseado no modelo yi-large, combinando técnicas de recuperação e geração para fornecer respostas precisas, com serviços de busca em tempo real na web."
+ },
+ "yi-large-turbo": {
+ "description": "Excelente relação custo-benefício e desempenho excepcional. Ajuste de alta precisão baseado em desempenho, velocidade de raciocínio e custo."
+ },
+ "yi-medium": {
+ "description": "Modelo de tamanho médio com ajuste fino, equilibrando capacidades e custo. Otimização profunda da capacidade de seguir instruções."
+ },
+ "yi-medium-200k": {
+ "description": "Janela de contexto ultra longa de 200K, oferecendo compreensão e geração de texto em profundidade."
+ },
+ "yi-spark": {
+ "description": "Modelo leve e ágil. Oferece capacidades aprimoradas de cálculos matemáticos e escrita de código."
+ },
+ "yi-vision": {
+ "description": "Modelo para tarefas visuais complexas, oferecendo alta performance em compreensão e análise de imagens."
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/plugin.json b/DigitalHumanWeb/locales/pt-BR/plugin.json
new file mode 100644
index 0000000..0a34b12
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Argumentos de Chamada",
+ "function_call": "Chamada de Função",
+ "off": "Desativar Depuração",
+ "on": "Ver Informações de Chamada de Plugin",
+ "payload": "Carga do Plugin",
+ "response": "Resposta",
+ "tool_call": "Solicitação de Chamada de Ferramenta"
+ },
+ "detailModal": {
+ "info": {
+ "description": "Descrição da API",
+ "name": "Nome da API"
+ },
+ "tabs": {
+ "info": "Capacidades do Plugin",
+ "manifest": "Arquivo de Instalação",
+ "settings": "Configurações"
+ },
+ "title": "Detalhes do Plugin"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Tem certeza de que deseja excluir este plugin local? Esta ação não poderá ser desfeita.",
+ "customParams": {
+ "useProxy": {
+ "label": "Instalar via Proxy (se ocorrer erro de acesso entre domínios, tente ativar esta opção e reinstalar)"
+ }
+ },
+ "deleteSuccess": "Plugin excluído com sucesso",
+ "manifest": {
+ "identifier": {
+ "desc": "Identificador único do plugin",
+ "label": "Identificador"
+ },
+ "mode": {
+ "local": "Configuração Visual",
+ "local-tooltip": "Configuração visual não suportada temporariamente",
+ "url": "Link Online"
+ },
+ "name": {
+ "desc": "Título do plugin",
+ "label": "Título",
+ "placeholder": "Pesquisar mecanismo de busca"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "Autor do plugin",
+ "label": "Autor"
+ },
+ "avatar": {
+ "desc": "Ícone do plugin, pode ser um Emoji ou um URL",
+ "label": "Ícone"
+ },
+ "description": {
+ "desc": "Descrição do plugin",
+ "label": "Descrição",
+ "placeholder": "Obter informações de um mecanismo de busca"
+ },
+ "formFieldRequired": "Este campo é obrigatório",
+ "homepage": {
+ "desc": "Página inicial do plugin",
+ "label": "Página Inicial"
+ },
+ "identifier": {
+ "desc": "Identificador único do plugin, será automaticamente reconhecido a partir do manifesto",
+ "errorDuplicate": "Identificador duplicado com um plugin existente, por favor modifique o identificador",
+ "label": "Identificador",
+ "pattenErrorMessage": "Apenas caracteres alfanuméricos, - e _ são permitidos"
+ },
+ "manifest": {
+ "desc": "{{appName}} será instalado através deste link para adicionar o plugin",
+ "label": "URL do Arquivo de Descrição do Plugin (Manifest)",
+ "preview": "Visualizar Manifesto",
+ "refresh": "Atualizar"
+ },
+ "title": {
+ "desc": "Título do plugin",
+ "label": "Título",
+ "placeholder": "Pesquisar mecanismo de busca"
+ }
+ },
+ "metaConfig": "Configuração de Metadados do Plugin",
+ "modalDesc": "Após adicionar um plugin personalizado, ele pode ser usado para validação de desenvolvimento de plugin ou diretamente em uma conversa. Consulte o <1>documento de desenvolvimento↗</1> para desenvolver plugins.",
+ "openai": {
+ "importUrl": "Importar a partir de URL",
+ "schema": "Esquema"
+ },
+ "preview": {
+ "card": "Visualizar Efeito do Plugin",
+ "desc": "Visualizar Descrição do Plugin",
+ "title": "Visualizar Nome do Plugin"
+ },
+ "save": "Instalar Plugin",
+ "saveSuccess": "Configurações do plugin salvas com sucesso",
+ "tabs": {
+ "manifest": "Lista de Descrição de Funcionalidades (Manifest)",
+ "meta": "Metadados do Plugin"
+ },
+ "title": {
+ "create": "Adicionar Plugin Personalizado",
+ "edit": "Editar Plugin Personalizado"
+ },
+ "type": {
+ "lobe": "Plugin LobeChat",
+ "openai": "Plugin OpenAI"
+ },
+ "update": "Atualizar",
+ "updateSuccess": "Configurações do plugin atualizadas com sucesso"
+ },
+ "error": {
+ "fetchError": "Falha ao buscar o link do manifesto. Certifique-se de que o link seja válido e permita o acesso entre domínios.",
+ "installError": "Falha na instalação do plugin {{name}}.",
+ "manifestInvalid": "O manifesto não está em conformidade com as especificações. Resultado da validação: \n\n {{error}}",
+ "noManifest": "Manifesto não encontrado",
+ "openAPIInvalid": "Falha ao analisar o OpenAPI. Erro: \n\n {{error}}",
+ "reinstallError": "Falha ao atualizar o plugin {{name}}",
+ "urlError": "O link não retornou conteúdo no formato JSON. Certifique-se de que o link seja válido.",
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Obsoleto",
+ "local.config": "Configuração",
+ "local.title": "Personalizado"
+ }
+ },
+ "loading": {
+ "content": "Carregando o plugin...",
+ "plugin": "Executando o plugin..."
+ },
+ "pluginList": "Lista de Plugins",
+ "setting": "Configuração do Plugin",
+ "settings": {
+ "indexUrl": {
+ "title": "Índice do Mercado",
+ "tooltip": "Edição online não suportada. Configure através de variáveis de ambiente durante a implantação."
+ },
+ "modalDesc": "Após configurar o endereço do mercado de plugins, você poderá usar um mercado personalizado de plugins.",
+ "title": "Configurações do Mercado de Plugins"
+ },
+ "showInPortal": "Por favor, veja os detalhes na área de trabalho",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Você está prestes a desinstalar este plugin. A desinstalação irá limpar a configuração do plugin. Confirme a operação.",
+ "detail": "Detalhes",
+ "install": "Instalar",
+ "manifest": "Editar arquivo de instalação",
+ "settings": "Configurações",
+ "uninstall": "Desinstalar"
+ },
+ "communityPlugin": "Plugin da Comunidade",
+ "customPlugin": "Personalizado",
+ "empty": "Nenhum plugin instalado",
+ "installAllPlugins": "Instalar todos os plugins",
+ "networkError": "Falha ao obter a loja de plugins. Verifique a conexão de rede e tente novamente.",
+ "placeholder": "Pesquisar por nome, descrição ou palavra-chave do plugin...",
+ "releasedAt": "Lançado em {{createdAt}}",
+ "tabs": {
+ "all": "Todos",
+ "installed": "Instalados"
+ },
+ "title": "Loja de Plugins"
+ },
+ "unknownPlugin": "Plugin desconhecido"
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/portal.json b/DigitalHumanWeb/locales/pt-BR/portal.json
new file mode 100644
index 0000000..6c890f7
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Artefatos",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Parte",
+ "file": "Arquivo"
+ }
+ },
+ "Plugins": "Plugins",
+ "actions": {
+ "genAiMessage": "Gerar mensagem de IA",
+ "summary": "Resumo",
+ "summaryTooltip": "Resumir o conteúdo atual"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Código",
+ "preview": "Prévia"
+ },
+ "svg": {
+ "copyAsImage": "Copiar como imagem",
+ "copyFail": "Falha ao copiar, motivo do erro: {{error}}",
+ "copySuccess": "Imagem copiada com sucesso",
+ "download": {
+ "png": "Baixar como PNG",
+ "svg": "Baixar como SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "A lista de Artefatos atual está vazia. Por favor, use os plugins conforme necessário durante a sessão e depois verifique novamente.",
+ "emptyKnowledgeList": "A lista de conhecimentos atual está vazia. Por favor, ative o repositório de conhecimentos conforme necessário durante a conversa antes de visualizar.",
+ "files": "Arquivos",
+ "messageDetail": "Detalhes da mensagem",
+ "title": "Janela de Expansão"
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/providers.json b/DigitalHumanWeb/locales/pt-BR/providers.json
new file mode 100644
index 0000000..96da851
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI é a plataforma de modelos e serviços de IA lançada pela empresa 360, oferecendo uma variedade de modelos avançados de processamento de linguagem natural, incluindo 360GPT2 Pro, 360GPT Pro, 360GPT Turbo e 360GPT Turbo Responsibility 8K. Esses modelos combinam grandes parâmetros e capacidades multimodais, sendo amplamente aplicados em geração de texto, compreensão semântica, sistemas de diálogo e geração de código. Com uma estratégia de preços flexível, a 360 AI atende a diversas necessidades dos usuários, apoiando a integração de desenvolvedores e promovendo a inovação e o desenvolvimento de aplicações inteligentes."
+ },
+ "anthropic": {
+ "description": "A Anthropic é uma empresa focada em pesquisa e desenvolvimento de inteligência artificial, oferecendo uma gama de modelos de linguagem avançados, como Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus e Claude 3 Haiku. Esses modelos alcançam um equilíbrio ideal entre inteligência, velocidade e custo, adequando-se a uma variedade de cenários de aplicação, desde cargas de trabalho empresariais até respostas rápidas. O Claude 3.5 Sonnet, como seu modelo mais recente, se destacou em várias avaliações, mantendo uma alta relação custo-benefício."
+ },
+ "azure": {
+ "description": "Azure oferece uma variedade de modelos avançados de IA, incluindo GPT-3.5 e a mais recente série GPT-4, suportando diversos tipos de dados e tarefas complexas, com foco em soluções de IA seguras, confiáveis e sustentáveis."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligent é uma empresa focada no desenvolvimento de grandes modelos de inteligência artificial, cujos modelos se destacam em tarefas em chinês, como enciclopédias de conhecimento, processamento de textos longos e criação de conteúdo, superando modelos mainstream estrangeiros. A Baichuan Intelligent também possui capacidades multimodais líderes do setor, destacando-se em várias avaliações de autoridade. Seus modelos incluem Baichuan 4, Baichuan 3 Turbo e Baichuan 3 Turbo 128k, otimizados para diferentes cenários de aplicação, oferecendo soluções com alta relação custo-benefício."
+ },
+ "bedrock": {
+ "description": "Bedrock é um serviço oferecido pela Amazon AWS, focado em fornecer modelos de linguagem e visão de IA avançados para empresas. Sua família de modelos inclui a série Claude da Anthropic, a série Llama 3.1 da Meta, entre outros, abrangendo uma variedade de opções, desde modelos leves até de alto desempenho, suportando geração de texto, diálogos, processamento de imagens e outras tarefas, adequando-se a aplicações empresariais de diferentes escalas e necessidades."
+ },
+ "deepseek": {
+ "description": "A DeepSeek é uma empresa focada em pesquisa e aplicação de tecnologia de inteligência artificial, cujo modelo mais recente, DeepSeek-V2.5, combina capacidades de diálogo geral e processamento de código, alcançando melhorias significativas em alinhamento com preferências humanas, tarefas de escrita e seguimento de instruções."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI é um fornecedor líder de serviços de modelos de linguagem avançados, focando em chamadas de função e processamento multimodal. Seu modelo mais recente, Firefunction V2, baseado em Llama-3, é otimizado para chamadas de função, diálogos e seguimento de instruções. O modelo de linguagem visual FireLLaVA-13B suporta entradas mistas de imagem e texto. Outros modelos notáveis incluem a série Llama e a série Mixtral, oferecendo suporte eficiente para seguimento e geração de instruções multilíngues."
+ },
+ "github": {
+ "description": "Com os Modelos do GitHub, os desenvolvedores podem se tornar engenheiros de IA e construir com os principais modelos de IA da indústria."
+ },
+ "google": {
+ "description": "A série Gemini do Google é seu modelo de IA mais avançado e versátil, desenvolvido pela Google DeepMind, projetado para ser multimodal, suportando compreensão e processamento sem costura de texto, código, imagens, áudio e vídeo. Adequado para uma variedade de ambientes, desde data centers até dispositivos móveis, melhorando significativamente a eficiência e a aplicabilidade dos modelos de IA."
+ },
+ "groq": {
+ "description": "O motor de inferência LPU da Groq se destacou em testes de benchmark independentes de modelos de linguagem de grande escala (LLM), redefinindo os padrões de soluções de IA com sua velocidade e eficiência impressionantes. A Groq representa uma velocidade de inferência em tempo real, demonstrando bom desempenho em implantações baseadas em nuvem."
+ },
+ "minimax": {
+ "description": "MiniMax é uma empresa de tecnologia de inteligência artificial geral fundada em 2021, dedicada a co-criar inteligência com os usuários. A MiniMax desenvolveu internamente diferentes modelos gerais de grande escala, incluindo um modelo de texto MoE com trilhões de parâmetros, um modelo de voz e um modelo de imagem. Também lançou aplicações como Conch AI."
+ },
+ "mistral": {
+ "description": "A Mistral oferece modelos avançados gerais, especializados e de pesquisa, amplamente utilizados em raciocínio complexo, tarefas multilíngues, geração de código, entre outros, permitindo que os usuários integrem funcionalidades personalizadas por meio de interfaces de chamada de função."
+ },
+ "moonshot": {
+ "description": "Moonshot é uma plataforma de código aberto lançada pela Beijing Dark Side Technology Co., Ltd., oferecendo uma variedade de modelos de processamento de linguagem natural, com ampla gama de aplicações, incluindo, mas não se limitando a, criação de conteúdo, pesquisa acadêmica, recomendações inteligentes e diagnósticos médicos, suportando processamento de textos longos e tarefas de geração complexas."
+ },
+ "novita": {
+ "description": "Novita AI é uma plataforma que oferece uma variedade de modelos de linguagem de grande escala e serviços de geração de imagens de IA, sendo flexível, confiável e econômica. Suporta os mais recentes modelos de código aberto, como Llama3 e Mistral, e fornece soluções de API abrangentes, amigáveis ao usuário e escaláveis para o desenvolvimento de aplicações de IA, adequadas para o rápido crescimento de startups de IA."
+ },
+ "ollama": {
+ "description": "Os modelos oferecidos pela Ollama abrangem amplamente áreas como geração de código, operações matemáticas, processamento multilíngue e interações de diálogo, atendendo a diversas necessidades de implantação em nível empresarial e local."
+ },
+ "openai": {
+ "description": "OpenAI é uma das principais instituições de pesquisa em inteligência artificial do mundo, cujos modelos, como a série GPT, estão na vanguarda do processamento de linguagem natural. A OpenAI se dedica a transformar vários setores por meio de soluções de IA inovadoras e eficientes. Seus produtos apresentam desempenho e custo-benefício significativos, sendo amplamente utilizados em pesquisa, negócios e aplicações inovadoras."
+ },
+ "openrouter": {
+ "description": "OpenRouter é uma plataforma de serviço que oferece interfaces para diversos modelos de ponta, suportando OpenAI, Anthropic, LLaMA e mais, adequada para diversas necessidades de desenvolvimento e aplicação. Os usuários podem escolher flexivelmente o modelo e o preço mais adequados às suas necessidades, melhorando a experiência de IA."
+ },
+ "perplexity": {
+ "description": "Perplexity é um fornecedor líder de modelos de geração de diálogo, oferecendo uma variedade de modelos avançados Llama 3.1, suportando aplicações online e offline, especialmente adequados para tarefas complexas de processamento de linguagem natural."
+ },
+ "qwen": {
+ "description": "Qwen é um modelo de linguagem de grande escala desenvolvido pela Alibaba Cloud, com forte capacidade de compreensão e geração de linguagem natural. Ele pode responder a várias perguntas, criar conteúdo escrito, expressar opiniões e escrever código, atuando em vários campos."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow se dedica a acelerar a AGI para beneficiar a humanidade, melhorando a eficiência da IA em larga escala por meio de uma pilha GenAI fácil de usar e de baixo custo."
+ },
+ "spark": {
+ "description": "O modelo Spark da iFlytek oferece poderosas capacidades de IA em múltiplos domínios e idiomas, utilizando tecnologia avançada de processamento de linguagem natural para construir aplicações inovadoras adequadas a cenários verticais como hardware inteligente, saúde inteligente e finanças inteligentes."
+ },
+ "stepfun": {
+ "description": "O modelo StepFun possui capacidades de multimodalidade e raciocínio complexo líderes do setor, suportando compreensão de textos longos e um poderoso mecanismo de busca autônomo."
+ },
+ "taichu": {
+ "description": "O Instituto de Automação da Academia Chinesa de Ciências e o Instituto de Pesquisa em Inteligência Artificial de Wuhan lançaram uma nova geração de grandes modelos multimodais, suportando tarefas abrangentes de perguntas e respostas, criação de texto, geração de imagens, compreensão 3D, análise de sinais, entre outras, com capacidades cognitivas, de compreensão e criação mais fortes, proporcionando uma nova experiência interativa."
+ },
+ "togetherai": {
+ "description": "A Together AI se dedica a alcançar desempenho de ponta por meio de modelos de IA inovadores, oferecendo amplas capacidades de personalização, incluindo suporte para escalabilidade rápida e processos de implantação intuitivos, atendendo a diversas necessidades empresariais."
+ },
+ "upstage": {
+ "description": "Upstage se concentra no desenvolvimento de modelos de IA para diversas necessidades comerciais, incluindo Solar LLM e Document AI, visando alcançar uma inteligência geral artificial (AGI) que funcione. Crie agentes de diálogo simples por meio da API de Chat e suporte chamadas de função, tradução, incorporação e aplicações em domínios específicos."
+ },
+ "zeroone": {
+ "description": "01.AI se concentra na tecnologia de inteligência artificial da era 2.0, promovendo fortemente a inovação e aplicação de 'humano + inteligência artificial', utilizando modelos poderosos e tecnologia de IA avançada para aumentar a produtividade humana e realizar a capacitação tecnológica."
+ },
+ "zhipu": {
+ "description": "Zhipu AI oferece uma plataforma aberta para modelos multimodais e de linguagem, suportando uma ampla gama de cenários de aplicação de IA, incluindo processamento de texto, compreensão de imagens e assistência em programação."
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/ragEval.json b/DigitalHumanWeb/locales/pt-BR/ragEval.json
new file mode 100644
index 0000000..c50b55e
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Novo",
+ "description": {
+ "placeholder": "Descrição do conjunto de dados (opcional)"
+ },
+ "name": {
+ "placeholder": "Nome do conjunto de dados",
+ "required": "Por favor, preencha o nome do conjunto de dados"
+ },
+ "title": "Adicionar Conjunto de Dados"
+ },
+ "dataset": {
+ "addNewButton": "Criar Conjunto de Dados",
+ "emptyGuide": "O conjunto de dados atual está vazio, por favor crie um conjunto de dados.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Importar Dados"
+ },
+ "columns": {
+ "actions": "Ações",
+ "ideal": {
+ "title": "Resposta Esperada"
+ },
+ "question": {
+ "title": "Pergunta"
+ },
+ "referenceFiles": {
+ "title": "Arquivos de Referência"
+ }
+ },
+ "notSelected": "Por favor, selecione um conjunto de dados à esquerda",
+ "title": "Detalhes do Conjunto de Dados"
+ },
+ "title": "Conjunto de Dados"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Novo",
+ "datasetId": {
+ "placeholder": "Por favor, selecione seu conjunto de dados de avaliação",
+ "required": "Por favor, selecione o conjunto de dados de avaliação"
+ },
+ "description": {
+ "placeholder": "Descrição da tarefa de avaliação (opcional)"
+ },
+ "name": {
+ "placeholder": "Nome da tarefa de avaliação",
+ "required": "Por favor, preencha o nome da tarefa de avaliação"
+ },
+ "title": "Adicionar Tarefa de Avaliação"
+ },
+ "addNewButton": "Criar Avaliação",
+ "emptyGuide": "A tarefa de avaliação atual está vazia, comece a criar uma avaliação.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Verificar Status",
+ "confirmDelete": "Deseja excluir esta avaliação?",
+ "confirmRun": "Deseja iniciar a execução? A execução será realizada de forma assíncrona em segundo plano, fechar a página não afetará a execução da tarefa assíncrona.",
+ "downloadRecords": "Baixar Avaliação",
+ "retry": "Tentar Novamente",
+ "run": "Executar",
+ "title": "Ações"
+ },
+ "datasetId": {
+ "title": "Conjunto de Dados"
+ },
+ "name": {
+ "title": "Nome da Tarefa de Avaliação"
+ },
+ "records": {
+ "title": "Número de Registros de Avaliação"
+ },
+ "referenceFiles": {
+ "title": "Arquivos de Referência"
+ },
+ "status": {
+ "error": "Erro na Execução",
+ "pending": "Aguardando Execução",
+ "processing": "Executando",
+ "success": "Execução Bem-Sucedida",
+ "title": "Status"
+ }
+ },
+ "title": "Lista de Tarefas de Avaliação"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/setting.json b/DigitalHumanWeb/locales/pt-BR/setting.json
new file mode 100644
index 0000000..0192eb8
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "Sobre"
+ },
+ "agentTab": {
+ "chat": "Preferências de bate-papo",
+ "meta": "Informações do assistente",
+ "modal": "Configurações do modelo",
+ "plugin": "Configurações do plug-in",
+ "prompt": "Configuração de personagem",
+ "tts": "Serviço de voz"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "Ao escolher enviar dados de telemetria, você pode nos ajudar a melhorar a experiência geral do usuário no {{appName}}",
+ "title": "Enviar dados de uso anônimo"
+ },
+ "title": "Análise de dados"
+ },
+ "danger": {
+ "clear": {
+ "action": "Limpar Agora",
+ "confirm": "Confirmar a exclusão de todos os dados de conversa?",
+ "desc": "Isso irá apagar todos os dados de conversas, incluindo assistentes, arquivos, mensagens, e plugins.",
+ "success": "Todos os dados de conversa foram apagados com sucesso",
+ "title": "Limpar Todas as Conversas"
+ },
+ "reset": {
+ "action": "Redefinir Agora",
+ "confirm": "Confirmar a redefinição de todas as configurações?",
+ "currentVersion": "Versão Atual",
+ "desc": "Redefinir todas as configurações para os valores padrão",
+ "success": "Todas as configurações foram redefinidas com sucesso",
+ "title": "Redefinir Todas as Configurações"
+ }
+ },
+ "header": {
+ "desc": "Preferências e configurações do modelo.",
+ "global": "Configurações Globais",
+ "session": "Configurações de Sessão",
+ "sessionDesc": "Configurações de personagem e preferências de sessão.",
+ "sessionWithName": "Configurações de Sessão · {{name}}",
+ "title": "Configurações"
+ },
+ "llm": {
+ "aesGcm": "Suas chaves, endereço do agente, etc., serão criptografados usando o algoritmo de criptografia <1>AES-GCM1>",
+ "apiKey": {
+ "desc": "Por favor, insira sua chave de API {{name}}",
+ "placeholder": "Chave de API {{name}}",
+ "title": "Chave de API"
+ },
+ "checker": {
+ "button": "Verificar",
+ "desc": "Verifica se a API Key e o endereço do proxy estão preenchidos corretamente",
+ "pass": "Verificação aprovada",
+ "title": "Verificação de Conectividade"
+ },
+ "customModelCards": {
+ "addNew": "Criar e adicionar o modelo {{id}}",
+ "config": "Configurar modelo",
+ "confirmDelete": "Você está prestes a excluir este modelo personalizado. Depois de excluído, não poderá ser recuperado. Por favor, proceda com cuidado.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "O campo real solicitado no Azure OpenAI",
+ "placeholder": "Insira o nome de implantação do modelo no Azure",
+ "title": "Nome de Implantação do Modelo"
+ },
+ "displayName": {
+ "placeholder": "Insira o nome de exibição do modelo, como ChatGPT, GPT-4, etc.",
+ "title": "Nome de Exibição do Modelo"
+ },
+ "files": {
+ "extra": "A implementação atual de upload de arquivos é apenas uma solução alternativa, limitada a tentativas pessoais. A capacidade completa de upload de arquivos deve ser aguardada em implementações futuras.",
+ "title": "Suporte a Upload de Arquivos"
+ },
+ "functionCall": {
+ "extra": "Esta configuração ativará apenas a capacidade de chamada de funções dentro do aplicativo; a possibilidade de suporte a chamadas de funções depende totalmente do modelo em si. Por favor, teste a disponibilidade da capacidade de chamada de funções desse modelo.",
+ "title": "Suporte a Chamada de Função"
+ },
+ "id": {
+ "extra": "Será exibido como rótulo do modelo",
+ "placeholder": "Insira o ID do modelo, como gpt-4-turbo-preview ou claude-2.1",
+ "title": "ID do Modelo"
+ },
+ "modalTitle": "Configuração de Modelo Personalizado",
+ "tokens": {
+ "title": "Número Máximo de Tokens",
+ "unlimited": "ilimitado"
+ },
+ "vision": {
+ "extra": "Esta configuração ativará apenas a configuração de upload de imagens dentro do aplicativo; a capacidade de reconhecimento depende totalmente do modelo em si. Por favor, teste a disponibilidade da capacidade de reconhecimento visual desse modelo.",
+ "title": "Suporte a Reconhecimento Visual"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "O modo de solicitação do cliente iniciará diretamente a solicitação da sessão a partir do navegador, melhorando a velocidade de resposta",
+ "title": "Usar o modo de solicitação do cliente"
+ },
+ "fetcher": {
+ "fetch": "Obter lista de modelos",
+ "fetching": "Obtendo lista de modelos...",
+ "latestTime": "Última atualização: {{time}}",
+ "noLatestTime": "Lista não disponível"
+ },
+ "helpDoc": "Tutorial de configuração",
+ "modelList": {
+ "desc": "Escolha os modelos a serem exibidos na conversa. Os modelos selecionados serão exibidos na lista de modelos.",
+ "placeholder": "Selecione um modelo da lista",
+ "title": "Lista de Modelos",
+ "total": "Total de {{count}} modelos disponíveis"
+ },
+ "proxyUrl": {
+ "desc": "Além do endereço padrão, deve incluir http(s)://",
+ "title": "Endereço do Proxy da API"
+ },
+ "waitingForMore": "Mais modelos estão sendo <1>planejados para serem adicionados1>, aguarde ansiosamente"
+ },
+ "plugin": {
+ "addTooltip": "Adicionar plug-in personalizado",
+ "clearDeprecated": "Remover plug-ins inválidos",
+ "empty": "Nenhum plug-in instalado no momento, visite a <1>loja de plug-ins1> para explorar",
+ "installStatus": {
+ "deprecated": "Desinstalado"
+ },
+ "settings": {
+ "hint": "Por favor, preencha as configurações abaixo de acordo com a descrição",
+ "title": "Configurações do plug-in {{id}}",
+ "tooltip": "Configurações do plug-in"
+ },
+ "store": "Loja de plug-ins"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "backgroundColor": {
+ "title": "Cor de fundo"
+ },
+ "description": {
+ "placeholder": "Digite a descrição do assistente",
+ "title": "Descrição do assistente"
+ },
+ "name": {
+ "placeholder": "Digite o nome do assistente",
+ "title": "Nome"
+ },
+ "prompt": {
+ "placeholder": "Digite a palavra de prompt do papel",
+ "title": "Configuração do papel"
+ },
+ "tag": {
+ "placeholder": "Digite a etiqueta",
+ "title": "Etiqueta"
+ },
+ "title": "Informações do assistente"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Quando o número de mensagens atingir esse valor, um tópico será criado automaticamente",
+ "title": "Limite de mensagens"
+ },
+ "chatStyleType": {
+ "title": "Estilo da janela de chat",
+ "type": {
+ "chat": "Modo de conversa",
+ "docs": "Modo de documento"
+ }
+ },
+ "compressThreshold": {
+ "desc": "Quando o número de mensagens não compactadas ultrapassar esse valor, elas serão compactadas",
+ "title": "Limite de compactação de mensagens"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Se deve criar automaticamente um tópico durante a conversa, apenas válido em tópicos temporários",
+ "title": "Criar tópico automaticamente"
+ },
+ "enableCompressThreshold": {
+ "title": "Ativar limite de compactação de mensagens"
+ },
+ "enableHistoryCount": {
+ "alias": "Sem limite",
+ "limited": "Incluir apenas {{number}} mensagens de conversa",
+ "setlimited": "Definir número de mensagens de histórico",
+ "title": "Limitar número de mensagens de histórico",
+ "unlimited": "Sem limite de mensagens de histórico"
+ },
+ "historyCount": {
+ "desc": "Número de mensagens incluídas em cada solicitação (incluindo a última pergunta feita. Cada pergunta e resposta contam como 1)",
+ "title": "Número de mensagens incluídas"
+ },
+ "inputTemplate": {
+ "desc": "A última mensagem do usuário será preenchida neste modelo",
+ "placeholder": "O modelo de pré-processamento {{text}} será substituído pela entrada em tempo real",
+ "title": "Pré-processamento de entrada do usuário"
+ },
+ "title": "Configurações de chat"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Ativar limite de resposta única"
+ },
+ "frequencyPenalty": {
+ "desc": "Quanto maior o valor, maior a probabilidade de reduzir palavras repetidas",
+ "title": "Penalidade de frequência"
+ },
+ "maxTokens": {
+ "desc": "Número máximo de tokens a serem usados em uma interação única",
+ "title": "Limite de resposta única"
+ },
+ "model": {
+ "desc": "{{provider}} modelo",
+ "title": "Modelo"
+ },
+ "presencePenalty": {
+ "desc": "Quanto maior o valor, maior a probabilidade de expandir para novos tópicos",
+ "title": "Penalidade de novidade do tópico"
+ },
+ "temperature": {
+ "desc": "Quanto maior o valor, mais aleatória será a resposta",
+ "title": "Aleatoriedade",
+ "titleWithValue": "Aleatoriedade {{value}}"
+ },
+ "title": "Configurações do modelo",
+ "topP": {
+ "desc": "Semelhante à aleatoriedade, mas não deve ser alterado junto com a aleatoriedade",
+ "title": "Amostragem principal"
+ }
+ },
+ "settingPlugin": {
+ "title": "Lista de plugins"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "O administrador habilitou o acesso criptografado",
+ "placeholder": "Digite a senha de acesso",
+ "title": "Senha de acesso"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Logado",
+ "title": "Informações da conta"
+ },
+ "signin": {
+ "action": "Entrar",
+ "desc": "Faça login com SSO para desbloquear o aplicativo",
+ "title": "Entrar na conta"
+ },
+ "signout": {
+ "action": "Sair",
+ "confirm": "Confirmar saída?",
+ "success": "Saiu da conta com sucesso"
+ }
+ },
+ "title": "Configurações do sistema"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "Modelo de reconhecimento de fala OpenAI",
+ "title": "OpenAI",
+ "ttsModel": "Modelo de síntese de fala OpenAI"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Se desativado, mostrará apenas as vozes no idioma atual",
+ "title": "Mostrar todas as vozes do idioma"
+ },
+ "stt": "Configurações de reconhecimento de fala",
+ "sttAutoStop": {
+ "desc": "Se desativado, o reconhecimento de fala não será encerrado automaticamente e precisará ser encerrado manualmente",
+ "title": "Parar reconhecimento de fala automaticamente"
+ },
+ "sttLocale": {
+ "desc": "Idioma da entrada de fala, isso pode melhorar a precisão do reconhecimento de fala",
+ "title": "Idioma do reconhecimento de fala"
+ },
+ "sttService": {
+ "desc": "Onde 'browser' é o serviço nativo de reconhecimento de fala do navegador",
+ "title": "Serviço de reconhecimento de fala"
+ },
+ "title": "Serviço de fala",
+ "tts": "Configurações de síntese de fala",
+ "ttsService": {
+ "desc": "Se estiver usando o serviço de síntese de fala OpenAI, certifique-se de que o serviço do modelo OpenAI esteja habilitado",
+ "title": "Serviço de síntese de fala"
+ },
+ "voice": {
+ "desc": "Escolha uma voz para o assistente atual, diferentes serviços TTS suportam vozes diferentes",
+ "preview": "Ouvir voz",
+ "title": "Voz de síntese de fala"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "fontSize": {
+ "desc": "Tamanho da fonte do conteúdo do chat",
+ "marks": {
+ "normal": "Normal"
+ },
+ "title": "Tamanho da Fonte"
+ },
+ "lang": {
+ "autoMode": "Seguir sistema",
+ "title": "Idioma"
+ },
+ "neutralColor": {
+ "desc": "Personalização de tons de cinza com diferentes tendências de cores",
+ "title": "Cor Neutra"
+ },
+ "primaryColor": {
+ "desc": "Cor do tema personalizado",
+ "title": "Cor Principal"
+ },
+ "themeMode": {
+ "auto": "Automático",
+ "dark": "Escuro",
+ "light": "Claro",
+ "title": "Tema"
+ },
+ "title": "Configurações de Tema"
+ },
+ "submitAgentModal": {
+ "button": "Enviar Assistente",
+ "identifier": "Identificador do assistente",
+ "metaMiss": "Por favor, complete as informações do assistente antes de enviar, incluindo nome, descrição e etiqueta",
+ "placeholder": "Insira o identificador único do assistente, como por exemplo, desenvolvimento-web",
+ "tooltips": "Compartilhar no mercado de assistentes"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Adicione um nome para facilitar a identificação",
+ "placeholder": "Insira o nome do dispositivo",
+ "title": "Nome do dispositivo"
+ },
+ "title": "Informações do dispositivo",
+ "unknownBrowser": "Navegador desconhecido",
+ "unknownOS": "Sistema operacional desconhecido"
+ },
+ "warning": {
+ "tip": "Após um longo período de testes comunitários, a sincronização WebRTC pode não atender de forma estável às demandas gerais de sincronização de dados. Por favor, <1>implante um servidor de sinalização1> antes de usar."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "O WebRTC usará este nome para criar um canal de sincronização. Certifique-se de que o nome do canal seja único",
+ "placeholder": "Insira o nome do canal de sincronização",
+ "shuffle": "Gerar aleatoriamente",
+ "title": "Nome do canal de sincronização"
+ },
+ "channelPassword": {
+ "desc": "Adicione uma senha para garantir a privacidade do canal. Apenas com a senha correta os dispositivos poderão ingressar no canal",
+ "placeholder": "Insira a senha do canal de sincronização",
+ "title": "Senha do canal de sincronização"
+ },
+ "desc": "Comunicação de dados em tempo real ponto a ponto. Os dispositivos precisam estar online simultaneamente para sincronizar",
+ "enabled": {
+ "invalid": "Por favor, preencha o endereço do servidor de sinalização e o nome do canal de sincronização antes de ativar.",
+ "title": "Ativar sincronização"
+ },
+ "signaling": {
+ "desc": "O WebRTC usará este endereço para sincronização",
+ "placeholder": "Insira o endereço do servidor de sinalização",
+ "title": "Servidor de Sinalização"
+ },
+ "title": "Sincronização WebRTC"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Modelo de Geração de Metadados do Assistente",
+ "modelDesc": "Especifica o modelo usado para gerar o nome, descrição, avatar e tags do assistente",
+ "title": "Geração Automática de Informações do Assistente"
+ },
+ "queryRewrite": {
+ "label": "Modelo de Reescrita de Perguntas",
+ "modelDesc": "Modelo designado para otimizar as perguntas dos usuários",
+ "title": "Base de Conhecimento"
+ },
+ "title": "Assistente do Sistema",
+ "topic": {
+ "label": "Modelo de Nomeação de Tópicos",
+ "modelDesc": "Especifica o modelo usado para renomeação automática de tópicos",
+ "title": "Renomeação Automática de Tópicos"
+ },
+ "translation": {
+ "label": "Modelo de Tradução",
+ "modelDesc": "Especifica o modelo usado para tradução",
+ "title": "Configurações do Assistente de Tradução"
+ }
+ },
+ "tab": {
+ "about": "Sobre",
+ "agent": "Assistente Padrão",
+ "common": "Configurações Comuns",
+ "experiment": "Experimento",
+ "llm": "Modelo de Linguagem",
+ "sync": "Sincronização na nuvem",
+ "system-agent": "Assistente do Sistema",
+ "tts": "Serviço de Voz"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Integrados"
+ },
+ "disabled": "O modelo atual não suporta chamadas de função e não pode usar plugins",
+ "plugins": {
+ "enabled": "Ativado {{num}}",
+ "groupName": "Plugins",
+ "noEnabled": "Nenhum plugin ativado no momento",
+ "store": "Loja de Plugins"
+ },
+ "title": "Ferramentas de Extensão"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/tool.json b/DigitalHumanWeb/locales/pt-BR/tool.json
new file mode 100644
index 0000000..8c1e193
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Auto gerar",
+ "downloading": "O link da imagem gerada pelo DALL·E3 é válido apenas por 1 hora, está baixando a imagem para o armazenamento local...",
+ "generate": "Gerar",
+ "generating": "Gerando...",
+ "images": "Imagens:",
+ "prompt": "Palavra-chave"
+ }
+}
diff --git a/DigitalHumanWeb/locales/pt-BR/welcome.json b/DigitalHumanWeb/locales/pt-BR/welcome.json
new file mode 100644
index 0000000..36e8b3a
--- /dev/null
+++ b/DigitalHumanWeb/locales/pt-BR/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Importar configuração",
+ "market": "Explorar o mercado",
+ "start": "Começar agora"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Trocar",
+ "title": "Recomendação de assistentes adicionais:"
+ },
+ "defaultMessage": "Eu sou seu assistente inteligente pessoal {{appName}}. Como posso ajudá-lo agora?\nSe você precisar de um assistente mais profissional ou personalizado, clique em `+` para criar um assistente personalizado.",
+ "defaultMessageWithoutCreate": "Eu sou seu assistente inteligente pessoal {{appName}}. Como posso ajudá-lo agora?",
+ "qa": {
+ "q01": "O que é o LobeHub?",
+ "q02": "O que é {{appName}}?",
+ "q03": "{{appName}} tem suporte da comunidade?",
+ "q04": "Quais funcionalidades {{appName}} suporta?",
+ "q05": "Como implantar e usar {{appName}}?",
+ "q06": "Como é a precificação do {{appName}}?",
+ "q07": "{{appName}} é gratuito?",
+ "q08": "Há uma versão em nuvem disponível?",
+ "q09": "Suporta modelos de linguagem locais?",
+ "q10": "Suporta reconhecimento e geração de imagens?",
+ "q11": "Suporta síntese de voz e reconhecimento de voz?",
+ "q12": "Suporta sistema de plugins?",
+ "q13": "Há um mercado próprio para obter GPTs?",
+ "q14": "Suporta vários provedores de serviços de IA?",
+ "q15": "O que devo fazer se encontrar problemas ao usar?"
+ },
+ "questions": {
+ "moreBtn": "Saiba mais",
+ "title": "Perguntas frequentes:"
+ },
+ "welcome": {
+ "afternoon": "Boa tarde",
+ "morning": "Bom dia",
+ "night": "Boa noite",
+ "noon": "Boa tarde"
+ }
+ },
+ "header": "Bem-vindo",
+ "pickAgent": "Ou escolha entre os modelos de assistente abaixo",
+ "skip": "Pular",
+ "slogan": {
+ "desc1": "Ative o cluster cerebral e estimule faíscas de pensamento. Seu assistente inteligente, sempre presente.",
+ "desc2": "Crie seu primeiro assistente. Vamos começar!",
+ "title": "Dê a si mesmo um cérebro mais inteligente"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/auth.json b/DigitalHumanWeb/locales/ru-RU/auth.json
new file mode 100644
index 0000000..08be604
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Войти",
+ "loginOrSignup": "Войти / Зарегистрироваться",
+ "profile": "Профиль",
+ "security": "Безопасность",
+ "signout": "Выйти",
+ "signup": "Зарегистрироваться"
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/chat.json b/DigitalHumanWeb/locales/ru-RU/chat.json
new file mode 100644
index 0000000..b900f9a
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Модель"
+ },
+ "agentDefaultMessage": "Здравствуйте, я **{{name}}**. Вы можете сразу начать со мной разговор или перейти в [настройки помощника]({{url}}), чтобы дополнить мою информацию.",
+ "agentDefaultMessageWithSystemRole": "Привет, я **{{name}}**, {{systemRole}}. Давай начнем разговор!",
+ "agentDefaultMessageWithoutEdit": "Привет, я **{{name}}**, давай начнём разговор!",
+ "agents": "Ассистент",
+ "artifact": {
+ "generating": "Генерация",
+ "thinking": "В процессе размышлений",
+ "thought": "Процесс мышления",
+ "unknownTitle": "Безымянное произведение"
+ },
+ "backToBottom": "Вернуться вниз",
+ "chatList": {
+ "longMessageDetail": "Посмотреть детали"
+ },
+ "clearCurrentMessages": "Очистить текущий разговор",
+ "confirmClearCurrentMessages": "Вы уверены, что хотите очистить текущий разговор? После этого его нельзя будет восстановить.",
+ "confirmRemoveSessionItemAlert": "Вы уверены, что хотите удалить этого помощника? После этого его нельзя будет восстановить.",
+ "confirmRemoveSessionSuccess": "Сеанс удален успешно",
+ "defaultAgent": "Пользовательский помощник",
+ "defaultList": "Список по умолчанию",
+ "defaultSession": "Пользовательский помощник",
+ "duplicateSession": {
+ "loading": "Копирование...",
+ "success": "Копирование завершено",
+ "title": "{{title}} Копия"
+ },
+ "duplicateTitle": "{{title}} Копия",
+ "emptyAgent": "Нет ассистента",
+ "historyRange": "История сообщений",
+ "inbox": {
+ "desc": "Зажги искру мысли, открой кластер мозгов. Твой виртуальный ассистент, готовый обсудить все с тобой.",
+ "title": "Просто поболтаем"
+ },
+ "input": {
+ "addAi": "Добавить сообщение AI",
+ "addUser": "Добавить сообщение пользователя",
+ "more": "больше",
+ "send": "Отправить",
+ "sendWithCmdEnter": "Отправить с помощью {{meta}} + Enter",
+ "sendWithEnter": "Отправить с помощью Enter",
+ "stop": "Остановить",
+ "warp": "Перенос строки"
+ },
+ "knowledgeBase": {
+ "all": "Все содержимое",
+ "allFiles": "Все файлы",
+ "allKnowledgeBases": "Все базы знаний",
+ "disabled": "Текущий режим развертывания не поддерживает диалоги с базой знаний. Для использования, пожалуйста, переключитесь на развертывание с серверной базой данных или используйте {{cloud}} сервис.",
+ "library": {
+ "action": {
+ "add": "Добавить",
+ "detail": "Детали",
+ "remove": "Удалить"
+ },
+ "title": "Файлы/База знаний"
+ },
+ "relativeFilesOrKnowledgeBases": "Связанные файлы/Базы знаний",
+ "title": "База знаний",
+ "uploadGuide": "Загруженные файлы можно просмотреть в «Базе знаний»",
+ "viewMore": "Посмотреть больше"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Удалить и пересоздать",
+ "regenerate": "Пересоздать"
+ },
+ "newAgent": "Создать помощника",
+ "pin": "Закрепить",
+ "pinOff": "Открепить",
+ "rag": {
+ "referenceChunks": "Цитируемые источники",
+ "userQuery": {
+ "actions": {
+ "delete": "Удалить переписанный запрос",
+ "regenerate": "Перегенерировать запрос"
+ }
+ }
+ },
+ "regenerate": "Сгенерировать заново",
+ "roleAndArchive": "Роль и архив",
+ "searchAgentPlaceholder": "Поиск помощника...",
+ "sendPlaceholder": "Введите сообщение...",
+ "sessionGroup": {
+ "config": "Управление группами",
+ "confirmRemoveGroupAlert": "Вы уверены, что хотите удалить эту группу? После удаления помощники из этой группы будут перемещены в список по умолчанию.",
+ "createAgentSuccess": "Агент успешно создан",
+ "createGroup": "Создать новую группу",
+ "createSuccess": "Создание успешно",
+ "creatingAgent": "Создание агента...",
+ "inputPlaceholder": "Введите название группы...",
+ "moveGroup": "Переместить в группу",
+ "newGroup": "Новая группа",
+ "rename": "Переименовать группу",
+ "renameSuccess": "Переименование успешно",
+ "sortSuccess": "Успешно отсортировано",
+ "sorting": "Обновление сортировки группы...",
+ "tooLong": "Название группы должно содержать от 1 до 20 символов"
+ },
+ "shareModal": {
+ "download": "Скачать скриншот",
+ "imageType": "Тип изображения",
+ "screenshot": "Скриншот",
+ "settings": "Настройки экспорта",
+ "shareToShareGPT": "Создать ссылку для обмена ShareGPT",
+ "withBackground": "С фоном",
+ "withFooter": "С нижним колонтитулом",
+ "withPluginInfo": "С информацией о плагинах",
+ "withSystemRole": "С ролью помощника"
+ },
+ "stt": {
+ "action": "Голосовой ввод",
+ "loading": "Распознавание...",
+ "prettifying": "Форматирование..."
+ },
+ "temp": "Временный",
+ "tokenDetails": {
+ "chats": "Чаты",
+ "rest": "Остаток",
+ "systemRole": "Роль системы",
+ "title": "Детали контекста",
+ "tools": "Инструменты",
+ "total": "Всего",
+ "used": "Использовано"
+ },
+ "tokenTag": {
+ "overload": "Превышение лимита",
+ "remained": "Осталось",
+ "used": "Использовано"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Умное переименование",
+ "duplicate": "Создать копию",
+ "export": "Экспорт темы"
+ },
+ "checkOpenNewTopic": "Открыть новую тему?",
+ "checkSaveCurrentMessages": "Сохранить текущий разговор как тему?",
+ "confirmRemoveAll": "Вы уверены, что хотите удалить все темы? После этого их нельзя будет восстановить.",
+ "confirmRemoveTopic": "Вы уверены, что хотите удалить эту тему? После этого ее нельзя будет восстановить.",
+ "confirmRemoveUnstarred": "Вы уверены, что хотите удалить неотмеченные темы? После этого их нельзя будет восстановить.",
+ "defaultTitle": "Стандартная тема",
+ "duplicateLoading": "Копирование темы...",
+ "duplicateSuccess": "Тема успешно скопирована",
+ "guide": {
+ "desc": "Нажмите на кнопку слева, чтобы сохранить текущий разговор в качестве исторической темы и начать новый разговор",
+ "title": "Список тем"
+ },
+ "openNewTopic": "Создать новую тему",
+ "removeAll": "Удалить все темы",
+ "removeUnstarred": "Удалить неотмеченные темы",
+ "saveCurrentMessages": "Сохранить текущий разговор как тему",
+ "searchPlaceholder": "Поиск тем...",
+ "title": "Список тем"
+ },
+ "translate": {
+ "action": "Перевести",
+ "clear": "Удалить перевод"
+ },
+ "tts": {
+ "action": "Озвучить текст",
+ "clear": "Удалить озвучку"
+ },
+ "updateAgent": "Обновить информацию помощника",
+ "upload": {
+ "action": {
+ "fileUpload": "Загрузить файл",
+ "folderUpload": "Загрузить папку",
+ "imageDisabled": "Текущая модель не поддерживает визуальное распознавание, пожалуйста, переключитесь на другую модель",
+ "imageUpload": "Загрузить изображение",
+ "tooltip": "Загрузить"
+ },
+ "clientMode": {
+ "actionFiletip": "Загрузить файл",
+ "actionTooltip": "Загрузить",
+ "disabled": "Текущая модель не поддерживает визуальное распознавание и анализ файлов, пожалуйста, переключитесь на другую модель"
+ },
+ "preview": {
+ "prepareTasks": "Подготовка блоков...",
+ "status": {
+ "pending": "Подготовка к загрузке...",
+ "processing": "Обработка файла..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/clerk.json b/DigitalHumanWeb/locales/ru-RU/clerk.json
new file mode 100644
index 0000000..31b49b6
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Назад",
+ "badge__default": "По умолчанию",
+ "badge__otherImpersonatorDevice": "Другое устройство имперсонации",
+ "badge__primary": "Основной",
+ "badge__requiresAction": "Требуется действие",
+ "badge__thisDevice": "Это устройство",
+ "badge__unverified": "Непроверенный",
+ "badge__userDevice": "Устройство пользователя",
+ "badge__you": "Вы",
+ "createOrganization": {
+ "formButtonSubmit": "Создать организацию",
+ "invitePage": {
+ "formButtonReset": "Пропустить"
+ },
+ "title": "Создать организацию"
+ },
+ "dates": {
+ "lastDay": "Вчера в {{ date | timeString('ru-RU') }}",
+ "next6Days": "{{ date | weekday('ru-RU','long') }} в {{ date | timeString('ru-RU') }}",
+ "nextDay": "Завтра в {{ date | timeString('ru-RU') }}",
+ "numeric": "{{ date | numeric('ru-RU') }}",
+ "previous6Days": "Прошлая {{ date | weekday('ru-RU','long') }} в {{ date | timeString('ru-RU') }}",
+ "sameDay": "Сегодня в {{ date | timeString('ru-RU') }}"
+ },
+ "dividerText": "или",
+ "footerActionLink__useAnotherMethod": "Использовать другой метод",
+ "footerPageLink__help": "Помощь",
+ "footerPageLink__privacy": "Конфиденциальность",
+ "footerPageLink__terms": "Условия",
+ "formButtonPrimary": "Продолжить",
+ "formButtonPrimary__verify": "Подтвердить",
+ "formFieldAction__forgotPassword": "Забыли пароль?",
+ "formFieldError__matchingPasswords": "Пароли совпадают.",
+ "formFieldError__notMatchingPasswords": "Пароли не совпадают.",
+ "formFieldError__verificationLinkExpired": "Срок действия ссылки для подтверждения истек. Пожалуйста, запросите новую ссылку.",
+ "formFieldHintText__optional": "Необязательно",
+ "formFieldHintText__slug": "Slug - это уникальный идентификатор, который удобно использовать в URL.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Удалить аккаунт",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "example@email.com, example2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "моя-орг",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Включить автоматические приглашения для этого домена",
+ "formFieldLabel__backupCode": "Резервный код",
+ "formFieldLabel__confirmDeletion": "Подтверждение",
+ "formFieldLabel__confirmPassword": "Подтвердите пароль",
+ "formFieldLabel__currentPassword": "Текущий пароль",
+ "formFieldLabel__emailAddress": "Адрес электронной почты",
+ "formFieldLabel__emailAddress_username": "Адрес электронной почты или имя пользователя",
+ "formFieldLabel__emailAddresses": "Адреса электронной почты",
+ "formFieldLabel__firstName": "Имя",
+ "formFieldLabel__lastName": "Фамилия",
+ "formFieldLabel__newPassword": "Новый пароль",
+ "formFieldLabel__organizationDomain": "Домен",
+ "formFieldLabel__organizationDomainDeletePending": "Удалить ожидающие приглашения и предложения",
+ "formFieldLabel__organizationDomainEmailAddress": "Адрес электронной почты для подтверждения",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Введите адрес электронной почты под этим доменом, чтобы получить код и подтвердить домен.",
+ "formFieldLabel__organizationName": "Название",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Имя ключа доступа",
+ "formFieldLabel__password": "Пароль",
+ "formFieldLabel__phoneNumber": "Номер телефона",
+ "formFieldLabel__role": "Роль",
+ "formFieldLabel__signOutOfOtherSessions": "Выйти из всех других устройств",
+ "formFieldLabel__username": "Имя пользователя",
+ "impersonationFab": {
+ "action__signOut": "Выйти",
+ "title": "Вошли как {{identifier}}"
+ },
+ "locale": "ru-RU",
+ "maintenanceMode": "В настоящее время мы находимся на техническом обслуживании, но не волнуйтесь, это не займет больше нескольких минут.",
+ "membershipRole__admin": "Администратор",
+ "membershipRole__basicMember": "Участник",
+ "membershipRole__guestMember": "Гость",
+ "organizationList": {
+ "action__createOrganization": "Создать организацию",
+ "action__invitationAccept": "Присоединиться",
+ "action__suggestionsAccept": "Запросить присоединение",
+ "createOrganization": "Создать организацию",
+ "invitationAcceptedLabel": "Присоединен",
+ "subtitle": "для продолжения в {{applicationName}}",
+ "suggestionsAcceptedLabel": "Ожидает подтверждения",
+ "title": "Выберите учетную запись",
+ "titleWithoutPersonal": "Выберите организацию"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Автоматические приглашения",
+ "badge__automaticSuggestion": "Автоматические предложения",
+ "badge__manualInvitation": "Нет автоматического вступления",
+ "badge__unverified": "Неподтвержденный",
+ "createDomainPage": {
+ "subtitle": "Добавьте домен для проверки. Пользователи с адресами электронной почты на этом домене могут присоединиться к организации автоматически или запросить присоединение.",
+ "title": "Добавить домен"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "Приглашения не могут быть отправлены. Уже есть ожидающие приглашения для следующих адресов электронной почты: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Отправить приглашения",
+ "selectDropdown__role": "Выберите роль",
+ "subtitle": "Введите или вставьте один или несколько адресов электронной почты, разделенные пробелами или запятыми.",
+ "successMessage": "Приглашения успешно отправлены",
+ "title": "Пригласить новых участников"
+ },
+ "membersPage": {
+ "action__invite": "Пригласить",
+ "activeMembersTab": {
+ "menuAction__remove": "Удалить участника",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Присоединился",
+ "tableHeader__role": "Роль",
+ "tableHeader__user": "Пользователь"
+ },
+ "detailsTitle__emptyRow": "Нет участников для отображения",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Пригласите пользователей, подключив домен электронной почты к вашей организации. Любой, кто зарегистрируется с соответствующим доменом электронной почты, сможет присоединиться к организации в любое время.",
+ "headerTitle": "Автоматические приглашения",
+ "primaryButton": "Управление подтвержденными доменами"
+ },
+ "table__emptyRow": "Нет приглашений для отображения"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Отозвать приглашение",
+ "tableHeader__invited": "Приглашен"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Пользователи, зарегистрировавшиеся с соответствующим доменом электронной почты, смогут видеть предложение запросить присоединение к вашей организации.",
+ "headerTitle": "Автоматические предложения",
+ "primaryButton": "Управление подтвержденными доменами"
+ },
+ "menuAction__approve": "Утвердить",
+ "menuAction__reject": "Отклонить",
+ "tableHeader__requested": "Запрошен доступ",
+ "table__emptyRow": "Нет запросов для отображения"
+ },
+ "start": {
+ "headerTitle__invitations": "Приглашения",
+ "headerTitle__members": "Участники",
+ "headerTitle__requests": "Запросы"
+ }
+ },
+ "navbar": {
+ "description": "Управляйте своей организацией.",
+ "general": "Общее",
+ "members": "Участники",
+ "title": "Организация"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Введите \"{{organizationName}}\" ниже, чтобы продолжить.",
+ "messageLine1": "Вы уверены, что хотите удалить эту организацию?",
+ "messageLine2": "Это действие является постоянным и необратимым.",
+ "successMessage": "Вы удалили организацию.",
+ "title": "Удалить организацию"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Введите \"{{organizationName}}\" ниже, чтобы продолжить.",
+ "messageLine1": "Вы уверены, что хотите покинуть эту организацию? Вы потеряете доступ к этой организации и ее приложениям.",
+ "messageLine2": "Это действие является постоянным и необратимым.",
+ "successMessage": "Вы покинули организацию.",
+ "title": "Покинуть организацию"
+ },
+ "title": "Опасность"
+ },
+ "domainSection": {
+ "menuAction__manage": "Управление",
+ "menuAction__remove": "Удалить",
+ "menuAction__verify": "Подтвердить",
+ "primaryButton": "Добавить домен",
+ "subtitle": "Разрешите пользователям присоединяться к организации автоматически или запрашивать присоединение на основе подтвержденного электронного домена.",
+ "title": "Подтвержденные домены"
+ },
+ "successMessage": "Организация была обновлена.",
+ "title": "Обновить профиль"
+ },
+ "removeDomainPage": {
+ "messageLine1": "Домен электронной почты {{domain}} будет удален.",
+ "messageLine2": "Пользователи больше не смогут присоединяться к организации автоматически после этого.",
+ "successMessage": "{{domain}} был удален.",
+ "title": "Удалить домен"
+ },
+ "start": {
+ "headerTitle__general": "Общее",
+ "headerTitle__members": "Участники",
+ "profileSection": {
+ "primaryButton": "Обновить профиль",
+ "title": "Профиль организации",
+ "uploadAction__title": "Логотип"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "Удаление этого домена повлияет на приглашенных пользователей.",
+ "removeDomainActionLabel__remove": "Удалить домен",
+ "removeDomainSubtitle": "Удалить этот домен из ваших подтвержденных доменов",
+ "removeDomainTitle": "Удалить домен"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Пользователи автоматически приглашаются присоединиться к организации при регистрации и могут присоединиться в любое время.",
+ "automaticInvitationOption__label": "Автоматические приглашения",
+ "automaticSuggestionOption__description": "Пользователи получают предложение запросить присоединение, но должны быть утверждены администратором, прежде чем они смогут присоединиться к организации.",
+ "automaticSuggestionOption__label": "Автоматические предложения",
+ "calloutInfoLabel": "Изменение режима вступления повлияет только на новых пользователей.",
+ "calloutInvitationCountLabel": "Ожидающие приглашения, отправленные пользователям: {{count}}",
+ "calloutSuggestionCountLabel": "Ожидающие предложения, отправленные пользователям: {{count}}",
+ "manualInvitationOption__description": "Пользователи могут быть приглашены в организацию только вручную.",
+ "manualInvitationOption__label": "Нет автоматического вступления",
+ "subtitle": "Выберите, как пользователи с этого домена могут присоединиться к организации."
+ },
+ "start": {
+ "headerTitle__danger": "Опасность",
+ "headerTitle__enrollment": "Варианты вступления"
+ },
+ "subtitle": "Домен {{domain}} теперь подтвержден. Продолжайте, выбрав режим вступления.",
+ "title": "Обновить {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Введите код подтверждения, отправленный на ваш адрес электронной почты",
+ "formTitle": "Код подтверждения",
+ "resendButton": "Не получили код? Отправить еще раз",
+ "subtitle": "Домен {{domainName}} необходимо подтвердить по электронной почте.",
+ "subtitleVerificationCodeScreen": "На {{emailAddress}} был отправлен код подтверждения. Введите код для продолжения.",
+ "title": "Подтвердить домен"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Создать организацию",
+ "action__invitationAccept": "Присоединиться",
+ "action__manageOrganization": "Управление",
+ "action__suggestionsAccept": "Запросить присоединение",
+ "notSelected": "Организация не выбрана",
+ "personalWorkspace": "Личный аккаунт",
+ "suggestionsAcceptedLabel": "Ожидает подтверждения"
+ },
+ "paginationButton__next": "Далее",
+ "paginationButton__previous": "Назад",
+ "paginationRowText__displaying": "Отображение",
+ "paginationRowText__of": "из",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Добавить аккаунт",
+ "action__signOutAll": "Выйти из всех аккаунтов",
+ "subtitle": "Выберите аккаунт, с которым вы хотите продолжить.",
+ "title": "Выберите аккаунт"
+ },
+ "alternativeMethods": {
+ "actionLink": "Получить помощь",
+ "actionText": "Нет ни одного из них?",
+ "blockButton__backupCode": "Использовать резервный код",
+ "blockButton__emailCode": "Отправить код на почту {{identifier}}",
+ "blockButton__emailLink": "Отправить ссылку на почту {{identifier}}",
+ "blockButton__passkey": "Войти с помощью вашего парольного ключа",
+ "blockButton__password": "Войти с помощью вашего пароля",
+ "blockButton__phoneCode": "Отправить SMS-код на {{identifier}}",
+ "blockButton__totp": "Использовать приложение аутентификации",
+ "getHelp": {
+ "blockButton__emailSupport": "Поддержка по почте",
+ "content": "Если у вас возникли проблемы с входом в ваш аккаунт, напишите нам, и мы постараемся восстановить доступ как можно скорее.",
+ "title": "Получить помощь"
+ },
+ "subtitle": "Столкнулись с проблемами? Вы можете использовать любой из этих методов для входа.",
+ "title": "Используйте другой метод"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Ваш резервный код - тот, который вы получили при настройке двухэтапной аутентификации.",
+ "title": "Введите резервный код"
+ },
+ "emailCode": {
+ "formTitle": "Код подтверждения",
+ "resendButton": "Не получили код? Отправить еще раз",
+ "subtitle": "для продолжения в {{applicationName}}",
+ "title": "Проверьте свою почту"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Вернитесь на исходную вкладку, чтобы продолжить.",
+ "title": "Срок действия этой ссылки подтверждения истек"
+ },
+ "failed": {
+ "subtitle": "Вернитесь на исходную вкладку, чтобы продолжить.",
+ "title": "Эта ссылка подтверждения недействительна"
+ },
+ "formSubtitle": "Используйте ссылку подтверждения, отправленную на вашу почту",
+ "formTitle": "Ссылка подтверждения",
+ "loading": {
+ "subtitle": "Вы будете перенаправлены в ближайшее время",
+ "title": "Вход в систему..."
+ },
+ "resendButton": "Не получили ссылку? Отправить еще раз",
+ "subtitle": "для продолжения в {{applicationName}}",
+ "title": "Проверьте свою почту",
+ "unusedTab": {
+ "title": "Вы можете закрыть эту вкладку"
+ },
+ "verified": {
+ "subtitle": "Вы будете перенаправлены в ближайшее время",
+ "title": "Успешный вход в систему"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Вернитесь на исходную вкладку, чтобы продолжить",
+ "subtitleNewTab": "Вернитесь на вновь открытую вкладку, чтобы продолжить",
+ "titleNewTab": "Вход выполнен на другой вкладке"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Код сброса пароля",
+ "resendButton": "Не получили код? Отправить еще раз",
+ "subtitle": "для сброса вашего пароля",
+ "subtitle_email": "Сначала введите код, отправленный на ваш адрес электронной почты",
+ "subtitle_phone": "Сначала введите код, отправленный на ваш телефон",
+ "title": "Сброс пароля"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Сбросить пароль",
+ "label__alternativeMethods": "Или войдите другим способом",
+ "title": "Забыли пароль?"
+ },
+ "noAvailableMethods": {
+ "message": "Невозможно продолжить вход. Нет доступных факторов аутентификации.",
+ "subtitle": "Произошла ошибка",
+ "title": "Невозможно войти"
+ },
+ "passkey": {
+ "subtitle": "Использование вашего парольного ключа подтверждает, что это вы. Ваше устройство может запросить ваш отпечаток пальца, лицо или блокировку экрана.",
+ "title": "Используйте ваш парольный ключ"
+ },
+ "password": {
+ "actionLink": "Использовать другой метод",
+ "subtitle": "Введите пароль, связанный с вашим аккаунтом",
+ "title": "Введите ваш пароль"
+ },
+ "passwordPwned": {
+ "title": "Пароль скомпрометирован"
+ },
+ "phoneCode": {
+ "formTitle": "Код подтверждения",
+ "resendButton": "Не получили код? Отправить еще раз",
+ "subtitle": "для продолжения в {{applicationName}}",
+ "title": "Проверьте свой телефон"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Код подтверждения",
+ "resendButton": "Не получили код? Отправить еще раз",
+ "subtitle": "Для продолжения введите код подтверждения, отправленный на ваш телефон",
+ "title": "Проверьте свой телефон"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Сбросить пароль",
+ "requiredMessage": "По соображениям безопасности требуется сбросить ваш пароль.",
+ "successMessage": "Ваш пароль успешно изменен. Выполняется вход, подождите немного.",
+ "title": "Установите новый пароль"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "Мы должны проверить вашу личность перед сбросом пароля."
+ },
+ "start": {
+ "actionLink": "Зарегистрироваться",
+ "actionLink__use_email": "Использовать почту",
+ "actionLink__use_email_username": "Использовать почту или имя пользователя",
+ "actionLink__use_passkey": "Использовать парольный ключ",
+ "actionLink__use_phone": "Использовать телефон",
+ "actionLink__use_username": "Использовать имя пользователя",
+ "actionText": "Нет учетной записи?",
+ "subtitle": "Добро пожаловать! Пожалуйста, заполните данные, чтобы начать.",
+ "title": "Создайте свою учетную запись"
+ },
+ "totpMfa": {
+ "formTitle": "Код подтверждения",
+ "subtitle": "Для продолжения введите код подтверждения, сгенерированный вашим приложением аутентификации",
+ "title": "Двухэтапная проверка"
+ }
+ },
+ "signInEnterPasswordTitle": "Введите ваш пароль",
+ "signUp": {
+ "continue": {
+ "actionLink": "Войти",
+ "actionText": "Уже есть аккаунт?",
+ "subtitle": "Пожалуйста, заполните оставшиеся данные, чтобы продолжить.",
+ "title": "Заполните недостающие поля"
+ },
+ "emailCode": {
+ "formSubtitle": "Введите код подтверждения, отправленный на ваш адрес электронной почты",
+ "formTitle": "Код подтверждения",
+ "resendButton": "Не получили код? Отправить еще раз",
+ "subtitle": "Введите код подтверждения, отправленный на ваш адрес электронной почты",
+ "title": "Подтвердите вашу почту"
+ },
+ "emailLink": {
+ "formSubtitle": "Используйте ссылку подтверждения, отправленную на ваш адрес электронной почты",
+ "formTitle": "Ссылка подтверждения",
+ "loading": {
+ "title": "Регистрация..."
+ },
+ "resendButton": "Не получили ссылку? Отправить еще раз",
+ "subtitle": "для продолжения в {{applicationName}}",
+ "title": "Подтвердите вашу почту",
+ "verified": {
+ "title": "Успешная регистрация"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Вернитесь на вновь открытую вкладку, чтобы продолжить",
+ "subtitleNewTab": "Вернитесь на предыдущую вкладку, чтобы продолжить",
+ "title": "Электронная почта успешно подтверждена"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Введите код подтверждения, отправленный на ваш номер телефона",
+ "formTitle": "Код подтверждения",
+ "resendButton": "Не получили код? Отправить еще раз",
+ "subtitle": "Введите код подтверждения, отправленный на ваш номер телефона",
+ "title": "Подтвердите ваш телефон"
+ },
+ "start": {
+ "actionLink": "Войти",
+ "actionText": "Уже есть аккаунт?",
+ "subtitle": "Добро пожаловать! Пожалуйста, заполните данные, чтобы начать.",
+ "title": "Создайте свою учетную запись"
+ }
+ },
+ "socialButtonsBlockButton": "Продолжить с {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "Регистрация не удалась из-за неудачной проверки безопасности. Пожалуйста, обновите страницу и попробуйте снова или обратитесь в службу поддержки для получения дополнительной помощи.",
+ "captcha_unavailable": "Регистрация не удалась из-за неудачной проверки на ботов. Пожалуйста, обновите страницу и попробуйте снова или обратитесь в службу поддержки для получения дополнительной помощи.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Этот адрес электронной почты занят. Пожалуйста, попробуйте другой.",
+ "form_identifier_exists__phone_number": "Этот номер телефона занят. Пожалуйста, попробуйте другой.",
+ "form_identifier_exists__username": "Это имя пользователя занято. Пожалуйста, попробуйте другое.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "Адрес электронной почты должен быть действительным адресом электронной почты.",
+ "form_param_format_invalid__phone_number": "Номер телефона должен быть в действительном международном формате.",
+ "form_param_max_length_exceeded__first_name": "Имя не должно превышать 256 символов.",
+ "form_param_max_length_exceeded__last_name": "Фамилия не должна превышать 256 символов.",
+ "form_param_max_length_exceeded__name": "Имя не должно превышать 256 символов.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Ваш пароль недостаточно надежен.",
+ "form_password_pwned": "Этот пароль был обнаружен в результате утечки данных и не может быть использован. Пожалуйста, попробуйте другой пароль.",
+ "form_password_pwned__sign_in": "Этот пароль был обнаружен в результате утечки данных и не может быть использован. Пожалуйста, сбросьте свой пароль.",
+ "form_password_size_in_bytes_exceeded": "Ваш пароль превысил максимально допустимое количество байт. Пожалуйста, сократите его или удалите некоторые специальные символы.",
+ "form_password_validation_failed": "Неверный пароль.",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "Вы не можете удалить свою последнюю идентификацию.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Ключ доступа уже зарегистрирован на этом устройстве.",
+ "passkey_not_supported": "Ключи доступа не поддерживаются на этом устройстве.",
+ "passkey_pa_not_supported": "Для регистрации требуется аутентификатор платформы, но устройство его не поддерживает.",
+ "passkey_registration_cancelled": "Регистрация ключа доступа была отменена или истекло время ожидания.",
+ "passkey_retrieval_cancelled": "Проверка ключа доступа была отменена или истекло время ожидания.",
+ "passwordComplexity": {
+ "maximumLength": "менее {{length}} символов",
+ "minimumLength": "{{length}} или более символов",
+ "requireLowercase": "строчную букву",
+ "requireNumbers": "цифру",
+ "requireSpecialCharacter": "специальный символ",
+ "requireUppercase": "заглавную букву",
+ "sentencePrefix": "Ваш пароль должен содержать"
+ },
+ "phone_number_exists": "Этот номер телефона занят. Пожалуйста, попробуйте другой.",
+ "zxcvbn": {
+ "couldBeStronger": "Ваш пароль работает, но может быть надежнее. Попробуйте добавить больше символов.",
+ "goodPassword": "Ваш пароль соответствует всем необходимым требованиям.",
+ "notEnough": "Ваш пароль недостаточно надежен.",
+ "suggestions": {
+ "allUppercase": "Сделайте заглавными некоторые, но не все буквы.",
+ "anotherWord": "Добавьте больше слов, которые редко используются.",
+ "associatedYears": "Избегайте лет, которые связаны с вами.",
+ "capitalization": "Используйте заглавные буквы не только в начале слова.",
+ "dates": "Избегайте дат и лет, связанных с вами.",
+ "l33t": "Избегайте предсказуемых замен букв, например, '@' вместо 'a'.",
+ "longerKeyboardPattern": "Используйте более длинные шаблоны клавиатуры и меняйте направление набора несколько раз.",
+ "noNeed": "Вы можете создавать надежные пароли без использования символов, цифр или заглавных букв.",
+ "pwned": "Если вы используете этот пароль в другом месте, вам следует его изменить.",
+ "recentYears": "Избегайте недавних лет.",
+ "repeated": "Избегайте повторяющихся слов и символов.",
+ "reverseWords": "Избегайте перевернутых написаний обычных слов.",
+ "sequences": "Избегайте обычных последовательностей символов.",
+ "useWords": "Используйте несколько слов, но избегайте общих фраз."
+ },
+ "warnings": {
+ "common": "Это часто используемый пароль.",
+ "commonNames": "Общие имена и фамилии легко угадать.",
+ "dates": "Даты легко угадать.",
+ "extendedRepeat": "Повторяющиеся шаблоны символов, например, \"abcabcabc\", легко угадать.",
+ "keyPattern": "Короткие шаблоны клавиатуры легко угадать.",
+ "namesByThemselves": "Одиночные имена или фамилии легко угадать.",
+ "pwned": "Ваш пароль был обнаружен в результате утечки данных в Интернете.",
+ "recentYears": "Недавние годы легко угадать.",
+ "sequences": "Общие последовательности символов, например, \"abc\", легко угадать.",
+ "similarToCommon": "Этот пароль похож на часто используемый.",
+ "simpleRepeat": "Повторяющиеся символы, например, \"aaa\", легко угадать.",
+ "straightRow": "Прямые ряды клавиш на вашей клавиатуре легко угадать.",
+ "topHundred": "Это часто используемый пароль.",
+ "topTen": "Это очень часто используемый пароль.",
+ "userInputs": "Не должно быть никаких персональных или связанных со страницей данных.",
+ "wordByItself": "Одиночные слова легко угадать."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Добавить аккаунт",
+ "action__manageAccount": "Управление аккаунтом",
+ "action__signOut": "Выйти",
+ "action__signOutAll": "Выйти из всех аккаунтов"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Скопировано!",
+ "actionLabel__copy": "Скопировать все",
+ "actionLabel__download": "Скачать .txt",
+ "actionLabel__print": "Распечатать",
+ "infoText1": "Резервные коды будут включены для этой учетной записи.",
+ "infoText2": "Храните резервные коды в тайне и надежно. Вы можете восстановить резервные коды, если подозреваете их компрометацию.",
+ "subtitle__codelist": "Храните их надежно и держите в секрете.",
+ "successMessage": "Резервные коды теперь включены. Вы можете использовать один из них для входа в свою учетную запись, если потеряете доступ к своему аутентификационному устройству. Каждый код можно использовать только один раз.",
+ "successSubtitle": "Вы можете использовать один из них для входа в свою учетную запись, если потеряете доступ к своему аутентификационному устройству.",
+ "title": "Добавить проверку резервного кода",
+ "title__codelist": "Резервные коды"
+ },
+ "connectedAccountPage": {
+ "formHint": "Выберите провайдера для подключения вашей учетной записи.",
+ "formHint__noAccounts": "Нет доступных внешних провайдеров учетных записей.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} будет удален из этой учетной записи.",
+ "messageLine2": "Вы больше не сможете использовать эту подключенную учетную запись, и любые зависимые функции перестанут работать.",
+ "successMessage": "{{connectedAccount}} был удален из вашей учетной записи.",
+ "title": "Удалить подключенную учетную запись"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "Провайдер был добавлен в вашу учетную запись.",
+ "title": "Добавить подключенную учетную запись"
+ },
+ "deletePage": {
+ "actionDescription": "Введите \"Удалить учетную запись\" ниже, чтобы продолжить.",
+ "confirm": "Удалить учетную запись",
+ "messageLine1": "Вы уверены, что хотите удалить свою учетную запись?",
+ "messageLine2": "Это действие является постоянным и необратимым.",
+ "title": "Удалить учетную запись"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "На этот адрес электронной почты будет отправлено письмо с кодом подтверждения.",
+ "formSubtitle": "Введите код подтверждения, отправленный на {{identifier}}",
+ "formTitle": "Код подтверждения",
+ "resendButton": "Не получили код? Отправить еще раз",
+ "successMessage": "Электронная почта {{identifier}} была добавлена в вашу учетную запись."
+ },
+ "emailLink": {
+ "formHint": "На этот адрес электронной почты будет отправлена ссылка для подтверждения.",
+ "formSubtitle": "Щелкните по ссылке подтверждения в письме, отправленном на {{identifier}}",
+ "formTitle": "Ссылка для подтверждения",
+ "resendButton": "Не получили ссылку? Отправить еще раз",
+ "successMessage": "Электронная почта {{identifier}} была добавлена в вашу учетную запись."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} будет удален из этой учетной записи.",
+ "messageLine2": "Вы больше не сможете входить, используя этот адрес электронной почты.",
+ "successMessage": "{{emailAddress}} был удален из вашей учетной записи.",
+ "title": "Удалить адрес электронной почты"
+ },
+ "title": "Добавить адрес электронной почты",
+ "verifyTitle": "Подтвердить адрес электронной почты"
+ },
+ "formButtonPrimary__add": "Добавить",
+ "formButtonPrimary__continue": "Продолжить",
+ "formButtonPrimary__finish": "Завершить",
+ "formButtonPrimary__remove": "Удалить",
+ "formButtonPrimary__save": "Сохранить",
+ "formButtonReset": "Отмена",
+ "mfaPage": {
+ "formHint": "Выберите метод для добавления.",
+ "title": "Добавить двухэтапную проверку"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Использовать существующий номер",
+ "primaryButton__addPhoneNumber": "Добавить номер телефона",
+ "removeResource": {
+ "messageLine1": "{{identifier}} больше не будет получать коды подтверждения при входе.",
+ "messageLine2": "Ваша учетная запись может быть менее защищенной. Вы уверены, что хотите продолжить?",
+ "successMessage": "Двухэтапная проверка по SMS-коду была удалена для {{mfaPhoneCode}}",
+ "title": "Удалить двухэтапную проверку"
+ },
+ "subtitle__availablePhoneNumbers": "Выберите существующий номер телефона для регистрации двухэтапной проверки по SMS-коду или добавьте новый.",
+ "subtitle__unavailablePhoneNumbers": "Нет доступных номеров телефонов для регистрации двухэтапной проверки по SMS-коду, добавьте новый.",
+ "successMessage1": "При входе в систему вам потребуется ввести код подтверждения, отправленный на этот номер телефона, как дополнительный шаг.",
+ "successMessage2": "Сохраните эти резервные коды и храните их в надежном месте. Если вы потеряете доступ к своему аутентификационному устройству, вы сможете использовать резервные коды для входа.",
+ "successTitle": "Включена проверка по SMS-коду",
+ "title": "Добавить проверку по SMS-коду"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "Вместо этого отсканируйте QR-код",
+ "buttonUnableToScan__nonPrimary": "Не удается отсканировать QR-код?",
+ "infoText__ableToScan": "Настройте новый метод входа в свое приложение аутентификации и отсканируйте следующий QR-код, чтобы связать его с вашей учетной записью.",
+ "infoText__unableToScan": "Настройте новый метод входа в свое приложение аутентификации и введите предоставленный ключ ниже.",
+ "inputLabel__unableToScan1": "Убедитесь, что включены одноразовые пароли на основе времени, затем завершите привязку вашей учетной записи.",
+ "inputLabel__unableToScan2": "Кроме того, если ваш аутентификатор поддерживает URI TOTP, вы также можете скопировать полный URI."
+ },
+ "removeResource": {
+ "messageLine1": "При входе в систему больше не потребуются коды подтверждения от этого аутентификатора.",
+ "messageLine2": "Ваша учетная запись может быть менее защищенной. Вы уверены, что хотите продолжить?",
+ "successMessage": "Двухэтапная проверка через приложение аутентификации была удалена.",
+ "title": "Удалить двухэтапную проверку"
+ },
+ "successMessage": "Двухэтапная проверка теперь включена. При входе в систему вам потребуется ввести код подтверждения от этого аутентификатора как дополнительный шаг.",
+ "title": "Добавить приложение аутентификации",
+ "verifySubtitle": "Введите код подтверждения, сгенерированный вашим аутентификатором",
+ "verifyTitle": "Код подтверждения"
+ },
+ "mobileButton__menu": "Меню",
+ "navbar": {
+ "account": "Профиль",
+ "description": "Управление информацией вашей учетной записи.",
+ "security": "Безопасность",
+ "title": "Учетная запись"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} будет удален из этой учетной записи.",
+ "title": "Удалить парольный ключ"
+ },
+ "subtitle__rename": "Вы можете изменить имя парольного ключа, чтобы его было легче найти.",
+ "title__rename": "Переименовать парольный ключ"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "Рекомендуется выйти из всех других устройств, которые могли использовать ваш старый пароль.",
+ "readonly": "Ваш пароль в настоящее время нельзя изменить, потому что вы можете войти только через корпоративное подключение.",
+ "successMessage__set": "Ваш пароль установлен.",
+ "successMessage__signOutOfOtherSessions": "Все другие устройства были отключены.",
+ "successMessage__update": "Ваш пароль обновлен.",
+ "title__set": "Установить пароль",
+ "title__update": "Обновить пароль"
+ },
+ "phoneNumberPage": {
+ "infoText": "На этот номер телефона будет отправлено текстовое сообщение с кодом подтверждения. Могут применяться тарифы за сообщения и данные.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} будет удален из этой учетной записи.",
+ "messageLine2": "Вы больше не сможете войти, используя этот номер телефона.",
+ "successMessage": "{{phoneNumber}} был удален из вашей учетной записи.",
+ "title": "Удалить номер телефона"
+ },
+ "successMessage": "{{identifier}} был добавлен в вашу учетную запись.",
+ "title": "Добавить номер телефона",
+ "verifySubtitle": "Введите код подтверждения, отправленный на {{identifier}}",
+ "verifyTitle": "Подтвердить номер телефона"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Рекомендуемый размер 1:1, до 10 МБ.",
+ "imageFormDestructiveActionSubtitle": "Удалить",
+ "imageFormSubtitle": "Загрузить",
+ "imageFormTitle": "Изображение профиля",
+ "readonly": "Информация о вашем профиле была предоставлена через корпоративное подключение и не может быть изменена.",
+ "successMessage": "Ваш профиль обновлен.",
+ "title": "Обновить профиль"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Выйти из устройства",
+ "title": "Активные устройства"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Попробуйте снова",
+ "actionLabel__reauthorize": "Авторизовать сейчас",
+ "destructiveActionTitle": "Удалить",
+ "primaryButton": "Подключить аккаунт",
+ "subtitle__reauthorize": "Требуемые разрешения были обновлены, и вы можете испытывать ограниченную функциональность. Пожалуйста, повторно авторизуйте это приложение, чтобы избежать проблем",
+ "title": "Подключенные аккаунты"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Удалить аккаунт",
+ "title": "Удалить аккаунт"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "Удалить электронную почту",
+ "detailsAction__nonPrimary": "Установить как основную",
+ "detailsAction__primary": "Завершить верификацию",
+ "detailsAction__unverified": "Проверить",
+ "primaryButton": "Добавить адрес электронной почты",
+ "title": "Адреса электронной почты"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Корпоративные аккаунты"
+ },
+ "headerTitle__account": "Детали профиля",
+ "headerTitle__security": "Безопасность",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Сгенерировать заново",
+ "headerTitle": "Резервные коды",
+ "subtitle__regenerate": "Получите новый набор безопасных резервных кодов. Предыдущие резервные коды будут удалены и не могут быть использованы.",
+ "title__regenerate": "Сгенерировать резервные коды"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Установить по умолчанию",
+ "destructiveActionLabel": "Удалить"
+ },
+ "primaryButton": "Добавить двухэтапную верификацию",
+ "title": "Двухэтапная верификация",
+ "totp": {
+ "destructiveActionTitle": "Удалить",
+ "headerTitle": "Приложение аутентификатора"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Удалить",
+ "menuAction__rename": "Переименовать",
+ "title": "Парольные ключи"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Установить пароль",
+ "primaryButton__updatePassword": "Обновить пароль",
+ "title": "Пароль"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Удалить номер телефона",
+ "detailsAction__nonPrimary": "Установить как основной",
+ "detailsAction__primary": "Завершить верификацию",
+ "detailsAction__unverified": "Проверить номер телефона",
+ "primaryButton": "Добавить номер телефона",
+ "title": "Номера телефонов"
+ },
+ "profileSection": {
+ "primaryButton": "Обновить профиль",
+ "title": "Профиль"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Установить имя пользователя",
+ "primaryButton__updateUsername": "Обновить имя пользователя",
+ "title": "Имя пользователя"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Удалить кошелек",
+ "primaryButton": "Кошельки Web3",
+ "title": "Кошельки Web3"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Ваше имя пользователя обновлено.",
+ "title__set": "Установить имя пользователя",
+ "title__update": "Обновить имя пользователя"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} будет удален из этой учетной записи.",
+ "messageLine2": "Вы больше не сможете войти, используя этот кошелек web3.",
+ "successMessage": "{{web3Wallet}} был удален из вашей учетной записи.",
+ "title": "Удалить кошелек web3"
+ },
+ "subtitle__availableWallets": "Выберите кошелек web3 для подключения к вашей учетной записи.",
+ "subtitle__unavailableWallets": "Нет доступных кошельков web3.",
+ "successMessage": "Кошелек был добавлен в вашу учетную запись.",
+ "title": "Добавить кошелек web3"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/common.json b/DigitalHumanWeb/locales/ru-RU/common.json
new file mode 100644
index 0000000..dfb77b6
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "О нас",
+ "advanceSettings": "Расширенные настройки",
+ "alert": {
+ "cloud": {
+ "action": "Бесплатный опыт",
+ "desc": "Мы предоставляем {{credit}} бесплатных вычислительных баллов для всех зарегистрированных пользователей: сложная настройка не требуется, всё готово к использованию сразу. Поддерживаются бесконечная история диалогов и глобальная синхронизация в облаке, а также множество продвинутых функций, которые ждут вашего исследования.",
+ "descOnMobile": "Мы предоставляем {{credit}} бесплатных вычислительных баллов для всех зарегистрированных пользователей: сложная настройка не требуется, всё готово к использованию сразу.",
+ "title": "Добро пожаловать в {{name}}"
+ }
+ },
+ "appInitializing": "Приложение запускается...",
+ "autoGenerate": "Автозаполнение",
+ "autoGenerateTooltip": "Автоматическое дополнение описания агента на основе подсказок",
+ "autoGenerateTooltipDisabled": "Пожалуйста, введите подсказку перед использованием функции автозаполнения",
+ "back": "Назад",
+ "batchDelete": "Пакетное удаление",
+ "blog": "Блог о продуктах",
+ "cancel": "Отмена",
+ "changelog": "История изменений",
+ "close": "Закрыть",
+ "contact": "Свяжитесь с нами",
+ "copy": "Копировать",
+ "copyFail": "Не удалось скопировать",
+ "copySuccess": "Успешно скопировано",
+ "dataStatistics": {
+ "messages": "Сообщения",
+ "sessions": "Сессии",
+ "today": "Сегодня",
+ "topics": "Темы"
+ },
+ "defaultAgent": "Пользовательский агент",
+ "defaultSession": "Пользовательский агент",
+ "delete": "Удалить",
+ "document": "Документация",
+ "download": "Скачать",
+ "duplicate": "Создать копию",
+ "edit": "Редактировать",
+ "export": "Экспорт настроек",
+ "exportType": {
+ "agent": "Экспорт настроек агента",
+ "agentWithMessage": "Экспорт настроек агента и сообщений",
+ "all": "Экспорт глобальных настроек и всех данных агентов",
+ "allAgent": "Экспорт всех настроек агентов",
+ "allAgentWithMessage": "Экспорт всех настроек агентов и сообщений",
+ "globalSetting": "Экспорт глобальных настроек"
+ },
+ "feedback": "Обратная связь и предложения",
+ "follow": "Подпишитесь на нас на {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Поделитесь своими ценными отзывами",
+ "star": "Поставьте звезду на GitHub"
+ },
+ "and": "и",
+ "feedback": {
+ "action": "Поделиться отзывом",
+ "desc": "Каждая ваша идея и рекомендация очень важны для нас, и мы с нетерпением ждем вашего мнения! Не стесняйтесь связаться с нами, чтобы предоставить отзыв о функциях продукта и опыте использования, чтобы помочь нам сделать LobeChat еще лучше.",
+ "title": "Поделитесь своими ценными отзывами на GitHub"
+ },
+ "later": "позже",
+ "star": {
+ "action": "Поставьте звезду",
+ "desc": "Если вам нравится наш продукт и вы хотите нас поддержать, не могли бы вы поставить нам звезду на GitHub? Это маленькое действие имеет большое значение для нас и мотивирует нас продолжать предоставлять вам лучший опыт использования.",
+ "title": "Поставьте звезду на GitHub для нас"
+ },
+ "title": "Нравится наш продукт?"
+ },
+ "fullscreen": "Полноэкранный режим",
+ "historyRange": "История",
+ "import": "Импорт настроек",
+ "importModal": {
+ "error": {
+ "desc": "Извините, произошла ошибка в процессе импорта данных. Попробуйте импортировать заново или <1>сообщите о проблеме</1>, и мы постараемся помочь вам как можно скорее.",
+ "title": "Ошибка импорта данных"
+ },
+ "finish": {
+ "onlySettings": "Настройки системы успешно импортированы",
+ "start": "Начать использование",
+ "subTitle": "Данные успешно импортированы за {{duration}} секунд. Детали импорта:",
+ "title": "Импорт данных завершен"
+ },
+ "loading": "Идет импорт данных, пожалуйста, подождите...",
+ "preparing": "Подготовка модуля импорта данных...",
+ "result": {
+ "added": "Успешно импортировано",
+ "errors": "Ошибка импорта",
+ "messages": "Сообщения",
+ "sessionGroups": "Группы сессий",
+ "sessions": "Агенты",
+ "skips": "Пропущено дубликатов",
+ "topics": "Темы",
+ "type": "Тип данных"
+ },
+ "title": "Импорт данных",
+ "uploading": {
+ "desc": "Файл довольно большой, поэтому загрузка может занять некоторое время...",
+ "restTime": "Оставшееся время",
+ "speed": "Скорость загрузки"
+ }
+ },
+ "information": "Сообщество и информация",
+ "installPWA": "Установить веб-приложение",
+ "lang": {
+ "ar": "Арабский",
+ "bg-BG": "Болгарский",
+ "bn": "Бенгальский",
+ "cs-CZ": "Чешский",
+ "da-DK": "Датский",
+ "de-DE": "Немецкий",
+ "el-GR": "Греческий",
+ "en": "Английский",
+ "en-US": "Английский",
+ "es-ES": "Испанский",
+ "fi-FI": "Финский",
+ "fr-FR": "Французский",
+ "hi-IN": "Хинди",
+ "hu-HU": "Венгерский",
+ "id-ID": "Индонезийский",
+ "it-IT": "Итальянский",
+ "ja-JP": "Японский",
+ "ko-KR": "Корейский",
+ "nl-NL": "Голландский",
+ "no-NO": "Норвежский",
+ "pl-PL": "Польский",
+ "pt-BR": "Португальский",
+ "pt-PT": "Португальский",
+ "ro-RO": "Румынский",
+ "ru-RU": "Русский",
+ "sk-SK": "Словацкий",
+ "sr-RS": "Сербский",
+ "sv-SE": "Шведский",
+ "th-TH": "Тайский",
+ "tr-TR": "Турецкий",
+ "uk-UA": "Украинский",
+ "vi-VN": "Вьетнамский",
+ "zh": "Китайский",
+ "zh-CN": "Китайский",
+ "zh-TW": "Традиционный китайский"
+ },
+ "layoutInitializing": "Инициализация макета...",
+ "legal": "Юридическое уведомление",
+ "loading": "Загрузка...",
+ "mail": {
+ "business": "Деловое сотрудничество",
+ "support": "Поддержка по электронной почте"
+ },
+ "oauth": "Вход через единую учетную запись (SSO)",
+ "officialSite": "Официальный сайт",
+ "ok": "ОК",
+ "password": "Пароль",
+ "pin": "Закрепить",
+ "pinOff": "Открепить",
+ "privacy": "Политика конфиденциальности",
+ "regenerate": "Перегенерировать",
+ "rename": "Переименовать",
+ "reset": "Сброс",
+ "retry": "Повторить",
+ "send": "Отправить",
+ "setting": "Настройка",
+ "share": "Поделиться",
+ "stop": "Остановить",
+ "sync": {
+ "actions": {
+ "settings": "Настройки синхронизации",
+ "sync": "Синхронизировать сейчас"
+ },
+ "awareness": {
+ "current": "Текущее устройство"
+ },
+ "channel": "Канал",
+ "disabled": {
+ "actions": {
+ "enable": "Включить облачную синхронизацию",
+ "settings": "Настройки синхронизации"
+ },
+ "desc": "Данные текущей сессии хранятся только в этом браузере. Если вам нужно синхронизировать данные между несколькими устройствами, настройте и включите облачную синхронизацию.",
+ "title": "Синхронизация данных не включена"
+ },
+ "enabled": {
+ "title": "Синхронизация данных"
+ },
+ "status": {
+ "connecting": "Подключение",
+ "disabled": "Синхронизация не включена",
+ "ready": "Готово",
+ "synced": "Синхронизировано",
+ "syncing": "Синхронизация",
+ "unconnected": "Соединение не установлено"
+ },
+ "title": "Статус синхронизации",
+ "unconnected": {
+ "tip": "Не удалось подключиться к сигнальному серверу. Невозможно установить канал для прямого обмена сообщениями. Пожалуйста, проверьте сеть и повторите попытку"
+ }
+ },
+ "tab": {
+ "chat": "Чат",
+ "discover": "Открытия",
+ "files": "Файлы",
+ "me": "Я",
+ "setting": "Настройки"
+ },
+ "telemetry": {
+ "allow": "Разрешить",
+ "deny": "Отказать",
+ "desc": "Мы хотели бы анонимно собирать информацию о вашем использовании, чтобы помочь нам улучшить LobeChat и предоставить вам лучший опыт использования продукта. Вы можете отключить это в любое время в «Настройки» - «О программе».",
+ "learnMore": "Узнать больше",
+ "title": "Помогите улучшить LobeChat"
+ },
+ "temp": "Временный",
+ "terms": "Условия использования",
+ "updateAgent": "Обновить информацию об агенте",
+ "upgradeVersion": {
+ "action": "обновить",
+ "hasNew": "Доступно обновление",
+ "newVersion": "Доступна новая версия: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Анонимный пользователь",
+ "billing": "Управление счетами",
+ "cloud": "Опыт {{name}}",
+ "data": "Хранилище данных",
+ "defaultNickname": "Пользователь сообщества",
+ "discord": "Поддержка сообщества",
+ "docs": "Документация",
+ "email": "Поддержка по электронной почте",
+ "feedback": "Обратная связь и предложения",
+ "help": "Центр помощи",
+ "moveGuide": "Кнопка настроек перемещена сюда",
+ "plans": "Планы подписки",
+ "preview": "Предпросмотр",
+ "profile": "Управление аккаунтом",
+ "setting": "Настройки приложения",
+ "usages": "Статистика использования"
+ },
+ "version": "Версия"
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/components.json b/DigitalHumanWeb/locales/ru-RU/components.json
new file mode 100644
index 0000000..d3d9d0d
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Перетащите файлы сюда, поддерживается загрузка нескольких изображений.",
+ "dragFileDesc": "Перетащите изображения и файлы сюда, поддерживается загрузка нескольких изображений и файлов.",
+ "dragFileTitle": "Загрузить файл",
+ "dragTitle": "Загрузить изображение"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Добавить в базу знаний",
+ "addToOtherKnowledgeBase": "Добавить в другую базу знаний",
+ "batchChunking": "Пакетная разбивка",
+ "chunking": "Разбивка",
+ "chunkingTooltip": "Разделите файл на несколько текстовых блоков и векторизуйте их для семантического поиска и диалога с файлом",
+ "confirmDelete": "Вы собираетесь удалить этот файл. После удаления его нельзя будет восстановить. Пожалуйста, подтвердите ваше действие.",
+ "confirmDeleteMultiFiles": "Вы собираетесь удалить выбранные {{count}} файлов. После удаления их нельзя будет восстановить. Пожалуйста, подтвердите ваше действие.",
+ "confirmRemoveFromKnowledgeBase": "Вы собираетесь удалить выбранные {{count}} файлов из базы знаний. После удаления файлы все еще будут доступны во всех файлах. Пожалуйста, подтвердите ваше действие.",
+ "copyUrl": "Скопировать ссылку",
+ "copyUrlSuccess": "Адрес файла успешно скопирован",
+ "createChunkingTask": "Подготовка...",
+ "deleteSuccess": "Файл успешно удален",
+ "downloading": "Загрузка файла...",
+ "removeFromKnowledgeBase": "Удалить из базы знаний",
+ "removeFromKnowledgeBaseSuccess": "Файл успешно удален"
+ },
+ "bottom": "Вы достигли конца",
+ "config": {
+ "showFilesInKnowledgeBase": "Показать содержимое в базе знаний"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Загрузить файл",
+ "folder": "Загрузить папку",
+ "knowledgeBase": "Создать новую базу знаний"
+ },
+ "or": "или",
+ "title": "Перетащите файл или папку сюда"
+ },
+ "title": {
+ "createdAt": "Дата создания",
+ "size": "Размер",
+ "title": "Файл"
+ },
+ "total": {
+ "fileCount": "Всего {{count}} элементов",
+ "selectedCount": "Выбрано {{count}} элементов"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Текстовые блоки еще не полностью векторизованы, что приведет к недоступности функции семантического поиска. Для повышения качества поиска, пожалуйста, векторизуйте текстовые блоки.",
+ "error": "Ошибка векторизации",
+ "errorResult": "Ошибка векторизации, пожалуйста, проверьте и повторите попытку. Причина сбоя:",
+ "processing": "Текстовые блоки векторизуются, пожалуйста, подождите.",
+ "success": "Все текущие текстовые блоки успешно векторизованы."
+ },
+ "embeddings": "Векторизация",
+ "status": {
+ "error": "Ошибка разбивки",
+ "errorResult": "Ошибка разбивки, пожалуйста, проверьте и повторите попытку. Причина ошибки:",
+ "processing": "В процессе разбивки",
+ "processingTip": "Сервер разбивает текстовые блоки, закрытие страницы не повлияет на процесс разбивки."
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Назад"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Пользовательская модель по умолчанию поддерживает как вызов функций, так и распознавание изображений. Пожалуйста, проверьте доступность указанных возможностей в вашем случае",
+ "file": "Эта модель поддерживает загрузку и распознавание файлов",
+ "functionCall": "Эта модель поддерживает вызов функций",
+ "tokens": "Эта модель поддерживает до {{tokens}} токенов в одной сессии",
+ "vision": "Эта модель поддерживает распознавание изображений"
+ },
+ "removed": "Эта модель не находится в списке. Если вы ее отмените, она будет автоматически удалена"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "Нет активированных моделей. Пожалуйста, перейдите в настройки и включите модель",
+ "provider": "Поставщик"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/discover.json b/DigitalHumanWeb/locales/ru-RU/discover.json
new file mode 100644
index 0000000..1259a91
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Добавить помощника",
+ "addAgentAndConverse": "Добавить помощника и начать беседу",
+ "addAgentSuccess": "Успешно добавлено",
+ "conversation": {
+ "l1": "Привет, я **{{name}}**, вы можете задавать мне любые вопросы, и я постараюсь на них ответить ~",
+ "l2": "Вот мои возможности: ",
+ "l3": "Давайте начнем беседу!"
+ },
+ "description": "Описание помощника",
+ "detail": "Детали",
+ "list": "Список помощников",
+ "more": "Больше",
+ "plugins": "Интеграция плагинов",
+ "recentSubmits": "Недавние обновления",
+ "suggestions": "Рекомендуемые",
+ "systemRole": "Настройки помощника",
+ "try": "Попробовать"
+ },
+ "back": "Вернуться к открытиям",
+ "category": {
+ "assistant": {
+ "academic": "Академический",
+ "all": "Все",
+ "career": "Карьера",
+ "copywriting": "Копирайтинг",
+ "design": "Дизайн",
+ "education": "Образование",
+ "emotions": "Эмоции",
+ "entertainment": "Развлечения",
+ "games": "Игры",
+ "general": "Общее",
+ "life": "Жизнь",
+ "marketing": "Маркетинг",
+ "office": "Офис",
+ "programming": "Программирование",
+ "translation": "Перевод"
+ },
+ "plugin": {
+ "all": "Все",
+ "gaming-entertainment": "Игры и развлечения",
+ "life-style": "Стиль жизни",
+ "media-generate": "Генерация медиа",
+ "science-education": "Наука и образование",
+ "social": "Социальные медиа",
+ "stocks-finance": "Финансовые рынки",
+ "tools": "Полезные инструменты",
+ "web-search": "Веб-поиск"
+ }
+ },
+ "cleanFilter": "Очистить фильтр",
+ "create": "Создать",
+ "createGuide": {
+ "func1": {
+ "desc1": "В окне беседы перейдите в настройки, нажав на значок в правом верхнем углу, чтобы попасть на страницу настроек помощника;",
+ "desc2": "Нажмите кнопку 'Отправить на рынок помощников' в правом верхнем углу.",
+ "tag": "Метод 1",
+ "title": "Отправить через LobeChat"
+ },
+ "func2": {
+ "button": "Перейти в репозиторий помощников на Github",
+ "desc": "Если вы хотите добавить помощника в индекс, создайте запись с помощью agent-template.json или agent-template-full.json в каталоге plugins, напишите краткое описание и соответствующие теги, затем создайте запрос на слияние.",
+ "tag": "Метод 2",
+ "title": "Отправить через Github"
+ }
+ },
+ "dislike": "Не нравится",
+ "filter": "Фильтр",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Все авторы",
+ "followed": "Подписанные авторы",
+ "title": "Диапазон авторов"
+ },
+ "contentLength": "Минимальная длина контекста",
+ "maxToken": {
+ "title": "Установить максимальную длину (Token)",
+ "unlimited": "Без ограничений"
+ },
+ "other": {
+ "functionCall": "Поддержка вызова функций",
+ "title": "Другие",
+ "vision": "Поддержка визуального распознавания",
+ "withKnowledge": "С включенной базой знаний",
+ "withTool": "С включенным плагином"
+ },
+ "pricing": "Цена модели",
+ "timePeriod": {
+ "all": "Все время",
+ "day": "Последние 24 часа",
+ "month": "Последние 30 дней",
+ "title": "Период времени",
+ "week": "Последние 7 дней",
+ "year": "Последний год"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Рекомендуемые помощники",
+ "featuredModels": "Рекомендуемые модели",
+ "featuredProviders": "Рекомендуемые поставщики моделей",
+ "featuredTools": "Рекомендуемые плагины",
+ "more": "Узнать больше"
+ },
+ "like": "Нравится",
+ "models": {
+ "chat": "Начать беседу",
+ "contentLength": "Максимальная длина контекста",
+ "free": "Бесплатно",
+ "guide": "Руководство по настройке",
+ "list": "Список моделей",
+ "more": "Больше",
+ "parameterList": {
+ "defaultValue": "Значение по умолчанию",
+ "docs": "Посмотреть документацию",
+ "frequency_penalty": {
+ "desc": "Эта настройка регулирует частоту повторного использования определенных слов, уже появившихся во входных данных. Более высокие значения снижают вероятность такого повторения, в то время как отрицательные значения имеют противоположный эффект. Штраф за слова не увеличивается с увеличением частоты появления. Отрицательные значения будут поощрять повторное использование слов.",
+ "title": "Штраф за частоту"
+ },
+ "max_tokens": {
+ "desc": "Эта настройка определяет максимальную длину, которую модель может сгенерировать за один ответ. Установка более высокого значения позволяет модели генерировать более длинные ответы, в то время как более низкое значение ограничивает длину ответа, делая его более кратким. В зависимости от различных сценариев использования разумная настройка этого значения может помочь достичь ожидаемой длины и степени детализации ответа.",
+ "title": "Ограничение на один ответ"
+ },
+ "presence_penalty": {
+ "desc": "Эта настройка предназначена для контроля повторного использования слов в зависимости от их частоты появления во входных данных. Она пытается реже использовать те слова, которые встречаются чаще, пропорционально их частоте. Штраф за слова увеличивается с увеличением частоты появления. Отрицательные значения будут поощрять повторное использование слов.",
+ "title": "Свежесть темы"
+ },
+ "range": "Диапазон",
+ "temperature": {
+ "desc": "Эта настройка влияет на разнообразие ответов модели. Более низкие значения приводят к более предсказуемым и типичным ответам, в то время как более высокие значения поощряют более разнообразные и необычные ответы. Когда значение установлено на 0, модель всегда дает один и тот же ответ на данный ввод.",
+ "title": "Случайность"
+ },
+ "title": "Параметры модели",
+ "top_p": {
+ "desc": "Эта настройка ограничивает выбор модели до определенного процента наиболее вероятных слов: выбираются только те слова, которые достигают накопленной вероятности P. Более низкие значения делают ответы модели более предсказуемыми, в то время как значение по умолчанию позволяет модели выбирать из всего диапазона слов.",
+ "title": "Ядерная выборка"
+ },
+ "type": "Тип"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat поддерживает использование пользовательского API-ключа для этого поставщика.",
+ "input": "Цена ввода",
+ "inputTooltip": "Стоимость за миллион токенов",
+ "latency": "Задержка",
+ "latencyTooltip": "Среднее время ответа поставщика на первый токен",
+ "maxOutput": "Максимальная длина вывода",
+ "maxOutputTooltip": "Максимальное количество токенов, которые может сгенерировать эта конечная точка",
+ "officialTooltip": "Официальный сервис LobeHub",
+ "output": "Цена вывода",
+ "outputTooltip": "Стоимость за миллион токенов",
+ "streamCancellationTooltip": "Этот поставщик поддерживает функцию отмены потока.",
+ "throughput": "Пропускная способность",
+ "throughputTooltip": "Среднее количество токенов, передаваемых в секунду для потоковых запросов"
+ },
+ "suggestions": "Связанные модели",
+ "supportedProviders": "Поставщики, поддерживающие эту модель"
+ },
+ "plugins": {
+ "community": "Сообщество плагинов",
+ "install": "Установить плагин",
+ "installed": "Установлено",
+ "list": "Список плагинов",
+ "meta": {
+ "description": "Описание",
+ "parameter": "Параметр",
+ "title": "Параметры инструмента",
+ "type": "Тип"
+ },
+ "more": "Больше",
+ "official": "Официальные плагины",
+ "recentSubmits": "Недавние обновления",
+ "suggestions": "Рекомендуемые"
+ },
+ "providers": {
+ "config": "Конфигурация провайдера",
+ "list": "Список поставщиков моделей",
+ "modelCount": "{{count}} моделей",
+ "modelSite": "Документация модели",
+ "more": "Больше",
+ "officialSite": "Официальный сайт",
+ "showAllModels": "Показать все модели",
+ "suggestions": "Связанные провайдеры",
+ "supportedModels": "Поддерживаемые модели"
+ },
+ "search": {
+ "placeholder": "Поиск по названию или ключевым словам...",
+ "result": "{{count}} результатов поиска по {{keyword}}",
+ "searching": "Поиск..."
+ },
+ "sort": {
+ "mostLiked": "Наиболее понравившиеся",
+ "mostUsed": "Наиболее используемые",
+ "newest": "Сначала новые",
+ "oldest": "Сначала старые",
+ "recommended": "Рекомендуемые"
+ },
+ "tab": {
+ "assistants": "Ассистенты",
+ "home": "Главная",
+ "models": "Модели",
+ "plugins": "Плагины",
+ "providers": "Поставщики моделей"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/error.json b/DigitalHumanWeb/locales/ru-RU/error.json
new file mode 100644
index 0000000..0c8cb0b
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Продолжить разговор",
+ "desc": "{{greeting}}, рады снова быть к вашим услугам. Давайте продолжим нашу беседу",
+ "title": "Добро пожаловать обратно, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Вернуться на главную",
+ "desc": "Попробуйте позже или вернитесь в знакомый мир",
+ "retry": "Повторить попытку",
+ "title": "Произошла проблема на странице…"
+ },
+ "fetchError": "Ошибка запроса",
+ "fetchErrorDetail": "Подробности ошибки",
+ "notFound": {
+ "backHome": "Вернуться на главную",
+ "check": "Пожалуйста, проверьте, правильный ли ваш URL",
+ "desc": "Мы не можем найти страницу, которую вы ищете",
+ "title": "Заблудились в неизведанных местах?"
+ },
+ "pluginSettings": {
+ "desc": "Чтобы начать использовать этот плагин, выполните следующую конфигурацию",
+ "title": "Настройки плагина {{name}}"
+ },
+ "response": {
+ "400": "Извините, сервер не понимает ваш запрос. Убедитесь в правильности параметров запроса",
+ "401": "Извините, сервер отклонил ваш запрос, возможно, из-за недостаточного количества разрешений или недействительной аутентификации.",
+ "403": "Извините, сервер отклонил ваш запрос. У вас нет разрешения на доступ к этому содержимому",
+ "404": "Извините, сервер не может найти запрошенную вами страницу или ресурс. Убедитесь, что ваш URL-адрес указан правильно.",
+ "405": "К сожалению, сервер не поддерживает используемый вами метод запроса. Убедитесь, что ваш метод запроса правильный.",
+ "406": "Извините, сервер не может выполнить запрос в соответствии с характеристиками запрошенного контента",
+ "407": "Извините, для продолжения запроса необходима аутентификация прокси",
+ "408": "Извините, сервер превысил время ожидания при ожидании запроса. Пожалуйста, проверьте свое сетевое подключение и повторите попытку",
+ "409": "Извините, запрос не может быть обработан из-за конфликта, возможно, из-за несовместимости состояния ресурса и запроса",
+ "410": "Извините, запрашиваемый ресурс был удален навсегда и не может быть найден",
+ "411": "Извините, сервер не может обработать запрос без указания длины допустимого содержимого",
+ "412": "Извините, ваш запрос не соответствует условиям на стороне сервера и не может быть выполнен",
+ "413": "Извините, ваш запрос слишком велик, сервер не может его обработать",
+ "414": "Извините, ваш URI запроса слишком длинный, сервер не может его обработать",
+ "415": "Извините, сервер не может обработать запрошенный формат медиа",
+ "416": "Извините, сервер не может удовлетворить запрошенный диапазон",
+ "417": "Извините, сервер не может удовлетворить ваше ожидание",
+ "422": "Извините, ваш запрос имеет правильный формат, но из-за семантической ошибки не может быть обработан",
+ "423": "Извините, запрашиваемый ресурс заблокирован",
+ "424": "Извините, из-за предыдущей неудачной попытки запрос не может быть выполнен",
+ "426": "Извините, сервер требует обновления вашего клиента до более новой версии протокола",
+ "428": "Извините, сервер требует предварительных условий, ваш запрос должен содержать правильные условные заголовки",
+ "429": "Извините, ваш запрос слишком частый, и сервер немного устал. Повторите попытку позже.",
+ "431": "Извините, ваш заголовок запроса слишком велик, сервер не может его обработать",
+ "451": "Извините, по юридическим причинам сервер отказывается предоставить этот ресурс",
+ "500": "К сожалению, сервер, похоже, испытывает некоторые трудности и временно не может выполнить ваш запрос. Повторите попытку позже.",
+ "502": "К сожалению, сервер, похоже, потерян и временно не может предоставлять услуги. Повторите попытку позже.",
+ "503": "К сожалению, сервер в настоящее время не может обработать ваш запрос, возможно, из-за перегрузки или технического обслуживания. Повторите попытку позже.",
+ "504": "К сожалению, сервер не получил ответа от вышестоящего сервера. Повторите попытку позже.",
+ "AgentRuntimeError": "Ошибка среды выполнения языковой модели Lobe. Пожалуйста, проверьте предоставленную информацию и повторите попытку",
+ "FreePlanLimit": "Вы являетесь бесплатным пользователем и не можете использовать эту функцию. Пожалуйста, перейдите на платный план для продолжения использования.",
+ "InvalidAccessCode": "Неверный код доступа: введите правильный код доступа или добавьте пользовательский ключ API",
+ "InvalidBedrockCredentials": "Аутентификация Bedrock не прошла, пожалуйста, проверьте AccessKeyId/SecretAccessKey и повторите попытку",
+ "InvalidClerkUser": "Извините, вы еще не вошли в систему. Пожалуйста, войдите или зарегистрируйтесь, прежде чем продолжить",
+ "InvalidGithubToken": "Личный токен доступа Github некорректен или пуст, пожалуйста, проверьте личный токен доступа Github и повторите попытку",
+ "InvalidOllamaArgs": "Неверная конфигурация Ollama, пожалуйста, проверьте конфигурацию Ollama и повторите попытку",
+ "InvalidProviderAPIKey": "{{provider}} API ключ недействителен или отсутствует. Пожалуйста, проверьте ключ API {{provider}} и повторите попытку",
+ "LocationNotSupportError": "Извините, ваше текущее местоположение не поддерживает эту службу модели, возможно из-за ограничений региона или недоступности службы. Пожалуйста, убедитесь, что текущее местоположение поддерживает использование этой службы, или попробуйте использовать другую информацию о местоположении.",
+ "NoOpenAIAPIKey": "Ключ OpenAI API пуст, пожалуйста, добавьте свой собственный ключ OpenAI API",
+ "OllamaBizError": "Ошибка обращения к сервису Ollama, пожалуйста, проверьте следующую информацию или повторите попытку",
+ "OllamaServiceUnavailable": "Сервис Ollama недоступен. Пожалуйста, проверьте, работает ли Ollama правильно, и правильно ли настроена его конфигурация для кросс-доменных запросов",
+ "OpenAIBizError": "Ошибка обслуживания OpenAI. Пожалуйста, проверьте следующую информацию или повторите попытку",
+ "PluginApiNotFound": "К сожалению, API не существует в манифесте плагина. Пожалуйста, проверьте, соответствует ли ваш метод запроса API манифеста плагина",
+ "PluginApiParamsError": "К сожалению, проверка входных параметров для запроса плагина не удалась. Пожалуйста, проверьте, соответствуют ли входные параметры описанию API",
+ "PluginFailToTransformArguments": "Извините, не удалось преобразовать аргументы вызова плагина. Попробуйте заново сгенерировать сообщение помощника или повторите попытку с более мощной моделью ИИ, поддерживающей вызов инструментов.",
+ "PluginGatewayError": "Извините, возникла ошибка шлюза плагина. Пожалуйста, проверьте правильность конфигурации шлюза плагина.",
+ "PluginManifestInvalid": "К сожалению, проверка манифеста плагина не удалась. Пожалуйста, проверьте правильность формата манифеста",
+ "PluginManifestNotFound": "К сожалению, серверу не удалось найти файл манифеста плагина (manifest.json). Пожалуйста, проверьте правильность адреса файла манифеста плагина",
+ "PluginMarketIndexInvalid": "К сожалению, проверка индекса плагина не удалась. Пожалуйста, проверьте правильность формата индексного файла",
+ "PluginMarketIndexNotFound": "К сожалению, сервер не смог найти индекс плагина. Пожалуйста, проверьте правильность адреса индекса",
+ "PluginMetaInvalid": "К сожалению, проверка метаданных плагина не удалась. Пожалуйста, проверьте правильность формата метаданных плагина",
+ "PluginMetaNotFound": "К сожалению, плагин не найден в индексе. Пожалуйста, проверьте информацию о конфигурации плагина в индексе",
+ "PluginOpenApiInitError": "Извините, не удалось инициализировать клиент OpenAPI. Пожалуйста, проверьте правильность информации конфигурации OpenAPI.",
+ "PluginServerError": "Запрос сервера плагина возвратил ошибку. Проверьте файл манифеста плагина, конфигурацию плагина или реализацию сервера на основе информации об ошибке ниже",
+ "PluginSettingsInvalid": "Этот плагин необходимо правильно настроить, прежде чем его можно будет использовать. Пожалуйста, проверьте правильность вашей конфигурации",
+ "ProviderBizError": "Ошибка обслуживания {{provider}}. Пожалуйста, проверьте следующую информацию или повторите попытку",
+ "StreamChunkError": "Ошибка разбора блока сообщения потокового запроса. Пожалуйста, проверьте, соответствует ли текущий API стандартам, или свяжитесь с вашим поставщиком API для получения консультации.",
+ "SubscriptionPlanLimit": "Вы исчерпали свой лимит подписки и не можете использовать эту функцию. Пожалуйста, перейдите на более высокий план или приобретите дополнительные ресурсы для продолжения использования.",
+ "UnknownChatFetchError": "Извините, произошла неизвестная ошибка запроса. Пожалуйста, проверьте информацию ниже или попробуйте снова."
+ },
+ "stt": {
+ "responseError": "Ошибка запроса сервиса. Пожалуйста, проверьте конфигурацию или повторите попытку"
+ },
+ "tts": {
+ "responseError": "Ошибка запроса сервиса. Пожалуйста, проверьте конфигурацию или повторите попытку"
+ },
+ "unlock": {
+ "addProxyUrl": "Добавить URL прокси-сервера OpenAI (необязательно)",
+ "apiKey": {
+ "description": "Просто введите ваш API ключ {{name}}, чтобы начать сеанс",
+ "title": "Используйте настраиваемый API ключ {{name}}"
+ },
+ "closeMessage": "Закрыть сообщение",
+ "confirm": "Подтвердить и повторить попытку",
+ "oauth": {
+ "description": "Администратор включил единую систему аутентификации. Нажмите кнопку ниже, чтобы войти и разблокировать приложение.",
+ "success": "Успешный вход",
+ "title": "Вход в аккаунт",
+ "welcome": "Добро пожаловать!"
+ },
+ "password": {
+ "description": "Шифрование приложения включено администратором. Введите пароль приложения, чтобы разблокировать приложение. Пароль необходимо ввести только один раз.",
+ "placeholder": "Введите пароль",
+ "title": "Введите пароль для разблокировки приложения"
+ },
+ "tabs": {
+ "apiKey": "Пользовательский ключ API",
+ "password": "Пароль"
+ }
+ },
+ "upload": {
+ "desc": "Подробности: {{detail}}",
+ "fileOnlySupportInServerMode": "Текущий режим развертывания не поддерживает загрузку файлов, отличных от изображений. Чтобы загрузить файл формата {{ext}}, пожалуйста, переключитесь на развертывание с серверной базой данных или используйте сервис {{cloud}}.",
+ "networkError": "Пожалуйста, убедитесь, что ваше соединение с сетью работает нормально, и проверьте правильность конфигурации кросс-доменных запросов для службы хранения файлов.",
+ "title": "Ошибка загрузки файла. Проверьте подключение к сети или попробуйте позже",
+ "unknownError": "Причина ошибки: {{reason}}",
+ "uploadFailed": "Ошибка загрузки файла"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/file.json b/DigitalHumanWeb/locales/ru-RU/file.json
new file mode 100644
index 0000000..bd878e9
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Управляйте своими файлами и базой знаний",
+ "detail": {
+ "basic": {
+ "createdAt": "Дата создания",
+ "filename": "Имя файла",
+ "size": "Размер файла",
+ "title": "Основная информация",
+ "type": "Формат",
+ "updatedAt": "Дата обновления"
+ },
+ "data": {
+ "chunkCount": "Количество частей",
+ "embedding": {
+ "default": "Векторизация не выполнена",
+ "error": "Ошибка",
+ "pending": "Ожидание запуска",
+ "processing": "В процессе",
+ "success": "Завершено"
+ },
+ "embeddingStatus": "Векторизация"
+ }
+ },
+ "empty": "Нет загруженных файлов/папок",
+ "header": {
+ "actions": {
+ "newFolder": "Создать папку",
+ "uploadFile": "Загрузить файл",
+ "uploadFolder": "Загрузить папку"
+ },
+ "uploadButton": "Загрузить"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Вы собираетесь удалить эту базу знаний. Файлы в ней не будут удалены и будут перемещены в общие файлы. После удаления база знаний не может быть восстановлена, пожалуйста, будьте осторожны.",
+ "empty": "Нажмите <1>+</1>, чтобы начать создание базы знаний"
+ },
+ "new": "Создать базу знаний",
+ "title": "База знаний"
+ },
+ "networkError": "Не удалось получить базу знаний, пожалуйста, проверьте сетевое соединение и попробуйте снова",
+ "notSupportGuide": {
+ "desc": "Текущий развертываемый экземпляр находится в режиме клиентской базы данных, функции управления файлами недоступны. Пожалуйста, переключитесь на <1>режим серверной базы данных</1> или используйте <3>LobeChat Cloud</3> напрямую.",
+ "features": {
+ "allKind": {
+ "desc": "Поддержка основных типов файлов, включая Word, PPT, Excel, PDF, TXT и другие распространенные форматы документов, а также JS, Python и другие популярные файлы кода.",
+ "title": "Разнообразие форматов файлов"
+ },
+ "embeddings": {
+ "desc": "Использование высокопроизводительных векторных моделей для векторизации текстовых частей, что позволяет осуществлять семантический поиск по содержимому файлов.",
+ "title": "Семантическая векторизация"
+ },
+ "repos": {
+ "desc": "Поддержка создания базы знаний с возможностью добавления различных типов файлов для построения вашей области знаний.",
+ "title": "База знаний"
+ }
+ },
+ "title": "Текущий режим развертывания не поддерживает управление файлами"
+ },
+ "preview": {
+ "downloadFile": "Скачать файл",
+ "unsupportedFileAndContact": "Этот формат файла в настоящее время не поддерживает онлайн-просмотр. Если у вас есть запрос на просмотр, пожалуйста, <1>сообщите нам</1>."
+ },
+ "searchFilePlaceholder": "Поиск файла",
+ "tab": {
+ "all": "Все файлы",
+ "audios": "Аудио",
+ "documents": "Документы",
+ "images": "Изображения",
+ "videos": "Видео",
+ "websites": "Веб-сайты"
+ },
+ "title": "Файлы",
+ "uploadDock": {
+ "body": {
+ "collapse": "Свернуть",
+ "item": {
+ "done": "Загружено",
+ "error": "Ошибка загрузки, попробуйте снова",
+ "pending": "Подготовка к загрузке...",
+ "processing": "Обработка файла...",
+ "restTime": "Осталось {{time}}"
+ }
+ },
+ "totalCount": "Всего {{count}} элементов",
+ "uploadStatus": {
+ "error": "Ошибка загрузки",
+ "pending": "Ожидание загрузки",
+ "processing": "Загрузка",
+ "success": "Загрузка завершена",
+ "uploading": "Загрузка..."
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/knowledgeBase.json b/DigitalHumanWeb/locales/ru-RU/knowledgeBase.json
new file mode 100644
index 0000000..0ebf97d
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "Файл успешно добавлен, <1>посмотреть сейчас</1>",
+ "confirm": "Добавить",
+ "id": {
+ "placeholder": "Пожалуйста, выберите базу знаний для добавления",
+ "required": "Пожалуйста, выберите базу знаний",
+ "title": "Целевая база знаний"
+ },
+ "title": "Добавить в базу знаний",
+ "totalFiles": "Выбрано {{count}} файлов"
+ },
+ "createNew": {
+ "confirm": "Создать",
+ "description": {
+ "placeholder": "Описание базы знаний (необязательно)"
+ },
+ "formTitle": "Основная информация",
+ "name": {
+ "placeholder": "Название базы знаний",
+ "required": "Пожалуйста, введите название базы знаний"
+ },
+ "title": "Создать новую базу знаний"
+ },
+ "tab": {
+ "evals": "Оценка",
+ "files": "Документы",
+ "settings": "Настройки",
+ "testing": "Тестирование возврата"
+ },
+ "title": "База знаний"
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/market.json b/DigitalHumanWeb/locales/ru-RU/market.json
new file mode 100644
index 0000000..d26c453
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Добавить агента",
+ "addAgentAndConverse": "Добавить агента и начать разговор",
+ "addAgentSuccess": "Агент успешно добавлен",
+ "guide": {
+ "func1": {
+ "desc1": "Перейдите на страницу настроек, нажав на значок в правом верхнем углу окна сеанса.",
+ "desc2": "Нажмите кнопку \"Отправить в магазин помощников\" в правом верхнем углу.",
+ "tag": "Метод 1",
+ "title": "Отправка через LobeChat"
+ },
+ "func2": {
+ "button": "Перейти в репозиторий помощника на Github",
+ "desc": "Если вы хотите добавить помощник в индекс, создайте запись в файле agent-template.json или agent-template-full.json в каталоге плагинов, напишите краткое описание и соответствующие теги, а затем создайте запрос на извлечение.",
+ "tag": "Метод 2",
+ "title": "Отправка через Github"
+ }
+ },
+ "search": {
+ "placeholder": "Введите название или ключевое слово помощника..."
+ },
+ "sidebar": {
+ "comment": "Комментарии",
+ "prompt": "Подсказка",
+ "title": "Детали агента"
+ },
+ "submitAgent": "Отправить агента",
+ "title": {
+ "allAgents": "Все агенты",
+ "recentSubmits": "Недавние добавления"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/metadata.json b/DigitalHumanWeb/locales/ru-RU/metadata.json
new file mode 100644
index 0000000..d5ac78f
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} предлагает вам лучший опыт использования ChatGPT, Claude, Gemini и OLLaMA WebUI",
+ "title": "{{appName}}: личный инструмент AI для повышения эффективности, дайте себе более умный мозг"
+ },
+ "discover": {
+ "assistants": {
+ "description": "Создание контента, копирайтинг, вопросы и ответы, генерация изображений, генерация видео, генерация речи, интеллектуальные агенты, автоматизированные рабочие процессы, настройка вашего собственного AI / GPTs / OLLaMA интеллектуального помощника",
+ "title": "AI помощники"
+ },
+ "description": "Создание контента, копирайтинг, вопросы и ответы, генерация изображений, генерация видео, генерация речи, интеллектуальные агенты, автоматизированные рабочие процессы, настройка AI приложений, настройка вашего собственного рабочего стола AI приложений",
+ "models": {
+ "description": "Изучите основные AI модели OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "AI модели"
+ },
+ "plugins": {
+ "description": "Ищите генерацию графиков, научные исследования, генерацию изображений, генерацию видео, генерацию речи и автоматизированные рабочие процессы, интегрируя богатые возможности плагинов для вашего помощника.",
+ "title": "AI плагины"
+ },
+ "providers": {
+ "description": "Изучите основных поставщиков моделей OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "Поставщики AI моделей"
+ },
+ "search": "Поиск",
+ "title": "Открыть"
+ },
+ "plugins": {
+ "description": "Поиск, генерация графиков, академические исследования, генерация изображений, генерация видео, генерация речи, автоматизация рабочих процессов, настройте возможности ToolCall для ChatGPT / Claude",
+ "title": "Рынок плагинов"
+ },
+ "welcome": {
+ "description": "{{appName}} предлагает вам лучший опыт использования ChatGPT, Claude, Gemini и OLLaMA WebUI",
+ "title": "Добро пожаловать в {{appName}}: личный инструмент AI для повышения эффективности, дайте себе более умный мозг"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/migration.json b/DigitalHumanWeb/locales/ru-RU/migration.json
new file mode 100644
index 0000000..d029fca
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Очистить базу данных",
+ "downloadBackup": "Скачать резервную копию",
+ "reUpgrade": "Повторное обновление",
+ "start": "Начать использование",
+ "upgrade": "Обновить"
+ },
+ "clear": {
+ "confirm": "Вы уверены, что хотите очистить локальную базу данных? (Глобальные настройки не будут затронуты). Пожалуйста, убедитесь, что вы скачали резервную копию данных."
+ },
+ "description": "В новой версии хранилище данных {{appName}} сделало огромный шаг вперед. Поэтому мы должны обновить данные старой версии, чтобы предоставить вам лучший опыт использования.",
+ "features": {
+ "capability": {
+ "desc": "На основе технологии IndexedDB, достаточно вместительное, чтобы хранить все ваши сообщения за жизнь",
+ "title": "Большая емкость"
+ },
+ "performance": {
+ "desc": "Автоматическая индексация миллионов сообщений, поиск с откликом в миллисекундах",
+ "title": "Высокая производительность"
+ },
+ "use": {
+ "desc": "Поддержка поиска по заголовкам, описаниям, тегам, содержимому сообщений и даже текстам переводов, эффективность повседневного поиска значительно повышена",
+ "title": "Удобнее в использовании"
+ }
+ },
+ "title": "Эволюция данных {{appName}}",
+ "upgrade": {
+ "error": {
+ "subTitle": "К сожалению, во время процесса обновления базы данных произошла ошибка. Пожалуйста, попробуйте следующие решения: A. Очистите локальные данные и повторно импортируйте резервные данные; B. Нажмите кнопку «Перезагрузить обновление». Если ошибка повторяется, пожалуйста, <1>сообщите о проблеме</1>, и мы сразу же поможем вам разобраться.",
+ "title": "Ошибка обновления базы данных"
+ },
+ "success": {
+ "subTitle": "База данных {{appName}} была успешно обновлена до последней версии, начните использовать её прямо сейчас.",
+ "title": "Обновление базы данных успешно"
+ }
+ },
+ "upgradeTip": "Обновление займет примерно 10-20 секунд, пожалуйста, не закрывайте {{appName}} во время обновления."
+ },
+ "migrateError": {
+ "missVersion": "Отсутствует номер версии импортируемых данных. Пожалуйста, проверьте файл и повторите попытку.",
+ "noMigration": "Не найдено схемы миграции для текущей версии. Пожалуйста, проверьте номер версии и повторите попытку. Если проблема сохраняется, пожалуйста, сообщите о проблеме"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/modelProvider.json b/DigitalHumanWeb/locales/ru-RU/modelProvider.json
new file mode 100644
index 0000000..0569ef1
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Версия API Azure, следующая формату ГГГГ-ММ-ДД, см. [последнюю версию](https://learn.microsoft.com/ru-ru/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Получить список",
+ "title": "Версия Azure API"
+ },
+ "empty": "Введите идентификатор модели, чтобы добавить первую модель",
+ "endpoint": {
+ "desc": "Можно найти в разделе «Ключи и конечные точки» при проверке ресурса в портале Azure",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Адрес Azure API"
+ },
+ "modelListPlaceholder": "Выберите или добавьте модель OpenAI, которую вы развернули",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Можно найти в разделе «Ключи и конечные точки» при проверке ресурса в портале Azure. Можно использовать KEY1 или KEY2",
+ "placeholder": "Azure API Key",
+ "title": "API Key"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Введите ваш AWS Access Key ID",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key ID"
+ },
+ "checker": {
+ "desc": "Проверить правильность заполнения AccessKeyId/SecretAccessKey"
+ },
+ "region": {
+ "desc": "Введите ваш AWS Region",
+ "placeholder": "AWS Region",
+ "title": "AWS Region"
+ },
+ "secretAccessKey": {
+ "desc": "Введите ваш AWS Secret Access Key",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "Если вы используете AWS SSO/STS, введите ваш AWS Session Token",
+ "placeholder": "AWS Session Token",
+ "title": "AWS Session Token (необязательно)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Пользовательский регион обслуживания",
+ "customSessionToken": "Пользовательский токен сессии",
+ "description": "Введите свой ключ доступа AWS AccessKeyId / SecretAccessKey, чтобы начать сеанс. Приложение не будет сохранять вашу конфигурацию аутентификации",
+ "title": "Использовать пользовательскую информацию аутентификации Bedrock"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Введите ваш персональный токен доступа GitHub (PAT), нажмите [здесь](https://github.com/settings/tokens), чтобы создать его",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Проверить правильность адреса прокси",
+ "title": "Проверка связности"
+ },
+ "customModelName": {
+ "desc": "Добавить кастомные модели, разделяя их через запятую (,)",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Название кастомных моделей"
+ },
+ "download": {
+ "desc": "Ollama is downloading the model. Please try not to close this page. The download will resume from where it left off if interrupted.",
+ "remainingTime": "Remaining Time",
+ "speed": "Download Speed",
+ "title": "Downloading model {{model}}"
+ },
+ "endpoint": {
+ "desc": "Введите адрес прокси-интерфейса Ollama, если локально не указано иное, можете оставить пустым",
+ "title": "Адрес прокси-интерфейса"
+ },
+ "setup": {
+ "cors": {
+ "description": "Из-за ограничений безопасности браузера вам необходимо настроить кросс-доменные запросы для правильной работы Ollama.",
+ "linux": {
+ "env": "Добавьте переменную среды OLLAMA_ORIGINS в разделе [Service],",
+ "reboot": "Перезагрузите systemd и перезапустите Ollama.",
+ "systemd": "Вызовите редактирование службы ollama в systemd:"
+ },
+ "macos": "Откройте приложение \"Терминал\", вставьте и выполните следующую команду, затем нажмите Enter.",
+ "reboot": "Пожалуйста, перезагрузите службу Ollama после завершения выполнения команды.",
+ "title": "Настройка разрешений на кросс-доменный доступ для Ollama",
+ "windows": "На Windows откройте \"Панель управления\", зайдите в настройки системных переменных. Создайте новую переменную среды для вашей учетной записи с именем \"OLLAMA_ORIGINS\" и значением * , затем нажмите \"OK/Применить\" для сохранения."
+ },
+ "install": {
+ "description": "Пожалуйста, убедитесь, что вы установили Ollama. Если вы еще не скачали Ollama, перейдите на официальный сайт <1>для загрузки</1>",
+ "docker": "Если вы предпочитаете использовать Docker, Ollama также предоставляет официальный образ Docker. Вы можете загрузить его с помощью следующей команды:",
+ "linux": {
+ "command": "Установите с помощью следующей команды:",
+ "manual": "Или вы можете установить его вручную, следуя <1>руководству по установке на Linux</1>."
+ },
+ "title": "Установка и запуск приложения Ollama локально",
+ "windowsTab": "Windows (превью)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Cancel Download",
+ "confirm": "Download",
+ "description": "Enter your Ollama model tag to continue the session",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Starting download...",
+ "title": "Download specified Ollama model"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Цифровая Вселенная"
+ },
+ "zhipu": {
+ "title": "Zhipu"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/models.json b/DigitalHumanWeb/locales/ru-RU/models.json
new file mode 100644
index 0000000..1f1740c
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B, с богатым набором обучающих образцов, демонстрирует превосходные результаты в отраслевых приложениях."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B поддерживает 16K токенов, обеспечивая эффективные и плавные возможности генерации языка."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro, как важный член серии моделей AI от 360, удовлетворяет разнообразные приложения обработки текста с высокой эффективностью, поддерживает понимание длинных текстов и многораундные диалоги."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo предлагает мощные вычислительные и диалоговые возможности, обладает выдающимся пониманием семантики и эффективностью генерации, что делает его идеальным решением для интеллектуальных помощников для предприятий и разработчиков."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K акцентирует внимание на семантической безопасности и ответственности, специально разработан для приложений с высокими требованиями к безопасности контента, обеспечивая точность и надежность пользовательского опыта."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro — это продвинутая модель обработки естественного языка, выпущенная компанией 360, обладающая выдающимися способностями к генерации и пониманию текста, особенно в области генерации и творчества, способная обрабатывать сложные языковые преобразования и ролевые задачи."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra — это самая мощная версия в серии больших моделей Xinghuo, которая, обновив сетевые поисковые связи, улучшает понимание и обобщение текстового контента. Это всестороннее решение для повышения производительности в офисе и точного реагирования на запросы, являющееся ведущим интеллектуальным продуктом в отрасли."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Использует технологии улучшенного поиска для полной связи между большой моделью и отраслевыми знаниями, а также знаниями из сети. Поддерживает загрузку различных документов, таких как PDF и Word, а также ввод URL, обеспечивая своевременное и полное получение информации с точными и профессиональными результатами."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Оптимизирован для высокочастотных корпоративных сценариев, значительно улучшает результаты и предлагает выгодное соотношение цены и качества. По сравнению с моделью Baichuan2, создание контента увеличилось на 20%, ответы на вопросы на 17%, а способности ролевого взаимодействия на 40%. Общая эффективность лучше, чем у GPT3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "Обладает 128K сверхдлинным контекстным окном, оптимизирован для высокочастотных корпоративных сценариев, значительно улучшает результаты и предлагает выгодное соотношение цены и качества. По сравнению с моделью Baichuan2, создание контента увеличилось на 20%, ответы на вопросы на 17%, а способности ролевого взаимодействия на 40%. Общая эффективность лучше, чем у GPT3.5."
+ },
+ "Baichuan4": {
+ "description": "Модель обладает лучшими возможностями в стране, превосходя зарубежные модели в задачах на знание, длинные тексты и генерацию контента. Также обладает передовыми мультимодальными возможностями и показывает отличные результаты в нескольких авторитетных тестах."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) — это инновационная модель, подходящая для многообластных приложений и сложных задач."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K оснащен высокой способностью обработки контекста, улучшенным пониманием контекста и логическим выводом, поддерживает текстовый ввод до 32K токенов, подходит для чтения длинных документов, частных вопросов и ответов и других сценариев"
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO — это высокоадаптивная многомодельная комбинация, предназначенная для предоставления выдающегося творческого опыта."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) — это высокоточная модель команд, подходящая для сложных вычислений."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) предлагает оптимизированный языковой вывод и разнообразные возможности применения."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Обновление модели Phi-3-mini."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "Та же модель Phi-3-medium, но с большим размером контекста для RAG или нескольких подсказок."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "Модель с 14B параметрами, демонстрирующая лучшее качество, чем Phi-3-mini, с акцентом на высококачественные, насыщенные рассуждениями данные."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "Та же модель Phi-3-mini, но с большим размером контекста для RAG или нескольких подсказок."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "Самая маленькая модель в семействе Phi-3. Оптимизирована как для качества, так и для низкой задержки."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "Та же модель Phi-3-small, но с большим размером контекста для RAG или нескольких подсказок."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "Модель с 7B параметрами, демонстрирующая лучшее качество, чем Phi-3-mini, с акцентом на высококачественные, насыщенные рассуждениями данные."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K оснащен возможностями обработки контекста большого объема, способен обрабатывать до 128K контекстной информации, особенно подходит для анализа длинных текстов и обработки долгосрочных логических связей, обеспечивая плавную и последовательную логику и разнообразную поддержку ссылок в сложных текстовых коммуникациях."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Как тестовая версия Qwen2, Qwen1.5 использует большие объемы данных для достижения более точных диалоговых функций."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) обеспечивает быстрые ответы и естественные диалоговые возможности, подходящие для многоязычной среды."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 — это передовая универсальная языковая модель, поддерживающая множество типов команд."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 — это новая серия крупных языковых моделей, предназначенная для оптимизации обработки инструктивных задач."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 — это новая серия крупных языковых моделей, предназначенная для оптимизации обработки инструктивных задач."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 — это новая серия крупных языковых моделей с более сильными способностями понимания и генерации."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 — это новая серия крупных языковых моделей, предназначенная для оптимизации обработки инструктивных задач."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder сосредоточен на написании кода."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math сосредоточен на решении математических задач, предоставляя профессиональные ответы на сложные вопросы."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B — это открытая версия, обеспечивающая оптимизированный диалоговый опыт для приложений."
+ },
+ "abab5.5-chat": {
+ "description": "Ориентирован на производственные сценарии, поддерживает обработку сложных задач и эффективную генерацию текста, подходит для профессиональных приложений."
+ },
+ "abab5.5s-chat": {
+ "description": "Специально разработан для диалогов на китайском языке, обеспечивая высококачественную генерацию диалогов на китайском, подходит для различных приложений."
+ },
+ "abab6.5g-chat": {
+ "description": "Специально разработан для многоязычных диалогов, поддерживает высококачественную генерацию диалогов на английском и других языках."
+ },
+ "abab6.5s-chat": {
+ "description": "Подходит для широкого спектра задач обработки естественного языка, включая генерацию текста, диалоговые системы и т.д."
+ },
+ "abab6.5t-chat": {
+ "description": "Оптимизирован для диалогов на китайском языке, обеспечивая плавную генерацию диалогов, соответствующую китайским языковым привычкам."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Открытая модель вызова функций от Fireworks, обеспечивающая выдающиеся возможности выполнения команд и открытые настраиваемые функции."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Firefunction-v2 от компании Fireworks — это высокопроизводительная модель вызова функций, разработанная на основе Llama-3 и оптимизированная для вызова функций, диалогов и выполнения команд."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b — это визуальная языковая модель, способная одновременно обрабатывать изображения и текстовые вводы, обученная на высококачественных данных, подходящая для мультимодальных задач."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Gemma 2 9B для команд, основанная на предыдущих технологиях Google, подходит для ответов на вопросы, резюмирования и вывода текста."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Модель Llama 3 70B для команд, специально оптимизированная для многоязычных диалогов и понимания естественного языка, превосходит большинство конкурентных моделей."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Модель Llama 3 70B для команд (HF версия), результаты которой совпадают с официальной реализацией, подходит для высококачественных задач выполнения команд."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Модель Llama 3 8B для команд, оптимизированная для диалогов и многоязычных задач, демонстрирует выдающиеся и эффективные результаты."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Модель Llama 3 8B для команд (HF версия), результаты которой совпадают с официальной реализацией, обладает высокой согласованностью и совместимостью между платформами."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Модель Llama 3.1 405B для команд, обладающая огромным количеством параметров, подходит для сложных задач и сценариев с высокой нагрузкой."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Модель Llama 3.1 70B для команд, обеспечивающая выдающиеся возможности понимания и генерации естественного языка, является идеальным выбором для диалоговых и аналитических задач."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Модель Llama 3.1 8B для команд, оптимизированная для многоязычных диалогов, способная превосходить большинство открытых и закрытых моделей по общим отраслевым стандартам."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Mixtral MoE 8x22B для команд, с большим количеством параметров и архитектурой с несколькими экспертами, всесторонне поддерживает эффективную обработку сложных задач."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Mixtral MoE 8x7B для команд, архитектура с несколькими экспертами обеспечивает эффективное выполнение и следование командам."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Mixtral MoE 8x7B для команд (HF версия), производительность которой совпадает с официальной реализацией, подходит для множества эффективных задач."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "Модель MythoMax L2 13B, использующая новые технологии объединения, хорошо подходит для повествования и ролевых игр."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Phi 3 Vision для команд, легковесная мультимодальная модель, способная обрабатывать сложную визуальную и текстовую информацию, обладая высокой способностью к выводу."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "Модель StarCoder 15.5B, поддерживающая сложные задачи программирования, с улучшенными многоязычными возможностями, подходит для генерации и понимания сложного кода."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "Модель StarCoder 7B, обученная на более чем 80 языках программирования, обладает выдающимися способностями к заполнению кода и пониманию контекста."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Модель Yi-Large, обладающая выдающимися возможностями обработки нескольких языков, подходит для различных задач генерации и понимания языка."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "Многоязычная модель с 398B параметрами (94B активных), предлагающая контекстное окно длиной 256K, вызовы функций, структурированный вывод и основанное на фактах генерирование."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "Многоязычная модель с 52B параметрами (12B активных), предлагающая контекстное окно длиной 256K, вызовы функций, структурированный вывод и основанное на фактах генерирование."
+ },
+ "ai21-jamba-instruct": {
+ "description": "Модель LLM на основе Mamba, предназначенная для достижения наилучших показателей производительности, качества и экономической эффективности."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet устанавливает новые отраслевые стандарты, превосходя модели конкурентов и Claude 3 Opus, демонстрируя отличные результаты в широком спектре оценок, при этом обладая скоростью и стоимостью наших моделей среднего уровня."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku — это самая быстрая и компактная модель от Anthropic, обеспечивающая почти мгновенную скорость ответа. Она может быстро отвечать на простые запросы и запросы. Клиенты смогут создать бесшовный AI-опыт, имитирующий человеческое взаимодействие. Claude 3 Haiku может обрабатывать изображения и возвращать текстовый вывод, имея контекстное окно в 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus — это самая мощная AI-модель от Anthropic, обладающая передовыми характеристиками в области высоко сложных задач. Она может обрабатывать открытые подсказки и невидимые сценарии, демонстрируя отличную плавность и человеческое понимание. Claude 3 Opus демонстрирует передовые возможности генеративного AI. Claude 3 Opus может обрабатывать изображения и возвращать текстовый вывод, имея контекстное окно в 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Claude 3 Sonnet от Anthropic достигает идеального баланса между интеллектом и скоростью — особенно подходит для корпоративных рабочих нагрузок. Он предлагает максимальную полезность по цене ниже конкурентов и разработан как надежный, высокопрочный основной механизм для масштабируемых AI-развертываний. Claude 3 Sonnet может обрабатывать изображения и возвращать текстовый вывод, имея контекстное окно в 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Быстрая, экономичная и все еще очень мощная модель, способная обрабатывать широкий спектр задач, включая повседневные диалоги, текстовый анализ, резюме и вопросы к документам."
+ },
+ "anthropic.claude-v2": {
+ "description": "Модель Anthropic демонстрирует высокие способности в широком спектре задач, от сложных диалогов и генерации креативного контента до детального следования инструкциям."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "Обновленная версия Claude 2, обладающая двойным контекстным окном и улучшениями в надежности, уровне галлюцинаций и точности на основе доказательств в длинных документах и контексте RAG."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku — это самая быстрая и компактная модель от Anthropic, предназначенная для почти мгновенных ответов. Она обладает быстрой и точной направленной производительностью."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus — это самая мощная модель от Anthropic для обработки высококомплексных задач. Она демонстрирует выдающиеся результаты по производительности, интеллекту, плавности и пониманию."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet предлагает возможности, превосходящие Opus, и скорость, превышающую Sonnet, при этом сохраняя ту же цену. Sonnet особенно хорошо справляется с программированием, наукой о данных, визуальной обработкой и агентскими задачами."
+ },
+ "aya": {
+ "description": "Aya 23 — это многоязычная модель, выпущенная Cohere, поддерживающая 23 языка, обеспечивая удобство для многоязычных приложений."
+ },
+ "aya:35b": {
+ "description": "Aya 23 — это многоязычная модель, выпущенная Cohere, поддерживающая 23 языка, обеспечивая удобство для многоязычных приложений."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3 разработан для ролевых игр и эмоционального сопровождения, поддерживает сверхдлинную многократную память и персонализированные диалоги, имеет широкое применение."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o — это динамическая модель, которая обновляется в реальном времени, чтобы оставаться актуальной. Она сочетает в себе мощное понимание языка и генерацию, подходя для масштабных приложений, включая обслуживание клиентов, образование и техническую поддержку."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 предлагает ключевые улучшения для бизнеса, включая ведущие в отрасли 200K токенов контекста, значительное снижение частоты галлюцинаций модели, системные подсказки и новую тестовую функцию: вызов инструментов."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 предлагает ключевые улучшения для бизнеса, включая ведущие в отрасли 200K токенов контекста, значительное снижение частоты галлюцинаций модели, системные подсказки и новую тестовую функцию: вызов инструментов."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet предлагает возможности, превосходящие Opus, и скорость, быстрее Sonnet, при этом сохраняя ту же цену. Sonnet особенно хорош в программировании, науке о данных, визуальной обработке и задачах агентов."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku — это самая быстрая и компактная модель от Anthropic, предназначенная для достижения почти мгновенных ответов. Она обладает быстрой и точной направленной производительностью."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus — это самая мощная модель от Anthropic для обработки высококомплексных задач. Она демонстрирует выдающиеся результаты по производительности, интеллекту, плавности и пониманию."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet обеспечивает идеальный баланс между интеллектом и скоростью для корпоративных рабочих нагрузок. Он предлагает максимальную полезность по более низкой цене, надежен и подходит для масштабного развертывания."
+ },
+ "claude-instant-1.2": {
+ "description": "Модель Anthropic для текстовой генерации с низкой задержкой и высокой пропускной способностью, поддерживающая генерацию сотен страниц текста."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 — это мощный AI помощник по программированию, поддерживающий интеллектуальные ответы и автозаполнение кода на различных языках программирования, повышая эффективность разработки."
+ },
+ "codegemma": {
+ "description": "CodeGemma — это легковесная языковая модель, специально разработанная для различных задач программирования, поддерживающая быструю итерацию и интеграцию."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma — это легковесная языковая модель, специально разработанная для различных задач программирования, поддерживающая быструю итерацию и интеграцию."
+ },
+ "codellama": {
+ "description": "Code Llama — это LLM, сосредоточенная на генерации и обсуждении кода, поддерживающая широкий спектр языков программирования, подходит для среды разработчиков."
+ },
+ "codellama:13b": {
+ "description": "Code Llama — это LLM, сосредоточенная на генерации и обсуждении кода, поддерживающая широкий спектр языков программирования, подходит для среды разработчиков."
+ },
+ "codellama:34b": {
+ "description": "Code Llama — это LLM, сосредоточенная на генерации и обсуждении кода, поддерживающая широкий спектр языков программирования, подходит для среды разработчиков."
+ },
+ "codellama:70b": {
+ "description": "Code Llama — это LLM, сосредоточенная на генерации и обсуждении кода, поддерживающая широкий спектр языков программирования, подходит для среды разработчиков."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 — это крупномасштабная языковая модель, обученная на большом объёме кодовых данных, специально разработанная для решения сложных задач программирования."
+ },
+ "codestral": {
+ "description": "Codestral — это первая модель кода от Mistral AI, обеспечивающая отличную поддержку для задач генерации кода."
+ },
+ "codestral-latest": {
+ "description": "Codestral — это передовая генеративная модель, сосредоточенная на генерации кода, оптимизированная для промежуточного заполнения и задач дополнения кода."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B — это модель, разработанная для соблюдения инструкций, диалогов и программирования."
+ },
+ "cohere-command-r": {
+ "description": "Command R — это масштабируемая генеративная модель, нацеленная на RAG и использование инструментов для обеспечения AI на уровне производства для предприятий."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ — это модель, оптимизированная для RAG, предназначенная для решения задач корпоративного уровня."
+ },
+ "command-r": {
+ "description": "Command R — это LLM, оптимизированная для диалогов и задач с длинным контекстом, особенно подходит для динамического взаимодействия и управления знаниями."
+ },
+ "command-r-plus": {
+ "description": "Command R+ — это высокопроизводительная большая языковая модель, специально разработанная для реальных бизнес-сценариев и сложных приложений."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct предлагает высокую надежность в обработке команд, поддерживая приложения в различных отраслях."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 объединяет отличительные черты предыдущих версий, улучшая общие и кодировочные способности."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B — это передовая модель, обученная для высококомплексных диалогов."
+ },
+ "deepseek-chat": {
+ "description": "Новая открытая модель, объединяющая общие и кодовые возможности, не только сохраняет общие диалоговые способности оригинальной модели Chat и мощные возможности обработки кода модели Coder, но и лучше согласуется с человеческими предпочтениями. Кроме того, DeepSeek-V2.5 значительно улучшила производительность в таких задачах, как написание текстов и следование инструкциям."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 — это открытая смешанная экспертная модель кода, показывающая отличные результаты в задачах кода, сопоставимая с GPT4-Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 — это открытая смешанная экспертная модель кода, показывающая отличные результаты в задачах кода, сопоставимая с GPT4-Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 — это эффективная языковая модель Mixture-of-Experts, подходящая для экономически эффективных потребностей обработки."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B — это модель кода DeepSeek, обеспечивающая мощные возможности генерации кода."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Новая открытая модель, объединяющая общие и кодовые возможности, не только сохраняет общие диалоговые способности оригинальной модели Chat и мощные возможности обработки кода модели Coder, но и лучше соответствует человеческим предпочтениям. Кроме того, DeepSeek-V2.5 значительно улучшила свои результаты в задачах написания, следования инструкциям и других областях."
+ },
+ "emohaa": {
+ "description": "Emohaa — это психологическая модель, обладающая профессиональными консультационными способностями, помогающая пользователям понимать эмоциональные проблемы."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Тюнинг) предлагает стабильную и настраиваемую производительность, что делает её идеальным выбором для решения сложных задач."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Тюнинг) предлагает выдающуюся поддержку многомодальности, сосредотачиваясь на эффективном решении сложных задач."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro — это высокопроизводительная модель ИИ от Google, разработанная для масштабирования широкого спектра задач."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 — это эффективная многомодальная модель, поддерживающая масштабирование для широкого спектра приложений."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002 — это эффективная мультимодальная модель, поддерживающая расширенные применения."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 разработан для обработки масштабных задач, обеспечивая непревзойдённую скорость обработки."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924 — это последняя экспериментальная модель, которая демонстрирует значительное улучшение производительности как в текстовых, так и в мультимодальных задачах."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 предлагает оптимизированные многомодальные возможности обработки, подходящие для различных сложных задач."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash — это последняя многомодальная модель ИИ от Google, обладающая высокой скоростью обработки и поддерживающая текстовые, графические и видео входы, что делает её эффективной для масштабирования различных задач."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 — это масштабируемое решение для многомодального ИИ, поддерживающее широкий спектр сложных задач."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002 — это последняя модель, готовая к производству, которая обеспечивает более высокое качество вывода, особенно в математических задачах, длинных контекстах и визуальных задачах."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 предлагает выдающиеся возможности многомодальной обработки, обеспечивая большую гибкость для разработки приложений."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 сочетает в себе новейшие оптимизационные технологии, обеспечивая более эффективные возможности обработки многомодальных данных."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro поддерживает до 2 миллионов токенов и является идеальным выбором для средних многомодальных моделей, обеспечивая многостороннюю поддержку для сложных задач."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B подходит для обработки задач среднего и малого масштаба, обеспечивая экономическую эффективность."
+ },
+ "gemma2": {
+ "description": "Gemma 2 — это высокоэффективная модель, выпущенная Google, охватывающая широкий спектр приложений от малых до сложных задач обработки данных."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B — это модель, оптимизированная для конкретных задач и интеграции инструментов."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 — это высокоэффективная модель, выпущенная Google, охватывающая широкий спектр приложений от малых до сложных задач обработки данных."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 — это высокоэффективная модель, выпущенная Google, охватывающая широкий спектр приложений от малых до сложных задач обработки данных."
+ },
+ "general": {
+ "description": "Spark Lite — это легковесная большая языковая модель с крайне низкой задержкой и высокой эффективностью обработки, полностью бесплатная и открытая, поддерживающая функцию онлайн-поиска в реальном времени. Ее быстрая реакция делает ее выдающимся выбором для применения в низкопроизводительных устройствах и тонкой настройке моделей, обеспечивая пользователям отличное соотношение цены и качества, особенно в задачах на знание, генерацию контента и поисковых сценариях."
+ },
+ "generalv3": {
+ "description": "Spark Pro — это высокопроизводительная большая языковая модель, оптимизированная для профессиональных областей, таких как математика, программирование, медицина и образование, поддерживающая сетевой поиск и встроенные плагины для погоды, даты и т.д. Оптимизированная модель демонстрирует выдающиеся результаты и высокую эффективность в сложных задачах на знание, понимании языка и высокоуровневом создании текстов, что делает ее идеальным выбором для профессиональных приложений."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max — это самая полная версия, поддерживающая сетевой поиск и множество встроенных плагинов. Его полностью оптимизированные основные возможности, а также функции настройки системных ролей и вызовов функций делают его выдающимся и эффективным в различных сложных приложениях."
+ },
+ "glm-4": {
+ "description": "GLM-4 — это старая флагманская версия, выпущенная в январе 2024 года, которая была заменена более мощной GLM-4-0520."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 — это последняя версия модели, специально разработанная для высоко сложных и разнообразных задач, демонстрирующая выдающиеся результаты."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air — это экономически эффективная версия, производительность которой близка к GLM-4, обеспечивая высокую скорость и доступную цену."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX предлагает эффективную версию GLM-4-Air, скорость вывода может достигать 2.6 раз быстрее."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools — это многофункциональная модель агента, оптимизированная для поддержки сложного планирования инструкций и вызовов инструментов, таких как веб-серфинг, интерпретация кода и генерация текста, подходящая для выполнения множества задач."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash — это идеальный выбор для обработки простых задач, с самой высокой скоростью и самой низкой ценой."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long поддерживает сверхдлинные текстовые вводы, подходит для задач, требующих памяти, и обработки больших документов."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus, как флагман с высоким интеллектом, обладает мощными способностями обработки длинных текстов и сложных задач, с полным улучшением производительности."
+ },
+ "glm-4v": {
+ "description": "GLM-4V предлагает мощные способности понимания и вывода изображений, поддерживает множество визуальных задач."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus обладает способностью понимать видео-контент и множество изображений, подходит для мультимодальных задач."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 предлагает оптимизированные мультимодальные возможности обработки, подходящие для различных сложных задач."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 сочетает в себе новейшие оптимизационные технологии, обеспечивая более эффективную обработку мультимодальных данных."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 продолжает концепцию легковесного и эффективного дизайна."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 — это легковесная серия текстовых моделей с открытым исходным кодом от Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 — это облегченная открытая текстовая модель от Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) предлагает базовые возможности обработки команд, подходящие для легковесных приложений."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo подходит для различных задач генерации и понимания текста, в настоящее время ссылается на gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo подходит для различных задач генерации и понимания текста, в настоящее время ссылается на gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo подходит для различных задач генерации и понимания текста, в настоящее время ссылается на gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo подходит для различных задач генерации и понимания текста, в настоящее время ссылается на gpt-3.5-turbo-0125."
+ },
+ "gpt-4": {
+ "description": "GPT-4 предлагает более широкий контекстный диапазон, способный обрабатывать более длинные текстовые вводы, подходя для сценариев, требующих обширной интеграции информации и анализа данных."
+ },
+ "gpt-4-0125-preview": {
+ "description": "Последняя модель GPT-4 Turbo обладает визуальными функциями. Теперь визуальные запросы могут использовать JSON-формат и вызовы функций. GPT-4 Turbo — это улучшенная версия, обеспечивающая экономически эффективную поддержку для мультимодальных задач. Она находит баланс между точностью и эффективностью, подходя для приложений, требующих взаимодействия в реальном времени."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4 предлагает более широкий контекстный диапазон, способный обрабатывать более длинные текстовые вводы, подходя для сценариев, требующих обширной интеграции информации и анализа данных."
+ },
+ "gpt-4-1106-preview": {
+ "description": "Последняя модель GPT-4 Turbo обладает визуальными функциями. Теперь визуальные запросы могут использовать JSON-формат и вызовы функций. GPT-4 Turbo — это улучшенная версия, обеспечивающая экономически эффективную поддержку для мультимодальных задач. Она находит баланс между точностью и эффективностью, подходя для приложений, требующих взаимодействия в реальном времени."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "Последняя модель GPT-4 Turbo обладает визуальными функциями. Теперь визуальные запросы могут использовать JSON-формат и вызовы функций. GPT-4 Turbo — это улучшенная версия, обеспечивающая экономически эффективную поддержку для мультимодальных задач. Она находит баланс между точностью и эффективностью, подходя для приложений, требующих взаимодействия в реальном времени."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4 предлагает более широкий контекстный диапазон, способный обрабатывать более длинные текстовые вводы, подходя для сценариев, требующих обширной интеграции информации и анализа данных."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4 предлагает более широкий контекстный диапазон, способный обрабатывать более длинные текстовые вводы, подходя для сценариев, требующих обширной интеграции информации и анализа данных."
+ },
+ "gpt-4-turbo": {
+ "description": "Последняя модель GPT-4 Turbo обладает визуальными функциями. Теперь визуальные запросы могут использовать JSON-формат и вызовы функций. GPT-4 Turbo — это улучшенная версия, обеспечивающая экономически эффективную поддержку для мультимодальных задач. Она находит баланс между точностью и эффективностью, подходя для приложений, требующих взаимодействия в реальном времени."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "Последняя модель GPT-4 Turbo обладает визуальными функциями. Теперь визуальные запросы могут использовать JSON-формат и вызовы функций. GPT-4 Turbo — это улучшенная версия, обеспечивающая экономически эффективную поддержку для мультимодальных задач. Она находит баланс между точностью и эффективностью, подходя для приложений, требующих взаимодействия в реальном времени."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "Последняя модель GPT-4 Turbo обладает визуальными функциями. Теперь визуальные запросы могут использовать JSON-формат и вызовы функций. GPT-4 Turbo — это улучшенная версия, обеспечивающая экономически эффективную поддержку для мультимодальных задач. Она находит баланс между точностью и эффективностью, подходя для приложений, требующих взаимодействия в реальном времени."
+ },
+ "gpt-4-vision-preview": {
+ "description": "Последняя модель GPT-4 Turbo обладает визуальными функциями. Теперь визуальные запросы могут использовать JSON-формат и вызовы функций. GPT-4 Turbo — это улучшенная версия, обеспечивающая экономически эффективную поддержку для мультимодальных задач. Она находит баланс между точностью и эффективностью, подходя для приложений, требующих взаимодействия в реальном времени."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o — это динамическая модель, которая обновляется в реальном времени, чтобы оставаться актуальной. Она сочетает в себе мощное понимание языка и генерацию, подходя для масштабных приложений, включая обслуживание клиентов, образование и техническую поддержку."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o — это динамическая модель, которая обновляется в реальном времени, чтобы оставаться актуальной. Она сочетает в себе мощное понимание языка и генерацию, подходя для масштабных приложений, включая обслуживание клиентов, образование и техническую поддержку."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o — это динамическая модель, которая обновляется в реальном времени, чтобы оставаться актуальной. Она сочетает в себе мощное понимание языка и генерацию, подходя для масштабных приложений, включая обслуживание клиентов, образование и техническую поддержку."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini — это последняя модель, выпущенная OpenAI после GPT-4 Omni, поддерживающая ввод изображений и текстов с выводом текста. Как их самый продвинутый компактный модель, она значительно дешевле других недавних передовых моделей и более чем на 60% дешевле GPT-3.5 Turbo. Она сохраняет передовой уровень интеллекта при значительном соотношении цена-качество. GPT-4o mini набрала 82% на тесте MMLU и в настоящее время занимает более высокое место в предпочтениях чата по сравнению с GPT-4."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B — это языковая модель, объединяющая креативность и интеллект, основанная на нескольких ведущих моделях."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "Инновационная открытая модель InternLM2.5, благодаря большому количеству параметров, повышает интеллектуальность диалогов."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 предлагает интеллектуальные решения для диалогов в различных сценариях."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Модель Llama 3.1 70B для команд, обладающая 70B параметрами, обеспечивает выдающуюся производительность в задачах генерации текста и выполнения команд."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B предлагает более мощные возможности ИИ вывода, подходит для сложных приложений, поддерживает огромное количество вычислительных процессов и гарантирует эффективность и точность."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B — это высокоэффективная модель, обеспечивающая быструю генерацию текста, идеально подходящая для приложений, требующих масштабной эффективности и экономичности."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Модель Llama 3.1 8B для команд, обладающая 8B параметрами, обеспечивает эффективное выполнение задач с указаниями и предлагает высококачественные возможности генерации текста."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Модель Llama 3.1 Sonar Huge Online, обладающая 405B параметрами, поддерживает контекст длиной около 127,000 токенов, предназначена для сложных онлайн-чат-приложений."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Модель Llama 3.1 Sonar Large Chat, обладающая 70B параметрами, поддерживает контекст длиной около 127,000 токенов, подходит для сложных оффлайн-чатов."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Модель Llama 3.1 Sonar Large Online, обладающая 70B параметрами, поддерживает контекст длиной около 127,000 токенов, подходит для задач с высокой нагрузкой и разнообразными чатами."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Модель Llama 3.1 Sonar Small Chat, обладающая 8B параметрами, специально разработана для оффлайн-чатов и поддерживает контекст длиной около 127,000 токенов."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Модель Llama 3.1 Sonar Small Online, обладающая 8B параметрами, поддерживает контекст длиной около 127,000 токенов, специально разработана для онлайн-чатов и эффективно обрабатывает различные текстовые взаимодействия."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B предлагает непревзойдённые возможности обработки сложности, специально разработанные для высоких требований проектов."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B обеспечивает высококачественную производительность вывода, подходящую для многообразных приложений."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use предлагает мощные возможности вызова инструментов, поддерживая эффективную обработку сложных задач."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use — это модель, оптимизированная для эффективного использования инструментов, поддерживающая быструю параллельную обработку."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 — это передовая модель, выпущенная Meta, поддерживающая до 405B параметров, применимая в сложных диалогах, многоязычном переводе и анализе данных."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 — это передовая модель, выпущенная Meta, поддерживающая до 405B параметров, применимая в сложных диалогах, многоязычном переводе и анализе данных."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 — это передовая модель, выпущенная Meta, поддерживающая до 405B параметров, применимая в сложных диалогах, многоязычном переводе и анализе данных."
+ },
+ "llava": {
+ "description": "LLaVA — это многомодальная модель, объединяющая визуальный кодировщик и Vicuna, предназначенная для мощного понимания визуальной и языковой информации."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B предлагает возможности визуальной обработки, генерируя сложные выходные данные на основе визуальной информации."
+ },
+ "llava:13b": {
+ "description": "LLaVA — это многомодальная модель, объединяющая визуальный кодировщик и Vicuna, предназначенная для мощного понимания визуальной и языковой информации."
+ },
+ "llava:34b": {
+ "description": "LLaVA — это многомодальная модель, объединяющая визуальный кодировщик и Vicuna, предназначенная для мощного понимания визуальной и языковой информации."
+ },
+ "mathstral": {
+ "description": "MathΣtral специально разработан для научных исследований и математического вывода, обеспечивая эффективные вычислительные возможности и интерпретацию результатов."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Мощная модель с 70 миллиардами параметров, превосходящая в области рассуждений, кодирования и широких языковых приложений."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Универсальная модель с 8 миллиардами параметров, оптимизированная для диалоговых и текстовых задач."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Модели Llama 3.1, настроенные на инструкции, оптимизированы для многоязычных диалоговых случаев и превосходят многие доступные модели открытого и закрытого чата по общим отраслевым стандартам."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Модели Llama 3.1, настроенные на инструкции, оптимизированы для многоязычных диалоговых случаев и превосходят многие доступные модели открытого и закрытого чата по общим отраслевым стандартам."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Модели Llama 3.1, настроенные на инструкции, оптимизированы для многоязычных диалоговых случаев и превосходят многие доступные модели открытого и закрытого чата по общим отраслевым стандартам."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) предлагает отличные возможности обработки языка и выдающийся опыт взаимодействия."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) — мощная модель для чата, поддерживающая сложные диалоговые запросы."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) предлагает многоязычную поддержку и охватывает широкий спектр областей знаний."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite подходит для сред, требующих высокой производительности и низкой задержки."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo обеспечивает выдающиеся возможности понимания и генерации языка, подходящие для самых требовательных вычислительных задач."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite подходит для ресурсов ограниченных сред, обеспечивая отличное соотношение производительности."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo — это высокоэффективная большая языковая модель, поддерживающая широкий спектр приложений."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B — это мощная модель, основанная на предобучении и настройке инструкций."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "Модель Llama 3.1 Turbo 405B предлагает огромную поддержку контекста для обработки больших данных и демонстрирует выдающиеся результаты в масштабных приложениях искусственного интеллекта."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B предлагает эффективную поддержку диалогов на нескольких языках."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Модель Llama 3.1 70B была тщательно настроена для высоконагруженных приложений, квантованная до FP8 для повышения вычислительной мощности и точности, обеспечивая выдающиеся результаты в сложных сценариях."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 предлагает поддержку нескольких языков и является одной из ведущих генеративных моделей в отрасли."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Модель Llama 3.1 8B использует FP8-квантование и поддерживает до 131,072 контекстных токенов, являясь выдающейся среди открытых моделей, подходящей для сложных задач и превосходящей многие отраслевые стандарты."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct оптимизирован для высококачественных диалоговых сцен и показывает отличные результаты в различных оценках."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct оптимизирован для высококачественных диалоговых сцен, его производительность превосходит многие закрытые модели."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct — это последняя версия от Meta, оптимизированная для генерации высококачественных диалогов, превосходящая многие ведущие закрытые модели."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct разработан для высококачественных диалогов и показывает выдающиеся результаты в оценках, особенно в высокоинтерактивных сценах."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct — это последняя версия от Meta, оптимизированная для высококачественных диалоговых сцен, превосходящая многие ведущие закрытые модели."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 предлагает поддержку нескольких языков и является одной из ведущих генеративных моделей в отрасли."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct — это самая большая и мощная модель в линейке Llama 3.1 Instruct, представляющая собой высокоразвёрнутую модель для диалогового вывода и генерации синтетических данных, также может использоваться в качестве основы для специализированного предобучения или дообучения в определённых областях. Многоязычные большие языковые модели (LLMs), предлагаемые Llama 3.1, представляют собой набор предобученных генеративных моделей с настройкой на инструкции, включая размеры 8B, 70B и 405B (вход/выход текста). Модели текста с настройкой на инструкции Llama 3.1 (8B, 70B, 405B) оптимизированы для многоязычных диалоговых случаев и превосходят многие доступные открытые модели чата в общепринятых отраслевых бенчмарках. Llama 3.1 предназначена для коммерческого и исследовательского использования на нескольких языках. Модели текста с настройкой на инструкции подходят для диалогов, похожих на помощников, в то время как предобученные модели могут адаптироваться к различным задачам генерации естественного языка. Модели Llama 3.1 также поддерживают использование их вывода для улучшения других моделей, включая генерацию синтетических данных и уточнение. Llama 3.1 является саморегрессионной языковой моделью, использующей оптимизированную архитектуру трансформеров. Настроенные версии используют контролируемое дообучение (SFT) и обучение с подкреплением с человеческой обратной связью (RLHF), чтобы соответствовать предпочтениям людей в отношении полезности и безопасности."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "Обновленная версия Meta Llama 3.1 70B Instruct, включающая расширенную длину контекста до 128K, многоязычность и улучшенные способности вывода. Многоязычные большие языковые модели (LLMs), предлагаемые Llama 3.1, представляют собой набор предобученных, настроенных на инструкции генеративных моделей, включая размеры 8B, 70B и 405B (ввод/вывод текста). Настроенные на инструкции текстовые модели (8B, 70B, 405B) оптимизированы для многоязычных диалоговых случаев и превосходят многие доступные открытые модели чата в общих отраслевых бенчмарках. Llama 3.1 предназначена для коммерческого и исследовательского использования на нескольких языках. Настроенные на инструкции текстовые модели подходят для диалогов, похожих на помощника, в то время как предобученные модели могут адаптироваться к различным задачам генерации естественного языка. Модели Llama 3.1 также поддерживают использование вывода своих моделей для улучшения других моделей, включая генерацию синтетических данных и уточнение. Llama 3.1 — это саморегрессионная языковая модель, использующая оптимизированную архитектуру трансформеров. Настроенные версии используют контролируемую донастройку (SFT) и обучение с подкреплением с человеческой обратной связью (RLHF), чтобы соответствовать человеческим предпочтениям по полезности и безопасности."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "Обновленная версия Meta Llama 3.1 8B Instruct, включающая расширенную длину контекста до 128K, многоязычность и улучшенные способности вывода. Многоязычные большие языковые модели (LLMs), предлагаемые Llama 3.1, представляют собой набор предобученных, настроенных на инструкции генеративных моделей, включая размеры 8B, 70B и 405B (ввод/вывод текста). Настроенные на инструкции текстовые модели (8B, 70B, 405B) оптимизированы для многоязычных диалоговых случаев и превосходят многие доступные открытые модели чата в общих отраслевых бенчмарках. Llama 3.1 предназначена для коммерческого и исследовательского использования на нескольких языках. Настроенные на инструкции текстовые модели подходят для диалогов, похожих на помощника, в то время как предобученные модели могут адаптироваться к различным задачам генерации естественного языка. Модели Llama 3.1 также поддерживают использование вывода своих моделей для улучшения других моделей, включая генерацию синтетических данных и уточнение. Llama 3.1 — это саморегрессионная языковая модель, использующая оптимизированную архитектуру трансформеров. Настроенные версии используют контролируемую донастройку (SFT) и обучение с подкреплением с человеческой обратной связью (RLHF), чтобы соответствовать человеческим предпочтениям по полезности и безопасности."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 — это открытая большая языковая модель (LLM), ориентированная на разработчиков, исследователей и предприятия, предназначенная для помощи в создании, экспериментировании и ответственном масштабировании их идей по генеративному ИИ. В качестве части базовой системы для инноваций глобального сообщества она идеально подходит для создания контента, диалогового ИИ, понимания языка, НИОКР и корпоративных приложений."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 — это открытая большая языковая модель (LLM), ориентированная на разработчиков, исследователей и предприятия, предназначенная для помощи в создании, экспериментировании и ответственном масштабировании их идей по генеративному ИИ. В качестве части базовой системы для инноваций глобального сообщества она идеально подходит для устройств с ограниченными вычислительными мощностями и ресурсами, а также для более быстрого времени обучения."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B — это новая быстрая и легкая модель от Microsoft AI, производительность которой близка к 10-кратной производительности существующих открытых моделей."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B — это передовая модель Wizard от Microsoft, демонстрирующая исключительно конкурентоспособные результаты."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V — это новое поколение мультимодальной большой модели от OpenBMB, обладающее выдающимися возможностями OCR и мультимодального понимания, поддерживающее широкий спектр приложений."
+ },
+ "mistral": {
+ "description": "Mistral — это 7B модель, выпущенная Mistral AI, подходящая для разнообразных языковых задач."
+ },
+ "mistral-large": {
+ "description": "Mixtral Large — это флагманская модель от Mistral, объединяющая возможности генерации кода, математики и вывода, поддерживающая контекстное окно 128k."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) — это продвинутая модель языка (LLM) с современными способностями рассуждения, знаний и кодирования."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large — это флагманская большая модель, хорошо подходящая для многоязычных задач, сложного вывода и генерации кода, идеальный выбор для высококлассных приложений."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo, разработанный в сотрудничестве между Mistral AI и NVIDIA, является высокоэффективной 12B моделью."
+ },
+ "mistral-small": {
+ "description": "Mistral Small может использоваться для любых языковых задач, требующих высокой эффективности и низкой задержки."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small — это экономически эффективный, быстрый и надежный вариант для таких случаев, как перевод, резюме и анализ настроений."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct известен своей высокой производительностью и подходит для множества языковых задач."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B — это модель с настройкой по запросу, предлагающая оптимизированные ответы на задачи."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 обеспечивает эффективные вычислительные возможности и понимание естественного языка, подходящие для широкого спектра приложений."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) — это супер большая языковая модель, поддерживающая крайне высокие требования к обработке."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B — это предобученная модель разреженных смешанных экспертов, предназначенная для универсальных текстовых задач."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct — это высокопроизводительная модель стандартов отрасли, оптимизированная для скорости и поддержки длинного контекста."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo — это модель с 7.3B параметрами, поддерживающая несколько языков и высокопроизводительное программирование."
+ },
+ "mixtral": {
+ "description": "Mixtral — это экспертная модель от Mistral AI, обладающая открытыми весами и поддерживающая генерацию кода и понимание языка."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B предлагает высокую отказоустойчивость параллельной обработки, подходящей для сложных задач."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral — это экспертная модель от Mistral AI, обладающая открытыми весами и поддерживающая генерацию кода и понимание языка."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K — это модель с возможностями обработки сверхдлинного контекста, подходящая для генерации очень длинных текстов, удовлетворяющая требованиям сложных задач генерации, способная обрабатывать до 128 000 токенов, идеально подходящая для научных исследований, академических и крупных документальных приложений."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K предлагает возможности обработки контекста средней длины, способная обрабатывать 32 768 токенов, особенно подходит для генерации различных длинных документов и сложных диалогов, применяется в создании контента, генерации отчетов и диалоговых систем."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K специально разработан для генерации коротких текстов, обладая высокой производительностью обработки, способный обрабатывать 8 192 токена, идеально подходит для кратких диалогов, стенографирования и быстрой генерации контента."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B — это обновленная версия Nous Hermes 2, содержащая последние внутренние разработанные наборы данных."
+ },
+ "o1-mini": {
+ "description": "o1-mini — это быстрое и экономичное модель вывода, разработанная для программирования, математики и научных приложений. Модель имеет контекст 128K и срок знания до октября 2023 года."
+ },
+ "o1-preview": {
+ "description": "o1 — это новая модель вывода от OpenAI, подходящая для сложных задач, требующих обширных общих знаний. Модель имеет контекст 128K и срок знания до октября 2023 года."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba — это языковая модель Mamba 2, сосредоточенная на генерации кода, обеспечивающая мощную поддержку для сложных задач по коду и выводу."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B — это компактная, но высокопроизводительная модель, хорошо подходящая для пакетной обработки и простых задач, таких как классификация и генерация текста, обладающая хорошими возможностями вывода."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo — это 12B модель, разработанная в сотрудничестве с Nvidia, обеспечивающая выдающиеся возможности вывода и кодирования, легко интегрируемая и заменяемая."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B — это более крупная экспертная модель, сосредоточенная на сложных задачах, предлагающая выдающиеся возможности вывода и более высокую пропускную способность."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B — это разреженная экспертная модель, использующая несколько параметров для повышения скорости вывода, подходит для обработки многоязычных и кодовых задач."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o — это динамическая модель, которая обновляется в реальном времени, чтобы оставаться актуальной. Она сочетает в себе мощные возможности понимания и генерации языка, подходящие для масштабных приложений, включая обслуживание клиентов, образование и техническую поддержку."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini — это последняя модель от OpenAI, выпущенная после GPT-4 Omni, поддерживающая ввод изображений и текста с выводом текста. Как их самый продвинутый компактный модель, она значительно дешевле других недавних передовых моделей и более чем на 60% дешевле GPT-3.5 Turbo. Она сохраняет передовой уровень интеллекта при значительном соотношении цена-качество. GPT-4o mini набрала 82% в тесте MMLU и в настоящее время занимает более высокое место по предпочтениям в чате, чем GPT-4."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini — это быстрое и экономичное модель вывода, разработанная для программирования, математики и научных приложений. Модель имеет контекст 128K и срок знания до октября 2023 года."
+ },
+ "openai/o1-preview": {
+ "description": "o1 — это новая модель вывода от OpenAI, подходящая для сложных задач, требующих обширных общих знаний. Модель имеет контекст 128K и срок знания до октября 2023 года."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B — это открытая языковая модель, оптимизированная с помощью стратегии \"C-RLFT (условное обучение с подкреплением)\"."
+ },
+ "openrouter/auto": {
+ "description": "В зависимости от длины контекста, темы и сложности ваш запрос будет отправлен в Llama 3 70B Instruct, Claude 3.5 Sonnet (саморегулирующийся) или GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 — это легковесная открытая модель, выпущенная Microsoft, подходящая для эффективной интеграции и масштабного вывода знаний."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 — это легковесная открытая модель, выпущенная Microsoft, подходящая для эффективной интеграции и масштабного вывода знаний."
+ },
+ "pixtral-12b-2409": {
+ "description": "Модель Pixtral демонстрирует мощные способности в задачах графиков и понимания изображений, вопросов и ответов по документам, многомодального вывода и соблюдения инструкций, способная обрабатывать изображения в естественном разрешении и соотношении сторон, а также обрабатывать произвольное количество изображений в контекстном окне длиной до 128K токенов."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "Модель кода Tongyi Qwen."
+ },
+ "qwen-long": {
+ "description": "Qwen — это сверхмасштабная языковая модель, поддерживающая длинный контекст текста и диалоговые функции на основе длинных документов и нескольких документов."
+ },
+ "qwen-math-plus-latest": {
+ "description": "Математическая модель Tongyi Qwen, специально разработанная для решения математических задач."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "Математическая модель Tongyi Qwen, специально разработанная для решения математических задач."
+ },
+ "qwen-max-latest": {
+ "description": "Модель языка Tongyi Qwen с уровнем масштабирования в триллионы, поддерживающая ввод на различных языках, включая китайский и английский, является API моделью, лежащей в основе продукта Tongyi Qwen 2.5."
+ },
+ "qwen-plus-latest": {
+ "description": "Улучшенная версия модели языка Tongyi Qwen, поддерживающая ввод на различных языках, включая китайский и английский."
+ },
+ "qwen-turbo-latest": {
+ "description": "Модель языка Tongyi Qwen, поддерживающая ввод на различных языках, включая китайский и английский."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Qwen VL поддерживает гибкие способы взаимодействия, включая многократные изображения, многократные вопросы и ответы, а также творческие способности."
+ },
+ "qwen-vl-max": {
+ "description": "Qwen — это сверхмасштабная визуально-языковая модель. По сравнению с улучшенной версией, еще больше улучшены способности визуального вывода и соблюдения инструкций, обеспечивая более высокий уровень визуального восприятия и понимания."
+ },
+ "qwen-vl-plus": {
+ "description": "Qwen — это улучшенная версия крупномасштабной визуально-языковой модели. Существенно улучшена способность распознавания деталей и текстов, поддерживает изображения с разрешением более миллиона пикселей и произвольным соотношением сторон."
+ },
+ "qwen-vl-v1": {
+ "description": "Инициализированная языковой моделью Qwen-7B, добавлена модель изображения, предобученная модель с разрешением входного изображения 448."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 — это новая серия крупных языковых моделей с более сильными возможностями понимания и генерации."
+ },
+ "qwen2": {
+ "description": "Qwen2 — это новое поколение крупномасштабной языковой модели от Alibaba, обеспечивающее отличные результаты для разнообразных приложений."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "Модель Tongyi Qwen 2.5 с открытым исходным кодом объемом 14B."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "Модель Tongyi Qwen 2.5 с открытым исходным кодом объемом 32B."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "Модель Tongyi Qwen 2.5 с открытым исходным кодом объемом 72B."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "Модель Tongyi Qwen 2.5 с открытым исходным кодом объемом 7B."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "Открытая версия модели кода Tongyi Qwen."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "Открытая версия модели кода Tongyi Qwen."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Модель Qwen-Math с мощными способностями решения математических задач."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Модель Qwen-Math с мощными способностями решения математических задач."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Модель Qwen-Math с мощными способностями решения математических задач."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 — это новое поколение крупномасштабной языковой модели от Alibaba, обеспечивающее отличные результаты для разнообразных приложений."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 — это новое поколение крупномасштабной языковой модели от Alibaba, обеспечивающее отличные результаты для разнообразных приложений."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 — это новое поколение крупномасштабной языковой модели от Alibaba, обеспечивающее отличные результаты для разнообразных приложений."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini — это компактная LLM, производительность которой превосходит GPT-3.5, обладая мощными многоязычными возможностями, поддерживает английский и корейский языки, предлагая эффективное и компактное решение."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) расширяет возможности Solar Mini, сосредоточиваясь на японском языке, при этом сохраняя высокую эффективность и выдающуюся производительность в использовании английского и корейского языков."
+ },
+ "solar-pro": {
+ "description": "Solar Pro — это высокоинтеллектуальная LLM, выпущенная Upstage, сосредоточенная на способности следовать инструкциям на одном GPU, с оценкой IFEval выше 80. В настоящее время поддерживает английский язык, официальная версия запланирована на ноябрь 2024 года, с расширением языковой поддержки и длины контекста."
+ },
+ "step-1-128k": {
+ "description": "Балансирует производительность и стоимость, подходит для общих сценариев."
+ },
+ "step-1-256k": {
+ "description": "Обладает сверхдлинной способностью обработки контекста, особенно подходит для анализа длинных документов."
+ },
+ "step-1-32k": {
+ "description": "Поддерживает диалоги средней длины, подходит для различных приложений."
+ },
+ "step-1-8k": {
+ "description": "Маленькая модель, подходящая для легковесных задач."
+ },
+ "step-1-flash": {
+ "description": "Высокоскоростная модель, подходящая для реального времени диалогов."
+ },
+ "step-1v-32k": {
+ "description": "Поддерживает визуальный ввод, улучшая мультимодальный опыт взаимодействия."
+ },
+ "step-1v-8k": {
+ "description": "Небольшая визуальная модель, подходящая для базовых задач с текстом и изображениями."
+ },
+ "step-2-16k": {
+ "description": "Поддерживает масштабные взаимодействия контекста, подходит для сложных диалоговых сценариев."
+ },
+ "taichu_llm": {
+ "description": "Модель языка TaiChu обладает выдающимися способностями к пониманию языка, а также к созданию текстов, ответам на вопросы, программированию, математическим вычислениям, логическому выводу, анализу эмоций и резюмированию текстов. Инновационно сочетает предобучение на больших данных с богатством многопоточных знаний, постоянно совершенствуя алгоритмические технологии и поглощая новые знания о словах, структуре, грамматике и семантике из огромных объемов текстовых данных, обеспечивая пользователям более удобную информацию и услуги, а также более интеллектуальный опыт."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V объединяет возможности понимания изображений, передачи знаний, логического вывода и других, демонстрируя выдающиеся результаты в области вопросов и ответов на основе текста и изображений."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) обеспечивает повышенные вычислительные возможности благодаря эффективным стратегиям и архитектуре модели."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) подходит для детализированных командных задач, обеспечивая отличные возможности обработки языка."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 — это языковая модель, предоставляемая Microsoft AI, которая особенно хорошо проявляет себя в сложных диалогах, многоязычных задачах, выводе и интеллектуальных помощниках."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 — это языковая модель, предоставляемая Microsoft AI, которая особенно хорошо проявляет себя в сложных диалогах, многоязычных задачах, выводе и интеллектуальных помощниках."
+ },
+ "yi-large": {
+ "description": "Совершенно новая модель с триллионом параметров, обеспечивающая выдающиеся возможности для вопросов и ответов, а также генерации текста."
+ },
+ "yi-large-fc": {
+ "description": "На основе модели yi-large поддерживает и усиливает возможности вызова инструментов, подходит для различных бизнес-сценариев, требующих создания агентов или рабочих процессов."
+ },
+ "yi-large-preview": {
+ "description": "Начальная версия, рекомендуется использовать yi-large (новую версию)."
+ },
+ "yi-large-rag": {
+ "description": "Высококлассный сервис на основе модели yi-large, объединяющий технологии поиска и генерации для предоставления точных ответов и услуг по поиску информации в реальном времени."
+ },
+ "yi-large-turbo": {
+ "description": "Высокая стоимость и выдающаяся производительность. Балансировка высокой точности на основе производительности, скорости вывода и затрат."
+ },
+ "yi-medium": {
+ "description": "Модель среднего размера с улучшенной настройкой, сбалансированная по возможностям и стоимости. Глубокая оптимизация способности следовать указаниям."
+ },
+ "yi-medium-200k": {
+ "description": "200K сверхдлинное окно контекста, обеспечивающее глубокое понимание и генерацию длинных текстов."
+ },
+ "yi-spark": {
+ "description": "Маленькая и мощная, легковесная и быстрая модель. Обеспечивает улучшенные математические вычисления и возможности написания кода."
+ },
+ "yi-vision": {
+ "description": "Модель для сложных визуальных задач, обеспечивающая высокую производительность в понимании и анализе изображений."
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/plugin.json b/DigitalHumanWeb/locales/ru-RU/plugin.json
new file mode 100644
index 0000000..7c8bbde
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Аргументы вызова",
+ "function_call": "Вызов функции",
+ "off": "Отключить отладку",
+ "on": "Включить отображение информации о вызовах плагинов",
+ "payload": "полезная нагрузка",
+ "response": "Ответ",
+ "tool_call": "запрос на вызов инструмента"
+ },
+ "detailModal": {
+ "info": {
+ "description": "Описание API",
+ "name": "Название API"
+ },
+ "tabs": {
+ "info": "Описание плагина",
+ "manifest": "Манифест",
+ "settings": "Настройки"
+ },
+ "title": "Детали плагина"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Вы собираетесь удалить этот локальный плагин. После удаления его будет невозможно восстановить. Вы уверены, что хотите удалить этот плагин?",
+ "customParams": {
+ "useProxy": {
+ "label": "Использовать прокси (если есть проблемы с доступом к другому домену, попробуйте включить эту опцию перед повторной установкой)"
+ }
+ },
+ "deleteSuccess": "Плагин успешно удалён",
+ "manifest": {
+ "identifier": {
+ "desc": "Уникальный идентификатор плагина",
+ "label": "Идентификатор"
+ },
+ "mode": {
+ "local": "Локальная настройка",
+ "local-tooltip": "Локальная настройка временно недоступна",
+ "url": "Ссылка онлайн"
+ },
+ "name": {
+ "desc": "Название плагина",
+ "label": "Название",
+ "placeholder": "Поиск в поисковике"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "Автор плагина",
+ "label": "Автор"
+ },
+ "avatar": {
+ "desc": "Иконка плагина, можно использовать Emoji или URL",
+ "label": "Иконка"
+ },
+ "description": {
+ "desc": "Описание плагина",
+ "label": "Описание",
+ "placeholder": "Получение информации из поисковика"
+ },
+ "formFieldRequired": "Это поле обязательно для заполнения",
+ "homepage": {
+ "desc": "Домашняя страница плагина",
+ "label": "Домашняя страница"
+ },
+ "identifier": {
+ "desc": "Уникальный идентификатор плагина, который будет автоматически определен из манифеста",
+ "errorDuplicate": "Идентификатор уже используется другим плагином. Пожалуйста, измените его",
+ "label": "Идентификатор",
+ "pattenErrorMessage": "Допустимы только латинские буквы, цифры, дефис и подчеркивание"
+ },
+ "manifest": {
+ "desc": "{{appName}} будет устанавливать плагин по этой ссылке",
+ "label": "URL манифеста плагина",
+ "preview": "Предпросмотр манифеста",
+ "refresh": "Обновить"
+ },
+ "title": {
+ "desc": "Название плагина",
+ "label": "Название",
+ "placeholder": "Поиск в поисковике"
+ }
+ },
+ "metaConfig": "Настройка метаданных плагина",
+ "modalDesc": "После добавления пользовательского плагина его можно использовать для тестирования разработки плагина или непосредственно в сессии. См. документацию по разработке <1>здесь↗>",
+ "openai": {
+ "importUrl": "Импортировать из URL",
+ "schema": "Схема"
+ },
+ "preview": {
+ "card": "Предпросмотр плагина",
+ "desc": "Предпросмотр описания плагина",
+ "title": "Предпросмотр названия плагина"
+ },
+ "save": "Установить плагин",
+ "saveSuccess": "Настройки плагина успешно сохранены",
+ "tabs": {
+ "manifest": "Описание функций (Манифест)",
+ "meta": "Метаданные"
+ },
+ "title": {
+ "create": "Добавить плагин",
+ "edit": "Редактировать плагин"
+ },
+ "type": {
+ "lobe": "Плагин LobeChat",
+ "openai": "Плагин OpenAI"
+ },
+ "update": "Обновить",
+ "updateSuccess": "Настройки плагина успешно обновлены"
+ },
+ "error": {
+ "fetchError": "Не удалось получить доступ к манифесту по указанной ссылке. Проверьте, что ссылка действительна, и доступ к кросс-доменным запросам разрешен",
+ "installError": "Ошибка при установке плагина {{name}}",
+ "manifestInvalid": "Манифест не соответствует стандартам, результат проверки: \n\n {{error}}",
+ "noManifest": "Отсутствует манифест",
+ "openAPIInvalid": "Ошибка разбора OpenAPI: \n\n {{error}}",
+ "reinstallError": "Ошибка при обновлении плагина {{name}}",
+ "urlError": "Ссылка не возвращает данные в формате JSON. Проверьте правильность ссылки"
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Устарел",
+ "local.config": "Настроить",
+ "local.title": "Пользовательский"
+ }
+ },
+ "loading": {
+ "content": "Загрузка плагина...",
+ "plugin": "Запуск плагина..."
+ },
+ "pluginList": "Список плагинов",
+ "setting": "Настройка плагина",
+ "settings": {
+ "indexUrl": {
+ "title": "Индекс магазина",
+ "tooltip": "Редактирование в настоящее время недоступно"
+ },
+ "modalDesc": "После настройки адреса магазина плагинов можно использовать пользовательский магазин",
+ "title": "Настройки магазина плагинов"
+ },
+ "showInPortal": "Просмотрите подробности в рабочей области",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Вы собираетесь удалить этот плагин. После удаления его настройки будут утрачены. Вы уверены, что хотите продолжить?",
+ "detail": "Подробнее",
+ "install": "Установить",
+ "manifest": "Редактировать манифест",
+ "settings": "Настройки",
+ "uninstall": "Удалить"
+ },
+ "communityPlugin": "Плагин сообщества",
+ "customPlugin": "Пользовательский плагин",
+ "empty": "Плагины не установлены",
+ "installAllPlugins": "Установить все",
+ "networkError": "Не удалось подключиться к магазину плагинов. Пожалуйста, проверьте сетевое соединение и попробуйте ещё раз",
+ "placeholder": "Введите название плагина, описание или ключевое слово...",
+ "releasedAt": "Опубликован {{createdAt}}",
+ "tabs": {
+ "all": "Все",
+ "installed": "Установленные"
+ },
+ "title": "Магазин плагинов"
+ },
+ "unknownPlugin": "Неизвестный плагин"
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/portal.json b/DigitalHumanWeb/locales/ru-RU/portal.json
new file mode 100644
index 0000000..5ffeb14
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Артефакты",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Часть",
+ "file": "Файл"
+ }
+ },
+ "Plugins": "Плагины",
+ "actions": {
+ "genAiMessage": "Создать сообщение помощника",
+ "summary": "Сводка",
+ "summaryTooltip": "Сводка текущего содержимого"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Код",
+ "preview": "Предварительный просмотр"
+ },
+ "svg": {
+ "copyAsImage": "Скопировать как изображение",
+ "copyFail": "Не удалось скопировать, причина ошибки: {{error}}",
+ "copySuccess": "Изображение успешно скопировано",
+ "download": {
+ "png": "Скачать как PNG",
+ "svg": "Скачать как SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "Список текущих артефактов пуст. Пожалуйста, используйте плагины во время сеанса и затем просмотрите.",
+ "emptyKnowledgeList": "Текущий список знаний пуст. Пожалуйста, откройте базу знаний по мере необходимости в разговоре, прежде чем просматривать.",
+ "files": "файлы",
+ "messageDetail": "Детали сообщения",
+ "title": "Расширенное окно"
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/providers.json b/DigitalHumanWeb/locales/ru-RU/providers.json
new file mode 100644
index 0000000..c707f7e
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI — это платформа AI-моделей и услуг, запущенная компанией 360, предлагающая множество передовых моделей обработки естественного языка, включая 360GPT2 Pro, 360GPT Pro, 360GPT Turbo и 360GPT Turbo Responsibility 8K. Эти модели сочетают в себе масштабные параметры и мультимодальные возможности, широко применяются в генерации текста, семантическом понимании, диалоговых системах и генерации кода. Благодаря гибкой ценовой политике 360 AI удовлетворяет разнообразные потребности пользователей, поддерживает интеграцию разработчиков и способствует инновациям и развитию интеллектуальных приложений."
+ },
+ "anthropic": {
+ "description": "Anthropic — это компания, сосредоточенная на исследованиях и разработке искусственного интеллекта, предлагающая ряд передовых языковых моделей, таких как Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus и Claude 3 Haiku. Эти модели достигают идеального баланса между интеллектом, скоростью и стоимостью, подходя для различных сценариев применения, от корпоративных рабочих нагрузок до быстрого реагирования. Claude 3.5 Sonnet, как их последняя модель, показала отличные результаты в нескольких оценках, сохраняя при этом высокую стоимость-эффективность."
+ },
+ "azure": {
+ "description": "Azure предлагает множество передовых AI-моделей, включая GPT-3.5 и новейшую серию GPT-4, поддерживающих различные типы данных и сложные задачи, с акцентом на безопасность, надежность и устойчивые AI-решения."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligent — это компания, сосредоточенная на разработке больших моделей искусственного интеллекта, чьи модели показывают выдающиеся результаты в области китайских задач, таких как знаниевые энциклопедии, обработка длинных текстов и генерация контента, превосходя зарубежные модели. Baichuan Intelligent также обладает передовыми мультимодальными возможностями и показала отличные результаты в нескольких авторитетных оценках. Их модели включают Baichuan 4, Baichuan 3 Turbo и Baichuan 3 Turbo 128k, оптимизированные для различных сценариев применения, предлагая высокоэффективные решения."
+ },
+ "bedrock": {
+ "description": "Bedrock — это сервис, предоставляемый Amazon AWS, сосредоточенный на предоставлении предприятиям передовых AI-языковых и визуальных моделей. Его семейство моделей включает серию Claude от Anthropic, серию Llama 3.1 от Meta и другие, охватывающие широкий спектр от легковесных до высокопроизводительных решений, поддерживающих текстовую генерацию, диалоги, обработку изображений и другие задачи, подходящие для предприятий различного масштаба и потребностей."
+ },
+ "deepseek": {
+ "description": "DeepSeek — это компания, сосредоточенная на исследованиях и применении технологий искусственного интеллекта, ее последняя модель DeepSeek-V2.5 объединяет возможности общего диалога и обработки кода, достигнув значительных улучшений в области согласования с человеческими предпочтениями, написания текстов и выполнения инструкций."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI — это ведущий поставщик высококлассных языковых моделей, сосредоточенный на вызовах функций и мультимодальной обработке. Их последняя модель Firefunction V2 основана на Llama-3 и оптимизирована для вызовов функций, диалогов и выполнения инструкций. Модель визуального языка FireLLaVA-13B поддерживает смешанный ввод изображений и текста. Другие заметные модели включают серию Llama и серию Mixtral, предлагая эффективную поддержку многоязычных инструкций и генерации."
+ },
+ "github": {
+ "description": "С помощью моделей GitHub разработчики могут стать инженерами ИИ и создавать с использованием ведущих моделей ИИ в отрасли."
+ },
+ "google": {
+ "description": "Серия Gemini от Google является самой передовой и универсальной AI-моделью, разработанной Google DeepMind, специально созданной для мультимодальной обработки, поддерживающей бесшовное понимание и обработку текста, кода, изображений, аудио и видео. Подходит для различных сред, от дата-центров до мобильных устройств, значительно повышая эффективность и универсальность AI-моделей."
+ },
+ "groq": {
+ "description": "Инженерный движок LPU от Groq показал выдающиеся результаты в последних независимых бенчмарках больших языковых моделей (LLM), переопределяя стандарты AI-решений благодаря своей удивительной скорости и эффективности. Groq представляет собой образец мгновенной скорости вывода, демонстрируя хорошие результаты в облачных развертываниях."
+ },
+ "minimax": {
+ "description": "MiniMax — это компания по разработке универсального искусственного интеллекта, основанная в 2021 году, стремящаяся к совместному созданию интеллекта с пользователями. MiniMax самостоятельно разработала универсальные большие модели различных модальностей, включая текстовые модели с триллионом параметров, модели речи и модели изображений. Также были запущены приложения, такие как Conch AI."
+ },
+ "mistral": {
+ "description": "Mistral предлагает передовые универсальные, специализированные и исследовательские модели, широко применяемые в сложном выводе, многоязычных задачах, генерации кода и других областях. Через интерфейсы вызова функций пользователи могут интегрировать пользовательские функции для реализации конкретных приложений."
+ },
+ "moonshot": {
+ "description": "Moonshot — это открытая платформа, запущенная Beijing Dark Side Technology Co., Ltd., предлагающая различные модели обработки естественного языка, охватывающие широкий спектр областей применения, включая, но не ограничиваясь, создание контента, академические исследования, интеллектуальные рекомендации, медицинскую диагностику и т. д., поддерживающая обработку длинных текстов и сложные задачи генерации."
+ },
+ "novita": {
+ "description": "Novita AI — это платформа, предлагающая API-сервисы для различных больших языковых моделей и генерации изображений AI, гибкая, надежная и экономически эффективная. Она поддерживает новейшие открытые модели, такие как Llama3, Mistral и предоставляет комплексные, удобные для пользователя и автоматически масштабируемые API-решения для разработки генеративных AI-приложений, подходящие для быстрого роста AI-стартапов."
+ },
+ "ollama": {
+ "description": "Модели, предлагаемые Ollama, охватывают широкий спектр областей, включая генерацию кода, математические вычисления, многоязыковую обработку и диалоговое взаимодействие, поддерживая разнообразные потребности в развертывании на уровне предприятий и локализации."
+ },
+ "openai": {
+ "description": "OpenAI является ведущим мировым исследовательским институтом в области искусственного интеллекта, чьи модели, такие как серия GPT, продвигают границы обработки естественного языка. OpenAI стремится изменить множество отраслей с помощью инновационных и эффективных AI-решений. Их продукты обладают выдающимися характеристиками и экономичностью, широко используются в исследованиях, бизнесе и инновационных приложениях."
+ },
+ "openrouter": {
+ "description": "OpenRouter — это сервисная платформа, предлагающая интерфейсы для различных передовых больших моделей, поддерживающая OpenAI, Anthropic, LLaMA и другие, подходящая для разнообразных потребностей в разработке и применении. Пользователи могут гибко выбирать оптимальные модели и цены в зависимости от своих потребностей, что способствует улучшению AI-опыта."
+ },
+ "perplexity": {
+ "description": "Perplexity — это ведущий поставщик моделей генерации диалогов, предлагающий множество передовых моделей Llama 3.1, поддерживающих онлайн и оффлайн приложения, особенно подходящих для сложных задач обработки естественного языка."
+ },
+ "qwen": {
+ "description": "Qwen — это сверхбольшая языковая модель, разработанная Alibaba Cloud, обладающая мощными возможностями понимания и генерации естественного языка. Она может отвечать на различные вопросы, создавать текстовый контент, выражать мнения и писать код, играя важную роль в различных областях."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow стремится ускорить AGI, чтобы принести пользу человечеству, повышая эффективность масштабного AI с помощью простого и экономичного стека GenAI."
+ },
+ "spark": {
+ "description": "Стартап iFlytek Starfire предоставляет мощные AI-возможности в различных областях и языках, используя передовые технологии обработки естественного языка для создания инновационных приложений, подходящих для умных устройств, умного здравоохранения, умных финансов и других вертикальных сценариев."
+ },
+ "stepfun": {
+ "description": "StepFun — это большая модель, обладающая передовыми мультимодальными и сложными выводными возможностями, поддерживающая понимание сверхдлинных текстов и мощные функции автономного поиска."
+ },
+ "taichu": {
+ "description": "Новая генерация мультимодальных больших моделей, разработанная Институтом автоматизации Китайской академии наук и Институтом искусственного интеллекта Уханя, поддерживает многораундные вопросы и ответы, создание текстов, генерацию изображений, 3D-понимание, анализ сигналов и другие комплексные задачи, обладая более сильными когнитивными, понимательными и творческими способностями, предлагая новый опыт взаимодействия."
+ },
+ "togetherai": {
+ "description": "Together AI стремится достичь передовых результатов с помощью инновационных AI-моделей, предлагая широкий спектр возможностей для настройки, включая поддержку быстрого масштабирования и интуитивно понятные процессы развертывания, чтобы удовлетворить различные потребности бизнеса."
+ },
+ "upstage": {
+ "description": "Upstage сосредоточен на разработке AI-моделей для различных бизнес-потребностей, включая Solar LLM и документальный AI, с целью достижения искусственного общего интеллекта (AGI). Создавайте простые диалоговые агенты через Chat API и поддерживайте вызовы функций, переводы, встраивания и приложения в конкретных областях."
+ },
+ "zeroone": {
+ "description": "01.AI сосредоточен на технологиях искусственного интеллекта 2.0, активно продвигая инновации и применение \"человек + искусственный интеллект\", используя мощные модели и передовые AI-технологии для повышения производительности человека и реализации технологического потенциала."
+ },
+ "zhipu": {
+ "description": "Zhipu AI предлагает открытую платформу для мультимодальных и языковых моделей, поддерживающую широкий спектр AI-приложений, включая обработку текста, понимание изображений и помощь в программировании."
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/ragEval.json b/DigitalHumanWeb/locales/ru-RU/ragEval.json
new file mode 100644
index 0000000..34dedfc
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Создать",
+ "description": {
+ "placeholder": "Описание набора данных (необязательно)"
+ },
+ "name": {
+ "placeholder": "Название набора данных",
+ "required": "Пожалуйста, укажите название набора данных"
+ },
+ "title": "Добавить набор данных"
+ },
+ "dataset": {
+ "addNewButton": "Создать набор данных",
+ "emptyGuide": "Текущий набор данных пуст, пожалуйста, создайте новый набор данных.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Импортировать данные"
+ },
+ "columns": {
+ "actions": "Действия",
+ "ideal": {
+ "title": "Ожидаемый ответ"
+ },
+ "question": {
+ "title": "Вопрос"
+ },
+ "referenceFiles": {
+ "title": "Справочные файлы"
+ }
+ },
+ "notSelected": "Пожалуйста, выберите набор данных слева",
+ "title": "Детали набора данных"
+ },
+ "title": "Набор данных"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Создать",
+ "datasetId": {
+ "placeholder": "Пожалуйста, выберите ваш набор данных для оценки",
+ "required": "Пожалуйста, выберите набор данных для оценки"
+ },
+ "description": {
+ "placeholder": "Описание задачи оценки (необязательно)"
+ },
+ "name": {
+ "placeholder": "Название задачи оценки",
+ "required": "Пожалуйста, укажите название задачи оценки"
+ },
+ "title": "Добавить задачу оценки"
+ },
+ "addNewButton": "Создать оценку",
+ "emptyGuide": "Текущая задача оценки пуста, начните создание оценки.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Проверить статус",
+ "confirmDelete": "Вы уверены, что хотите удалить эту оценку?",
+ "confirmRun": "Вы уверены, что хотите запустить? После запуска задача оценки будет выполняться асинхронно в фоновом режиме, закрытие страницы не повлияет на выполнение асинхронной задачи.",
+ "downloadRecords": "Скачать оценки",
+ "retry": "Повторить",
+ "run": "Запустить",
+ "title": "Действия"
+ },
+ "datasetId": {
+ "title": "Набор данных"
+ },
+ "name": {
+ "title": "Название задачи оценки"
+ },
+ "records": {
+ "title": "Количество записей оценки"
+ },
+ "referenceFiles": {
+ "title": "Справочные файлы"
+ },
+ "status": {
+ "error": "Ошибка выполнения",
+ "pending": "Ожидание выполнения",
+ "processing": "В процессе выполнения",
+ "success": "Выполнение успешно",
+ "title": "Статус"
+ }
+ },
+ "title": "Список задач оценки"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/setting.json b/DigitalHumanWeb/locales/ru-RU/setting.json
new file mode 100644
index 0000000..bcffd77
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "О нас"
+ },
+ "agentTab": {
+ "chat": "Предпочтения чата",
+ "meta": "Информация об ассистенте",
+ "modal": "Настройки модели",
+ "plugin": "Настройки плагина",
+ "prompt": "Настройки роли",
+ "tts": "Сервис озвучивания текста"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "Выбрав отправку телеметрических данных, вы можете помочь нам улучшить общий пользовательский опыт {{appName}}",
+ "title": "Отправка анонимных данных использования"
+ },
+ "title": "Аналитика"
+ },
+ "danger": {
+ "clear": {
+ "action": "Очистить сейчас",
+ "confirm": "Вы уверены, что хотите очистить все чаты?",
+ "desc": "Это действие приведет к удалению всех данных сеанса, включая помощника, файлы, сообщения, плагины и прочее.",
+ "success": "Все сообщения сеанса были очищены",
+ "title": "Очистка всех сообщений сеанса"
+ },
+ "reset": {
+ "action": "Сбросить сейчас",
+ "confirm": "Вы уверены, что хотите сбросить все настройки?",
+ "currentVersion": "Текущая версия",
+ "desc": "Сброс всех параметров настройки до значений по умолчанию",
+ "success": "Все настройки были успешно сброшены",
+ "title": "Сброс всех настроек"
+ }
+ },
+ "header": {
+ "desc": "Настройки предпочтений и моделей.",
+ "global": "Глобальные настройки",
+ "session": "Настройки сеанса",
+ "sessionDesc": "Настройки персонажа и предпочтения сессии.",
+ "sessionWithName": "Настройки сеанса · {{name}}",
+ "title": "Настройки"
+ },
+ "llm": {
+ "aesGcm": "Ваши ключи и адреса агентов будут зашифрованы с использованием алгоритма шифрования <1>AES-GCM1>",
+ "apiKey": {
+ "desc": "Введите ваш {{name}} ключ API",
+ "placeholder": "{{name}} ключ API",
+ "title": "Ключ API"
+ },
+ "checker": {
+ "button": "Проверить",
+ "desc": "Проверьте правильность заполнения ключа API и адреса прокси",
+ "pass": "Проверка пройдена",
+ "title": "Проверка доступности"
+ },
+ "customModelCards": {
+ "addNew": "Создать и добавить модель {{id}}",
+ "config": "Настроить модель",
+ "confirmDelete": "Вы уверены, что хотите удалить эту пользовательскую модель? Действие нельзя отменить, будьте осторожны.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Поле, фактически запрашиваемое в Azure OpenAI",
+ "placeholder": "Введите название развертывания модели в Azure",
+ "title": "Название развертывания модели"
+ },
+ "displayName": {
+ "placeholder": "Введите отображаемое название модели, например, ChatGPT, GPT-4 и т. д.",
+ "title": "Отображаемое название модели"
+ },
+ "files": {
+ "extra": "Текущая реализация загрузки файлов является лишь временным решением и предназначена только для самостоятельного тестирования. Полная возможность загрузки файлов будет реализована позже",
+ "title": "Поддержка загрузки файлов"
+ },
+ "functionCall": {
+ "extra": "Эта конфигурация только активирует возможность вызова функций в приложении, поддержка вызова функций полностью зависит от самого модели, пожалуйста, протестируйте доступность вызова функций этой модели самостоятельно",
+ "title": "Вызов функций"
+ },
+ "id": {
+ "extra": "Будет отображаться как метка модели",
+ "placeholder": "Введите идентификатор модели, например, gpt-4-turbo-preview или claude-2.1",
+ "title": "Идентификатор модели"
+ },
+ "modalTitle": "Настройка пользовательской модели",
+ "tokens": {
+ "title": "Максимальное количество токенов",
+ "unlimited": "неограниченный"
+ },
+ "vision": {
+ "extra": "Эта конфигурация только активирует возможность загрузки изображений в приложении, поддержка распознавания полностью зависит от самой модели, пожалуйста, протестируйте доступность визуального распознавания этой модели самостоятельно",
+ "title": "Распознавание изображений"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "Режим запроса с клиента позволяет инициировать запрос сеанса непосредственно из браузера, что улучшает скорость ответа",
+ "title": "Использовать режим запроса с клиента"
+ },
+ "fetcher": {
+ "fetch": "Получить список моделей",
+ "fetching": "Идет получение списка моделей...",
+ "latestTime": "Последнее обновление: {{time}}",
+ "noLatestTime": "Список пока не получен"
+ },
+ "helpDoc": "Руководство по настройке",
+ "modelList": {
+ "desc": "Выберите модель для отображения в сеансе, выбранная модель будет отображаться в списке моделей",
+ "placeholder": "Выберите модель из списка",
+ "title": "Список моделей",
+ "total": "Всего доступно {{count}} моделей"
+ },
+ "proxyUrl": {
+ "desc": "За исключением адреса по умолчанию, должен включать http(s)://",
+ "title": "Адрес прокси API"
+ },
+ "waitingForMore": "Больше моделей доступно в <1>плане подключения1>, ожидайте"
+ },
+ "plugin": {
+ "addTooltip": "Добавить настраиваемый плагин",
+ "clearDeprecated": "Удалить устаревшие плагины",
+ "empty": "Установленных плагинов нет. Посетите <1>Магазин плагинов1>, чтобы найти новые",
+ "installStatus": {
+ "deprecated": "Удален"
+ },
+ "settings": {
+ "hint": "Пожалуйста, внесите следующие настройки согласно описанию",
+ "title": "Настройки плагина {{id}}",
+ "tooltip": "Настройки плагина"
+ },
+ "store": "Магазин плагинов"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Аватар"
+ },
+ "backgroundColor": {
+ "title": "Цвет фона"
+ },
+ "description": {
+ "placeholder": "Введите описание помощника",
+ "title": "Описание помощника"
+ },
+ "name": {
+ "placeholder": "Введите имя помощника",
+ "title": "Имя"
+ },
+ "prompt": {
+ "placeholder": "Введите подсказку для роли",
+ "title": "Подсказка роли"
+ },
+ "tag": {
+ "placeholder": "Введите тег",
+ "title": "Тег"
+ },
+ "title": "Информация о помощнике"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "При превышении этого количества сообщений будет автоматически создана тема",
+ "title": "Порог создания темы"
+ },
+ "chatStyleType": {
+ "title": "Стиль чата",
+ "type": {
+ "chat": "Режим беседы",
+ "docs": "Режим документа"
+ }
+ },
+ "compressThreshold": {
+ "desc": "При превышении количества некомпрессированных сообщений этого значения будет выполнено сжатие",
+ "title": "Порог сжатия истории сообщений"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Автоматическое создание темы во время беседы, работает только во временных темах",
+ "title": "Автоматическое создание темы"
+ },
+ "enableCompressThreshold": {
+ "title": "Включить сжатие истории сообщений"
+ },
+ "enableHistoryCount": {
+ "alias": "Без ограничений",
+ "limited": "Содержит только {{number}} сообщений",
+ "setlimited": "Установить ограничение на количество использованных сообщений",
+ "title": "Ограничение истории сообщений",
+ "unlimited": "Без ограничения истории сообщений"
+ },
+ "historyCount": {
+ "desc": "Количество сообщений, передаваемых с каждым запросом",
+ "title": "Количество сообщений в истории"
+ },
+ "inputTemplate": {
+ "desc": "Последнее сообщение пользователя будет использовано в этом шаблоне",
+ "placeholder": "Шаблон ввода {{text}} будет заменен на реальные данные",
+ "title": "Шаблон ввода пользователя"
+ },
+ "title": "Настройки чата"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Включить ограничение максимального количества токенов"
+ },
+ "frequencyPenalty": {
+ "desc": "Чем выше значение, тем меньше вероятность повторения слов",
+ "title": "Штраф за повторение"
+ },
+ "maxTokens": {
+ "desc": "Максимальное количество токенов для одного взаимодействия",
+ "title": "Максимальное количество токенов"
+ },
+ "model": {
+ "desc": "{{provider}} модель",
+ "title": "Модель"
+ },
+ "presencePenalty": {
+ "desc": "Чем выше значение, тем больше вероятность перехода на новые темы",
+ "title": "Штраф за однообразие"
+ },
+ "temperature": {
+ "desc": "Чем выше значение, тем более непредсказуемым будет ответ",
+ "title": "Непредсказуемость",
+ "titleWithValue": "Непредсказуемость {{value}}"
+ },
+ "title": "Настройки модели",
+ "topP": {
+ "desc": "Похоже на непредсказуемость, но не изменяется вместе с параметром непредсказуемости",
+ "title": "Верхний процент P"
+ }
+ },
+ "settingPlugin": {
+ "title": "Список плагинов"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "Администратор включил шифрованный доступ",
+ "placeholder": "Введите код доступа",
+ "title": "Код доступа"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Вход выполнен",
+ "title": "Информация об аккаунте"
+ },
+ "signin": {
+ "action": "Войти",
+ "desc": "Войдите через SSO, чтобы разблокировать приложение",
+ "title": "Вход в аккаунт"
+ },
+ "signout": {
+ "action": "Выйти",
+ "confirm": "Подтвердить выход?",
+ "success": "Вы успешно вышли из системы"
+ }
+ },
+ "title": "Системные настройки"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "Модель распознавания речи OpenAI",
+ "title": "OpenAI",
+ "ttsModel": "Модель синтеза речи OpenAI"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Если отключено, будут отображаться только голоса для текущего языка",
+ "title": "Показать все языковые голоса"
+ },
+ "stt": "Настройки распознавания речи",
+ "sttAutoStop": {
+ "desc": "Если отключено, распознавание речи не остановится автоматически, необходимо будет нажать кнопку остановки вручную",
+ "title": "Автоматическая остановка распознавания речи"
+ },
+ "sttLocale": {
+ "desc": "Язык для ввода речи, этот параметр может улучшить точность распознавания",
+ "title": "Язык распознавания речи"
+ },
+ "sttService": {
+ "desc": "В браузере используется встроенная служба распознавания речи",
+ "title": "Служба распознавания речи"
+ },
+ "title": "Настройки распознавания и синтеза речи",
+ "tts": "Настройки синтеза речи",
+ "ttsService": {
+ "desc": "Если используется служба синтеза речи OpenAI, убедитесь, что служба моделей OpenAI активирована",
+ "title": "Служба синтеза речи"
+ },
+ "voice": {
+ "desc": "Выберите голос для вашего помощника, разные службы синтеза речи поддерживают различные голоса",
+ "preview": "Прослушать голос",
+ "title": "Голос синтезатора речи"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Аватар"
+ },
+ "fontSize": {
+ "desc": "Размер шрифта для чата",
+ "marks": {
+ "normal": "нормальный"
+ },
+ "title": "Размер шрифта"
+ },
+ "lang": {
+ "autoMode": "Автоматический режим",
+ "title": "Язык"
+ },
+ "neutralColor": {
+ "desc": "Выбор нейтральных оттенков для разных цветовых предпочтений",
+ "title": "Нейтральный цвет"
+ },
+ "primaryColor": {
+ "desc": "Выбор основного цвета темы",
+ "title": "Основной цвет"
+ },
+ "themeMode": {
+ "auto": "Автоматически",
+ "dark": "Темная",
+ "light": "Светлая",
+ "title": "Режим темы"
+ },
+ "title": "Настройки темы"
+ },
+ "submitAgentModal": {
+ "button": "Отправить агента",
+ "identifier": "Идентификатор агента",
+ "metaMiss": "Пожалуйста, заполните информацию о помощнике перед отправкой. Необходимо указать имя, описание и теги",
+ "placeholder": "Введите уникальный идентификатор агента, например, 'web-development'",
+ "tooltips": "Поделиться в магазине агентов"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Добавьте имя для удобства идентификации",
+ "placeholder": "Введите имя устройства",
+ "title": "Имя устройства"
+ },
+ "title": "Информация об устройстве",
+ "unknownBrowser": "Неизвестный браузер",
+ "unknownOS": "Неизвестная система"
+ },
+ "warning": {
+ "tip": "После длительного общественного тестирования синхронизация WebRTC может не надежно удовлетворять общие потребности в синхронизации данных. Пожалуйста, <1>разверните собственный сигнальный сервер1> перед использованием."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC будет использовать это имя для создания канала синхронизации, убедитесь, что имя канала уникально",
+ "placeholder": "Введите имя канала синхронизации",
+ "shuffle": "Сгенерировать случайно",
+ "title": "Имя канала синхронизации"
+ },
+ "channelPassword": {
+ "desc": "Добавьте пароль для обеспечения конфиденциальности канала, только устройства с правильным паролем могут присоединиться к каналу",
+ "placeholder": "Введите пароль канала синхронизации",
+ "title": "Пароль канала синхронизации"
+ },
+ "desc": "Реальное время, точка-точка передачи данных, устройства должны быть онлайн одновременно для синхронизации",
+ "enabled": {
+ "invalid": "Пожалуйста, введите адрес сигнального сервера и имя канала синхронизации перед включением.",
+ "title": "Включить синхронизацию"
+ },
+ "signaling": {
+ "desc": "WebRTC будет использовать этот адрес для синхронизации",
+ "placeholder": "Введите адрес сигнального сервера",
+ "title": "Сигнальный сервер"
+ },
+ "title": "WebRTC синхронизация"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Модель генерации метаданных агента",
+ "modelDesc": "Модель, используемая для генерации имени агента, описания, аватара и меток",
+ "title": "Автоматическое создание информации об агенте"
+ },
+ "queryRewrite": {
+ "label": "Модель переформулирования вопросов",
+ "modelDesc": "Модель, предназначенная для оптимизации вопросов пользователей",
+ "title": "База знаний"
+ },
+ "title": "Системный агент",
+ "topic": {
+ "label": "Модель именования тем",
+ "modelDesc": "Модель, используемая для автоматического переименования тем",
+ "title": "Автоматическое именование тем"
+ },
+ "translation": {
+ "label": "Модель перевода",
+ "modelDesc": "Модель, используемая для перевода",
+ "title": "Настройки помощника по переводу"
+ }
+ },
+ "tab": {
+ "about": "О нас",
+ "agent": "Помощник по умолчанию",
+ "common": "Общие настройки",
+ "experiment": "Эксперимент",
+ "llm": "Языковая модель",
+ "sync": "Синхронизация с облаком",
+ "system-agent": "Системный агент",
+ "tts": "Голосовые услуги"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Встроенные инструменты"
+ },
+ "disabled": "Текущая модель не поддерживает вызов функций и не может использовать плагины",
+ "plugins": {
+ "enabled": "Активировано {{num}}",
+ "groupName": "Плагины",
+ "noEnabled": "Активированные плагины отсутствуют",
+ "store": "Магазин плагинов"
+ },
+ "title": "Дополнительные инструменты"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/tool.json b/DigitalHumanWeb/locales/ru-RU/tool.json
new file mode 100644
index 0000000..94b30e6
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Автогенерация",
+ "downloading": "Ссылка на изображение, созданное DALL·E3, действительна только в течение 1 часа. Идет кэширование изображения локально...",
+ "generate": "Создать",
+ "generating": "Создание...",
+ "images": "Изображения:",
+ "prompt": "подсказка"
+ }
+}
diff --git a/DigitalHumanWeb/locales/ru-RU/welcome.json b/DigitalHumanWeb/locales/ru-RU/welcome.json
new file mode 100644
index 0000000..20921f0
--- /dev/null
+++ b/DigitalHumanWeb/locales/ru-RU/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Импорт конфига",
+ "market": "Посетить рынок",
+ "start": "Начать"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Заменить",
+ "title": "Рекомендации новых агентов:"
+ },
+ "defaultMessage": "Я ваш личный интеллектуальный помощник {{appName}}. Чем я могу вам помочь сейчас?\nЕсли вам нужен более профессиональный или индивидуальный помощник, нажмите `+`, чтобы создать кастомного помощника.",
+ "defaultMessageWithoutCreate": "Я ваш личный интеллектуальный помощник {{appName}}. Чем я могу вам помочь сейчас?",
+ "qa": {
+ "q01": "Что такое LobeHub?",
+ "q02": "Что такое {{appName}}?",
+ "q03": "Есть ли у {{appName}} поддержка сообщества?",
+ "q04": "Какие функции поддерживает {{appName}}?",
+ "q05": "Как развернуть и использовать {{appName}}?",
+ "q06": "Какова цена {{appName}}?",
+ "q07": "Является ли {{appName}} бесплатным?",
+ "q08": "Существует ли облачная версия?",
+ "q09": "Поддерживает ли {{appName}} локальные языковые модели?",
+ "q10": "Поддерживает ли {{appName}} распознавание и генерацию изображений?",
+ "q11": "Поддерживает ли {{appName}} синтез речи и распознавание речи?",
+ "q12": "Поддерживает ли {{appName}} систему плагинов?",
+ "q13": "Есть ли у {{appName}} собственный рынок для получения GPT?",
+ "q14": "Поддерживает ли {{appName}} несколько поставщиков AI услуг?",
+ "q15": "Что делать, если я столкнулся с проблемой при использовании?"
+ },
+ "questions": {
+ "moreBtn": "Узнать больше",
+ "title": "Часто задаваемые вопросы:"
+ },
+ "welcome": {
+ "afternoon": "Добрый день",
+ "morning": "Доброе утро",
+ "night": "Добрый вечер",
+ "noon": "Добрый день"
+ }
+ },
+ "header": "Привет",
+ "pickAgent": "Или выберите один из следующих шаблонов помощника",
+ "skip": "Пропустить",
+ "slogan": {
+ "desc1": "Раскройте силу своего мозга и разожгите свой творческий потенциал. Ваш умный помощник всегда под рукой.",
+ "desc2": "Создайте своего первого помощника и приступим~",
+ "title": "Используйте свой мозг более продуктивно"
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/auth.json b/DigitalHumanWeb/locales/tr-TR/auth.json
new file mode 100644
index 0000000..ba16da2
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Giriş Yap",
+ "loginOrSignup": "Giriş Yap / Kayıt Ol",
+ "profile": "Profil",
+ "security": "Güvenlik",
+ "signout": "Çıkış Yap",
+ "signup": "Kaydol"
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/chat.json b/DigitalHumanWeb/locales/tr-TR/chat.json
new file mode 100644
index 0000000..4fdb187
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Model Değiştir"
+ },
+ "agentDefaultMessage": "Merhaba, ben **{{name}}**. Hemen benimle konuşmaya başlayabilir veya [Asistan Ayarları]({{url}}) sayfasına giderek bilgilerimi güncelleyebilirsin.",
+ "agentDefaultMessageWithSystemRole": "Merhaba, Ben **{{name}}**, {{systemRole}}. Hemen sohbet etmeye başlayalım!",
+ "agentDefaultMessageWithoutEdit": "Merhaba, ben **{{name}}**. Konuşmaya başlayalım!",
+ "agents": "Asistan",
+ "artifact": {
+ "generating": "Üretiliyor",
+ "thinking": "Düşünülüyor",
+ "thought": "Düşünce Süreci",
+ "unknownTitle": "İsimsiz Eser"
+ },
+ "backToBottom": "En alta git",
+ "chatList": {
+ "longMessageDetail": "Detayları görüntüle"
+ },
+ "clearCurrentMessages": "Mevcut oturum mesajlarını temizle",
+ "confirmClearCurrentMessages": "Mevcut oturum mesajlarını temizlemek üzeresiniz. Temizlendikten sonra geri alınamazlar. Lütfen eyleminizi onaylayın.",
+ "confirmRemoveSessionItemAlert": "Bu asistanı silmek üzeresiniz. Silindikten sonra geri alınamaz. Lütfen eyleminizi onaylayın.",
+ "confirmRemoveSessionSuccess": "Oturum başarıyla kaldırıldı",
+ "defaultAgent": "Varsayılan Asistan",
+ "defaultList": "Varsayılan Liste",
+ "defaultSession": "Varsayılan Asistan",
+ "duplicateSession": {
+ "loading": "Kopyalanıyor...",
+ "success": "Kopyalama başarılı",
+ "title": "{{title}} Kopyası"
+ },
+ "duplicateTitle": "{{title}} Kopya",
+ "emptyAgent": "Asistan yok",
+ "historyRange": "Geçmiş Aralığı",
+ "inbox": {
+ "desc": "Beyin fırtınasını başlatın ve yaratıcı düşünmeye başlayın. Sanal asistanınız burada, her konuda sizinle iletişim kurmak için hazır.",
+ "title": "Sohbet Et"
+ },
+ "input": {
+ "addAi": "Bir AI mesajı ekleyin",
+ "addUser": "Bir kullanıcı mesajı ekleyin",
+ "more": "Daha fazla",
+ "send": "Gönder",
+ "sendWithCmdEnter": "{{meta}} + Enter tuşuna basarak gönder",
+ "sendWithEnter": "Enter tuşuna basarak gönder",
+ "stop": "Dur",
+ "warp": "Satır atla"
+ },
+ "knowledgeBase": {
+ "all": "Tüm İçerik",
+ "allFiles": "Tüm Dosyalar",
+ "allKnowledgeBases": "Tüm Bilgi Tabanları",
+ "disabled": "Mevcut dağıtım modu bilgi tabanı sohbetini desteklemiyor. Kullanmak istiyorsanız, lütfen sunucu veritabanı dağıtımına geçin veya {{cloud}} hizmetini kullanın.",
+ "library": {
+ "action": {
+ "add": "Ekle",
+ "detail": "Detay",
+ "remove": "Kaldır"
+ },
+ "title": "Dosya/Bilgi Tabanı"
+ },
+ "relativeFilesOrKnowledgeBases": "İlişkili Dosyalar/Bilgi Tabanları",
+ "title": "Bilgi Tabanı",
+ "uploadGuide": "Yüklenen dosyaları 'Bilgi Tabanı' içinde görebilirsiniz.",
+ "viewMore": "Daha Fazla Gör"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Sil ve Yeniden Oluştur",
+ "regenerate": "Yeniden Oluştur"
+ },
+ "newAgent": "Yeni Asistan",
+ "pin": "Pin",
+ "pinOff": "Unpin",
+ "rag": {
+ "referenceChunks": "Referans Parçaları",
+ "userQuery": {
+ "actions": {
+ "delete": "Sorguyu Sil",
+ "regenerate": "Sorguyu Yeniden Oluştur"
+ }
+ }
+ },
+ "regenerate": "Tekrarla",
+ "roleAndArchive": "Rol ve Arşiv",
+ "searchAgentPlaceholder": "Arama Asistanı...",
+ "sendPlaceholder": "Mesajınızı buraya yazın...",
+ "sessionGroup": {
+ "config": "Grup Yönetimi",
+ "confirmRemoveGroupAlert": "Bu grup silinecek, silindikten sonra bu grubun yardımcıları varsayılan listeye taşınacak, işleminizi onaylıyor musunuz?",
+ "createAgentSuccess": "Yardımcı oluşturuldu",
+ "createGroup": "Yeni Grup Ekle",
+ "createSuccess": "Oluşturma Başarılı",
+ "creatingAgent": "Yardımcı oluşturuluyor...",
+ "inputPlaceholder": "Grup adını girin...",
+ "moveGroup": "Gruba Taşı",
+ "newGroup": "Yeni Grup",
+ "rename": "Grup Adını Değiştir",
+ "renameSuccess": "Yeniden Adlandırma Başarılı",
+ "sortSuccess": "Yeniden sıralama başarılı",
+ "sorting": "Grup sıralaması güncelleniyor...",
+ "tooLong": "Grup adı 1-20 karakter arasında olmalıdır"
+ },
+ "shareModal": {
+ "download": "Ekran Görüntüsünü İndir",
+ "imageType": "Format",
+ "screenshot": "Ekran Görüntüsü",
+ "settings": "Ayarlar",
+ "shareToShareGPT": "ShareGPT Link Oluştur",
+ "withBackground": "Arka Plan",
+ "withFooter": "Footer",
+ "withPluginInfo": "Plugin Bilgileri",
+ "withSystemRole": "Asistan Rol"
+ },
+ "stt": {
+ "action": "Ses Girişi",
+ "loading": "Tanımlanıyor...",
+ "prettifying": "İyileştiriliyor..."
+ },
+ "temp": "Geçici",
+ "tokenDetails": {
+ "chats": "Sohbetler",
+ "rest": "Kalan",
+ "systemRole": "Sistem Rolü",
+ "title": "Bağlam Detayları",
+ "tools": "Araçlar",
+ "total": "Toplam",
+ "used": "Kullanılan"
+ },
+ "tokenTag": {
+ "overload": "Limit Aşıldı",
+ "remained": "Kalan",
+ "used": "Kullanılan"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Akıllı Yeniden Adlandırma",
+ "duplicate": "Kopya Oluştur",
+ "export": "Konuyu Dışa Aktar"
+ },
+ "checkOpenNewTopic": "Yeni bir konu açılsın mı?",
+ "checkSaveCurrentMessages": "Mevcut sohbeti konu olarak kaydetmek istiyor musunuz?",
+ "confirmRemoveAll": "Tüm konuları silmek üzeresiniz. Bir kere silindiğinde, geri alınamazlar. Lütfen dikkatli bir şekilde devam edin.",
+ "confirmRemoveTopic": "Bu konuyu silmek üzeresiniz. Bir kere silindiğinde, geri alınamaz. Lütfen dikkatli bir şekilde devam edin.",
+ "confirmRemoveUnstarred": "Yıldızlanmamış konuları silmek üzeresiniz. Bir kere silindiğinde, geri alınamazlar. Lütfen dikkatli bir şekilde devam edin.",
+ "defaultTitle": "Konu",
+ "duplicateLoading": "Konu kopyalanıyor...",
+ "duplicateSuccess": "Konu başarıyla kopyalandı",
+ "guide": {
+ "desc": "Mevcut oturumu geçmiş konu olarak kaydetmek ve yeni bir oturum başlatmak için sol taraftaki düğmeye tıklayın",
+ "title": "Konu Listesi"
+ },
+ "openNewTopic": "Yeni Konu",
+ "removeAll": "Tüm Konuları Sil",
+ "removeUnstarred": "Tüm Yıldızlanmamış Konuları Sil",
+ "saveCurrentMessages": "Mevcut oturumu konu olarak kaydet",
+ "searchPlaceholder": "Konuları ara...",
+ "title": "Konular"
+ },
+ "translate": {
+ "action": "Çeviri",
+ "clear": "Çeviriyi Temizle"
+ },
+ "tts": {
+ "action": "Text-to-Speech",
+ "clear": "Clear Speech"
+ },
+ "updateAgent": "Asistan Bilgilerini Güncelle",
+ "upload": {
+ "action": {
+ "fileUpload": "Dosya Yükle",
+ "folderUpload": "Klasör Yükle",
+ "imageDisabled": "Mevcut model görsel tanımayı desteklemiyor, lütfen modeli değiştirin ve tekrar deneyin",
+ "imageUpload": "Görüntü Yükle",
+ "tooltip": "Yükle"
+ },
+ "clientMode": {
+ "actionFiletip": "Dosya Yükle",
+ "actionTooltip": "Yükle",
+ "disabled": "Mevcut model görsel tanımayı ve dosya analizini desteklemiyor, lütfen modeli değiştirin ve tekrar deneyin"
+ },
+ "preview": {
+ "prepareTasks": "Parçaları Hazırlıyor...",
+ "status": {
+ "pending": "Yüklemeye Hazırlanıyor...",
+ "processing": "Dosya İşleniyor..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/clerk.json b/DigitalHumanWeb/locales/tr-TR/clerk.json
new file mode 100644
index 0000000..da41b58
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Geri",
+ "badge__default": "Varsayılan",
+ "badge__otherImpersonatorDevice": "Diğer kişiyi taklit eden cihaz",
+ "badge__primary": "Birincil",
+ "badge__requiresAction": "Eylem gerektirir",
+ "badge__thisDevice": "Bu cihaz",
+ "badge__unverified": "Doğrulanmamış",
+ "badge__userDevice": "Kullanıcı cihazı",
+ "badge__you": "Sen",
+ "createOrganization": {
+ "formButtonSubmit": "Organizasyon oluştur",
+ "invitePage": {
+ "formButtonReset": "Atla"
+ },
+ "title": "Organizasyon oluştur"
+ },
+ "dates": {
+ "lastDay": "Dün {{ date | timeString('tr-TR') }}",
+ "next6Days": "{{ date | weekday('tr-TR','long') }} {{ date | timeString('tr-TR') }}",
+ "nextDay": "Yarın {{ date | timeString('tr-TR') }}",
+ "numeric": "{{ date | numeric('tr-TR') }}",
+ "previous6Days": "Geçen {{ date | weekday('tr-TR','long') }} {{ date | timeString('tr-TR') }}",
+ "sameDay": "Bugün {{ date | timeString('tr-TR') }}"
+ },
+ "dividerText": "veya",
+ "footerActionLink__useAnotherMethod": "Başka bir yöntem kullan",
+ "footerPageLink__help": "Yardım",
+ "footerPageLink__privacy": "Gizlilik",
+ "footerPageLink__terms": "Şartlar",
+ "formButtonPrimary": "Devam",
+ "formButtonPrimary__verify": "Doğrula",
+ "formFieldAction__forgotPassword": "Şifremi unuttum?",
+ "formFieldError__matchingPasswords": "Şifreler eşleşiyor.",
+ "formFieldError__notMatchingPasswords": "Şifreler eşleşmiyor.",
+ "formFieldError__verificationLinkExpired": "Doğrulama bağlantısı süresi doldu. Lütfen yeni bir bağlantı isteyin.",
+ "formFieldHintText__optional": "İsteğe bağlı",
+ "formFieldHintText__slug": "Slug, benzersiz olması gereken insan tarafından okunabilir bir kimliktir. Genellikle URL'lerde kullanılır.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Hesabı sil",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "ornek@email.com, ornek2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "benim-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Bu alan için otomatik davetleri etkinleştir",
+ "formFieldLabel__backupCode": "Yedek kod",
+ "formFieldLabel__confirmDeletion": "Onay",
+ "formFieldLabel__confirmPassword": "Şifreyi onayla",
+ "formFieldLabel__currentPassword": "Mevcut şifre",
+ "formFieldLabel__emailAddress": "E-posta adresi",
+ "formFieldLabel__emailAddress_username": "E-posta adresi veya kullanıcı adı",
+ "formFieldLabel__emailAddresses": "E-posta adresleri",
+ "formFieldLabel__firstName": "Ad",
+ "formFieldLabel__lastName": "Soyad",
+ "formFieldLabel__newPassword": "Yeni şifre",
+ "formFieldLabel__organizationDomain": "Alan adı",
+ "formFieldLabel__organizationDomainDeletePending": "Bekleyen davetiyeleri ve önerileri sil",
+ "formFieldLabel__organizationDomainEmailAddress": "Doğrulama e-posta adresi",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Bir doğrulama kodu almak ve bu alan adını doğrulamak için, bu alan adı altındaki bir e-posta adresini girin.",
+ "formFieldLabel__organizationName": "Ad",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Geçiş anahtarı adı",
+ "formFieldLabel__password": "Şifre",
+ "formFieldLabel__phoneNumber": "Telefon numarası",
+ "formFieldLabel__role": "Rol",
+ "formFieldLabel__signOutOfOtherSessions": "Diğer cihazlardan çıkış yap",
+ "formFieldLabel__username": "Kullanıcı adı",
+ "impersonationFab": {
+ "action__signOut": "Çıkış yap",
+ "title": "{{identifier}} olarak oturum açıldı"
+ },
+ "locale": "tr-TR",
+ "maintenanceMode": "Şu anda bakımdayız, ancak endişelenmeyin, birkaç dakikadan fazla sürmemeli.",
+ "membershipRole__admin": "Yönetici",
+ "membershipRole__basicMember": "Üye",
+ "membershipRole__guestMember": "Misafir",
+ "organizationList": {
+ "action__createOrganization": "Organizasyon Oluştur",
+ "action__invitationAccept": "Katıl",
+ "action__suggestionsAccept": "Katılma İsteği",
+ "createOrganization": "Organizasyon Oluştur",
+ "invitationAcceptedLabel": "Katıldı",
+ "subtitle": "{{applicationName}}'e devam etmek için",
+ "suggestionsAcceptedLabel": "Onay Bekliyor",
+ "title": "Bir hesap seç",
+ "titleWithoutPersonal": "Bir organizasyon seç"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Otomatik Davetler",
+ "badge__automaticSuggestion": "Otomatik Öneriler",
+ "badge__manualInvitation": "Otomatik Kayıt Yok",
+ "badge__unverified": "Doğrulanmamış",
+ "createDomainPage": {
+ "subtitle": "Doğrulamak için alan adını ekleyin. Bu alan adına sahip e-posta adreslerine sahip kullanıcılar organizasyona otomatik olarak katılabilir veya katılma isteğinde bulunabilir.",
+ "title": "Alan Adı Ekle"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "Davetler gönderilemedi. Aşağıdaki e-posta adresleri için zaten bekleyen davetler var: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Davet Gönder",
+ "selectDropdown__role": "Rol Seç",
+ "subtitle": "Bir veya daha fazla e-posta adresi girin veya yapıştırın, boşluklar veya virgüllerle ayırın.",
+ "successMessage": "Davetler başarıyla gönderildi",
+ "title": "Yeni Üyeleri Davet Et"
+ },
+ "membersPage": {
+ "action__invite": "Davet Et",
+ "activeMembersTab": {
+ "menuAction__remove": "Üyeyi Kaldır",
+ "tableHeader__actions": "İşlemler",
+ "tableHeader__joined": "Katılma Tarihi",
+ "tableHeader__role": "Rol",
+ "tableHeader__user": "Kullanıcı"
+ },
+ "detailsTitle__emptyRow": "Gösterilecek üye yok",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Bir e-posta alanınızı organizasyonunuzla bağlayarak kullanıcıları davet edin. Eşleşen bir e-posta alanıyla kaydolan herkes, istediği zaman organizasyona katılabilecektir.",
+ "headerTitle": "Otomatik Davetler",
+ "primaryButton": "Doğrulanmış Alanları Yönet"
+ },
+ "table__emptyRow": "Gösterilecek davet yok"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Daveti Geri Çek",
+ "tableHeader__invited": "Davet Edildi"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "Eşleşen bir e-posta alanıyla kaydolan kullanıcılar, organizasyonunuza katılma isteğinde bulunmak için bir öneri görebilecekler.",
+ "headerTitle": "Otomatik Öneriler",
+ "primaryButton": "Doğrulanmış Alanları Yönet"
+ },
+ "menuAction__approve": "Onayla",
+ "menuAction__reject": "Reddet",
+ "tableHeader__requested": "İstek",
+ "table__emptyRow": "Gösterilecek istek yok"
+ },
+ "start": {
+ "headerTitle__invitations": "Davetler",
+ "headerTitle__members": "Üyeler",
+ "headerTitle__requests": "İstekler"
+ }
+ },
+ "navbar": {
+ "description": "Organizasyonunuzu yönetin.",
+ "general": "Genel",
+ "members": "Üyeler",
+ "title": "Organizasyon"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "\"{{organizationName}}\" yazarak devam edin.",
+ "messageLine1": "Bu organizasyonu silmek istediğinizden emin misiniz?",
+ "messageLine2": "Bu işlem kalıcı ve geri alınamaz.",
+ "successMessage": "Organizasyonu sildiniz.",
+ "title": "Organizasyonu Sil"
+ },
+ "leaveOrganization": {
+ "actionDescription": "\"{{organizationName}}\" yazarak devam edin.",
+ "messageLine1": "Bu organizasyondan ayrılmak istediğinizden emin misiniz? Bu organizasyona ve uygulamalarına erişiminiz kaybolacak.",
+ "messageLine2": "Bu işlem kalıcı ve geri alınamaz.",
+ "successMessage": "Organizasyondan ayrıldınız.",
+ "title": "Organizasyondan Ayrıl"
+ },
+ "title": "Riskli"
+ },
+ "domainSection": {
+ "menuAction__manage": "Yönet",
+ "menuAction__remove": "Sil",
+ "menuAction__verify": "Doğrula",
+ "primaryButton": "Alan Adı Ekle",
+ "subtitle": "Doğrulanmış bir e-posta alanına dayanarak kullanıcıların organizasyona otomatik olarak katılmasına veya katılma isteğinde bulunmasına izin verin.",
+ "title": "Doğrulanmış Alanlar"
+ },
+ "successMessage": "Organizasyon güncellendi.",
+ "title": "Profili Güncelle"
+ },
+ "removeDomainPage": {
+ "messageLine1": "{{domain}} e-posta alanı kaldırılacak.",
+ "messageLine2": "Bu işlemden sonra kullanıcılar organizasyona otomatik olarak katılamayacak.",
+ "successMessage": "{{domain}} kaldırıldı.",
+ "title": "Alanı Kaldır"
+ },
+ "start": {
+ "headerTitle__general": "Genel",
+ "headerTitle__members": "Üyeler",
+ "profileSection": {
+ "primaryButton": "Profili Güncelle",
+ "title": "Organizasyon Profili",
+ "uploadAction__title": "Logo Yükle"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "Bu alanı kaldırmak, davet edilen kullanıcıları etkileyecektir.",
+ "removeDomainActionLabel__remove": "Alanı Kaldır",
+ "removeDomainSubtitle": "Bu alanı doğrulanmış alanlarınızdan kaldırın.",
+ "removeDomainTitle": "Alanı Kaldır"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Kullanıcılar kaydolduğunda organizasyona otomatik davet edilir ve istedikleri zaman katılabilirler.",
+ "automaticInvitationOption__label": "Otomatik Davetler",
+ "automaticSuggestionOption__description": "Kullanıcılar bir öneri alır ve organizasyona katılmadan önce bir yönetici tarafından onaylanmalıdır.",
+ "automaticSuggestionOption__label": "Otomatik Öneriler",
+ "calloutInfoLabel": "Kayıt modunu değiştirmek sadece yeni kullanıcıları etkiler.",
+ "calloutInvitationCountLabel": "Kullanıcılara gönderilen bekleyen davetler: {{count}}",
+ "calloutSuggestionCountLabel": "Kullanıcılara gönderilen bekleyen öneriler: {{count}}",
+ "manualInvitationOption__description": "Kullanıcılar yalnızca manuel olarak organizasyona davet edilebilir.",
+ "manualInvitationOption__label": "Otomatik Kayıt Yok",
+ "subtitle": "Bu alan adından gelen kullanıcıların organizasyona nasıl katılabileceğini seçin."
+ },
+ "start": {
+ "headerTitle__danger": "Riskli",
+ "headerTitle__enrollment": "Kayıt Seçenekleri"
+ },
+ "subtitle": "{{domain}} alanı artık doğrulandı. Kayıt modunu seçerek devam edin.",
+ "title": "{{domain}} Güncelle"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "E-posta adresinize gönderilen doğrulama kodunu girin",
+ "formTitle": "Doğrulama Kodu",
+ "resendButton": "Kodu Almadınız mı? Tekrar Gönder",
+ "subtitle": "{{domainName}} alanı e-posta yoluyla doğrulanmalıdır.",
+ "subtitleVerificationCodeScreen": "{{emailAddress}} adresine bir doğrulama kodu gönderildi. Devam etmek için kodu girin.",
+ "title": "Alanı Doğrula"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Organizasyon Oluştur",
+ "action__invitationAccept": "Katıl",
+ "action__manageOrganization": "Yönet",
+ "action__suggestionsAccept": "Katılma İsteği",
+ "notSelected": "Seçili organizasyon yok",
+ "personalWorkspace": "Kişisel hesap",
+ "suggestionsAcceptedLabel": "Onay Bekliyor"
+ },
+ "paginationButton__next": "Sonraki",
+ "paginationButton__previous": "Önceki",
+ "paginationRowText__displaying": "Gösteriliyor",
+ "paginationRowText__of": "/",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Hesap ekle",
+ "action__signOutAll": "Tüm hesaplardan çıkış yap",
+ "subtitle": "Devam etmek istediğiniz hesabı seçin.",
+ "title": "Bir hesap seçin"
+ },
+ "alternativeMethods": {
+ "actionLink": "Yardım al",
+ "actionText": "Bunlardan hiçbiri yok mu?",
+ "blockButton__backupCode": "Yedek kod kullan",
+ "blockButton__emailCode": "{{identifier}} adresine e-posta kodu gönder",
+ "blockButton__emailLink": "{{identifier}} adresine e-posta bağlantısı gönder",
+ "blockButton__passkey": "Geçiş anahtarıyla giriş yap",
+ "blockButton__password": "Şifrenizle giriş yap",
+ "blockButton__phoneCode": "{{identifier}} numarasına SMS kodu gönder",
+ "blockButton__totp": "Kimlik doğrulama uygulamanızı kullanın",
+ "getHelp": {
+ "blockButton__emailSupport": "E-posta desteği",
+ "content": "Hesabınıza giriş yaparken zorluk yaşıyorsanız, bize e-posta gönderin ve en kısa sürede erişimi geri yüklemek için size yardımcı olacağız.",
+ "title": "Yardım al"
+ },
+ "subtitle": "Sorun mu yaşıyorsunuz? Giriş yapmak için bu yöntemlerden birini kullanabilirsiniz.",
+ "title": "Başka bir yöntem kullan"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Yedek kodunuz, iki adımlı kimlik doğrulama kurulumu sırasında aldığınız koddur.",
+ "title": "Yedek kodu girin"
+ },
+ "emailCode": {
+ "formTitle": "Doğrulama kodu",
+ "resendButton": "Kod almadınız mı? Tekrar gönder",
+ "subtitle": "{{applicationName}}'e devam etmek için",
+ "title": "E-postanızı kontrol edin"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Devam etmek için orijinal sekmeye dönün.",
+ "title": "Bu doğrulama bağlantısı süresi doldu"
+ },
+ "failed": {
+ "subtitle": "Devam etmek için orijinal sekmeye dönün.",
+ "title": "Bu doğrulama bağlantısı geçersiz"
+ },
+ "formSubtitle": "E-postanıza gönderilen doğrulama bağlantısını kullanın",
+ "formTitle": "Doğrulama bağlantısı",
+ "loading": {
+ "subtitle": "Yakında yönlendirileceksiniz",
+ "title": "Giriş yapılıyor..."
+ },
+ "resendButton": "Bağlantı almadınız mı? Tekrar gönder",
+ "subtitle": "{{applicationName}}'e devam etmek için",
+ "title": "E-postanızı kontrol edin",
+ "unusedTab": {
+ "title": "Bu sekmeyi kapatabilirsiniz"
+ },
+ "verified": {
+ "subtitle": "Yakında yönlendirileceksiniz",
+ "title": "Başarıyla giriş yapıldı"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Devam etmek için orijinal sekmeye dönün",
+ "subtitleNewTab": "Devam etmek için yeni açılan sekmeye dönün",
+ "titleNewTab": "Diğer sekmede oturum açıldı"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Şifre sıfırlama kodu",
+ "resendButton": "Kod almadınız mı? Tekrar gönder",
+ "subtitle": "Şifrenizi sıfırlamak için",
+ "subtitle_email": "Önce e-posta adresinize gönderilen kodu girin",
+ "subtitle_phone": "Önce telefonunuza gönderilen kodu girin",
+ "title": "Şifrenizi sıfırlayın"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Şifrenizi sıfırlayın",
+ "label__alternativeMethods": "Veya başka bir yöntemle oturum açın",
+ "title": "Şifrenizi mi unuttunuz?"
+ },
+ "noAvailableMethods": {
+ "message": "Oturum açmaya devam edilemiyor. Kullanılabilir kimlik doğrulama faktörü yok.",
+ "subtitle": "Bir hata oluştu",
+ "title": "Oturum açılamıyor"
+ },
+ "passkey": {
+ "subtitle": "Geçiş anahtarınızı kullanmak kim olduğunuzu doğrular. Cihazınız parmak izinizi, yüzünüzü veya ekran kilidinizi isteyebilir.",
+ "title": "Geçiş anahtarınızı kullanın"
+ },
+ "password": {
+ "actionLink": "Başka bir yöntem kullan",
+ "subtitle": "Hesabınıza ilişkilendirilmiş şifreyi girin",
+ "title": "Şifrenizi girin"
+ },
+ "passwordPwned": {
+ "title": "Şifre ele geçirilmiş"
+ },
+ "phoneCode": {
+ "formTitle": "Doğrulama kodu",
+ "resendButton": "Kod almadınız mı? Tekrar gönder",
+ "subtitle": "{{applicationName}}'e devam etmek için",
+ "title": "Telefonunuzu kontrol edin"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Doğrulama kodu",
+ "resendButton": "Kod almadınız mı? Tekrar gönder",
+ "subtitle": "Devam etmek için lütfen telefonunuza gönderilen doğrulama kodunu girin",
+ "title": "Telefonunuzu kontrol edin"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Şifreyi sıfırla",
+ "requiredMessage": "Güvenlik nedeniyle şifrenizi sıfırlamanız gerekmektedir.",
+ "successMessage": "Şifreniz başarıyla değiştirildi. Oturumunuz açılıyor, lütfen biraz bekleyin.",
+ "title": "Yeni şifre belirleyin"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "Şifrenizi sıfırlamadan önce kimliğinizi doğrulamamız gerekiyor."
+ },
+ "start": {
+ "actionLink": "Üye ol",
+ "actionLink__use_email": "E-posta kullan",
+ "actionLink__use_email_username": "E-posta veya kullanıcı adı kullan",
+ "actionLink__use_passkey": "Bunun yerine geçiş anahtarı kullan",
+ "actionLink__use_phone": "Telefon kullan",
+ "actionLink__use_username": "Kullanıcı adı kullan",
+ "actionText": "Hesabınız yok mu?",
+ "subtitle": "Tekrar hoş geldiniz! Devam etmek için lütfen oturum açın",
+ "title": "{{applicationName}}'de oturum açın"
+ },
+ "totpMfa": {
+ "formTitle": "Doğrulama kodu",
+ "subtitle": "Devam etmek için lütfen kimlik doğrulama uygulamanız tarafından oluşturulan doğrulama kodunu girin",
+ "title": "İki adımlı doğrulama"
+ }
+ },
+ "signInEnterPasswordTitle": "Şifrenizi girin",
+ "signUp": {
+ "continue": {
+ "actionLink": "Oturum aç",
+ "actionText": "Zaten bir hesabınız var mı?",
+ "subtitle": "Devam etmek için lütfen kalan detayları doldurun.",
+ "title": "Eksik alanları doldurun"
+ },
+ "emailCode": {
+ "formSubtitle": "E-posta adresinize gönderilen doğrulama kodunu girin",
+ "formTitle": "Doğrulama kodu",
+ "resendButton": "Kod almadınız mı? Tekrar gönder",
+ "subtitle": "E-postanıza gönderilen doğrulama kodunu girin",
+ "title": "E-postanızı doğrulayın"
+ },
+ "emailLink": {
+ "formSubtitle": "E-posta adresinize gönderilen doğrulama bağlantısını kullanın",
+ "formTitle": "Doğrulama bağlantısı",
+ "loading": {
+ "title": "Üye olunuyor..."
+ },
+ "resendButton": "Bağlantı almadınız mı? Tekrar gönder",
+ "subtitle": "{{applicationName}}'e devam etmek için",
+ "title": "E-postanızı doğrulayın",
+ "verified": {
+ "title": "Başarıyla üye oldunuz"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Devam etmek için yeni açılan sekmeye dönün",
+ "subtitleNewTab": "Devam etmek için önceki sekmeye dönün",
+ "title": "E-posta doğrulaması başarılı"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Telefon numaranıza gönderilen doğrulama kodunu girin",
+ "formTitle": "Doğrulama kodu",
+ "resendButton": "Kod almadınız mı? Tekrar gönder",
+ "subtitle": "Telefonunuza gönderilen doğrulama kodunu girin",
+ "title": "Telefonunuzu doğrulayın"
+ },
+ "start": {
+ "actionLink": "Oturum aç",
+ "actionText": "Zaten bir hesabınız var mı?",
+ "subtitle": "Hoş geldiniz! Başlamak için lütfen detayları doldurun.",
+ "title": "Hesabınızı oluşturun"
+ }
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}} ile devam et",
+ "unstable__errors": {
+ "captcha_invalid": "Güvenlik doğrulamaları başarısız olduğu için kaydolma başarısız oldu. Lütfen tekrar denemek için sayfayı yenileyin veya daha fazla yardım için destek ekibine başvurun.",
+ "captcha_unavailable": "Bot doğrulaması başarısız olduğu için kaydolma başarısız oldu. Lütfen tekrar denemek için sayfayı yenileyin veya daha fazla yardım için destek ekibine başvurun.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Bu e-posta adresi alınmış. Lütfen başka bir tane deneyin.",
+ "form_identifier_exists__phone_number": "Bu telefon numarası alınmış. Lütfen başka bir tane deneyin.",
+ "form_identifier_exists__username": "Bu kullanıcı adı alınmış. Lütfen başka bir tane deneyin.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "E-posta adresi geçerli bir e-posta adresi olmalıdır.",
+ "form_param_format_invalid__phone_number": "Telefon numarası geçerli bir uluslararası formatta olmalıdır.",
+ "form_param_max_length_exceeded__first_name": "Adınız 256 karakteri aşmamalıdır.",
+ "form_param_max_length_exceeded__last_name": "Soyadınız 256 karakteri aşmamalıdır.",
+ "form_param_max_length_exceeded__name": "Adınız 256 karakteri aşmamalıdır.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Şifreniz yeterince güçlü değil.",
+ "form_password_pwned": "Bu şifre bir veri ihlalinde ele geçirildiği için kullanılamaz; lütfen başka bir şifre deneyin.",
+ "form_password_pwned__sign_in": "Bu şifre bir veri ihlalinde ele geçirildiği için kullanılamaz; lütfen şifrenizi sıfırlayın.",
+ "form_password_size_in_bytes_exceeded": "Şifreniz izin verilen maksimum bayt sayısını aştı, lütfen kısaltın veya bazı özel karakterleri kaldırın.",
+ "form_password_validation_failed": "Yanlış Şifre",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "Son kimliğinizi silemezsiniz.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Bu cihazda zaten kayıtlı bir geçiş anahtarı var.",
+ "passkey_not_supported": "Bu cihazda geçiş anahtarları desteklenmiyor.",
+ "passkey_pa_not_supported": "Kayıt, platform kimlik doğrulayıcı gerektirir ancak cihaz bunu desteklemiyor.",
+ "passkey_registration_cancelled": "Geçiş anahtarı kaydı iptal edildi veya zaman aşımına uğradı.",
+ "passkey_retrieval_cancelled": "Geçiş anahtarı doğrulaması iptal edildi veya zaman aşımına uğradı.",
+ "passwordComplexity": {
+ "maximumLength": "{{length}} karakterden az olmalıdır",
+ "minimumLength": "{{length}} veya daha fazla karakter",
+ "requireLowercase": "bir küçük harf",
+ "requireNumbers": "bir sayı",
+ "requireSpecialCharacter": "bir özel karakter",
+ "requireUppercase": "bir büyük harf",
+ "sentencePrefix": "Şifreniz şunları içermelidir"
+ },
+ "phone_number_exists": "Bu telefon numarası alınmış. Lütfen başka bir tane deneyin.",
+ "zxcvbn": {
+ "couldBeStronger": "Şifreniz işe yarıyor, ancak daha güçlü olabilir. Daha fazla karakter eklemeyi deneyin.",
+ "goodPassword": "Şifreniz tüm gerekli gereksinimleri karşılıyor.",
+ "notEnough": "Şifreniz yeterince güçlü değil.",
+ "suggestions": {
+ "allUppercase": "Tüm harfleri büyük yapın.",
+ "anotherWord": "Daha az yaygın olan daha fazla kelime ekleyin.",
+ "associatedYears": "Sizinle ilişkili yıllardan kaçının.",
+ "capitalization": "Sadece ilk harfi büyük yapın.",
+ "dates": "Tarihlerden kaçının.",
+ "l33t": "'a' yerine '@' kullanmak gibi tahmin edilebilir harf değişimlerinden kaçının.",
+ "longerKeyboardPattern": "Daha uzun klavye desenleri kullanın ve yazma yönünü birden çok kez değiştirin.",
+ "noNeed": "Semboller, sayılar veya büyük harfler kullanmadan da güçlü şifreler oluşturabilirsiniz.",
+ "pwned": "Bu şifreyi başka bir yerde kullanıyorsanız, değiştirmelisiniz.",
+ "recentYears": "Son yıllardan kaçının.",
+ "repeated": "Tekrarlanan kelimelerden ve karakterlerden kaçının.",
+ "reverseWords": "Ortak kelimelerin ters yazımlarından kaçının.",
+ "sequences": "Ortak karakter dizilerinden kaçının.",
+ "useWords": "Birden fazla kelime kullanın, ancak ortak ifadelerden kaçının."
+ },
+ "warnings": {
+ "common": "Bu sıkça kullanılan bir şifredir.",
+ "commonNames": "Ortak isimler ve soyadları kolayca tahmin edilebilir.",
+ "dates": "Tarihler kolayca tahmin edilebilir.",
+ "extendedRepeat": "\"abcabcabc\" gibi tekrarlanan karakter desenleri kolayca tahmin edilebilir.",
+ "keyPattern": "Kısa klavye desenleri kolayca tahmin edilebilir.",
+ "namesByThemselves": "Tek başına isimler veya soyadları kolayca tahmin edilebilir.",
+ "pwned": "Şifreniz İnternet'teki bir veri ihlalinde ortaya çıktı.",
+ "recentYears": "Son yıllar kolayca tahmin edilebilir.",
+ "sequences": "\"abc\" gibi ortak karakter dizilerinden kaçının.",
+ "similarToCommon": "Bu sıkça kullanılan bir şifreye benziyor.",
+ "simpleRepeat": "\"aaa\" gibi tekrarlanan karakterler kolayca tahmin edilebilir.",
+ "straightRow": "Klavyenizdeki düz sıralar kolayca tahmin edilebilir.",
+ "topHundred": "Bu sıkça kullanılan bir şifredir.",
+ "topTen": "Bu yoğun bir şekilde kullanılan bir şifredir.",
+ "userInputs": "Kişisel veya sayfa ile ilgili veriler olmamalıdır.",
+ "wordByItself": "Tek kelimeler kolayca tahmin edilebilir."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Hesap Ekle",
+ "action__manageAccount": "Hesabı Yönet",
+ "action__signOut": "Çıkış Yap",
+ "action__signOutAll": "Tüm Hesaplardan Çıkış Yap"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Kopyalandı!",
+ "actionLabel__copy": "Tümünü Kopyala",
+ "actionLabel__download": ".txt olarak indir",
+ "actionLabel__print": "Yazdır",
+ "infoText1": "Yedek kodlar bu hesap için etkinleştirilecek.",
+ "infoText2": "Yedek kodları gizli tutun ve güvenli bir şekilde saklayın. Şüphelenirseniz yedek kodları yeniden oluşturabilirsiniz.",
+ "subtitle__codelist": "Güvenli bir şekilde saklayın ve gizli tutun.",
+ "successMessage": "Yedek kodlar artık etkin. Kimlik doğrulama cihazınıza erişiminizi kaybederseniz, hesabınıza giriş yapmak için bunlardan birini kullanabilirsiniz. Her kod yalnızca bir kez kullanılabilir.",
+ "successSubtitle": "Kimlik doğrulama cihazınıza erişiminizi kaybederseniz, hesabınıza giriş yapmak için bunlardan birini kullanabilirsiniz.",
+ "title": "Yedek kod doğrulaması ekle",
+ "title__codelist": "Yedek kodlar"
+ },
+ "connectedAccountPage": {
+ "formHint": "Hesabınızı bağlamak için bir sağlayıcı seçin.",
+ "formHint__noAccounts": "Mevcut harici hesap sağlayıcıları yok.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} bu hesaptan kaldırılacak.",
+ "messageLine2": "Bu bağlı hesabı artık kullanamayacak ve bağımlı özellikler artık çalışmayacak.",
+ "successMessage": "{{connectedAccount}} hesabınızdan kaldırıldı.",
+ "title": "Bağlı hesabı kaldır"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "Sağlayıcı hesabınıza eklendi",
+ "title": "Bağlı hesap ekle"
+ },
+ "deletePage": {
+ "actionDescription": "Devam etmek için \"Hesabı Sil\" yazın.",
+ "confirm": "Hesabı Sil",
+ "messageLine1": "Hesabınızı silmek istediğinizden emin misiniz?",
+ "messageLine2": "Bu işlem kalıcı ve geri alınamaz.",
+ "title": "Hesabı Sil"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "Bu e-posta adresine bir doğrulama kodu içeren e-posta gönderilecektir.",
+ "formSubtitle": "{{identifier}}'e gönderilen doğrulama kodunu girin",
+ "formTitle": "Doğrulama kodu",
+ "resendButton": "Kod almadınız mı? Tekrar gönder",
+ "successMessage": "E-posta {{identifier}} hesabınıza eklendi."
+ },
+ "emailLink": {
+ "formHint": "Bu e-posta adresine bir doğrulama bağlantısı içeren e-posta gönderilecektir.",
+ "formSubtitle": "{{identifier}}'e gönderilen e-postadaki doğrulama bağlantısına tıklayın",
+ "formTitle": "Doğrulama bağlantısı",
+ "resendButton": "Bağlantı almadınız mı? Tekrar gönder",
+ "successMessage": "E-posta {{identifier}} hesabınıza eklendi."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} bu hesaptan kaldırılacak.",
+ "messageLine2": "Bu e-posta adresini kullanarak artık oturum açamayacaksınız.",
+ "successMessage": "{{emailAddress}} hesabınızdan kaldırıldı.",
+ "title": "E-posta adresini kaldır"
+ },
+ "title": "E-posta adresi ekle",
+ "verifyTitle": "E-posta adresini doğrula"
+ },
+ "formButtonPrimary__add": "Ekle",
+ "formButtonPrimary__continue": "Devam",
+ "formButtonPrimary__finish": "Bitir",
+ "formButtonPrimary__remove": "Kaldır",
+ "formButtonPrimary__save": "Kaydet",
+ "formButtonReset": "İptal",
+ "mfaPage": {
+ "formHint": "Eklemek için bir yöntem seçin.",
+ "title": "İki adımlı doğrulama ekle"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Mevcut numarayı kullan",
+ "primaryButton__addPhoneNumber": "Telefon numarası ekle",
+ "removeResource": {
+ "messageLine1": "{{identifier}} artık oturum açarken doğrulama kodları almayacak.",
+ "messageLine2": "Hesabınız eskisi kadar güvenli olmayabilir. Devam etmek istediğinizden emin misiniz?",
+ "successMessage": "{{mfaPhoneCode}} için SMS kodu ile iki adımlı doğrulama kaldırıldı",
+ "title": "İki adımlı doğrulamayı kaldır"
+ },
+ "subtitle__availablePhoneNumbers": "SMS kodu iki adımlı doğrulama için mevcut bir telefon numarası seçin veya yeni bir tane ekleyin.",
+ "subtitle__unavailablePhoneNumbers": "SMS kodu iki adımlı doğrulama için mevcut telefon numaraları yok, lütfen yeni bir tane ekleyin.",
+ "successMessage1": "Oturum açarken, bu telefon numarasına gönderilen doğrulama kodunu ek bir adım olarak girmeniz gerekecek.",
+ "successMessage2": "Bu yedek kodları kaydedin ve güvenli bir yerde saklayın. Kimlik doğrulama cihazınıza erişiminizi kaybederseniz yedek kodları kullanabilirsiniz.",
+ "successTitle": "SMS kodu doğrulaması etkinleştirildi",
+ "title": "SMS kodu doğrulaması ekle"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "QR kodunu tarayabilirsiniz",
+ "buttonUnableToScan__nonPrimary": "QR kodunu tarayamıyor musunuz?",
+ "infoText__ableToScan": "Kimlik doğrulayıcı uygulamanızda yeni bir giriş yöntemi kurun ve aşağıdaki QR kodunu bağlamak için tarayın.",
+ "infoText__unableToScan": "Kimlik doğrulayıcınızda yeni bir giriş yöntemi kurun ve aşağıda verilen anahtarı girin.",
+ "inputLabel__unableToScan1": "Zaman tabanlı veya tek kullanımlık şifrelerin etkin olduğundan emin olun, ardından hesabınızı bağlamayı tamamlayın.",
+ "inputLabel__unableToScan2": "Ayrıca, kimlik doğrulayıcınız TOTP URI'leri destekliyorsa, tam URI'yi de kopyalayabilirsiniz."
+ },
+ "removeResource": {
+ "messageLine1": "Bu kimlik doğrulayıcıdan gelen doğrulama kodları artık gerekli olmayacak.",
+ "messageLine2": "Hesabınız eskisi kadar güvenli olmayabilir. Devam etmek istediğinizden emin misiniz?",
+ "successMessage": "Kimlik doğrulayıcı uygulaması ile iki adımlı doğrulama kaldırıldı.",
+ "title": "İki adımlı doğrulamayı kaldır"
+ },
+ "successMessage": "İki adımlı doğrulama şimdi etkin. Oturum açarken, hesabınıza ek bir adım olarak bu kimlik doğrulayıcıdan bir doğrulama kodu girmeniz gerekecek.",
+ "title": "Kimlik doğrulayıcı uygulaması ekle",
+ "verifySubtitle": "Kimlik doğrulayıcınız tarafından oluşturulan doğrulama kodunu girin",
+ "verifyTitle": "Doğrulama kodu"
+ },
+ "mobileButton__menu": "Menü",
+ "navbar": {
+ "account": "Profil",
+ "description": "Hesap bilgilerinizi yönetin.",
+ "security": "Güvenlik",
+ "title": "Hesap"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} bu hesaptan kaldırılacak.",
+ "title": "Geçiş anahtarını kaldır"
+ },
+ "subtitle__rename": "Geçiş anahtarının adını değiştirerek onu daha kolay bulabilirsiniz.",
+ "title__rename": "Geçiş Anahtarını Yeniden Adlandır"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "Eski şifrenizi kullanan diğer cihazlardan çıkış yapmanız önerilir.",
+ "readonly": "Şu anda şifreniz düzenlenemez çünkü yalnızca kurumsal bağlantı aracılığıyla oturum açabilirsiniz.",
+ "successMessage__set": "Şifreniz oluşturuldu.",
+ "successMessage__signOutOfOtherSessions": "Diğer cihazlardan çıkış yapıldı.",
+ "successMessage__update": "Şifreniz güncellendi.",
+ "title__set": "Şifre Oluştur",
+ "title__update": "Şifreyi Güncelle"
+ },
+ "phoneNumberPage": {
+ "infoText": "Doğrulama kodu içeren bir metin mesajı bu telefon numarasına gönderilecektir. Mesaj ve veri ücretleri uygulanabilir.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} bu hesaptan kaldırılacak.",
+ "messageLine2": "Bu telefon numarasını kullanarak artık oturum açamayacaksınız.",
+ "successMessage": "{{phoneNumber}} hesabınızdan kaldırıldı.",
+ "title": "Telefon Numarasını Kaldır"
+ },
+ "successMessage": "{{identifier}} hesabınıza eklendi.",
+ "title": "Telefon Numarası Ekle",
+ "verifySubtitle": "{{identifier}}'a gönderilen doğrulama kodunu girin",
+ "verifyTitle": "Telefon Numarasını Doğrula"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Önerilen boyut 1:1, en fazla 10MB.",
+ "imageFormDestructiveActionSubtitle": "Kaldır",
+ "imageFormSubtitle": "Yükle",
+ "imageFormTitle": "Profil resmi",
+ "readonly": "Profil bilgileriniz kurumsal bağlantı tarafından sağlanmıştır ve düzenlenemez.",
+ "successMessage": "Profiliniz güncellendi.",
+ "title": "Profili Güncelle"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Cihazdan çıkış yap",
+ "title": "Aktif cihazlar"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Yeniden dene",
+ "actionLabel__reauthorize": "Şimdi yetkilendir",
+ "destructiveActionTitle": "Kaldır",
+ "primaryButton": "Hesabı bağla",
+ "subtitle__reauthorize": "Gerekli izinler güncellendi; bu nedenle işlevsellik sınırlı olabilir. Sorun yaşamamak için bu uygulamayı yeniden yetkilendirin.",
+ "title": "Bağlı hesaplar"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Hesabı sil",
+ "title": "Hesabı sil"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "E-postayı kaldır",
+ "detailsAction__nonPrimary": "Ana e-posta olarak ayarla",
+ "detailsAction__primary": "Doğrulamayı tamamla",
+ "detailsAction__unverified": "Doğrula",
+ "primaryButton": "E-posta adresi ekle",
+ "title": "E-posta adresleri"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Kurumsal hesaplar"
+ },
+ "headerTitle__account": "Profil detayları",
+ "headerTitle__security": "Güvenlik",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Yeniden oluştur",
+ "headerTitle": "Yedek kodlar",
+ "subtitle__regenerate": "Yeni bir güvenli yedek kod seti alın. Önceki yedek kodlar silinecek ve artık kullanılamayacak.",
+ "title__regenerate": "Yedek kodları yeniden oluştur"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Varsayılan olarak ayarla",
+ "destructiveActionLabel": "Kaldır"
+ },
+ "primaryButton": "İki adımlı doğrulama ekle",
+ "title": "İki adımlı doğrulama",
+ "totp": {
+ "destructiveActionTitle": "Kaldır",
+ "headerTitle": "Kimlik doğrulama uygulaması"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Kaldır",
+ "menuAction__rename": "Yeniden adlandır",
+ "title": "Parola anahtarları"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Parola ayarla",
+ "primaryButton__updatePassword": "Parolayı güncelle",
+ "title": "Parola"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Telefon numarasını kaldır",
+ "detailsAction__nonPrimary": "Ana telefon olarak ayarla",
+ "detailsAction__primary": "Doğrulamayı tamamla",
+ "detailsAction__unverified": "Telefon numarasını doğrula",
+ "primaryButton": "Telefon numarası ekle",
+ "title": "Telefon numaraları"
+ },
+ "profileSection": {
+ "primaryButton": "Profili güncelle",
+ "title": "Profil"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Kullanıcı adı ayarla",
+ "primaryButton__updateUsername": "Kullanıcı adını güncelle",
+ "title": "Kullanıcı adı"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Cüzdanı kaldır",
+ "primaryButton": "Web3 cüzdanları",
+ "title": "Web3 cüzdanları"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Kullanıcı adınız güncellendi.",
+ "title__set": "Kullanıcı Adı Belirle",
+ "title__update": "Kullanıcı Adını Güncelle"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} bu hesaptan kaldırılacak.",
+ "messageLine2": "Bu web3 cüzdanını kullanarak artık oturum açamayacaksınız.",
+ "successMessage": "{{web3Wallet}} hesabınızdan kaldırıldı.",
+ "title": "Web3 Cüzdanını Kaldır"
+ },
+ "subtitle__availableWallets": "Hesabınıza bağlanmak için bir web3 cüzdanı seçin.",
+ "subtitle__unavailableWallets": "Uygun web3 cüzdanı bulunmamaktadır.",
+ "successMessage": "Cüzdan hesabınıza eklendi.",
+ "title": "Web3 Cüzdanı Ekle"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/common.json b/DigitalHumanWeb/locales/tr-TR/common.json
new file mode 100644
index 0000000..0c261e9
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "Hakkında",
+ "advanceSettings": "Gelişmiş Ayarlar",
+ "alert": {
+ "cloud": {
+ "action": "Ücretsiz Deneyin",
+ "desc": "Tüm kayıtlı kullanıcılarımıza {{credit}} ücretsiz hesaplama puanı sunuyoruz, karmaşık yapılandırmaya gerek yok, hemen kullanıma hazır, sınırsız konuşma geçmişi ve global bulut senkronizasyonu desteği, daha fazla gelişmiş özellikleri keşfetmeye hazır olun.",
+ "descOnMobile": "Tüm kayıtlı kullanıcılara {{credit}} ücretsiz hesaplama puanı sunuyoruz, karmaşık yapılandırmaya gerek yok, hemen kullanıma hazır.",
+ "title": "{{name}}'i Denemek İçin Hoş Geldiniz"
+ }
+ },
+ "appInitializing": "Uygulama başlatılıyor...",
+ "autoGenerate": "Otomatik Oluştur",
+ "autoGenerateTooltip": "Auto-generate agent description based on prompts",
+ "autoGenerateTooltipDisabled": "Otomatik tamamlama işlevini kullanmadan önce ipucu kelimesini girin",
+ "back": "Geri",
+ "batchDelete": "Toplu Sil",
+ "blog": "Ürün Blogu",
+ "cancel": "İptal",
+ "changelog": "Changelog",
+ "close": "Kapat",
+ "contact": "Bize Ulaşın",
+ "copy": "Kopyala",
+ "copyFail": "Kopyalama başarısız oldu",
+ "copySuccess": "Kopyalama Başarılı",
+ "dataStatistics": {
+ "messages": "Mesajlar",
+ "sessions": "Oturumlar",
+ "today": "Bugün",
+ "topics": "Konular"
+ },
+ "defaultAgent": "Varsayılan Asistan",
+ "defaultSession": "Varsayılan Asistan",
+ "delete": "Sil",
+ "document": "Belge",
+ "download": "İndir",
+ "duplicate": "Kopya oluştur",
+ "edit": "Düzenle",
+ "export": "Dışa Aktar",
+ "exportType": {
+ "agent": "Asistan Ayarlarını Dışa Aktar",
+ "agentWithMessage": "Asistan ve Mesajları Dışa Aktar",
+ "all": "Ayarları ve Asistan Verilerini Dışa Aktar",
+ "allAgent": "Tüm Asistan Verilerini Dışa Aktar",
+ "allAgentWithMessage": "Tüm Asistan ve Mesajları Dışa Aktar",
+ "globalSetting": "Ayarları Dışa Aktar"
+ },
+ "feedback": "Feedback",
+ "follow": "Bizi {{name}} üzerinde takip edin",
+ "footer": {
+ "action": {
+ "feedback": "Geri Bildirimde Bulun",
+ "star": "GitHub'da Yıldız Ekleyin"
+ },
+ "and": "ve",
+ "feedback": {
+ "action": "Geri Bildirim Paylaş",
+ "desc": "Her fikir ve öneriniz bizim için değerlidir, görüşlerinizi duymak için sabırsızlanıyoruz! LobeChat'in ürün özellikleri ve kullanıcı deneyimi hakkında geri bildirim sağlayarak bize yardımcı olun ve platformumuzu daha da geliştirmemize katkıda bulunun.",
+ "title": "GitHub'da Değerli Geri Bildiriminizi Paylaşın"
+ },
+ "later": "Daha Sonra",
+ "star": {
+ "action": "Yıldız Ekleyin",
+ "desc": "Ürünümüzü beğendiyseniz ve bizi desteklemek istiyorsanız, GitHub'da bize bir yıldız verebilir misiniz? Bu küçük jest bizim için büyük anlam taşır ve sizin için özellikler sunmamızı teşvik eder.",
+ "title": "GitHub'da Bize Yıldız Ekleyin"
+ },
+ "title": "Ürünümüzü Beğendiniz mi?"
+ },
+ "fullscreen": "Tam Ekran Modu",
+ "historyRange": "Geçmiş Aralığı",
+ "import": "İçe Aktar",
+ "importModal": {
+ "error": {
+ "desc": "Üzgünüz, veri aktarımı sırasında bir hata oluştu. Lütfen tekrar deneyin veya <1>bir sorun bildirin1>, sorunu çözmek için elimizden geleni yapacağız.",
+ "title": "Veri Aktarımı Başarısız"
+ },
+ "finish": {
+ "onlySettings": "Sistem ayarları başarıyla içe aktarıldı",
+ "start": "Başla",
+ "subTitle": "Veri başarıyla aktarıldı, {{duration}} saniye sürdü. İçe aktarma ayrıntıları aşağıdaki gibidir:",
+ "title": "Veri başarıyla aktarıldı"
+ },
+ "loading": "Veri aktarılıyor, lütfen bekleyin...",
+ "preparing": "Veri aktarımı modülü hazırlanıyor...",
+ "result": {
+ "added": "Başarıyla içe aktarıldı",
+ "errors": "İçe Aktarma Hataları",
+ "messages": "Mesajlar",
+ "sessionGroups": "Gruplar",
+ "sessions": "Asistanlar",
+ "skips": "Geç",
+ "topics": "Konular",
+ "type": "Tip"
+ },
+ "title": "Veri İçe Aktar",
+ "uploading": {
+ "desc": "Dosya şu anda büyük olduğundan, yüklenmeye çalışılıyor...",
+ "restTime": "Kalan Zaman",
+ "speed": "Yükleme Hızı"
+ }
+ },
+ "information": "Topluluk ve Bilgi",
+ "installPWA": "Tarayıcı Uygulamasını Yükle",
+ "lang": {
+ "ar": "Arapça",
+ "bg-BG": "Bulgarca",
+ "bn": "Bengalce",
+ "cs-CZ": "Çekçe",
+ "da-DK": "Danca",
+ "de-DE": "Almanca",
+ "el-GR": "Yunanca",
+ "en": "İngilizce",
+ "en-US": "İngilizce",
+ "es-ES": "İspanyolca",
+ "fi-FI": "Fince",
+ "fr-FR": "Fransızca",
+ "hi-IN": "Hintçe",
+ "hu-HU": "Macarca",
+ "id-ID": "Endonezce",
+ "it-IT": "İtalyanca",
+ "ja-JP": "Japonca",
+ "ko-KR": "Korece",
+ "nl-NL": "Felemenkçe",
+ "no-NO": "Norveççe",
+ "pl-PL": "Polonyaca",
+ "pt-BR": "Portekizce",
+ "pt-PT": "Portekizce",
+ "ro-RO": "Romence",
+ "ru-RU": "Rusça",
+ "sk-SK": "Slovakça",
+ "sr-RS": "Sırpça",
+ "sv-SE": "İsveççe",
+ "th-TH": "Tayca",
+ "tr-TR": "Türkçe",
+ "uk-UA": "Ukraynaca",
+ "vi-VN": "Vietnamca",
+ "zh": "Basitleştirilmiş Çince",
+ "zh-CN": "Basitleştirilmiş Çince",
+ "zh-TW": "Geleneksel Çince"
+ },
+ "layoutInitializing": "Başlatılıyor...",
+ "legal": "Hukuki Bildirim",
+ "loading": "Yükleniyor...",
+ "mail": {
+ "business": "İşbirliği",
+ "support": "E-posta Desteği"
+ },
+ "oauth": "SSO Girişi",
+ "officialSite": "Resmi Site",
+ "ok": "Tamam",
+ "password": "Password",
+ "pin": "Pin",
+ "pinOff": "Unpin",
+ "privacy": "Gizlilik Politikası",
+ "regenerate": "Tekrarla",
+ "rename": "Yeniden İsimlendir",
+ "reset": "Reset",
+ "retry": "Yeniden Dene",
+ "send": "Gönder",
+ "setting": "Ayarlar",
+ "share": "Paylaş",
+ "stop": "Dur",
+ "sync": {
+ "actions": {
+ "settings": "Senkronizasyon Ayarları",
+ "sync": "Hemen Senkronize Et"
+ },
+ "awareness": {
+ "current": "Mevcut Cihaz"
+ },
+ "channel": "Kanal",
+ "disabled": {
+ "actions": {
+ "enable": "Bulut Senkronizasyonunu Etkinleştir",
+ "settings": "Senkronizasyon Ayarlarını Yapılandır"
+ },
+ "desc": "Mevcut oturum verileri sadece bu tarayıcıda depolanır. Verilerinizi birden fazla cihaz arasında senkronize etmeniz gerekiyorsa bulut senkronizasyonunu yapılandırın ve etkinleştirin.",
+ "title": "Veri Senkronizasyonu Devre Dışı"
+ },
+ "enabled": {
+ "title": "Veri Senkronizasyonu"
+ },
+ "status": {
+ "connecting": "Bağlanılıyor",
+ "disabled": "Senkronizasyon Devre Dışı",
+ "ready": "Bağlandı",
+ "synced": "Senkronize Edildi",
+ "syncing": "Senkronize Ediliyor",
+ "unconnected": "Bağlantı Başarısız"
+ },
+ "title": "Senkronizasyon Durumu",
+ "unconnected": {
+ "tip": "Sinyal sunucusuna bağlantı başarısız oldu, noktadan noktaya iletişim kanalı kurulamayabilir, lütfen ağı kontrol edip tekrar deneyin"
+ }
+ },
+ "tab": {
+ "chat": "Chat",
+ "discover": "Keşfet",
+ "files": "Dosyalar",
+ "me": "ben",
+ "setting": "Ayarlar"
+ },
+ "telemetry": {
+ "allow": "İzin ver",
+ "deny": "Reddet",
+ "desc": "Anonim kullanım bilgilerinizi toplamamıza izin vererek LobeChat'i geliştirmemize ve size daha iyi bir ürün deneyimi sunmamıza yardımcı olabilirsiniz. Dilediğiniz zaman 'Ayarlar' - 'Hakkında' bölümünden devre dışı bırakabilirsiniz.",
+ "learnMore": "Daha Fazla Bilgi",
+ "title": "LobeChat'i Geliştirmemize Yardımcı Olun"
+ },
+ "temp": "Geçici",
+ "terms": "Hizmet Koşulları",
+ "updateAgent": "Asistan Bilgilerini Güncelle",
+ "upgradeVersion": {
+ "action": "Güncelle",
+ "hasNew": "Yeni güncelleme mevcut",
+ "newVersion": "Yeni sürüm mevcut: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Anonim Kullanıcı",
+ "billing": "Fatura Yönetimi",
+ "cloud": "{{name}}'i Deneyin",
+ "data": "Veri Depolama",
+ "defaultNickname": "Topluluk Kullanıcısı",
+ "discord": "Topluluk Destek",
+ "docs": "Belgeler",
+ "email": "E-posta Destek",
+ "feedback": "Geribildirim ve Öneriler",
+ "help": "Yardım Merkezi",
+ "moveGuide": "Ayarlar düğmesini buraya taşıyın",
+ "plans": "Planlar",
+ "preview": "Önizleme",
+ "profile": "Hesap Yönetimi",
+ "setting": "Uygulama Ayarları",
+ "usages": "Kullanım İstatistikleri"
+ },
+ "version": "Sürüm"
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/components.json b/DigitalHumanWeb/locales/tr-TR/components.json
new file mode 100644
index 0000000..288a593
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Dosyaları buraya sürükleyin, birden fazla resim yüklemeyi destekler.",
+ "dragFileDesc": "Resimleri ve dosyaları buraya sürükleyin, birden fazla resim ve dosya yüklemeyi destekler.",
+ "dragFileTitle": "Dosya Yükle",
+ "dragTitle": "Resim Yükle"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Bilgi tabanına ekle",
+ "addToOtherKnowledgeBase": "Diğer bilgi tabanına ekle",
+ "batchChunking": "Toplu parçalara ayırma",
+ "chunking": "Parçalara ayırma",
+ "chunkingTooltip": "Dosyayı birden fazla metin parçasına ayırıp vektörleştirdikten sonra, anlamsal arama ve dosya diyalogları için kullanılabilir",
+ "confirmDelete": "Bu dosya silinecek, silindikten sonra geri alınamaz, lütfen işleminizi onaylayın",
+ "confirmDeleteMultiFiles": "Seçilen {{count}} dosya silinecek, silindikten sonra geri alınamaz, lütfen işleminizi onaylayın",
+ "confirmRemoveFromKnowledgeBase": "Seçilen {{count}} dosya bilgi tabanından kaldırılacak, kaldırıldıktan sonra dosyalar tüm dosyalar arasında görüntülenebilir, lütfen işleminizi onaylayın",
+ "copyUrl": "Bağlantıyı kopyala",
+ "copyUrlSuccess": "Dosya adresi başarıyla kopyalandı",
+ "createChunkingTask": "Hazırlanıyor...",
+ "deleteSuccess": "Dosya başarıyla silindi",
+ "downloading": "Dosya indiriliyor...",
+ "removeFromKnowledgeBase": "Bilgi tabanından kaldır",
+ "removeFromKnowledgeBaseSuccess": "Dosya başarıyla kaldırıldı"
+ },
+ "bottom": "Artık sona geldik",
+ "config": {
+ "showFilesInKnowledgeBase": "Bilgi tabanındaki içeriği göster"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Dosya yükle",
+ "folder": "Klasör yükle",
+ "knowledgeBase": "Yeni bilgi tabanı oluştur"
+ },
+ "or": "veya",
+ "title": "Dosyayı veya klasörü buraya sürükleyin"
+ },
+ "title": {
+ "createdAt": "Oluşturulma zamanı",
+ "size": "Boyut",
+ "title": "Dosya"
+ },
+ "total": {
+ "fileCount": "Toplam {{count}} öğe",
+ "selectedCount": "Seçilen {{count}} öğe"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Metin parçaları henüz tamamen vektörleştirilmedi, bu durum anlamsal arama işlevinin kullanılamamasına neden olabilir, arama kalitesini artırmak için lütfen metin parçalarını vektörleştirin",
+ "error": "Vektörleştirme başarısız oldu",
+ "errorResult": "Vektörleştirme başarısız oldu, lütfen kontrol edip tekrar deneyin. Başarısız olma nedeni:",
+ "processing": "Metin parçaları vektörleştiriliyor, lütfen bekleyin",
+ "success": "Mevcut metin parçaları tamamen vektörleştirildi"
+ },
+ "embeddings": "Vektörleştirme",
+ "status": {
+ "error": "Parçalara ayırma başarısız oldu",
+ "errorResult": "Parçalara ayırma başarısız oldu, lütfen kontrol edip tekrar deneyin. Başarısız olma nedeni:",
+ "processing": "Parçalara ayırma işlemi devam ediyor",
+ "processingTip": "Sunucu metin parçalarını ayırıyor, sayfayı kapatmak parçalama ilerlemesini etkilemez"
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Geri dön"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Özel model, varsayılan olarak hem fonksiyon çağrısını hem de görüntü tanımayı destekler, yukarıdaki yeteneklerin kullanılabilirliğini doğrulamak için lütfen gerçek durumu kontrol edin",
+ "file": "Bu model dosya yükleme ve tanımayı destekler",
+ "functionCall": "Bu model fonksiyon çağrısını destekler",
+ "tokens": "Bu model tek bir oturumda en fazla {{tokens}} Token destekler",
+ "vision": "Bu model görüntü tanımıyı destekler"
+ },
+ "removed": "Bu model listeden çıkarıldı, seçiminizi kaldırırsanız otomatik olarak kaldırılacaktır"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "Etkinleştirilmiş model bulunmamaktadır, lütfen ayarlara giderek açın",
+ "provider": "Sağlayıcı"
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/discover.json b/DigitalHumanWeb/locales/tr-TR/discover.json
new file mode 100644
index 0000000..0896540
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Asistan Ekle",
+ "addAgentAndConverse": "Asistan Ekle ve Sohbet Et",
+ "addAgentSuccess": "Başarıyla Eklendi",
+ "conversation": {
+ "l1": "Merhaba, ben **{{name}}**. Bana her türlü soruyu sorabilirsin, elimden gelenin en iyisini yapacağım ~",
+ "l2": "İşte yeteneklerimin tanıtımı: ",
+ "l3": "Haydi sohbete başlayalım!"
+ },
+ "description": "Asistan Tanıtımı",
+ "detail": "Detaylar",
+ "list": "Asistan Listesi",
+ "more": "Daha Fazla",
+ "plugins": "Entegre Eklentiler",
+ "recentSubmits": "Son Güncellemeler",
+ "suggestions": "İlgili Öneriler",
+ "systemRole": "Asistan Ayarları",
+ "try": "Deneyin"
+ },
+ "back": "Geri Dön",
+ "category": {
+ "assistant": {
+ "academic": "Akademik",
+ "all": "Hepsi",
+ "career": "Kariyer",
+ "copywriting": "Metin Yazarlığı",
+ "design": "Tasarım",
+ "education": "Eğitim",
+ "emotions": "Duygular",
+ "entertainment": "Eğlence",
+ "games": "Oyunlar",
+ "general": "Genel",
+ "life": "Hayat",
+ "marketing": "Pazarlama",
+ "office": "Ofis",
+ "programming": "Programlama",
+ "translation": "Çeviri"
+ },
+ "plugin": {
+ "all": "Hepsi",
+ "gaming-entertainment": "Oyun Eğlencesi",
+ "life-style": "Yaşam Tarzı",
+ "media-generate": "Medya Üretimi",
+ "science-education": "Bilim ve Eğitim",
+ "social": "Sosyal Medya",
+ "stocks-finance": "Hisse Senedi ve Finans",
+ "tools": "Pratik Araçlar",
+ "web-search": "Web Arama"
+ }
+ },
+ "cleanFilter": "Filtreyi Temizle",
+ "create": "Oluştur",
+ "createGuide": {
+ "func1": {
+ "desc1": "Sohbet penceresinde sağ üst köşedeki ayarlar üzerinden eklemek istediğin asistanın ayar sayfasına git;",
+ "desc2": "Sağ üst köşedeki asistan pazarına gönder butonuna tıkla.",
+ "tag": "Yöntem 1",
+ "title": "LobeChat ile Gönder"
+ },
+ "func2": {
+ "button": "Github Asistan Deposu'na Git",
+ "desc": "Asistanı dizine eklemek istiyorsanız, plugins dizininde agent-template.json veya agent-template-full.json kullanarak bir giriş oluşturun, kısa bir açıklama yazın ve uygun şekilde etiketleyin, ardından bir çekme isteği oluşturun.",
+ "tag": "Yöntem 2",
+ "title": "Github ile Gönder"
+ }
+ },
+ "dislike": "Beğenmedim",
+ "filter": "Filtrele",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Tüm Yazarlar",
+ "followed": "Takip Edilen Yazarlar",
+ "title": "Yazar Aralığı"
+ },
+ "contentLength": "Minimum Bağlam Uzunluğu",
+ "maxToken": {
+ "title": "Maksimum Uzunluğu Belirle (Token)",
+ "unlimited": "Sınırsız"
+ },
+ "other": {
+ "functionCall": "Fonksiyon Çağrısını Destekle",
+ "title": "Diğer",
+ "vision": "Görsel Tanımayı Destekle",
+ "withKnowledge": "Bilgi Tabanı ile",
+ "withTool": "Araç ile"
+ },
+ "pricing": "Model Fiyatı",
+ "timePeriod": {
+ "all": "Tüm Zamanlar",
+ "day": "Son 24 Saat",
+ "month": "Son 30 Gün",
+ "title": "Zaman Aralığı",
+ "week": "Son 7 Gün",
+ "year": "Son 1 Yıl"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Öne Çıkan Asistanlar",
+ "featuredModels": "Öne Çıkan Modeller",
+ "featuredProviders": "Öne Çıkan Model Sağlayıcıları",
+ "featuredTools": "Öne Çıkan Araçlar",
+ "more": "Daha Fazla Keşfet"
+ },
+ "like": "Beğendim",
+ "models": {
+ "chat": "Sohbete Başla",
+ "contentLength": "Maksimum Bağlam Uzunluğu",
+ "free": "Ücretsiz",
+ "guide": "Yapılandırma Kılavuzu",
+ "list": "Model Listesi",
+ "more": "Daha Fazla",
+ "parameterList": {
+ "defaultValue": "Varsayılan Değer",
+ "docs": "Belgeleri Görüntüle",
+ "frequency_penalty": {
+ "desc": "Bu ayar, modelin girdi içinde zaten bulunan belirli kelimelerin tekrar kullanım sıklığını ayarlamak için kullanılır. Daha yüksek değerler, bu tekrarların olasılığını azaltırken, negatif değerler ters etki yaratır. Kelime cezası, tekrar sayısına göre artmaz. Negatif değerler, kelimelerin tekrar kullanımını teşvik eder.",
+ "title": "Frekans Cezası"
+ },
+ "max_tokens": {
+ "desc": "Bu ayar, modelin tek bir yanıtında üretebileceği maksimum uzunluğu tanımlar. Daha yüksek bir değer ayarlamak, modelin daha uzun yanıtlar üretmesine izin verirken, daha düşük bir değer yanıtın uzunluğunu kısıtlayarak daha özlü hale getirir. Farklı uygulama senaryolarına göre bu değeri makul bir şekilde ayarlamak, beklenen yanıt uzunluğuna ve ayrıntı seviyesine ulaşmaya yardımcı olabilir.",
+ "title": "Tek Yanıt Sınırı"
+ },
+ "presence_penalty": {
+ "desc": "Bu ayar, kelimelerin girdi içinde görünme sıklığına göre tekrar kullanımını kontrol etmeyi amaçlar. Girdi içinde daha fazla bulunan kelimeleri daha az kullanmaya çalışır, kullanma sıklığı görünme sıklığı ile orantılıdır. Kelime cezası, tekrar sayısına göre artar. Negatif değerler, kelimelerin tekrar kullanımını teşvik eder.",
+ "title": "Konu Tazeliği"
+ },
+ "range": "Aralık",
+ "temperature": {
+ "desc": "Bu ayar, modelin yanıtlarının çeşitliliğini etkiler. Daha düşük değerler daha öngörülebilir ve tipik yanıtlar verirken, daha yüksek değerler daha çeşitli ve nadir yanıtları teşvik eder. Değer 0 olarak ayarlandığında, model belirli bir girdi için her zaman aynı yanıtı verir.",
+ "title": "Rastgelelik"
+ },
+ "title": "Model Parametreleri",
+ "top_p": {
+ "desc": "Bu ayar, modelin seçimlerini olasılığı en yüksek olan belirli bir oranla sınırlamak için kullanılır: yalnızca P'ye ulaşan toplam olasılığa sahip en iyi kelimeleri seçer. Daha düşük değerler, modelin yanıtlarını daha öngörülebilir hale getirirken, varsayılan ayar modelin tüm kelime yelpazesinden seçim yapmasına izin verir.",
+ "title": "Nükleer Örnekleme"
+ },
+ "type": "Tür"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat, bu sağlayıcı için özel API anahtarları kullanmayı destekler.",
+ "input": "Girdi Fiyatı",
+ "inputTooltip": "Her milyon Token için maliyet",
+ "latency": "Gecikme",
+ "latencyTooltip": "Sağlayıcının ilk Token'ı gönderme ortalama süresi",
+ "maxOutput": "Maksimum Çıktı Uzunluğu",
+ "maxOutputTooltip": "Bu uç noktanın üretebileceği maksimum Token sayısı",
+ "officialTooltip": "LobeHub Resmi Hizmeti",
+ "output": "Çıktı Fiyatı",
+ "outputTooltip": "Her milyon Token için maliyet",
+ "streamCancellationTooltip": "Bu sağlayıcı akış iptal işlevini destekler.",
+ "throughput": "Verim",
+ "throughputTooltip": "Akış talepleri başına saniyede iletilen ortalama Token sayısı"
+ },
+ "suggestions": "İlgili Modeller",
+ "supportedProviders": "Bu modeli destekleyen sağlayıcılar"
+ },
+ "plugins": {
+ "community": "Topluluk Eklentisi",
+ "install": "Eklenti Yükle",
+ "installed": "Yüklü",
+ "list": "Eklenti Listesi",
+ "meta": {
+ "description": "Açıklama",
+ "parameter": "Parametre",
+ "title": "Araç Parametreleri",
+ "type": "Tür"
+ },
+ "more": "Daha Fazla",
+ "official": "Resmi Eklenti",
+ "recentSubmits": "Son Güncellemeler",
+ "suggestions": "İlgili Öneriler"
+ },
+ "providers": {
+ "config": "Sağlayıcıyı Yapılandır",
+ "list": "Model Sağlayıcıları Listesi",
+ "modelCount": "{{count}} model",
+ "modelSite": "Model belgeleri",
+ "more": "Daha Fazla",
+ "officialSite": "Resmi site",
+ "showAllModels": "Tüm modelleri göster",
+ "suggestions": "İlgili Sağlayıcılar",
+ "supportedModels": "Desteklenen Modeller"
+ },
+ "search": {
+ "placeholder": "İsim, tanım veya anahtar kelime ara...",
+ "result": "{{count}} adet {{keyword}} ile ilgili arama sonucu",
+ "searching": "Aranıyor..."
+ },
+ "sort": {
+ "mostLiked": "En Çok Beğenilen",
+ "mostUsed": "En Çok Kullanılan",
+ "newest": "En Yeniler",
+ "oldest": "En Eski",
+ "recommended": "Tavsiye Edilen"
+ },
+ "tab": {
+ "assistants": "Asistanlar",
+ "home": "Ana Sayfa",
+ "models": "Modeller",
+ "plugins": "Eklentiler",
+ "providers": "Model Sağlayıcıları"
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/error.json b/DigitalHumanWeb/locales/tr-TR/error.json
new file mode 100644
index 0000000..6f3f94c
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "devam et",
+ "desc": "{{greeting}},senin için hizmet vermeye devam edebilmek çok mutluluk verici. Hadi konuşmamıza kaldığımız yerden devam edelim",
+ "title": "Tekrar hoş geldin, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Ana Sayfaya Dön",
+ "desc": "Biraz sonra tekrar deneyin veya bilinen dünyaya geri dönün",
+ "retry": "Yeniden Yükle",
+ "title": "Sayfa bir sorunla karşılaştı.."
+ },
+ "fetchError": "İstek başarısız oldu",
+ "fetchErrorDetail": "Hata detayı",
+ "notFound": {
+ "backHome": "Ana Sayfaya Dön",
+ "check": "Lütfen URL'nizin doğru olduğundan emin olun",
+ "desc": "Aradığınız sayfa bulunamadı",
+ "title": "Bilinmeyen bir alana mı girdiniz?"
+ },
+ "pluginSettings": {
+ "desc": "Bu eklentiyi kullanmaya başlamak için aşağıdaki yapılandırmayı tamamlayın",
+ "title": "{{name}} Eklenti Ayarları"
+ },
+ "response": {
+ "400": "Üzgünüm, sunucu isteğinizi anlamadı. Lütfen istek parametrelerinizin doğru olduğundan emin olun.",
+ "401": "Üzgünüm, sunucu isteğinizi reddetti, muhtemelen yetersiz izinler veya geçersiz kimlik doğrulama nedeniyle.",
+ "403": "Üzgünüm, sunucu isteğinizi reddetti. Bu içeriğe erişim izniniz yok.",
+ "404": "Üzgünüm, sunucu istediğiniz sayfa veya kaynağı bulamıyor. Lütfen URL'nizin doğru olduğundan emin olun.",
+ "405": "Üzgünüm, sunucu kullandığınız istek yöntemini desteklemiyor. Lütfen istek yönteminizin doğru olduğundan emin olun.",
+ "406": "Üzgünüz, sunucu isteğinizin içerik özelliklerine göre işlemi tamamlayamadı",
+ "407": "Üzgünüz, devam etmek için bir vekil kimliği doğrulamanız gerekmektedir",
+ "408": "Üzgünüz, sunucu isteği beklerken zaman aşımına uğradı, lütfen ağ bağlantınızı kontrol edip tekrar deneyin",
+ "409": "Üzgünüz, istek uyumsuzluğu nedeniyle çakışma var ve işlenemiyor, muhtemelen kaynak durumu ile istek uyumsuz",
+ "410": "Üzgünüz, istediğiniz kaynak kalıcı olarak kaldırıldı ve bulunamıyor",
+ "411": "Üzgünüz, sunucu geçerli içerik uzunluğu olmayan isteği işleyemiyor",
+ "412": "Üzgünüz, isteğiniz sunucu tarafındaki koşulları karşılamıyor ve işlem tamamlanamıyor",
+ "413": "Üzgünüz, isteğinizin veri boyutu çok büyük, sunucu işleyemiyor",
+ "414": "Üzgünüz, isteğinizin URI'si çok uzun, sunucu işleyemiyor",
+ "415": "Üzgünüz, sunucu isteğe eşlik eden medya formatını işleyemiyor",
+ "416": "Üzgünüz, sunucu isteğinizin aralığını karşılayamıyor",
+ "417": "Üzgünüz, sunucu beklentilerinizi karşılayamıyor",
+ "422": "Üzgünüz, isteğinizin biçimi doğru ancak anlamsal hata içerdiği için yanıt veremiyor",
+ "423": "Üzgünüz, istediğiniz kaynak kilitli",
+ "424": "Üzgünüz, önceki bir istek hatası nedeniyle mevcut istek tamamlanamıyor",
+ "426": "Üzgünüz, sunucu istemcinizin daha yüksek bir protokol sürümüne yükseltilmesini istiyor",
+ "428": "Üzgünüz, sunucu önişlem gerektiriyor, isteğinizin doğru koşul başlıklarını içermesini istiyor",
+ "429": "Üzgünüz, isteğiniz çok fazla, sunucu biraz yoruldu, lütfen daha sonra tekrar deneyin",
+ "431": "Üzgünüz, istek başlık alanı çok büyük, sunucu işleyemiyor",
+ "451": "Üzgünüz, yasal nedenlerle sunucu bu kaynağı sağlamayı reddediyor",
+ "500": "Üzgünüm, sunucu bazı zorluklar yaşıyor ve geçici olarak isteğinizi tamamlayamıyor. Lütfen daha sonra tekrar deneyin.",
+ "502": "Üzgünüm, sunucu kayboldu ve geçici olarak hizmet veremiyor. Lütfen daha sonra tekrar deneyin.",
+ "503": "Üzgünüm, sunucu şu anda isteğinizi işleyemiyor, muhtemelen aşırı yüklenme veya bakım nedeniyle. Lütfen daha sonra tekrar deneyin.",
+ "504": "Üzgünüm, sunucu yukarı akış sunucusundan bir yanıt alamadı. Lütfen daha sonra tekrar deneyin.",
+ "AgentRuntimeError": "Lobe dil modeli çalışma zamanı hatası, lütfen aşağıdaki bilgilere göre sorunu gidermeye çalışın veya tekrar deneyin",
+ "FreePlanLimit": "Şu anda ücretsiz bir kullanıcısınız, bu özelliği kullanamazsınız. Lütfen devam etmek için bir ücretli plana yükseltin.",
+ "InvalidAccessCode": "Geçersiz Erişim Kodu: Geçersiz veya boş bir şifre girdiniz. Lütfen doğru erişim şifresini girin veya özel API Anahtarı ekleyin.",
+ "InvalidBedrockCredentials": "Bedrock kimlik doğrulaması geçersiz, lütfen AccessKeyId/SecretAccessKey bilgilerinizi kontrol edip tekrar deneyin",
+ "InvalidClerkUser": "Üzgünüz, şu anda giriş yapmadınız. Lütfen işlemlere devam etmeden önce giriş yapın veya hesap oluşturun",
+ "InvalidGithubToken": "Github Kişisel Erişim Token'ı hatalı veya boş. Lütfen Github Kişisel Erişim Token'ınızı kontrol edin ve tekrar deneyin.",
+ "InvalidOllamaArgs": "Ollama yapılandırması yanlış, lütfen Ollama yapılandırmasını kontrol edip tekrar deneyin",
+ "InvalidProviderAPIKey": "{{provider}} API Anahtarı geçersiz veya boş, lütfen {{provider}} API Anahtarını kontrol edip tekrar deneyin",
+ "LocationNotSupportError": "Üzgünüz, bulunduğunuz konum bu model hizmetini desteklemiyor, muhtemelen bölge kısıtlamaları veya hizmetin henüz açılmamış olması nedeniyle. Lütfen mevcut konumun bu hizmeti kullanmaya uygun olup olmadığını doğrulayın veya başka bir konum bilgisi kullanmayı deneyin.",
+ "NoOpenAIAPIKey": "OpenAI API Anahtarı boş, lütfen özel bir OpenAI API Anahtarı ekleyin",
+ "OllamaBizError": "Ollama servisine yapılan istekte hata oluştu, lütfen aşağıdaki bilgilere göre sorunu gidermeye çalışın veya tekrar deneyin",
+ "OllamaServiceUnavailable": "Ollama servisi kullanılamıyor, lütfen Ollama'nın düzgün çalışıp çalışmadığını kontrol edin veya Ollama'nın çapraz kaynak yapılandırmasının doğru olup olmadığını kontrol edin",
+ "OpenAIBizError": "OpenAI hizmetinde bir hata oluştu, lütfen aşağıdaki bilgilere göre sorunu giderin veya tekrar deneyin",
+ "PluginApiNotFound": "Üzgünüm, eklentinin bildiriminde API mevcut değil. Lütfen istek yönteminizin eklenti bildirim API'sı ile eşleşip eşleşmediğini kontrol edin",
+ "PluginApiParamsError": "Üzgünüm, eklenti isteği için giriş parametre doğrulaması başarısız oldu. Lütfen giriş parametrelerinin API açıklamasıyla eşleşip eşleşmediğini kontrol edin",
+ "PluginFailToTransformArguments": "Özür dilerim, eklenti çağrı parametrelerini dönüştürme başarısız oldu, lütfen yardımcı mesajı yeniden oluşturmayı deneyin veya daha güçlü bir AI modeli olan Tools Calling'i değiştirip tekrar deneyin",
+ "PluginGatewayError": "Üzgünüz, eklenti ağ geçidinde bir hata oluştu, lütfen eklenti ağ geçidi yapılandırmasını kontrol edin",
+ "PluginManifestInvalid": "Üzgünüm, eklentinin bildirim doğrulaması başarısız oldu. Lütfen bildirim formatının doğru olup olmadığını kontrol edin",
+ "PluginManifestNotFound": "Üzgünüm, sunucu eklentinin bildirim dosyasını (manifest.json) bulamadı. Lütfen eklenti bildirim dosyası adresinin doğru olup olmadığını kontrol edin",
+ "PluginMarketIndexInvalid": "Üzgünüm, eklenti dizini doğrulaması başarısız oldu. Lütfen dizin dosya formatının doğru olup olmadığını kontrol edin",
+ "PluginMarketIndexNotFound": "Üzgünüm, sunucu eklenti dizinini bulamadı. Lütfen dizin adresinin doğru olup olmadığını kontrol edin",
+ "PluginMetaInvalid": "Üzgünüm, eklentinin meta veri doğrulaması başarısız oldu. Lütfen eklenti meta veri formatının doğru olup olmadığını kontrol edin",
+ "PluginMetaNotFound": "Üzgünüm, dizinde eklenti bulunamadı. Lütfen dizindeki eklentinin yapılandırma bilgilerini kontrol edin",
+ "PluginOpenApiInitError": "Üzgünüz, OpenAPI istemci başlatma hatası, lütfen OpenAPI yapılandırma bilgilerini kontrol edin",
+ "PluginServerError": "Eklenti sunucusu isteği bir hata ile döndü. Lütfen aşağıdaki hata bilgilerine dayanarak eklenti bildirim dosyanızı, eklenti yapılandırmanızı veya sunucu uygulamanızı kontrol edin",
+ "PluginSettingsInvalid": "Bu eklenti, kullanılmadan önce doğru şekilde yapılandırılmalıdır. Lütfen yapılandırmanızın doğru olup olmadığını kontrol edin",
+ "ProviderBizError": "Talep {{provider}} hizmetinde bir hata oluştu, lütfen aşağıdaki bilgilere göre sorunu giderin veya tekrar deneyin",
+ "StreamChunkError": "Akış isteği mesaj parçası çözümleme hatası, lütfen mevcut API arayüzünün standartlara uygun olup olmadığını kontrol edin veya API sağlayıcınızla iletişime geçin.",
+ "SubscriptionPlanLimit": "Abonelik kotası tükenmiş, bu özelliği kullanamazsınız. Lütfen daha yüksek bir plana yükseltin veya kaynak paketi satın alarak devam edin.",
+ "UnknownChatFetchError": "Üzgünüm, bilinmeyen bir istek hatasıyla karşılaştık. Lütfen aşağıdaki bilgileri kontrol edin veya tekrar deneyin."
+ },
+ "stt": {
+ "responseError": "Hizmet isteği başarısız oldu, lütfen yapılandırmayı kontrol edin veya tekrar deneyin"
+ },
+ "tts": {
+ "responseError": "Hizmet isteği başarısız oldu, lütfen yapılandırmayı kontrol edin veya tekrar deneyin"
+ },
+ "unlock": {
+ "addProxyUrl": "OpenAI vekil adresi ekle (isteğe bağlı)",
+ "apiKey": {
+ "description": "{{name}} API Anahtarınızı girerek oturumu başlatabilirsiniz",
+ "title": "Özel {{name}} API Anahtarını kullan"
+ },
+ "closeMessage": "Mesajı kapat",
+ "confirm": "Onayla ve Yeniden Dene",
+ "oauth": {
+ "description": "Yönetici, tek oturum açma kimlik doğrulamasını etkinleştirdi. Aşağıdaki düğmeye tıklayarak giriş yapabilir ve uygulamayı kilidini açabilirsiniz.",
+ "success": "Giriş başarılı",
+ "title": "Hesaba giriş yap",
+ "welcome": "Hoş geldiniz!"
+ },
+ "password": {
+ "description": "Uygulama şifrelemesi yönetici tarafından etkinleştirilmiştir. Uygulamayı açmak için uygulama şifresini girin. Şifre sadece bir kez doldurulmalıdır.",
+ "placeholder": "Lütfen şifre girin",
+ "title": "Uygulamayı Açmak için Şifre Girin"
+ },
+ "tabs": {
+ "apiKey": "Özel API Anahtarı",
+ "password": "Şifre"
+ }
+ },
+ "upload": {
+ "desc": "Detay: {{detail}}",
+ "fileOnlySupportInServerMode": "Mevcut dağıtım modu, yalnızca resim dosyalarının yüklenmesini desteklemektedir. {{ext}} formatında bir dosya yüklemek istiyorsanız, lütfen sunucu veritabanı dağıtımına geçin veya {{cloud}} hizmetini kullanın.",
+ "networkError": "Lütfen ağ bağlantınızın düzgün çalıştığından emin olun ve dosya depolama hizmetinin çapraz alan yapılandırmasının doğru olup olmadığını kontrol edin.",
+ "title": "Dosya yükleme başarısız, lütfen ağ bağlantınızı kontrol edin veya daha sonra tekrar deneyin",
+ "unknownError": "Hata nedeni: {{reason}}",
+ "uploadFailed": "Dosya yüklemesi başarısız oldu."
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/file.json b/DigitalHumanWeb/locales/tr-TR/file.json
new file mode 100644
index 0000000..1b8054a
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Dosyalarınızı ve bilgi tabanınızı yönetin",
+ "detail": {
+ "basic": {
+ "createdAt": "Oluşturulma Zamanı",
+ "filename": "Dosya Adı",
+ "size": "Dosya Boyutu",
+ "title": "Temel Bilgiler",
+ "type": "Format",
+ "updatedAt": "Güncellenme Zamanı"
+ },
+ "data": {
+ "chunkCount": "Parça Sayısı",
+ "embedding": {
+ "default": "Henüz vektörleştirilmedi",
+ "error": "Başarısız",
+ "pending": "Başlatılmayı Bekliyor",
+ "processing": "İşleniyor",
+ "success": "Tamamlandı"
+ },
+ "embeddingStatus": "Vektörleştirme"
+ }
+ },
+ "empty": "Henüz yüklenmiş dosya/klasör yok",
+ "header": {
+ "actions": {
+ "newFolder": "Yeni Klasör",
+ "uploadFile": "Dosya Yükle",
+ "uploadFolder": "Klasör Yükle"
+ },
+ "uploadButton": "Yükle"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Bu bilgi tabanı silinecek, içindeki dosyalar silinmeyecek, tüm dosyalar içine taşınacaktır. Bilgi tabanı silindikten sonra geri alınamaz, lütfen dikkatli olun.",
+ "empty": "Bilgi tabanı oluşturmak için <1>+</1> tıklayın"
+ },
+ "new": "Yeni Bilgi Tabanı",
+ "title": "Bilgi Tabanı"
+ },
+ "networkError": "Bilgi tabanı alınamadı, lütfen ağ bağlantınızı kontrol edip tekrar deneyin",
+ "notSupportGuide": {
+ "desc": "Mevcut dağıtım örneği istemci veritabanı modunda, dosya yönetim işlevini kullanamazsınız. Lütfen <1>sunucu veritabanı dağıtım moduna</1> geçin veya doğrudan <3>LobeChat Cloud</3> kullanın.",
+ "features": {
+ "allKind": {
+ "desc": "Word, PPT, Excel, PDF, TXT gibi yaygın belge formatları ve JS, Python gibi popüler kod dosyalarını destekler.",
+ "title": "Çeşitli Dosya Türleri Analizi"
+ },
+ "embeddings": {
+ "desc": "Yüksek performanslı vektör modelleri kullanarak, metin parçalarını vektörleştirir ve dosya içeriğinin anlamsal olarak aranmasını sağlar.",
+ "title": "Vektör Anlamlandırma"
+ },
+ "repos": {
+ "desc": "Bilgi tabanı oluşturmayı destekler ve farklı türde dosyalar eklemeye izin verir, kendi alan bilginizi oluşturun.",
+ "title": "Bilgi Tabanı"
+ }
+ },
+ "title": "Mevcut dağıtım modu dosya yönetimini desteklemiyor"
+ },
+ "preview": {
+ "downloadFile": "Dosyayı İndir",
+ "unsupportedFileAndContact": "Bu dosya formatı çevrimiçi önizleme için desteklenmiyor. Önizleme talebiniz varsa, lütfen <1>bize geri bildirimde bulunun</1>."
+ },
+ "searchFilePlaceholder": "Dosya Ara",
+ "tab": {
+ "all": "Tüm Dosyalar",
+ "audios": "Sesler",
+ "documents": "Belgeler",
+ "images": "Görseller",
+ "videos": "Videolar",
+ "websites": "Web Siteleri"
+ },
+ "title": "Dosya",
+ "uploadDock": {
+ "body": {
+ "collapse": "Kapat",
+ "item": {
+ "done": "Yüklendi",
+ "error": "Yükleme başarısız, lütfen tekrar deneyin",
+ "pending": "Yüklenmek için hazırlanıyor...",
+ "processing": "Dosya işleniyor...",
+ "restTime": "Kalan {{time}}"
+ }
+ },
+ "totalCount": "Toplam {{count}} öğe",
+ "uploadStatus": {
+ "error": "Yükleme hatası",
+ "pending": "Yükleme bekleniyor",
+ "processing": "Yükleniyor",
+ "success": "Yükleme tamamlandı",
+ "uploading": "Yükleniyor"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/knowledgeBase.json b/DigitalHumanWeb/locales/tr-TR/knowledgeBase.json
new file mode 100644
index 0000000..c649901
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "Dosya başarıyla eklendi, <1>hemen görüntüle</1>",
+ "confirm": "Ekle",
+ "id": {
+ "placeholder": "Eklenecek bilgi havuzunu seçin",
+ "required": "Lütfen bilgi havuzunu seçin",
+ "title": "Hedef bilgi havuzu"
+ },
+ "title": "Bilgi Havuzuna Ekle",
+ "totalFiles": "Seçilen {{count}} dosya"
+ },
+ "createNew": {
+ "confirm": "Yeni Oluştur",
+ "description": {
+ "placeholder": "Bilgi havuzu tanımı (isteğe bağlı)"
+ },
+ "formTitle": "Temel Bilgiler",
+ "name": {
+ "placeholder": "Bilgi havuzu adı",
+ "required": "Lütfen bilgi havuzu adını girin"
+ },
+ "title": "Yeni Bilgi Havuzu"
+ },
+ "tab": {
+ "evals": "Değerlendirme",
+ "files": "Belgeler",
+ "settings": "Ayarlar",
+ "testing": "Geri Çağırma Testi"
+ },
+ "title": "Bilgi Havuzu"
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/market.json b/DigitalHumanWeb/locales/tr-TR/market.json
new file mode 100644
index 0000000..4a6cb1d
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Asistan Ekle",
+ "addAgentAndConverse": "Asistan Ekle ve Konuşma Başlat",
+ "addAgentSuccess": "Ekleme Başarılı",
+ "guide": {
+ "func1": {
+ "desc1": "Sohbet penceresinin sağ üst köşesindeki ayarlar simgesine tıklayarak asistana göndermek istediğiniz ayarlar sayfasına girin.",
+ "desc2": "Sağ üst köşedeki 'Asistan Pazarına Gönder' düğmesine tıklayın.",
+ "tag": "Yöntem 1",
+ "title": "LobeChat üzerinden Gönder"
+ },
+ "func2": {
+ "button": "GitHub Asistan Deposuna Git",
+ "desc": "Asistanı dizine eklemek istiyorsanız, plugins dizininde agent-template.json veya agent-template-full.json kullanarak bir giriş oluşturun, kısa bir açıklama ve uygun etiketler yazın, ardından bir çekme isteği oluşturun.",
+ "tag": "Yöntem 2",
+ "title": "GitHub üzerinden Gönder"
+ }
+ },
+ "search": {
+ "placeholder": "Asistan adı, açıklama veya anahtar kelimeleri ara..."
+ },
+ "sidebar": {
+ "comment": "Yorumlar",
+ "prompt": "Prompts",
+ "title": "Asistan Detayı"
+ },
+ "submitAgent": "Asistan Ekle",
+ "title": {
+ "allAgents": "Tüm Asistanlar",
+ "recentSubmits": "Son Eklenenler"
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/metadata.json b/DigitalHumanWeb/locales/tr-TR/metadata.json
new file mode 100644
index 0000000..66892ae
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} size en iyi ChatGPT, Claude, Gemini, Ollama WebUI deneyimini sunar",
+ "title": "{{appName}}: Kişisel AI verimlilik aracı, kendinize daha akıllı bir zihin verin"
+ },
+ "discover": {
+ "assistants": {
+ "description": "İçerik oluşturma, metin yazımı, soru-cevap, görsel oluşturma, video oluşturma, ses oluşturma, akıllı Ajan, otomatik iş akışları, size özel AI / GPTs / Ollama akıllı asistanınızı özelleştirin.",
+ "title": "Yapay Zeka Asistanları"
+ },
+ "description": "İçerik oluşturma, metin yazımı, soru-cevap, görsel oluşturma, video oluşturma, ses oluşturma, akıllı Ajan, otomatik iş akışları, özelleştirilmiş AI uygulamaları, size özel AI uygulama çalışma alanınızı oluşturun.",
+ "models": {
+ "description": "Ana akım AI modellerini keşfedin: OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek.",
+ "title": "Yapay Zeka Modelleri"
+ },
+ "plugins": {
+ "description": "Grafik oluşturma, akademik, görüntü oluşturma, video oluşturma, ses oluşturma, otomatik iş akışları için asistanınıza zengin eklenti yetenekleri entegre edin.",
+ "title": "Yapay Zeka Eklentileri"
+ },
+ "providers": {
+ "description": "Ana akım model sağlayıcılarını keşfedin: OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter.",
+ "title": "Yapay Zeka Modeli Sağlayıcıları"
+ },
+ "search": "Ara",
+ "title": "Keşfet"
+ },
+ "plugins": {
+ "description": "Arama, grafik oluşturma, akademik, görsel oluşturma, video oluşturma, ses oluşturma, otomatik iş akışları, ChatGPT / Claude için özel ToolCall eklenti yeteneklerini özelleştirin",
+ "title": "Eklenti Pazarı"
+ },
+ "welcome": {
+ "description": "{{appName}} size en iyi ChatGPT, Claude, Gemini, Ollama WebUI deneyimini sunar",
+ "title": "Hoş geldiniz {{appName}}: Kişisel AI verimlilik aracı, kendinize daha akıllı bir zihin verin"
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/migration.json b/DigitalHumanWeb/locales/tr-TR/migration.json
new file mode 100644
index 0000000..fbc93a8
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Yerel Verileri Temizle",
+ "downloadBackup": "Yedeğini İndir",
+ "reUpgrade": "Yeniden Yükselt",
+ "start": "Başla",
+ "upgrade": "Yükselt"
+ },
+ "clear": {
+ "confirm": "Yerel verileri temizlemek üzeresiniz (global ayarlar etkilenmeyecek). Lütfen yedeği indirdiğinizi onaylayın."
+ },
+ "description": "{{appName}}'in yeni versiyonunda veri depolama alanında büyük bir sıçrama yaşandı. Bu nedenle, eski verileri güncelleyerek size daha iyi bir kullanım deneyimi sunmak istiyoruz.",
+ "features": {
+ "capability": {
+ "desc": "IndexedDB teknolojisine dayalı, hayatınız boyunca saklayabileceğiniz sohbet mesajlarını barındıracak kadar geniş",
+ "title": "Büyük Kapasite"
+ },
+ "performance": {
+ "desc": "Milyonlarca mesaj otomatik olarak indekslenir, sorgulama yanıt süresi milisaniye seviyesindedir",
+ "title": "Yüksek Performans"
+ },
+ "use": {
+ "desc": "Başlık, açıklama, etiket, mesaj içeriği ve hatta çeviri metni aramalarını destekler, günlük arama verimliliği büyük ölçüde artar",
+ "title": "Daha Kullanışlı"
+ }
+ },
+ "title": "{{appName}} Veri Evrimi",
+ "upgrade": {
+ "error": {
+ "subTitle": "Üzgünüz, veritabanı güncelleme sürecinde bir hata oluştu. Lütfen aşağıdaki çözümleri deneyin: A. Yerel verileri temizleyip yedek verileri yeniden içe aktarın; B. 'Yeniden Güncelle' butonuna tıklayın. Hala hata alıyorsanız, lütfen <1>sorun bildirin</1>, size en kısa sürede yardımcı olacağız.",
+ "title": "Veritabanı Güncellemesi Başarısız"
+ },
+ "success": {
+ "subTitle": "{{appName}}'in veritabanı en son sürüme güncellendi, hemen deneyimlemeye başlayın.",
+ "title": "Veritabanı Güncellemesi Başarılı"
+ }
+ },
+ "upgradeTip": "Güncelleme yaklaşık 10-20 saniye sürecektir, lütfen güncelleme sırasında {{appName}}'i kapatmayın."
+ },
+ "migrateError": {
+ "missVersion": "İçe aktarılan verilerde bir sürüm numarası eksik. Lütfen dosyayı kontrol edin ve tekrar deneyin.",
+ "noMigration": "Mevcut sürüm için bir geçiş çözümü bulunamadı. Lütfen sürüm numarasını kontrol edin ve tekrar deneyin. Sorun devam ederse, lütfen bir geri bildirim isteği gönderin."
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/modelProvider.json b/DigitalHumanWeb/locales/tr-TR/modelProvider.json
new file mode 100644
index 0000000..26aeb8d
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Azure'un API versiyonu, YYYY-AA-GG formatına uygun, [en son versiyonu](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions) kontrol edin",
+ "fetch": "Listeyi al",
+ "title": "Azure API Versiyonu"
+ },
+ "empty": "İlk modeli eklemek için model kimliğini girin",
+ "endpoint": {
+ "desc": "Azure portalından kaynağı kontrol ederken, bu değeri \"Anahtarlar ve uç noktalar\" bölümünde bulabilirsiniz",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Azure API Adresi"
+ },
+ "modelListPlaceholder": "Dağıttığınız OpenAI modelini seçin veya ekleyin",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Azure portalından kaynağı kontrol ederken, bu değeri \"Anahtarlar ve uç noktalar\" bölümünde bulabilirsiniz. KEY1 veya KEY2 kullanabilirsiniz",
+ "placeholder": "Azure API Key",
+ "title": "API Key"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "AWS Access Key Id girin",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "AccessKeyId / SecretAccessKey'in doğru girilip girilmediğini test edin"
+ },
+ "region": {
+ "desc": "AWS Bölgesi girin",
+ "placeholder": "AWS Region",
+ "title": "AWS Bölgesi"
+ },
+ "secretAccessKey": {
+ "desc": "AWS Secret Access Key girin",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "AWS SSO/STS kullanıyorsanız, lütfen AWS Oturum Tokeninizi girin",
+ "placeholder": "AWS Oturum Tokeni",
+ "title": "AWS Oturum Tokeni (isteğe bağlı)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Özel Bölge",
+ "customSessionToken": "Özel Oturum Tokeni",
+ "description": "AWS AccessKeyId / SecretAccessKey bilgilerinizi girerek oturumu başlatabilirsiniz. Uygulama kimlik doğrulama bilgilerinizi kaydetmez.",
+ "title": "Özel Bedrock Kimlik Bilgilerini Kullan"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Github PAT'nizi girin, [buraya](https://github.com/settings/tokens) tıklayarak oluşturun",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Proxy adresinin doğru girilip girilmediğini test edin",
+ "title": "Bağlantı Kontrolü"
+ },
+ "customModelName": {
+ "desc": "Özel modeller ekleyin, birden fazla model için virgül (,) kullanın",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Özel Model Adları"
+ },
+ "download": {
+ "desc": "Ollama bu modeli indiriyor, lütfen bu sayfayı kapatmamaya çalışın. Yeniden indirme, kesildiği yerden devam edecektir",
+ "remainingTime": "Kalan Süre",
+ "speed": "İndirme Hızı",
+ "title": "{{model}} modeli indiriliyor"
+ },
+ "endpoint": {
+ "desc": "Ollama arayüz proxy adresini girin, yerel olarak belirtilmemişse boş bırakılabilir",
+ "title": "Arayüz Proxy Adresi"
+ },
+ "setup": {
+ "cors": {
+ "description": "Ollama'nın normal şekilde çalışabilmesi için, tarayıcı güvenlik kısıtlamaları nedeniyle Ollama'nın çapraz kaynak isteklerine izin verilmesi gerekmektedir.",
+ "linux": {
+ "env": "[Service] bölümünün altına OLLAMA_ORIGINS ortam değişkenini tanımlayan bir `Environment` satırı ekleyin:",
+ "reboot": "systemd'yi yeniden yükleyin ve Ollama'yı yeniden başlatın",
+ "systemd": "systemd'yi çağırarak ollama servisini düzenleyin:"
+ },
+ "macos": "Lütfen 'Terminal' uygulamasını açın ve aşağıdaki komutu yapıştırıp Enter tuşuna basın",
+ "reboot": "Komut tamamlandıktan sonra Ollama servisini yeniden başlatın",
+ "title": "Ollama'nın çapraz kaynak erişimine izin vermek için yapılandırma",
+ "windows": "Windows'ta, 'Control Panel'ı tıklayarak sistem ortam değişkenlerini düzenleyin. Kullanıcı hesabınıza * değerinde 'OLLAMA_ORIGINS' adında bir ortam değişkeni oluşturun ve 'OK/Apply' düğmesine tıklayarak kaydedin"
+ },
+ "install": {
+ "description": "Ollama'nın açık olduğundan emin olun. Ollama'yı indirmediyseniz, lütfen resmi web sitesine giderek <1>indirin</1>.",
+ "docker": "Docker kullanmayı tercih ediyorsanız, Ollama resmi Docker görüntüsünü aşağıdaki komutla çekebilirsiniz:",
+ "linux": {
+ "command": "Aşağıdaki komutları kullanarak yükleyin:",
+ "manual": "Ya da, <1>Linux için el ile kurulum kılavuzuna</1> bakarak kendiniz kurabilirsiniz"
+ },
+ "title": "Yerel olarak Ollama uygulamasını kurun ve başlatın",
+ "windowsTab": "Windows (Önizleme)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "İndirmeyi İptal Et",
+ "confirm": "İndir",
+ "description": "Ollama model etiketinizi girin, tamamlandıktan sonra oturuma devam edebilirsiniz",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "İndirme başlatılıyor...",
+ "title": "Belirtilen Ollama modelini indir"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Sıfır Bir"
+ },
+ "zhipu": {
+ "title": "Zhipu AI"
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/models.json b/DigitalHumanWeb/locales/tr-TR/models.json
new file mode 100644
index 0000000..3490407
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B, zengin eğitim örnekleri ile endüstri uygulamalarında üstün performans sunar."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B, 16K Token desteği sunar, etkili ve akıcı dil oluşturma yeteneği sağlar."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro, 360 AI model serisinin önemli bir üyesi olarak, çeşitli doğal dil uygulama senaryolarını karşılamak için etkili metin işleme yeteneği sunar, uzun metin anlama ve çoklu diyalog gibi işlevleri destekler."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo, güçlü hesaplama ve diyalog yetenekleri sunar, mükemmel anlam anlama ve oluşturma verimliliğine sahiptir, işletmeler ve geliştiriciler için ideal bir akıllı asistan çözümüdür."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K, anlam güvenliği ve sorumluluk odaklılığı vurgular, içerik güvenliği konusunda yüksek gereksinimlere sahip uygulama senaryoları için tasarlanmıştır, kullanıcı deneyiminin doğruluğunu ve sağlamlığını garanti eder."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro, 360 şirketi tarafından sunulan yüksek düzeyde doğal dil işleme modelidir, mükemmel metin oluşturma ve anlama yeteneğine sahiptir, özellikle oluşturma ve yaratma alanında olağanüstü performans gösterir, karmaşık dil dönüşümleri ve rol canlandırma görevlerini işleyebilir."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra, Xinghuo büyük model serisinin en güçlü versiyonudur, çevrimiçi arama bağlantısını yükseltirken, metin içeriğini anlama ve özetleme yeteneğini artırır. Ofis verimliliğini artırmak ve taleplere doğru yanıt vermek için kapsamlı bir çözüm sunar, sektördeki akıllı ürünlerin öncüsüdür."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Arama artırma teknolojisi kullanarak büyük model ile alan bilgisi ve tüm ağ bilgisi arasında kapsamlı bir bağlantı sağlar. PDF, Word gibi çeşitli belge yüklemelerini ve URL girişini destekler, bilgi edinimi zamanında ve kapsamlıdır, çıktı sonuçları doğru ve profesyoneldir."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Kurumsal yüksek frekanslı senaryolar için optimize edilmiş, etkisi büyük ölçüde artırılmış ve yüksek maliyet etkinliği sunmaktadır. Baichuan2 modeline kıyasla, içerik üretimi %20, bilgi sorgulama %17, rol oynama yeteneği %40 oranında artmıştır. Genel performansı GPT3.5'ten daha iyidir."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "128K ultra uzun bağlam penceresine sahip, kurumsal yüksek frekanslı senaryolar için optimize edilmiş, etkisi büyük ölçüde artırılmış ve yüksek maliyet etkinliği sunmaktadır. Baichuan2 modeline kıyasla, içerik üretimi %20, bilgi sorgulama %17, rol oynama yeteneği %40 oranında artmıştır. Genel performansı GPT3.5'ten daha iyidir."
+ },
+ "Baichuan4": {
+ "description": "Model yetenekleri ülke içinde birinci sırada, bilgi ansiklopedisi, uzun metinler, yaratıcı üretim gibi Çince görevlerde yurtdışındaki önde gelen modelleri geride bırakmaktadır. Ayrıca, sektör lideri çok modlu yeteneklere sahiptir ve birçok yetkili değerlendirme kriterinde mükemmel performans göstermektedir."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B), çok alanlı uygulamalar ve karmaşık görevler için uygun yenilikçi bir modeldir."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K, büyük bağlam işleme yeteneği, daha güçlü bağlam anlama ve mantıksal akıl yürütme yeteneği ile donatılmıştır. 32K token'lık metin girişi destekler ve uzun belgelerin okunması, özel bilgi sorgulamaları gibi senaryolar için uygundur."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO, olağanüstü yaratıcı deneyimler sunmak için tasarlanmış son derece esnek bir çoklu model birleşimidir."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B), karmaşık hesaplamalar için yüksek hassasiyetli bir talimat modelidir."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B), optimize edilmiş dil çıktısı ve çeşitli uygulama olasılıkları sunar."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Phi-3-mini modelinin yenilenmiş versiyonu."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "Aynı Phi-3-medium modeli, ancak RAG veya az sayıda örnek isteme için daha büyük bir bağlam boyutuna sahiptir."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "14B parametreli bir model, Phi-3-mini'den daha iyi kalite sunar, yüksek kaliteli, akıl yürütme yoğun veriye odaklanır."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "Aynı Phi-3-mini modeli, ancak RAG veya az sayıda örnek isteme için daha büyük bir bağlam boyutuna sahiptir."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "Phi-3 ailesinin en küçük üyesi. Hem kalite hem de düşük gecikme için optimize edilmiştir."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "Aynı Phi-3-small modeli, ancak RAG veya az sayıda örnek isteme için daha büyük bir bağlam boyutuna sahiptir."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "7B parametreli bir model, Phi-3-mini'den daha iyi kalite sunar, yüksek kaliteli, akıl yürütme yoğun veriye odaklanır."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K, olağanüstü bağlam işleme yeteneği ile donatılmıştır, 128K'ya kadar bağlam bilgisi işleyebilir, özellikle uzun metin içeriklerinde bütünsel analiz ve uzun vadeli mantıksal bağlantı işleme gerektiren durumlar için uygundur, karmaşık metin iletişiminde akıcı ve tutarlı bir mantık ile çeşitli alıntı desteği sunar."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Qwen2'nin test sürümü olan Qwen1.5, büyük ölçekli verilerle daha hassas diyalog yetenekleri sunar."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B), hızlı yanıt ve doğal diyalog yetenekleri sunar, çok dilli ortamlara uygundur."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2, çok çeşitli talimat türlerini destekleyen gelişmiş bir genel dil modelidir."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5, talimat tabanlı görevlerin işlenmesini optimize etmek için tasarlanmış yeni bir büyük dil modeli serisidir."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5, talimat tabanlı görevlerin işlenmesini optimize etmek için tasarlanmış yeni bir büyük dil modeli serisidir."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5, daha güçlü anlama ve üretim yeteneklerine sahip yeni bir büyük dil modeli serisidir."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5, talimat tabanlı görevlerin işlenmesini optimize etmek için tasarlanmış yeni bir büyük dil modeli serisidir."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder, kod yazmaya odaklanır."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math, matematik alanındaki sorunları çözmeye odaklanır ve yüksek zorlukta sorulara profesyonel yanıtlar sunar."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B açık kaynak versiyonu, diyalog uygulamaları için optimize edilmiş bir diyalog deneyimi sunar."
+ },
+ "abab5.5-chat": {
+ "description": "Üretkenlik senaryoları için tasarlanmış, karmaşık görev işleme ve verimli metin üretimini destekler, profesyonel alan uygulamaları için uygundur."
+ },
+ "abab5.5s-chat": {
+ "description": "Çin karakter diyalog senaryoları için tasarlanmış, yüksek kaliteli Çin diyalog üretim yeteneği sunar ve çeşitli uygulama senaryoları için uygundur."
+ },
+ "abab6.5g-chat": {
+ "description": "Çok dilli karakter diyalogları için tasarlanmış, İngilizce ve diğer birçok dilde yüksek kaliteli diyalog üretimini destekler."
+ },
+ "abab6.5s-chat": {
+ "description": "Metin üretimi, diyalog sistemleri gibi geniş doğal dil işleme görevleri için uygundur."
+ },
+ "abab6.5t-chat": {
+ "description": "Çin karakter diyalog senaryoları için optimize edilmiş, akıcı ve Çin ifade alışkanlıklarına uygun diyalog üretim yeteneği sunar."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Fireworks açık kaynak fonksiyon çağrı modeli, mükemmel talimat yürütme yetenekleri ve özelleştirilebilir özellikler sunar."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Fireworks şirketinin en son ürünü Firefunction-v2, Llama-3 tabanlı, fonksiyon çağrıları, diyalog ve talimat takibi gibi senaryolar için özel olarak optimize edilmiş yüksek performanslı bir modeldir."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b, hem görüntü hem de metin girdilerini alabilen, yüksek kaliteli verilerle eğitilmiş bir görsel dil modelidir ve çok modlu görevler için uygundur."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Gemma 2 9B talimat modeli, önceki Google teknolojilerine dayanarak, soru yanıtlama, özetleme ve akıl yürütme gibi çeşitli metin üretim görevleri için uygundur."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Llama 3 70B talimat modeli, çok dilli diyalog ve doğal dil anlama için optimize edilmiştir, çoğu rakip modelden daha iyi performans gösterir."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Llama 3 70B talimat modeli (HF versiyonu), resmi uygulama sonuçlarıyla uyumlu olup yüksek kaliteli talimat takibi görevleri için uygundur."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Llama 3 8B talimat modeli, diyalog ve çok dilli görevler için optimize edilmiştir, mükemmel ve etkili performans sunar."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Llama 3 8B talimat modeli (HF versiyonu), resmi uygulama sonuçlarıyla uyumlu olup yüksek tutarlılık ve platformlar arası uyumluluk sunar."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Llama 3.1 405B talimat modeli, devasa parametreler ile karmaşık görevler ve yüksek yük senaryolarında talimat takibi için uygundur."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Llama 3.1 70B talimat modeli, mükemmel doğal dil anlama ve üretim yetenekleri sunar, diyalog ve analiz görevleri için idealdir."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Llama 3.1 8B talimat modeli, çok dilli diyaloglar için optimize edilmiştir ve yaygın endüstri standartlarını aşmaktadır."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Mixtral MoE 8x22B talimat modeli, büyük ölçekli parametreler ve çok uzmanlı mimarisi ile karmaşık görevlerin etkili işlenmesini destekler."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Mixtral MoE 8x7B talimat modeli, çok uzmanlı mimarisi ile etkili talimat takibi ve yürütme sunar."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Mixtral MoE 8x7B talimat modeli (HF versiyonu), resmi uygulama ile uyumlu olup çeşitli yüksek verimli görev senaryoları için uygundur."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "MythoMax L2 13B modeli, yenilikçi birleşim teknolojileri ile hikaye anlatımı ve rol yapma konularında uzmandır."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Phi 3 Vision talimat modeli, karmaşık görsel ve metin bilgilerini işleyebilen hafif çok modlu bir modeldir ve güçlü akıl yürütme yeteneklerine sahiptir."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "StarCoder 15.5B modeli, ileri düzey programlama görevlerini destekler, çok dilli yetenekleri artırır ve karmaşık kod üretimi ve anlama için uygundur."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "StarCoder 7B modeli, 80'den fazla programlama dili için eğitilmiş olup, mükemmel programlama tamamlama yetenekleri ve bağlam anlama sunar."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Yi-Large modeli, mükemmel çok dilli işleme yetenekleri sunar ve her türlü dil üretimi ve anlama görevleri için uygundur."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "398B parametreli (94B aktif) çok dilli bir model, 256K uzun bağlam penceresi, fonksiyon çağrısı, yapılandırılmış çıktı ve temellendirilmiş üretim sunar."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "52B parametreli (12B aktif) çok dilli bir model, 256K uzun bağlam penceresi, fonksiyon çağrısı, yapılandırılmış çıktı ve temellendirilmiş üretim sunar."
+ },
+ "ai21-jamba-instruct": {
+ "description": "En iyi performans, kalite ve maliyet verimliliği sağlamak için üretim sınıfı Mamba tabanlı LLM modelidir."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet, endüstri standartlarını yükselterek, rakip modelleri ve Claude 3 Opus'u geride bırakarak geniş bir değerlendirmede mükemmel performans sergilerken, orta seviye modellerimizin hızı ve maliyeti ile birlikte gelir."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku, Anthropic'in en hızlı ve en kompakt modelidir, neredeyse anında yanıt hızı sunar. Basit sorgular ve taleplere hızlı bir şekilde yanıt verebilir. Müşteriler, insan etkileşimini taklit eden kesintisiz bir AI deneyimi oluşturabileceklerdir. Claude 3 Haiku, görüntüleri işleyebilir ve metin çıktısı döndürebilir, 200K bağlam penceresine sahiptir."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus, Anthropic'in en güçlü AI modelidir, son derece karmaşık görevlerde en ileri düzey performansa sahiptir. Açık uçlu istemleri ve daha önce görülmemiş senaryoları işleyebilir, mükemmel akıcılık ve insan benzeri anlama yeteneğine sahiptir. Claude 3 Opus, üretken AI olasılıklarının öncüsüdür. Claude 3 Opus, görüntüleri işleyebilir ve metin çıktısı döndürebilir, 200K bağlam penceresine sahiptir."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Anthropic'in Claude 3 Sonnet, zeka ve hız arasında ideal bir denge sağlar - özellikle kurumsal iş yükleri için uygundur. Rakiplerine göre daha düşük bir fiyatla maksimum fayda sunar ve ölçeklenebilir AI dağıtımları için güvenilir, dayanıklı bir ana makine olarak tasarlanmıştır. Claude 3 Sonnet, görüntüleri işleyebilir ve metin çıktısı döndürebilir, 200K bağlam penceresine sahiptir."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Günlük diyaloglar, metin analizi, özetleme ve belge soru-cevap gibi bir dizi görevi işleyebilen hızlı, ekonomik ve hala oldukça yetenekli bir modeldir."
+ },
+ "anthropic.claude-v2": {
+ "description": "Anthropic'in bu modeli, karmaşık diyaloglardan yaratıcı içerik üretimine ve ayrıntılı talimat takibine kadar geniş bir görev yelpazesinde yüksek yetenek sergiler."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "Claude 2'nin güncellenmiş versiyonu, iki kat daha büyük bir bağlam penceresine sahiptir ve uzun belgeler ve RAG bağlamındaki güvenilirlik, yanılsama oranı ve kanıta dayalı doğrulukta iyileştirmeler sunar."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku, Anthropic'in en hızlı ve en kompakt modelidir; neredeyse anlık yanıtlar sağlamak için tasarlanmıştır. Hızlı ve doğru yönlendirme performansına sahiptir."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus, Anthropic'in son derece karmaşık görevleri işlemek için en güçlü modelidir. Performans, zeka, akıcılık ve anlama açısından olağanüstü bir performans sergiler."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet, Opus'tan daha fazla yetenek ve Sonnet'ten daha hızlı bir hız sunar; aynı zamanda Sonnet ile aynı fiyatı korur. Sonnet, programlama, veri bilimi, görsel işleme ve ajan görevlerinde özellikle başarılıdır."
+ },
+ "aya": {
+ "description": "Aya 23, Cohere tarafından sunulan çok dilli bir modeldir, 23 dili destekler ve çok dilli uygulamalar için kolaylık sağlar."
+ },
+ "aya:35b": {
+ "description": "Aya 23, Cohere tarafından sunulan çok dilli bir modeldir, 23 dili destekler ve çok dilli uygulamalar için kolaylık sağlar."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3, rol yapma ve duygusal destek için tasarlanmış, ultra uzun çok turlu bellek ve kişiselleştirilmiş diyalog desteği sunan bir modeldir, geniş bir uygulama yelpazesine sahiptir."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o, güncel versiyonunu korumak için gerçek zamanlı olarak güncellenen dinamik bir modeldir. Güçlü dil anlama ve üretme yeteneklerini birleştirir, müşteri hizmetleri, eğitim ve teknik destek gibi geniş ölçekli uygulama senaryoları için uygundur."
+ },
+ "claude-2.0": {
+ "description": "Claude 2, işletmelere kritik yeteneklerin ilerlemesini sunar, sektördeki en iyi 200K token bağlamı, model yanılsamalarının önemli ölçüde azaltılması, sistem ipuçları ve yeni bir test özelliği: araç çağrısı içerir."
+ },
+ "claude-2.1": {
+ "description": "Claude 2, işletmelere kritik yeteneklerin ilerlemesini sunar, sektördeki en iyi 200K token bağlamı, model yanılsamalarının önemli ölçüde azaltılması, sistem ipuçları ve yeni bir test özelliği: araç çağrısı içerir."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet, Opus'tan daha fazla yetenek ve Sonnet'ten daha hızlı bir performans sunar, aynı zamanda Sonnet ile aynı fiyatı korur. Sonnet, programlama, veri bilimi, görsel işleme ve ajan görevlerinde özellikle başarılıdır."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku, Anthropic'in en hızlı ve en kompakt modelidir, neredeyse anlık yanıtlar sağlamak için tasarlanmıştır. Hızlı ve doğru yönlendirme performansına sahiptir."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus, Anthropic'in yüksek karmaşıklıkta görevleri işlemek için en güçlü modelidir. Performans, zeka, akıcılık ve anlama açısından mükemmel bir şekilde öne çıkar."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet, akıllı ve hızlı bir denge sunarak kurumsal iş yükleri için idealdir. Daha düşük bir fiyatla maksimum fayda sağlar, güvenilir ve büyük ölçekli dağıtım için uygundur."
+ },
+ "claude-instant-1.2": {
+ "description": "Anthropic'in modeli, düşük gecikme ve yüksek verimlilikte metin üretimi için kullanılır, yüzlerce sayfa metin üretebilir."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4, çeşitli programlama dillerinde akıllı soru-cevap ve kod tamamlama desteği sunan güçlü bir AI programlama asistanıdır, geliştirme verimliliğini artırır."
+ },
+ "codegemma": {
+ "description": "CodeGemma, farklı programlama görevleri için özel olarak tasarlanmış hafif bir dil modelidir, hızlı iterasyon ve entegrasyonu destekler."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma, farklı programlama görevleri için özel olarak tasarlanmış hafif bir dil modelidir, hızlı iterasyon ve entegrasyonu destekler."
+ },
+ "codellama": {
+ "description": "Code Llama, kod üretimi ve tartışmalarına odaklanan bir LLM'dir, geniş programlama dili desteği ile geliştirici ortamları için uygundur."
+ },
+ "codellama:13b": {
+ "description": "Code Llama, kod üretimi ve tartışmalarına odaklanan bir LLM'dir, geniş programlama dili desteği ile geliştirici ortamları için uygundur."
+ },
+ "codellama:34b": {
+ "description": "Code Llama, kod üretimi ve tartışmalarına odaklanan bir LLM'dir, geniş programlama dili desteği ile geliştirici ortamları için uygundur."
+ },
+ "codellama:70b": {
+ "description": "Code Llama, kod üretimi ve tartışmalarına odaklanan bir LLM'dir, geniş programlama dili desteği ile geliştirici ortamları için uygundur."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5, büyük miktarda kod verisi ile eğitilmiş büyük bir dil modelidir, karmaşık programlama görevlerini çözmek için özel olarak tasarlanmıştır."
+ },
+ "codestral": {
+ "description": "Codestral, Mistral AI'nın ilk kod modelidir, kod üretim görevlerine mükemmel destek sunar."
+ },
+ "codestral-latest": {
+ "description": "Codestral, kod üretimine odaklanan son teknoloji bir üretim modelidir, ara doldurma ve kod tamamlama görevlerini optimize etmiştir."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B, talimat takibi, diyalog ve programlama için tasarlanmış bir modeldir."
+ },
+ "cohere-command-r": {
+ "description": "Command R, üretim ölçeğinde AI sağlamak için RAG ve Araç Kullanımına yönelik ölçeklenebilir bir üretken modeldir."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+, kurumsal düzeyde iş yüklerini ele almak için tasarlanmış en son RAG optimize edilmiş bir modeldir."
+ },
+ "command-r": {
+ "description": "Command R, diyalog ve uzun bağlam görevleri için optimize edilmiş bir LLM'dir, dinamik etkileşim ve bilgi yönetimi için özellikle uygundur."
+ },
+ "command-r-plus": {
+ "description": "Command R+, gerçek işletme senaryoları ve karmaşık uygulamalar için tasarlanmış yüksek performanslı bir büyük dil modelidir."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct, yüksek güvenilirlikte talimat işleme yetenekleri sunar ve çok çeşitli endüstri uygulamalarını destekler."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5, önceki sürümlerin mükemmel özelliklerini bir araya getirir, genel ve kodlama yeteneklerini artırır."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B, yüksek karmaşıklıkta diyaloglar için eğitilmiş gelişmiş bir modeldir."
+ },
+ "deepseek-chat": {
+ "description": "Genel ve kod yeteneklerini birleştiren yeni bir açık kaynak modeli, yalnızca mevcut Chat modelinin genel diyalog yeteneklerini ve Coder modelinin güçlü kod işleme yeteneklerini korumakla kalmaz, aynı zamanda insan tercihleri ile daha iyi hizalanmıştır. Ayrıca, DeepSeek-V2.5 yazım görevleri, talimat takibi gibi birçok alanda büyük iyileştirmeler sağlamıştır."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2, açık kaynaklı bir karışık uzman kod modelidir, kod görevlerinde mükemmel performans sergiler ve GPT4-Turbo ile karşılaştırılabilir."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2, açık kaynaklı bir karışık uzman kod modelidir, kod görevlerinde mükemmel performans sergiler ve GPT4-Turbo ile karşılaştırılabilir."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2, ekonomik ve verimli işleme ihtiyaçları için uygun, etkili bir Mixture-of-Experts dil modelidir."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B, DeepSeek'in tasarım kodu modelidir, güçlü kod üretim yetenekleri sunar."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Genel ve kod yeteneklerini birleştiren yeni açık kaynak model, yalnızca mevcut Chat modelinin genel diyalog yeteneklerini ve Coder modelinin güçlü kod işleme yeteneklerini korumakla kalmaz, aynı zamanda insan tercihleriyle daha iyi hizalanmıştır. Ayrıca, DeepSeek-V2.5 yazma görevleri, talimat takibi gibi birçok alanda da büyük iyileştirmeler sağlamıştır."
+ },
+ "emohaa": {
+ "description": "Emohaa, duygusal sorunları anlamalarına yardımcı olmak için profesyonel danışmanlık yeteneklerine sahip bir psikolojik modeldir."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Tuning), kararlı ve ayarlanabilir bir performans sunar, karmaşık görev çözümleri için ideal bir seçimdir."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Tuning), mükemmel çok modlu destek sunar ve karmaşık görevlerin etkili bir şekilde çözülmesine odaklanır."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro, Google'ın yüksek performanslı AI modelidir ve geniş görev genişletmeleri için tasarlanmıştır."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001, geniş uygulama alanları için destekleyen verimli bir çok modlu modeldir."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002, geniş uygulama yelpazesini destekleyen verimli bir çok modlu modeldir."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827, büyük ölçekli görev senaryolarını işlemek için tasarlanmış, eşsiz bir işleme hızı sunar."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924, metin ve çok modlu kullanım durumlarında önemli performans artışları sunan en son deneysel modeldir."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827, çeşitli karmaşık görev senaryoları için optimize edilmiş çok modlu işleme yeteneği sunar."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash, Google'ın en son çok modlu AI modelidir, hızlı işleme yeteneğine sahiptir ve metin, görüntü ve video girişi destekler, çeşitli görevlerin verimli bir şekilde genişletilmesine olanak tanır."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001, geniş karmaşık görevleri destekleyen ölçeklenebilir bir çok modlu AI çözümüdür."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002, daha yüksek kaliteli çıktılar sunan en son üretim hazır modeldir; özellikle matematik, uzun bağlam ve görsel görevlerde önemli iyileştirmeler sağlamaktadır."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801, mükemmel çok modlu işleme yeteneği sunar ve uygulama geliştirmeye daha fazla esneklik kazandırır."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827, en son optimize edilmiş teknolojileri bir araya getirerek daha verimli çok modlu veri işleme yeteneği sunar."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro, 2 milyon token'a kadar destekler, orta ölçekli çok modlu modeller için ideal bir seçimdir ve karmaşık görevler için çok yönlü destek sunar."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B, orta ölçekli görev işleme için uygundur ve maliyet etkinliği sunar."
+ },
+ "gemma2": {
+ "description": "Gemma 2, Google tarafından sunulan verimli bir modeldir, küçük uygulamalardan karmaşık veri işleme senaryolarına kadar çeşitli uygulama alanlarını kapsar."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B, belirli görevler ve araç entegrasyonu için optimize edilmiş bir modeldir."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2, Google tarafından sunulan verimli bir modeldir, küçük uygulamalardan karmaşık veri işleme senaryolarına kadar çeşitli uygulama alanlarını kapsar."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2, Google tarafından sunulan verimli bir modeldir, küçük uygulamalardan karmaşık veri işleme senaryolarına kadar çeşitli uygulama alanlarını kapsar."
+ },
+ "general": {
+ "description": "Spark Lite, son derece düşük gecikme ve yüksek verimlilik sunan hafif bir büyük dil modelidir, tamamen ücretsiz ve açık olup, gerçek zamanlı çevrimiçi arama işlevini destekler. Hızlı yanıt verme özelliği, düşük hesaplama gücüne sahip cihazlarda çıkarım uygulamaları ve model ince ayarlarında mükemmel performans gösterir, kullanıcılara mükemmel maliyet etkinliği ve akıllı deneyim sunar, özellikle bilgi sorgulama, içerik oluşturma ve arama senaryolarında başarılıdır."
+ },
+ "generalv3": {
+ "description": "Spark Pro, profesyonel alanlar için optimize edilmiş yüksek performanslı büyük dil modelidir, matematik, programlama, sağlık, eğitim gibi birçok alana odaklanır ve çevrimiçi arama ile yerleşik hava durumu, tarih gibi eklentileri destekler. Optimize edilmiş modeli, karmaşık bilgi sorgulama, dil anlama ve yüksek düzeyde metin oluşturma konularında mükemmel performans ve yüksek verimlilik sergiler, profesyonel uygulama senaryoları için ideal bir seçimdir."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max, en kapsamlı özelliklere sahip versiyondur, çevrimiçi arama ve birçok yerleşik eklentiyi destekler. Kapsamlı optimize edilmiş temel yetenekleri ve sistem rol ayarları ile fonksiyon çağırma özellikleri, çeşitli karmaşık uygulama senaryolarında son derece mükemmel ve olağanüstü performans sergiler."
+ },
+ "glm-4": {
+ "description": "GLM-4, Ocak 2024'te piyasaya sürülen eski amiral gemisi versiyonudur, şu anda daha güçlü GLM-4-0520 ile değiştirilmiştir."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520, son derece karmaşık ve çeşitli görevler için tasarlanmış en yeni model versiyonudur, olağanüstü performans sergiler."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air, maliyet etkin bir versiyondur, GLM-4'e yakın performans sunar ve hızlı hız ve uygun fiyat sağlar."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX, GLM-4-Air'ın verimli bir versiyonunu sunar, çıkarım hızı 2.6 katına kadar çıkabilir."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools, karmaşık talimat planlaması ve araç çağrıları gibi çok işlevli görevleri desteklemek için optimize edilmiş bir akıllı modeldir. İnternet tarayıcıları, kod açıklamaları ve metin üretimi gibi çoklu görevleri yerine getirmek için uygundur."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash, basit görevleri işlemek için ideal bir seçimdir, en hızlı ve en uygun fiyatlıdır."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long, ultra uzun metin girişlerini destekler, bellek tabanlı görevler ve büyük ölçekli belge işleme için uygundur."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus, güçlü uzun metin işleme ve karmaşık görevler için yeteneklere sahip yüksek akıllı bir amiral gemisidir, performansı tamamen artırılmıştır."
+ },
+ "glm-4v": {
+ "description": "GLM-4V, güçlü görüntü anlama ve akıl yürütme yetenekleri sunar, çeşitli görsel görevleri destekler."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus, video içeriği ve çoklu görüntüleri anlama yeteneğine sahiptir, çok modlu görevler için uygundur."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827, optimize edilmiş çok modlu işleme yetenekleri sunar ve çeşitli karmaşık görev senaryolarına uygundur."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827, en son optimize edilmiş teknolojileri birleştirerek daha verimli çok modlu veri işleme yetenekleri sunar."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2, hafiflik ve verimlilik tasarım felsefesini sürdürmektedir."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2, Google'ın hafif açık kaynak metin modeli serisidir."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2, Google'ın hafif açık kaynak metin modeli serisidir."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B), temel talimat işleme yetenekleri sunar ve hafif uygulamalar için uygundur."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo, çeşitli metin üretimi ve anlama görevleri için uygundur, şu anda gpt-3.5-turbo-0125'e işaret ediyor."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo, çeşitli metin üretimi ve anlama görevleri için uygundur, şu anda gpt-3.5-turbo-0125'e işaret ediyor."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo, çeşitli metin üretimi ve anlama görevleri için uygundur, şu anda gpt-3.5-turbo-0125'e işaret ediyor."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo, çeşitli metin üretimi ve anlama görevleri için uygundur, şu anda gpt-3.5-turbo-0125'e işaret ediyor."
+ },
+ "gpt-4": {
+ "description": "GPT-4, daha büyük bir bağlam penceresi sunarak daha uzun metin girişlerini işleyebilir, geniş bilgi entegrasyonu ve veri analizi gerektiren senaryolar için uygundur."
+ },
+ "gpt-4-0125-preview": {
+ "description": "En son GPT-4 Turbo modeli görsel işlevselliğe sahiptir. Artık görsel talepler JSON formatı ve fonksiyon çağrıları ile işlenebilir. GPT-4 Turbo, çok modlu görevler için maliyet etkin bir destek sunan geliştirilmiş bir versiyondur. Doğruluk ve verimlilik arasında bir denge sağlar, gerçek zamanlı etkileşim gerektiren uygulama senaryoları için uygundur."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4, daha büyük bir bağlam penceresi sunarak daha uzun metin girişlerini işleyebilir, geniş bilgi entegrasyonu ve veri analizi gerektiren senaryolar için uygundur."
+ },
+ "gpt-4-1106-preview": {
+ "description": "En son GPT-4 Turbo modeli görsel işlevselliğe sahiptir. Artık görsel talepler JSON formatı ve fonksiyon çağrıları ile işlenebilir. GPT-4 Turbo, çok modlu görevler için maliyet etkin bir destek sunan geliştirilmiş bir versiyondur. Doğruluk ve verimlilik arasında bir denge sağlar, gerçek zamanlı etkileşim gerektiren uygulama senaryoları için uygundur."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "En son GPT-4 Turbo modeli görsel işlevselliğe sahiptir. Artık görsel talepler JSON formatı ve fonksiyon çağrıları ile işlenebilir. GPT-4 Turbo, çok modlu görevler için maliyet etkin bir destek sunan geliştirilmiş bir versiyondur. Doğruluk ve verimlilik arasında bir denge sağlar, gerçek zamanlı etkileşim gerektiren uygulama senaryoları için uygundur."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4, daha büyük bir bağlam penceresi sunarak daha uzun metin girişlerini işleyebilir, geniş bilgi entegrasyonu ve veri analizi gerektiren senaryolar için uygundur."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4, daha büyük bir bağlam penceresi sunarak daha uzun metin girişlerini işleyebilir, geniş bilgi entegrasyonu ve veri analizi gerektiren senaryolar için uygundur."
+ },
+ "gpt-4-turbo": {
+ "description": "En son GPT-4 Turbo modeli görsel işlevselliğe sahiptir. Artık görsel talepler JSON formatı ve fonksiyon çağrıları ile işlenebilir. GPT-4 Turbo, çok modlu görevler için maliyet etkin bir destek sunan geliştirilmiş bir versiyondur. Doğruluk ve verimlilik arasında bir denge sağlar, gerçek zamanlı etkileşim gerektiren uygulama senaryoları için uygundur."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "En son GPT-4 Turbo modeli görsel işlevselliğe sahiptir. Artık görsel talepler JSON formatı ve fonksiyon çağrıları ile işlenebilir. GPT-4 Turbo, çok modlu görevler için maliyet etkin bir destek sunan geliştirilmiş bir versiyondur. Doğruluk ve verimlilik arasında bir denge sağlar, gerçek zamanlı etkileşim gerektiren uygulama senaryoları için uygundur."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "En son GPT-4 Turbo modeli görsel işlevselliğe sahiptir. Artık görsel talepler JSON formatı ve fonksiyon çağrıları ile işlenebilir. GPT-4 Turbo, çok modlu görevler için maliyet etkin bir destek sunan geliştirilmiş bir versiyondur. Doğruluk ve verimlilik arasında bir denge sağlar, gerçek zamanlı etkileşim gerektiren uygulama senaryoları için uygundur."
+ },
+ "gpt-4-vision-preview": {
+ "description": "En son GPT-4 Turbo modeli görsel işlevselliğe sahiptir. Artık görsel talepler JSON formatı ve fonksiyon çağrıları ile işlenebilir. GPT-4 Turbo, çok modlu görevler için maliyet etkin bir destek sunan geliştirilmiş bir versiyondur. Doğruluk ve verimlilik arasında bir denge sağlar, gerçek zamanlı etkileşim gerektiren uygulama senaryoları için uygundur."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o, güncel versiyonunu korumak için gerçek zamanlı olarak güncellenen dinamik bir modeldir. Güçlü dil anlama ve üretme yeteneklerini birleştirir, müşteri hizmetleri, eğitim ve teknik destek gibi geniş ölçekli uygulama senaryoları için uygundur."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o, güncel versiyonunu korumak için gerçek zamanlı olarak güncellenen dinamik bir modeldir. Güçlü dil anlama ve üretme yeteneklerini birleştirir, müşteri hizmetleri, eğitim ve teknik destek gibi geniş ölçekli uygulama senaryoları için uygundur."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o, güncel versiyonunu korumak için gerçek zamanlı olarak güncellenen dinamik bir modeldir. Güçlü dil anlama ve üretme yeteneklerini birleştirir, müşteri hizmetleri, eğitim ve teknik destek gibi geniş ölçekli uygulama senaryoları için uygundur."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini, OpenAI'nin GPT-4 Omni'den sonra tanıttığı en yeni modeldir. Görsel ve metin girişi destekler ve metin çıktısı verir. En gelişmiş küçük model olarak, diğer son zamanlardaki öncü modellere göre çok daha ucuzdur ve GPT-3.5 Turbo'dan %60'tan fazla daha ucuzdur. En son teknolojiyi korurken, önemli bir maliyet etkinliği sunar. GPT-4o mini, MMLU testinde %82 puan almış olup, şu anda sohbet tercihleri açısından GPT-4'ün üzerinde yer almaktadır."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B, birden fazla üst düzey modelin birleşimiyle yaratıcı ve zeka odaklı bir dil modelidir."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "Yenilikçi açık kaynak modeli InternLM2.5, büyük ölçekli parametreler ile diyalog zekasını artırmıştır."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5, çoklu senaryolarda akıllı diyalog çözümleri sunar."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct modeli, 70B parametreye sahiptir ve büyük metin üretimi ve talimat görevlerinde mükemmel performans sunar."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B, daha güçlü AI akıl yürütme yeteneği sunar, karmaşık uygulamalar için uygundur ve yüksek verimlilik ve doğruluk sağlamak için çok sayıda hesaplama işlemini destekler."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B, hızlı metin üretim yeteneği sunan yüksek performanslı bir modeldir ve büyük ölçekli verimlilik ve maliyet etkinliği gerektiren uygulama senaryoları için son derece uygundur."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct modeli, 8B parametreye sahiptir ve görsel talimat görevlerinin etkili bir şekilde yürütülmesini sağlar, kaliteli metin üretim yetenekleri sunar."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Llama 3.1 Sonar Huge Online modeli, 405B parametreye sahiptir ve yaklaşık 127,000 belirteçlik bağlam uzunluğunu destekler, karmaşık çevrimiçi sohbet uygulamaları için tasarlanmıştır."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Llama 3.1 Sonar Large Chat modeli, 70B parametreye sahiptir ve yaklaşık 127,000 belirteçlik bağlam uzunluğunu destekler, karmaşık çevrimdışı sohbet görevleri için uygundur."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Llama 3.1 Sonar Large Online modeli, 70B parametreye sahiptir ve yaklaşık 127,000 belirteçlik bağlam uzunluğunu destekler, yüksek kapasiteli ve çeşitli sohbet görevleri için uygundur."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Llama 3.1 Sonar Small Chat modeli, 8B parametreye sahiptir ve çevrimdışı sohbet için tasarlanmıştır, yaklaşık 127,000 belirteçlik bağlam uzunluğunu destekler."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Llama 3.1 Sonar Small Online modeli, 8B parametreye sahiptir ve yaklaşık 127,000 belirteçlik bağlam uzunluğunu destekler, çevrimiçi sohbet için tasarlanmıştır ve çeşitli metin etkileşimlerini etkili bir şekilde işler."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B, eşsiz karmaşıklık işleme yeteneği sunar ve yüksek talepli projeler için özel olarak tasarlanmıştır."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B, yüksek kaliteli akıl yürütme performansı sunar ve çok çeşitli uygulama ihtiyaçları için uygundur."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use, güçlü araç çağırma yetenekleri sunar ve karmaşık görevlerin verimli bir şekilde işlenmesini destekler."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use, verimli araç kullanımı için optimize edilmiş bir modeldir ve hızlı paralel hesaplamayı destekler."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1, Meta tarafından sunulan öncü bir modeldir, 405B parametreye kadar destekler ve karmaşık diyaloglar, çok dilli çeviri ve veri analizi alanlarında kullanılabilir."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1, Meta tarafından sunulan öncü bir modeldir, 405B parametreye kadar destekler ve karmaşık diyaloglar, çok dilli çeviri ve veri analizi alanlarında kullanılabilir."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1, Meta tarafından sunulan öncü bir modeldir, 405B parametreye kadar destekler ve karmaşık diyaloglar, çok dilli çeviri ve veri analizi alanlarında kullanılabilir."
+ },
+ "llava": {
+ "description": "LLaVA, görsel kodlayıcı ve Vicuna'yı birleştiren çok modlu bir modeldir, güçlü görsel ve dil anlama yetenekleri sunar."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B, görsel işleme yeteneklerini birleştirir ve görsel bilgi girişi ile karmaşık çıktılar üretir."
+ },
+ "llava:13b": {
+ "description": "LLaVA, görsel kodlayıcı ve Vicuna'yı birleştiren çok modlu bir modeldir, güçlü görsel ve dil anlama yetenekleri sunar."
+ },
+ "llava:34b": {
+ "description": "LLaVA, görsel kodlayıcı ve Vicuna'yı birleştiren çok modlu bir modeldir, güçlü görsel ve dil anlama yetenekleri sunar."
+ },
+ "mathstral": {
+ "description": "MathΣtral, bilimsel araştırma ve matematik akıl yürütme için tasarlanmış, etkili hesaplama yetenekleri ve sonuç açıklamaları sunar."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Akıl yürütme, kodlama ve geniş dil uygulamalarında mükemmel bir 70 milyar parametreli model."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Diyalog ve metin üretim görevleri için optimize edilmiş çok yönlü bir 8 milyar parametreli model."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 talimat ayarlı yalnızca metin modelleri, çok dilli diyalog kullanım durumları için optimize edilmiştir ve mevcut açık kaynak ve kapalı sohbet modellerinin çoğunu yaygın endüstri standartlarında geride bırakmaktadır."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 talimat ayarlı yalnızca metin modelleri, çok dilli diyalog kullanım durumları için optimize edilmiştir ve mevcut açık kaynak ve kapalı sohbet modellerinin çoğunu yaygın endüstri standartlarında geride bırakmaktadır."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 talimat ayarlı yalnızca metin modelleri, çok dilli diyalog kullanım durumları için optimize edilmiştir ve mevcut açık kaynak ve kapalı sohbet modellerinin çoğunu yaygın endüstri standartlarında geride bırakmaktadır."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B), mükemmel dil işleme yetenekleri ve olağanüstü etkileşim deneyimi sunar."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B), karmaşık diyalog ihtiyaçlarını destekleyen güçlü bir sohbet modelidir."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B), çok dilli desteği ile zengin alan bilgilerini kapsar."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite, yüksek performans ve düşük gecikme gerektiren ortamlara uygundur."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo, en zorlu hesaplama görevleri için mükemmel dil anlama ve üretim yetenekleri sunar."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite, kaynak kısıtlı ortamlara uygun, mükemmel denge performansı sunar."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo, geniş uygulama alanlarını destekleyen yüksek performanslı bir büyük dil modelidir."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B, ön eğitim ve talimat ayarlaması için güçlü bir modeldir."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "405B Llama 3.1 Turbo modeli, büyük veri işleme için devasa bağlam desteği sunar ve büyük ölçekli AI uygulamalarında öne çıkar."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B, çok dilli yüksek verimli diyalog desteği sunmaktadır."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Llama 3.1 70B modeli, yüksek yük uygulamaları için ince ayar yapılmış, FP8'e kuantize edilerek daha verimli hesaplama gücü ve doğruluk sağlar, karmaşık senaryolarda mükemmel performans sunar."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1, çok dilli destek sunan, sektördeki önde gelen üretim modellerinden biridir."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Llama 3.1 8B modeli, FP8 kuantizasyonu ile 131,072'ye kadar bağlam belirteci destekler, karmaşık görevler için mükemmel bir açık kaynak modelidir ve birçok endüstri standardını aşar."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct, yüksek kaliteli diyalog senaryoları için optimize edilmiştir ve çeşitli insan değerlendirmelerinde mükemmel performans göstermektedir."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct, yüksek kaliteli diyalog senaryoları için optimize edilmiştir ve birçok kapalı kaynak modelden daha iyi performans göstermektedir."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct, Meta'nın en son sunduğu versiyon olup, yüksek kaliteli diyalog üretimi için optimize edilmiştir ve birçok önde gelen kapalı kaynak modelden daha iyi performans göstermektedir."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct, yüksek kaliteli diyalog için tasarlanmış olup, insan değerlendirmelerinde öne çıkmakta ve özellikle yüksek etkileşimli senaryolar için uygundur."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct, Meta tarafından sunulan en son versiyon olup, yüksek kaliteli diyalog senaryoları için optimize edilmiştir ve birçok önde gelen kapalı kaynak modelden daha iyi performans göstermektedir."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1, çok dilli destek sunar ve sektördeki en önde gelen üretim modellerinden biridir."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct, Llama 3.1 Instruct modelinin en büyük ve en güçlü versiyonudur. Bu, son derece gelişmiş bir diyalog akıl yürütme ve veri sentezleme modelidir ve belirli alanlarda uzmanlaşmış sürekli ön eğitim veya ince ayar için bir temel olarak da kullanılabilir. Llama 3.1, çok dilli büyük dil modelleri (LLM'ler) sunar ve 8B, 70B ve 405B boyutlarında önceden eğitilmiş, talimat ayarlı üretim modellerinden oluşur (metin girişi/çıkışı). Llama 3.1'in talimat ayarlı metin modelleri (8B, 70B, 405B), çok dilli diyalog kullanım durumları için optimize edilmiştir ve yaygın endüstri benchmark testlerinde birçok mevcut açık kaynaklı sohbet modelini geride bırakmıştır. Llama 3.1, çok dilli ticari ve araştırma amaçları için tasarlanmıştır. Talimat ayarlı metin modelleri, asistan benzeri sohbetler için uygundur, önceden eğitilmiş modeller ise çeşitli doğal dil üretim görevlerine uyum sağlayabilir. Llama 3.1 modeli, diğer modellerin çıktısını iyileştirmek için de kullanılabilir, bu da veri sentezleme ve rafine etme işlemlerini içerir. Llama 3.1, optimize edilmiş bir transformer mimarisi kullanarak oluşturulmuş bir otoregresif dil modelidir. Ayarlanmış versiyon, insan yardımseverliği ve güvenlik tercihleri ile uyumlu hale getirmek için denetimli ince ayar (SFT) ve insan geri bildirimi ile güçlendirilmiş öğrenme (RLHF) kullanır."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 70B Instruct'un güncellenmiş versiyonu, genişletilmiş 128K bağlam uzunluğu, çok dilli yetenek ve geliştirilmiş akıl yürütme yetenekleri içerir. Llama 3.1 tarafından sağlanan çok dilli büyük dil modelleri (LLM'ler), 8B, 70B ve 405B boyutlarında önceden eğitilmiş, talimat ayarlı üretim modelleridir (metin girişi/çıkışı). Llama 3.1 talimat ayarlı metin modelleri (8B, 70B, 405B), çok dilli diyalog kullanım durumları için optimize edilmiştir ve birçok mevcut açık kaynaklı sohbet modelini yaygın endüstri benchmark testlerinde geçmiştir. Llama 3.1, çok dilli ticari ve araştırma amaçları için kullanılmak üzere tasarlanmıştır. Talimat ayarlı metin modelleri, asistan benzeri sohbetler için uygundur, önceden eğitilmiş modeller ise çeşitli doğal dil üretim görevlerine uyum sağlayabilir. Llama 3.1 modeli, diğer modellerin çıktısını iyileştirmek için de kullanılabilir, bu da sentetik veri üretimi ve rafine etme işlemlerini içerir. Llama 3.1, optimize edilmiş bir dönüştürücü mimarisi kullanarak oluşturulmuş bir otoregresif dil modelidir. Ayarlanmış versiyonlar, insan yardımseverliği ve güvenlik tercihlerini karşılamak için denetimli ince ayar (SFT) ve insan geri bildirimli pekiştirmeli öğrenme (RLHF) kullanır."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 8B Instruct'un güncellenmiş versiyonu, genişletilmiş 128K bağlam uzunluğu, çok dilli yetenek ve geliştirilmiş akıl yürütme yetenekleri içerir. Llama 3.1 tarafından sağlanan çok dilli büyük dil modelleri (LLM'ler), 8B, 70B ve 405B boyutlarında önceden eğitilmiş, talimat ayarlı üretim modelleridir (metin girişi/çıkışı). Llama 3.1 talimat ayarlı metin modelleri (8B, 70B, 405B), çok dilli diyalog kullanım durumları için optimize edilmiştir ve birçok mevcut açık kaynaklı sohbet modelini yaygın endüstri benchmark testlerinde geçmiştir. Llama 3.1, çok dilli ticari ve araştırma amaçları için kullanılmak üzere tasarlanmıştır. Talimat ayarlı metin modelleri, asistan benzeri sohbetler için uygundur, önceden eğitilmiş modeller ise çeşitli doğal dil üretim görevlerine uyum sağlayabilir. Llama 3.1 modeli, diğer modellerin çıktısını iyileştirmek için de kullanılabilir, bu da sentetik veri üretimi ve rafine etme işlemlerini içerir. Llama 3.1, optimize edilmiş bir dönüştürücü mimarisi kullanarak oluşturulmuş bir otoregresif dil modelidir. Ayarlanmış versiyonlar, insan yardımseverliği ve güvenlik tercihlerini karşılamak için denetimli ince ayar (SFT) ve insan geri bildirimli pekiştirmeli öğrenme (RLHF) kullanır."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3, geliştiriciler, araştırmacılar ve işletmeler için açık bir büyük dil modelidir (LLM) ve onların üretken AI fikirlerini inşa etmelerine, denemelerine ve sorumlu bir şekilde genişletmelerine yardımcı olmak için tasarlanmıştır. Küresel topluluk yeniliğinin temel sistemlerinden biri olarak, içerik oluşturma, diyalog AI, dil anlama, araştırma ve işletme uygulamaları için son derece uygundur."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3, geliştiriciler, araştırmacılar ve işletmeler için açık bir büyük dil modelidir (LLM) ve onların üretken AI fikirlerini inşa etmelerine, denemelerine ve sorumlu bir şekilde genişletmelerine yardımcı olmak için tasarlanmıştır. Küresel topluluk yeniliğinin temel sistemlerinden biri olarak, sınırlı hesaplama gücü ve kaynaklara sahip, kenar cihazları ve daha hızlı eğitim süreleri için son derece uygundur."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B, Microsoft AI'nın en son hızlı ve hafif modelidir ve mevcut açık kaynak lider modellerin performansına yakın bir performans sunmaktadır."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B, Microsoft'un en gelişmiş AI Wizard modelidir ve son derece rekabetçi bir performans sergiler."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V, OpenBMB tarafından sunulan yeni nesil çok modlu büyük bir modeldir; olağanüstü OCR tanıma ve çok modlu anlama yeteneklerine sahiptir ve geniş bir uygulama yelpazesini destekler."
+ },
+ "mistral": {
+ "description": "Mistral, Mistral AI tarafından sunulan 7B modelidir, değişken dil işleme ihtiyaçları için uygundur."
+ },
+ "mistral-large": {
+ "description": "Mistral Large, Mistral'ın amiral gemisi modelidir, kod üretimi, matematik ve akıl yürütme yeteneklerini birleştirir, 128k bağlam penceresini destekler."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407), en son akıl yürütme, bilgi ve kodlama yetenekleri ile gelişmiş bir Büyük Dil Modelidir (LLM)."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large, çok dilli görevler, karmaşık akıl yürütme ve kod üretimi için ideal bir seçimdir ve yüksek uç uygulamalar için tasarlanmıştır."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo, Mistral AI ve NVIDIA işbirliği ile sunulan, yüksek verimli 12B modelidir."
+ },
+ "mistral-small": {
+ "description": "Mistral Small, yüksek verimlilik ve düşük gecikme gerektiren her dil tabanlı görevde kullanılabilir."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small, çeviri, özetleme ve duygu analizi gibi kullanım durumları için maliyet etkin, hızlı ve güvenilir bir seçenektir."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct, yüksek performansıyla tanınır ve çeşitli dil görevleri için uygundur."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B, talebe göre ince ayar yapılmış bir modeldir ve görevler için optimize edilmiş yanıtlar sunar."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3, geniş uygulamalar için etkili hesaplama gücü ve doğal dil anlama sunar."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B), son derece büyük bir dil modelidir ve çok yüksek işleme taleplerini destekler."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B, genel metin görevleri için kullanılan önceden eğitilmiş seyrek karışık uzman modelidir."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct, hız optimizasyonu ve uzun bağlam desteği sunan yüksek performanslı bir endüstri standart modelidir."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo, çok dilli destek ve yüksek performanslı programlama sunan 7.3B parametreli bir modeldir."
+ },
+ "mixtral": {
+ "description": "Mixtral, Mistral AI'nın uzman modelidir, açık kaynak ağırlıkları ile birlikte gelir ve kod üretimi ve dil anlama konularında destek sunar."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B, yüksek hata toleransına sahip paralel hesaplama yeteneği sunar ve karmaşık görevler için uygundur."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral, Mistral AI'nın uzman modelidir, açık kaynak ağırlıkları ile birlikte gelir ve kod üretimi ve dil anlama konularında destek sunar."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K, ultra uzun bağlam işleme yeteneğine sahip bir modeldir, karmaşık üretim görevlerini karşılamak için ultra uzun metinler üretmekte kullanılabilir, 128,000 token'a kadar içeriği işleyebilir, araştırma, akademik ve büyük belgelerin üretilmesi gibi uygulama senaryoları için son derece uygundur."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K, orta uzunlukta bağlam işleme yeteneği sunar, 32,768 token'ı işleyebilir, çeşitli uzun belgeler ve karmaşık diyaloglar üretmek için özellikle uygundur, içerik oluşturma, rapor üretimi ve diyalog sistemleri gibi alanlarda kullanılabilir."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K, kısa metin görevleri için tasarlanmış, yüksek verimlilikte işleme performansı sunar, 8,192 token'ı işleyebilir, kısa diyaloglar, not alma ve hızlı içerik üretimi için son derece uygundur."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B, Nous Hermes 2'nin güncellenmiş versiyonudur ve en son iç geliştirme veri setlerini içermektedir."
+ },
+ "o1-mini": {
+ "description": "o1-mini, programlama, matematik ve bilim uygulama senaryoları için tasarlanmış hızlı ve ekonomik bir akıl yürütme modelidir. Bu model, 128K bağlam ve Ekim 2023 bilgi kesim tarihi ile donatılmıştır."
+ },
+ "o1-preview": {
+ "description": "o1, OpenAI'nin geniş genel bilgiye ihtiyaç duyan karmaşık görevler için uygun yeni bir akıl yürütme modelidir. Bu model, 128K bağlam ve Ekim 2023 bilgi kesim tarihi ile donatılmıştır."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba, kod üretimine odaklanan Mamba 2 dil modelidir ve ileri düzey kod ve akıl yürütme görevlerine güçlü destek sunar."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B, kompakt ama yüksek performanslı bir modeldir, sınıflandırma ve metin üretimi gibi basit görevlerde iyi bir akıl yürütme yeteneğine sahiptir."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo, Nvidia ile işbirliği içinde geliştirilmiş 12B modelidir, mükemmel akıl yürütme ve kodlama performansı sunar, entegrasyonu ve değiştirilmesi kolaydır."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B, karmaşık görevlere odaklanan daha büyük bir uzman modelidir, mükemmel akıl yürütme yeteneği ve daha yüksek bir verim sunar."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B, birden fazla parametre kullanarak akıl yürütme hızını artıran seyrek uzman modelidir, çok dilli ve kod üretim görevlerini işlemek için uygundur."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o, dinamik bir modeldir; en güncel versiyonu korumak için gerçek zamanlı olarak güncellenir. Güçlü dil anlama ve üretme yeteneklerini birleştirir, geniş ölçekli uygulama senaryoları için uygundur; müşteri hizmetleri, eğitim ve teknik destek gibi."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini, OpenAI'nin GPT-4 Omni'den sonra sunduğu en son modeldir; görsel ve metin girişi destekler ve metin çıktısı verir. En gelişmiş küçük model olarak, diğer son zamanlardaki öncü modellere göre çok daha ucuzdur ve GPT-3.5 Turbo'dan %60'tan fazla daha ucuzdur. En son teknolojiyi korurken, önemli bir maliyet etkinliği sunar. GPT-4o mini, MMLU testinde %82 puan almış olup, şu anda sohbet tercihleri açısından GPT-4'ün üzerinde bir sıralamaya sahiptir."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini, programlama, matematik ve bilim uygulama senaryoları için tasarlanmış hızlı ve ekonomik bir akıl yürütme modelidir. Bu model, 128K bağlam ve Ekim 2023 bilgi kesim tarihi ile donatılmıştır."
+ },
+ "openai/o1-preview": {
+ "description": "o1, OpenAI'nin geniş genel bilgiye ihtiyaç duyan karmaşık görevler için uygun yeni bir akıl yürütme modelidir. Bu model, 128K bağlam ve Ekim 2023 bilgi kesim tarihi ile donatılmıştır."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B, 'C-RLFT (koşullu pekiştirme öğrenimi ince ayarı)' stratejisi ile ince ayar yapılmış açık kaynak dil modeli kütüphanesidir."
+ },
+ "openrouter/auto": {
+ "description": "Bağlam uzunluğu, konu ve karmaşıklığa göre isteğiniz, Llama 3 70B Instruct, Claude 3.5 Sonnet (öz denetimli) veya GPT-4o'ya gönderilecektir."
+ },
+ "phi3": {
+ "description": "Phi-3, Microsoft tarafından sunulan hafif bir açık modeldir, verimli entegrasyon ve büyük ölçekli bilgi akıl yürütme için uygundur."
+ },
+ "phi3:14b": {
+ "description": "Phi-3, Microsoft tarafından sunulan hafif bir açık modeldir, verimli entegrasyon ve büyük ölçekli bilgi akıl yürütme için uygundur."
+ },
+ "pixtral-12b-2409": {
+ "description": "Pixtral modeli, grafik ve görüntü anlama, belge yanıtı, çok modlu akıl yürütme ve talimat takibi gibi görevlerde güçlü yetenekler sergiler, doğal çözünürlük ve en boy oranında görüntüleri alabilir ve 128K token uzunluğunda bir bağlam penceresinde herhangi bir sayıda görüntüyü işleyebilir."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "Tongyi Qianwen kodlama modeli."
+ },
+ "qwen-long": {
+ "description": "Tongyi Qianwen, uzun metin bağlamını destekleyen ve uzun belgeler, çoklu belgeler gibi çeşitli senaryolar için diyalog işlevselliği sunan büyük ölçekli bir dil modelidir."
+ },
+ "qwen-math-plus-latest": {
+ "description": "Tongyi Qianwen matematik modeli, matematik problemlerini çözmek için özel olarak tasarlanmış bir dil modelidir."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "Tongyi Qianwen matematik modeli, matematik problemlerini çözmek için özel olarak tasarlanmış bir dil modelidir."
+ },
+ "qwen-max-latest": {
+ "description": "Tongyi Qianwen, 100 milyar seviyesinde büyük bir dil modeli, Çince, İngilizce ve diğer dillerde girişleri destekler, şu anda Tongyi Qianwen 2.5 ürün versiyonunun arkasındaki API modelidir."
+ },
+ "qwen-plus-latest": {
+ "description": "Tongyi Qianwen'in geliştirilmiş versiyonu, çok dilli girişleri destekler."
+ },
+ "qwen-turbo-latest": {
+ "description": "Tongyi Qianwen, çok dilli bir dil modeli, Çince, İngilizce ve diğer dillerde girişleri destekler."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Tongyi Qianwen VL, çoklu görüntü, çok turlu soru-cevap, yaratım gibi esnek etkileşim yöntemlerini destekleyen bir modeldir."
+ },
+ "qwen-vl-max": {
+ "description": "Tongyi Qianwen, büyük ölçekli görsel dil modelidir. Geliştirilmiş versiyonuna göre görsel akıl yürütme ve talimatları takip etme yeteneklerini daha da artırır, daha yüksek görsel algı ve biliş düzeyi sunar."
+ },
+ "qwen-vl-plus": {
+ "description": "Tongyi Qianwen, büyük ölçekli görsel dil modelinin geliştirilmiş versiyonudur. Detay tanıma ve metin tanıma yeteneklerini önemli ölçüde artırır, bir milyondan fazla piksel çözünürlüğü ve herhangi bir en-boy oranı spesifikasyonunu destekler."
+ },
+ "qwen-vl-v1": {
+ "description": "Qwen-7B dil modeli ile başlatılan, 448 çözünürlükte görüntü girişi olan önceden eğitilmiş bir modeldir."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2, daha güçlü anlama ve üretme yeteneklerine sahip yeni bir büyük dil modeli serisidir."
+ },
+ "qwen2": {
+ "description": "Qwen2, Alibaba'nın yeni nesil büyük ölçekli dil modelidir, mükemmel performans ile çeşitli uygulama ihtiyaçlarını destekler."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "Tongyi Qianwen 2.5, halka açık 14B ölçeğinde bir modeldir."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "Tongyi Qianwen 2.5, halka açık 32B ölçeğinde bir modeldir."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "Tongyi Qianwen 2.5, halka açık 72B ölçeğinde bir modeldir."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "Tongyi Qianwen 2.5, halka açık 7B ölçeğinde bir modeldir."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "Tongyi Qianwen kodlama modelinin açık kaynak versiyonu."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "Tongyi Qianwen kodlama modelinin açık kaynak versiyonu."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Qwen-Math modeli, güçlü matematik problem çözme yeteneklerine sahiptir."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Qwen-Math modeli, güçlü matematik problem çözme yeteneklerine sahiptir."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Qwen-Math modeli, güçlü matematik problem çözme yeteneklerine sahiptir."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2, Alibaba'nın yeni nesil büyük ölçekli dil modelidir, mükemmel performans ile çeşitli uygulama ihtiyaçlarını destekler."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2, Alibaba'nın yeni nesil büyük ölçekli dil modelidir, mükemmel performans ile çeşitli uygulama ihtiyaçlarını destekler."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2, Alibaba'nın yeni nesil büyük ölçekli dil modelidir, mükemmel performans ile çeşitli uygulama ihtiyaçlarını destekler."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini, kompakt bir LLM'dir, GPT-3.5'ten daha iyi performans gösterir, güçlü çok dilli yeteneklere sahiptir, İngilizce ve Koreceyi destekler, etkili ve hafif bir çözüm sunar."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja), Solar Mini'nin yeteneklerini genişletir, Japonca'ya odaklanır ve İngilizce ile Korece kullanımında yüksek verimlilik ve mükemmel performans sunar."
+ },
+ "solar-pro": {
+ "description": "Solar Pro, Upstage tarafından sunulan yüksek akıllı LLM'dir, tek GPU talimat takibi yeteneğine odaklanır, IFEval puanı 80'in üzerindedir. Şu anda İngilizceyi desteklemekte olup, resmi versiyonunun 2024 Kasım'ında piyasaya sürülmesi planlanmaktadır ve dil desteği ile bağlam uzunluğunu genişletecektir."
+ },
+ "step-1-128k": {
+ "description": "Performans ve maliyet arasında denge sağlar, genel senaryolar için uygundur."
+ },
+ "step-1-256k": {
+ "description": "Ultra uzun bağlam işleme yeteneklerine sahiptir, özellikle uzun belgelerin analizine uygundur."
+ },
+ "step-1-32k": {
+ "description": "Orta uzunlukta diyalogları destekler, çeşitli uygulama senaryoları için uygundur."
+ },
+ "step-1-8k": {
+ "description": "Küçük model, hafif görevler için uygundur."
+ },
+ "step-1-flash": {
+ "description": "Yüksek hızlı model, gerçek zamanlı diyaloglar için uygundur."
+ },
+ "step-1v-32k": {
+ "description": "Görsel girdi desteği sunar, çok modlu etkileşim deneyimini artırır."
+ },
+ "step-1v-8k": {
+ "description": "Küçük görsel model, temel metin ve görsel görevler için uygundur."
+ },
+ "step-2-16k": {
+ "description": "Büyük ölçekli bağlam etkileşimlerini destekler, karmaşık diyalog senaryoları için uygundur."
+ },
+ "taichu_llm": {
+ "description": "Zidong Taichu büyük dil modeli, güçlü dil anlama yeteneği ile metin oluşturma, bilgi sorgulama, kod programlama, matematik hesaplama, mantıksal akıl yürütme, duygu analizi, metin özeti gibi yeteneklere sahiptir. Yenilikçi bir şekilde büyük veri ön eğitimi ile çok kaynaklı zengin bilgiyi birleştirir, algoritma teknolojisini sürekli olarak geliştirir ve büyük metin verilerinden kelime, yapı, dil bilgisi, anlam gibi yeni bilgileri sürekli olarak edinir, modelin performansını sürekli olarak evrimleştirir. Kullanıcılara daha kolay bilgi ve hizmetler sunar ve daha akıllı bir deneyim sağlar."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V, görüntü anlama, bilgi aktarımı, mantıksal çıkarım gibi yetenekleri birleştirerek, metin ve görsel soru-cevap alanında öne çıkmaktadır."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B), etkili stratejiler ve model mimarisi ile artırılmış hesaplama yetenekleri sunar."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B), ince ayar gerektiren talimat görevleri için uygundur ve mükemmel dil işleme yetenekleri sunar."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2, Microsoft AI tarafından sunulan bir dil modelidir, karmaşık diyaloglar, çok dilli, akıl yürütme ve akıllı asistan alanlarında özellikle başarılıdır."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2, Microsoft AI tarafından sunulan bir dil modelidir, karmaşık diyaloglar, çok dilli, akıl yürütme ve akıllı asistan alanlarında özellikle başarılıdır."
+ },
+ "yi-large": {
+ "description": "Yeni nesil yüz milyar parametreli model, güçlü soru yanıtlama ve metin üretim yetenekleri sunar."
+ },
+ "yi-large-fc": {
+ "description": "yi-large modelinin temelinde, araç çağrısı yeteneklerini destekleyip güçlendiren bir yapı sunar, çeşitli ajan veya iş akışı kurma gereksinimleri için uygundur."
+ },
+ "yi-large-preview": {
+ "description": "Erken sürüm, yi-large (yeni sürüm) kullanılması önerilir."
+ },
+ "yi-large-rag": {
+ "description": "yi-large modelinin güçlü bir hizmeti, arama ve üretim teknolojilerini birleştirerek doğru yanıtlar sunar, gerçek zamanlı olarak tüm ağdan bilgi arama hizmeti sağlar."
+ },
+ "yi-large-turbo": {
+ "description": "Son derece yüksek maliyet performansı ve mükemmel performans. Performans, akıl yürütme hızı ve maliyet dengesi gözetilerek yüksek hassasiyetli ayarlama yapılmıştır."
+ },
+ "yi-medium": {
+ "description": "Orta boyutlu model, dengeli yetenekler ve yüksek maliyet performansı sunar. Talimat takibi yetenekleri derinlemesine optimize edilmiştir."
+ },
+ "yi-medium-200k": {
+ "description": "200K ultra uzun bağlam penceresi, uzun metinlerin derinlemesine anlaşılması ve üretilmesi yetenekleri sunar."
+ },
+ "yi-spark": {
+ "description": "Küçük ama etkili, hafif ve hızlı bir modeldir. Güçlendirilmiş matematiksel işlemler ve kod yazma yetenekleri sunar."
+ },
+ "yi-vision": {
+ "description": "Karmaşık görsel görevler için model, yüksek performanslı resim anlama ve analiz yetenekleri sunar."
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/plugin.json b/DigitalHumanWeb/locales/tr-TR/plugin.json
new file mode 100644
index 0000000..45374ab
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Argümanlar",
+ "function_call": "Fonksiyon Çağrısı",
+ "off": "Hata Ayıklamayı Kapat",
+ "on": "Eklenti Çağrı Bilgilerini Görüntüle",
+ "payload": "Eklenti Yükü",
+ "response": "Yanıt",
+ "tool_call": "Araç Çağrısı"
+ },
+ "detailModal": {
+ "info": {
+ "description": "API Açıklaması",
+ "name": "API Adı"
+ },
+ "tabs": {
+ "info": "Eklenti Yetenekleri",
+ "manifest": "Yükleme Dosyası",
+ "settings": "Ayarlar"
+ },
+ "title": "Eklenti Detayları"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Bu eklentiyi silmek istediğinizden emin misiniz? Bir kere silindiğinde, geri alınamaz.",
+ "customParams": {
+ "useProxy": {
+ "label": "Proxy kullanarak yükle (Çapraz kaynak erişim hatası ile karşılaşılırsa, bu seçeneği etkinleştirerek yeniden yükleme deneyebilirsiniz)"
+ }
+ },
+ "deleteSuccess": "Eklenti başarıyla silindi",
+ "manifest": {
+ "identifier": {
+ "desc": "Eklenti için benzersiz tanımlayıcı",
+ "label": "Tanımlayıcı"
+ },
+ "mode": {
+ "local": "Yapılandırma",
+ "local-tooltip": "Yapılandırma geçici olarak desteklenmiyor",
+ "url": "Çevrimiçi Bağlantı"
+ },
+ "name": {
+ "desc": "Eklenti başlığı",
+ "label": "Başlık",
+ "placeholder": "Arama Motoru"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "Eklentinin yazarı",
+ "label": "Yazar"
+ },
+ "avatar": {
+ "desc": "Eklenti simgesi, Emoji veya URL kullanılabilir",
+ "label": "Simge"
+ },
+ "description": {
+ "desc": "Eklenti açıklaması",
+ "label": "Açıklama",
+ "placeholder": "Bilgi için arama motorunu sorgula"
+ },
+ "formFieldRequired": "Bu alan gereklidir",
+ "homepage": {
+ "desc": "Eklentinin ana sayfası",
+ "label": "Ana Sayfa"
+ },
+ "identifier": {
+ "desc": "Eklenti için benzersiz tanımlayıcı, manifestten otomatik olarak tanınacak",
+ "errorDuplicate": "Tanımlayıcı başka bir eklenti tarafından kullanılıyor, lütfen tanımlayıcıyı değiştirin",
+ "label": "Tanımlayıcı",
+ "pattenErrorMessage": "Sadece İngilizce karakterler, sayılar, - ve _ kullanılabilir"
+ },
+ "manifest": {
+ "desc": "{{appName}} bu bağlantı aracılığıyla eklentiyi yükleyecektir.",
+ "label": "Eklenti Tanım Dosyası (Manifest) URL'si",
+ "preview": "Manifesti Önizle",
+ "refresh": "Yenile"
+ },
+ "title": {
+ "desc": "Eklenti başlığı",
+ "label": "Başlık",
+ "placeholder": "Arama Motoru"
+ }
+ },
+ "metaConfig": "Eklenti meta veri yapılandırması",
+ "modalDesc": "Özel bir eklenti ekledikten sonra, eklenti doğrulama için veya doğrudan oturumda kullanılabilir. Eklenti geliştirme için lütfen <1>geliştirme dokümantasyonuna↗</1> başvurun.",
+ "openai": {
+ "importUrl": "URL Bağlantısından İçe Aktar",
+ "schema": "Şema"
+ },
+ "preview": {
+ "card": "Eklenti Görünümünü Önizle",
+ "desc": "Eklenti Açıklamasını Önizle",
+ "title": "Eklenti Adı Önizlemesi"
+ },
+ "save": "Eklentiyi Yükle",
+ "saveSuccess": "Eklenti ayarları başarıyla kaydedildi",
+ "tabs": {
+ "manifest": "Manifest Tanımı",
+ "meta": "Eklenti Meta Verileri"
+ },
+ "title": {
+ "create": "Özel Eklenti Ekle",
+ "edit": "Özel Eklentiyi Düzenle"
+ },
+ "type": {
+ "lobe": "LobeChat Eklentisi",
+ "openai": "OpenAI Eklentisi"
+ },
+ "update": "Güncelle",
+ "updateSuccess": "Eklenti ayarları başarıyla güncellendi"
+ },
+ "error": {
+ "fetchError": "Manifest bağlantısı alınamadı. Lütfen bağlantının geçerli olduğundan ve çapraz köken erişimine izin verdiğinden emin olun.",
+ "installError": "{{name}} eklentisi yüklenemedi",
+ "manifestInvalid": "Manifest, şartnameye uygun değil. Doğrulama sonucu: \n\n {{error}}",
+ "noManifest": "Manifest dosyası mevcut değil",
+ "openAPIInvalid": "OpenAPI ayrıştırma hatası, hata: \n\n {{error}}",
+ "reinstallError": "{{name}} eklentisi yenilenemedi",
+ "urlError": "Bağlantı JSON formatında içerik döndürmedi. Lütfen geçerli bir bağlantı olduğundan emin olun"
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Eski",
+ "local.config": "Yapılandırma",
+ "local.title": "Özel"
+ }
+ },
+ "loading": {
+ "content": "Eklenti çağrılıyor...",
+ "plugin": "Eklenti çalışıyor..."
+ },
+ "pluginList": "Eklenti Listesi",
+ "setting": "Eklenti Ayarları",
+ "settings": {
+ "indexUrl": {
+ "title": "Pazar Endeksi",
+ "tooltip": "Çevrimiçi düzenleme şu anda desteklenmiyor. Lütfen dağıtım sırasında çevre değişkenleri üzerinden ayarlayın."
+ },
+ "modalDesc": "Eklenti pazarı adresini yapılandırdıktan sonra, özel bir eklenti pazarı kullanabilirsiniz.",
+ "title": "Eklenti Pazarı Ayarla"
+ },
+ "showInPortal": "Detayları çalışma alanında görüntüleyin",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Bu eklentiyi kaldırmak üzeresiniz, kaldırdıktan sonra eklenti yapılandırması temizlenecektir, işleminizi onaylayın",
+ "detail": "Detay",
+ "install": "Yükle",
+ "manifest": "Yükleme Dosyasını Düzenle",
+ "settings": "Ayarlar",
+ "uninstall": "Kaldır"
+ },
+ "communityPlugin": "Topluluk Eklentisi",
+ "customPlugin": "Özel",
+ "empty": "Henüz yüklenmiş eklenti yok",
+ "installAllPlugins": "Tümünü Yükle",
+ "networkError": "Eklenti mağazası alınamadı. Lütfen ağ bağlantınızı kontrol edin ve tekrar deneyin.",
+ "placeholder": "Eklenti adını, açıklamasını veya anahtar kelimeleri ara...",
+ "releasedAt": "{{createdAt}} tarihinde yayınlandı",
+ "tabs": {
+ "all": "Tümü",
+ "installed": "Yüklü"
+ },
+ "title": "Eklenti Mağazası"
+ },
+ "unknownPlugin": "Bilinmeyen eklenti"
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/portal.json b/DigitalHumanWeb/locales/tr-TR/portal.json
new file mode 100644
index 0000000..11bcdca
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Artefaktlar",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Parça",
+ "file": "Dosya"
+ }
+ },
+ "Plugins": "Eklentiler",
+ "actions": {
+ "genAiMessage": "Yapay Zeka Mesajı Oluştur",
+ "summary": "Özet",
+ "summaryTooltip": "Mevcut içeriği özetle"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Kod",
+ "preview": "Önizleme"
+ },
+ "svg": {
+ "copyAsImage": "Resmi Kopyala",
+ "copyFail": "Kopyalama başarısız, hata nedeni: {{error}}",
+ "copySuccess": "Resim başarıyla kopyalandı",
+ "download": {
+ "png": "PNG olarak indir",
+ "svg": "SVG olarak indir"
+ }
+ }
+ },
+ "emptyArtifactList": "Mevcut Artefakt listesi boş, lütfen oturumda gerektiğinde eklentileri kullandıktan sonra tekrar kontrol edin",
+ "emptyKnowledgeList": "Mevcut bilgi listesi boş. Lütfen sohbet sırasında ihtiyaç duyduğunuz bilgi havuzunu açtıktan sonra tekrar kontrol edin.",
+ "files": "Dosyalar",
+ "messageDetail": "Mesaj Detayı",
+ "title": "Genişletme Penceresi"
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/providers.json b/DigitalHumanWeb/locales/tr-TR/providers.json
new file mode 100644
index 0000000..73a7b3a
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI, 360 şirketi tarafından sunulan yapay zeka modeli ve hizmet platformudur. 360GPT2 Pro, 360GPT Pro, 360GPT Turbo ve 360GPT Turbo Responsibility 8K gibi çeşitli gelişmiş doğal dil işleme modelleri sunmaktadır. Bu modeller, büyük ölçekli parametreler ve çok modlu yetenekleri birleştirerek metin üretimi, anlamsal anlama, diyalog sistemleri ve kod üretimi gibi alanlarda geniş bir uygulama yelpazesine sahiptir. Esnek fiyatlandırma stratejileri ile 360 AI, çeşitli kullanıcı ihtiyaçlarını karşılamakta ve geliştiricilerin entegrasyonunu destekleyerek akıllı uygulamaların yenilik ve gelişimini teşvik etmektedir."
+ },
+ "anthropic": {
+ "description": "Anthropic, yapay zeka araştırma ve geliştirmeye odaklanan bir şirkettir. Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus ve Claude 3 Haiku gibi bir dizi gelişmiş dil modeli sunmaktadır. Bu modeller, zeka, hız ve maliyet arasında ideal bir denge sağlamaktadır ve kurumsal düzeydeki iş yüklerinden hızlı yanıt gerektiren çeşitli uygulama senaryolarına kadar geniş bir yelpazede kullanılmaktadır. Claude 3.5 Sonnet, en son modeli olarak, birçok değerlendirmede mükemmel performans sergilemekte ve yüksek maliyet etkinliğini korumaktadır."
+ },
+ "azure": {
+ "description": "Azure, GPT-3.5 ve en son GPT-4 serisi gibi çeşitli gelişmiş yapay zeka modelleri sunar. Farklı veri türlerini ve karmaşık görevleri destekleyerek güvenli, güvenilir ve sürdürülebilir yapay zeka çözümleri sağlamaya odaklanmaktadır."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligent, yapay zeka büyük modellerinin geliştirilmesine odaklanan bir şirkettir. Modelleri, yerel bilgi ansiklopedisi, uzun metin işleme ve üretim gibi Çince görevlerde mükemmel performans sergilemekte ve uluslararası ana akım modelleri aşmaktadır. Baichuan Intelligent ayrıca sektördeki lider çok modlu yeteneklere sahiptir ve birçok otoriter değerlendirmede mükemmel sonuçlar elde etmiştir. Modelleri, Baichuan 4, Baichuan 3 Turbo ve Baichuan 3 Turbo 128k gibi farklı uygulama senaryolarına yönelik optimize edilmiş yüksek maliyet etkinliği çözümleri sunmaktadır."
+ },
+ "bedrock": {
+ "description": "Bedrock, Amazon AWS tarafından sunulan bir hizmettir ve işletmelere gelişmiş yapay zeka dil modelleri ve görsel modeller sağlamaya odaklanmaktadır. Model ailesi, Anthropic'in Claude serisi, Meta'nın Llama 3.1 serisi gibi seçenekleri içermekte olup, metin üretimi, diyalog, görüntü işleme gibi çeşitli görevleri desteklemektedir. Farklı ölçek ve ihtiyaçlara uygun kurumsal uygulamalar için geniş bir yelpaze sunmaktadır."
+ },
+ "deepseek": {
+ "description": "DeepSeek, yapay zeka teknolojisi araştırma ve uygulamalarına odaklanan bir şirkettir. En son modeli DeepSeek-V2.5, genel diyalog ve kod işleme yeteneklerini birleştirerek, insan tercihleriyle uyum, yazma görevleri ve talimat takibi gibi alanlarda önemli iyileştirmeler sağlamaktadır."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI, işlev çağrısı ve çok modlu işleme üzerine odaklanan önde gelen bir gelişmiş dil modeli hizmet sağlayıcısıdır. En son modeli Firefunction V2, Llama-3 tabanlıdır ve işlev çağrısı, diyalog ve talimat takibi için optimize edilmiştir. Görsel dil modeli FireLLaVA-13B, görüntü ve metin karışık girişi desteklemektedir. Diğer dikkat çekici modeller arasında Llama serisi ve Mixtral serisi bulunmaktadır ve etkili çok dilli talimat takibi ve üretim desteği sunmaktadır."
+ },
+ "github": {
+ "description": "GitHub Modelleri ile geliştiriciler, AI mühendisleri olabilir ve sektörün önde gelen AI modelleri ile inşa edebilirler."
+ },
+ "google": {
+ "description": "Google'ın Gemini serisi, Google DeepMind tarafından geliştirilen en gelişmiş ve genel yapay zeka modelleridir. Çok modlu tasarımı sayesinde metin, kod, görüntü, ses ve video gibi çeşitli veri türlerini sorunsuz bir şekilde anlama ve işleme yeteneğine sahiptir. Veri merkezlerinden mobil cihazlara kadar çeşitli ortamlarda kullanılabilir, yapay zeka modellerinin verimliliğini ve uygulama kapsamını büyük ölçüde artırmaktadır."
+ },
+ "groq": {
+ "description": "Groq'un LPU çıkarım motoru, en son bağımsız büyük dil modeli (LLM) benchmark testlerinde mükemmel performans sergilemekte ve olağanüstü hız ve verimliliği ile yapay zeka çözümlerinin standartlarını yeniden tanımlamaktadır. Groq, bulut tabanlı dağıtımlarda iyi bir performans sergileyen anlık çıkarım hızının temsilcisidir."
+ },
+ "minimax": {
+ "description": "MiniMax, 2021 yılında kurulan genel yapay zeka teknolojisi şirketidir ve kullanıcılarla birlikte akıllı çözümler yaratmayı hedeflemektedir. MiniMax, farklı modlarda genel büyük modeller geliştirmiştir. Bunlar arasında trilyon parametreli MoE metin büyük modeli, ses büyük modeli ve görüntü büyük modeli bulunmaktadır. Ayrıca, Conch AI gibi uygulamalar da sunmaktadır."
+ },
+ "mistral": {
+ "description": "Mistral, karmaşık akıl yürütme, çok dilli görevler, kod üretimi gibi alanlarda geniş bir uygulama yelpazesine sahip gelişmiş genel, profesyonel ve araştırma modelleri sunmaktadır. Fonksiyon çağrısı arayüzü aracılığıyla kullanıcılar, özel işlevleri entegre ederek belirli uygulamalar gerçekleştirebilirler."
+ },
+ "moonshot": {
+ "description": "Moonshot, Beijing Yuezhi Anmian Technology Co., Ltd. tarafından sunulan açık kaynaklı bir platformdur. İçerik oluşturma, akademik araştırma, akıllı öneri, tıbbi teşhis gibi geniş bir uygulama alanına sahip çeşitli doğal dil işleme modelleri sunmaktadır. Uzun metin işleme ve karmaşık üretim görevlerini desteklemektedir."
+ },
+ "novita": {
+ "description": "Novita AI, çeşitli büyük dil modelleri ve yapay zeka görüntü üretimi API hizmetleri sunan bir platformdur. Esnek, güvenilir ve maliyet etkin bir yapıya sahiptir. Llama3, Mistral gibi en son açık kaynak modelleri desteklemekte ve üretken yapay zeka uygulama geliştirme için kapsamlı, kullanıcı dostu ve otomatik ölçeklenebilir API çözümleri sunmaktadır. Bu, yapay zeka girişimlerinin hızlı gelişimi için uygundur."
+ },
+ "ollama": {
+ "description": "Ollama'nın sunduğu modeller, kod üretimi, matematiksel işlemler, çok dilli işleme ve diyalog etkileşimi gibi alanları kapsamaktadır. Kurumsal düzeyde ve yerelleştirilmiş dağıtım için çeşitli ihtiyaçları desteklemektedir."
+ },
+ "openai": {
+ "description": "OpenAI, dünya çapında lider bir yapay zeka araştırma kuruluşudur. Geliştirdiği modeller, GPT serisi gibi, doğal dil işleme alanında öncü adımlar atmaktadır. OpenAI, yenilikçi ve etkili yapay zeka çözümleri ile birçok sektörü dönüştürmeyi hedeflemektedir. Ürünleri, belirgin performans ve maliyet etkinliği ile araştırma, ticaret ve yenilikçi uygulamalarda yaygın olarak kullanılmaktadır."
+ },
+ "openrouter": {
+ "description": "OpenRouter, OpenAI, Anthropic, LLaMA ve daha fazlasını destekleyen çeşitli öncü büyük model arayüzleri sunan bir hizmet platformudur. Kullanıcılar, ihtiyaçlarına göre en uygun modeli ve fiyatı esnek bir şekilde seçerek yapay zeka deneyimlerini geliştirebilirler."
+ },
+ "perplexity": {
+ "description": "Perplexity, çeşitli gelişmiş Llama 3.1 modelleri sunan önde gelen bir diyalog üretim modeli sağlayıcısıdır. Hem çevrimiçi hem de çevrimdışı uygulamaları desteklemekte olup, özellikle karmaşık doğal dil işleme görevleri için uygundur."
+ },
+ "qwen": {
+ "description": "Tongyi Qianwen, Alibaba Cloud tarafından geliştirilen büyük ölçekli bir dil modelidir ve güçlü doğal dil anlama ve üretme yeteneklerine sahiptir. Çeşitli soruları yanıtlayabilir, metin içeriği oluşturabilir, görüşlerini ifade edebilir ve kod yazabilir. Birçok alanda etkili bir şekilde kullanılmaktadır."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow, insanlığa fayda sağlamak amacıyla AGI'yi hızlandırmaya odaklanmakta ve kullanıcı dostu ve maliyet etkin GenAI yığınları ile büyük ölçekli yapay zeka verimliliğini artırmayı hedeflemektedir."
+ },
+ "spark": {
+ "description": "iFlytek'in Xinghuo büyük modeli, çok alanlı ve çok dilli güçlü yapay zeka yetenekleri sunmaktadır. Gelişmiş doğal dil işleme teknolojisini kullanarak, akıllı donanım, akıllı sağlık, akıllı finans gibi çeşitli dikey senaryolar için yenilikçi uygulamalar geliştirmektedir."
+ },
+ "stepfun": {
+ "description": "StepFun büyük modeli, sektördeki lider çok modlu ve karmaşık akıl yürütme yeteneklerine sahiptir. Uzun metin anlama ve güçlü kendi kendine yönlendirme arama motoru işlevlerini desteklemektedir."
+ },
+ "taichu": {
+ "description": "Çin Bilimler Akademisi Otomasyon Araştırma Enstitüsü ve Wuhan Yapay Zeka Araştırma Enstitüsü, çok modlu büyük modelin yeni neslini sunmaktadır. Çoklu soru-cevap, metin oluşturma, görüntü üretimi, 3D anlama, sinyal analizi gibi kapsamlı soru-cevap görevlerini desteklemekte ve daha güçlü bilişsel, anlama ve yaratma yetenekleri sunarak yeni bir etkileşim deneyimi sağlamaktadır."
+ },
+ "togetherai": {
+ "description": "Together AI, yenilikçi yapay zeka modelleri aracılığıyla lider performans elde etmeye odaklanmaktadır. Hızlı ölçeklenme desteği ve sezgisel dağıtım süreçleri dahil olmak üzere geniş özelleştirme yetenekleri sunarak işletmelerin çeşitli ihtiyaçlarını karşılamaktadır."
+ },
+ "upstage": {
+ "description": "Upstage, çeşitli ticari ihtiyaçlar için yapay zeka modelleri geliştirmeye odaklanmaktadır. Solar LLM ve belge AI gibi modeller, insan yapımı genel zeka (AGI) hedeflemektedir. Chat API aracılığıyla basit diyalog ajanları oluşturmakta ve işlev çağrısı, çeviri, gömme ve belirli alan uygulamalarını desteklemektedir."
+ },
+ "zeroone": {
+ "description": "01.AI, yapay zeka 2.0 çağının yapay zeka teknolojisine odaklanmakta ve 'insan + yapay zeka' yenilik ve uygulamalarını teşvik etmektedir. Son derece güçlü modeller ve gelişmiş yapay zeka teknolojileri kullanarak insan üretkenliğini artırmayı ve teknolojik güçlendirmeyi hedeflemektedir."
+ },
+ "zhipu": {
+ "description": "Zhipu AI, çok modlu ve dil modellerinin açık platformunu sunmakta, metin işleme, görüntü anlama ve programlama yardımı gibi geniş bir yapay zeka uygulama senaryosunu desteklemektedir."
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/ragEval.json b/DigitalHumanWeb/locales/tr-TR/ragEval.json
new file mode 100644
index 0000000..2f24e0c
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Yeni Oluştur",
+ "description": {
+ "placeholder": "Veri seti açıklaması (isteğe bağlı)"
+ },
+ "name": {
+ "placeholder": "Veri seti adı",
+ "required": "Lütfen veri seti adını girin"
+ },
+ "title": "Veri Seti Ekle"
+ },
+ "dataset": {
+ "addNewButton": "Veri Seti Oluştur",
+ "emptyGuide": "Mevcut veri seti boş, lütfen bir veri seti oluşturun.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Veri İçe Aktar"
+ },
+ "columns": {
+ "actions": "İşlemler",
+ "ideal": {
+ "title": "Beklenen Yanıt"
+ },
+ "question": {
+ "title": "Soru"
+ },
+ "referenceFiles": {
+ "title": "Referans Dosyaları"
+ }
+ },
+ "notSelected": "Lütfen soldan bir veri seti seçin",
+ "title": "Veri Seti Detayları"
+ },
+ "title": "Veri Seti"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Yeni Oluştur",
+ "datasetId": {
+ "placeholder": "Lütfen değerlendirme veri setinizi seçin",
+ "required": "Lütfen değerlendirme veri setini seçin"
+ },
+ "description": {
+ "placeholder": "Değerlendirme görevi açıklaması (isteğe bağlı)"
+ },
+ "name": {
+ "placeholder": "Değerlendirme görevi adı",
+ "required": "Lütfen değerlendirme görevi adını girin"
+ },
+ "title": "Değerlendirme Görevi Ekle"
+ },
+ "addNewButton": "Değerlendirme Oluştur",
+ "emptyGuide": "Mevcut değerlendirme görevi boş, değerlendirme oluşturmaya başlayın.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Durumu Kontrol Et",
+ "confirmDelete": "Bu değerlendirmeyi silmek istiyor musunuz?",
+ "confirmRun": "Çalıştırmaya başlamak istiyor musunuz? Çalıştırmaya başladıktan sonra arka planda asenkron olarak değerlendirme görevi yürütülecek, sayfayı kapatmak asenkron görevin yürütülmesini etkilemeyecektir.",
+ "downloadRecords": "Değerlendirmeyi İndir",
+ "retry": "Tekrar Dene",
+ "run": "Çalıştır",
+ "title": "İşlemler"
+ },
+ "datasetId": {
+ "title": "Veri Seti"
+ },
+ "name": {
+ "title": "Değerlendirme Görevi Adı"
+ },
+ "records": {
+ "title": "Değerlendirme Kayıt Sayısı"
+ },
+ "referenceFiles": {
+ "title": "Referans Dosyaları"
+ },
+ "status": {
+ "error": "Yürütme Hatası",
+ "pending": "Çalıştırmayı Bekliyor",
+ "processing": "Yürütülüyor",
+ "success": "Yürütme Başarılı",
+ "title": "Durum"
+ }
+ },
+ "title": "Değerlendirme Görevleri Listesi"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/setting.json b/DigitalHumanWeb/locales/tr-TR/setting.json
new file mode 100644
index 0000000..772eabb
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "Hakkında"
+ },
+ "agentTab": {
+ "chat": "Sohbet Tercihi",
+ "meta": "Asistan Bilgisi",
+ "modal": "Model Ayarları",
+ "plugin": "Eklenti Ayarları",
+ "prompt": "Karakter Ayarları",
+ "tts": "Metin Okuma Hizmeti"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "{{appName}} genel kullanıcı deneyimini geliştirmemize yardımcı olmak için telemetri verilerini göndermeyi seçerek katkıda bulunabilirsiniz.",
+ "title": "Anonim Kullanım Verileri Gönder"
+ },
+ "title": "Analitik"
+ },
+ "danger": {
+ "clear": {
+ "action": "Temizle",
+ "confirm": "Tüm sohbet verilerini temizlemeyi onaylıyor musunuz?",
+ "desc": "Bu, oturum verilerini, asistanı, dosyaları, mesajları, eklentileri vb. temizleyecektir.",
+ "success": "Tüm oturum mesajları temizlendi",
+ "title": "Tüm Oturum Mesajlarını Temizle"
+ },
+ "reset": {
+ "action": "Sıfırla",
+ "confirm": "Tüm ayarları sıfırlamayı onaylıyor musunuz?",
+ "currentVersion": "Geçerli Sürüm",
+ "desc": "Tüm ayarları varsayılan değerlere sıfırlar",
+ "success": "Tüm ayarlar sıfırlandı",
+ "title": "Tüm Ayarları Sıfırla"
+ }
+ },
+ "header": {
+ "desc": "Tercihler ve model ayarları.",
+ "global": "Genel Ayarlar",
+ "session": "Oturum Ayarları",
+ "sessionDesc": "Karakter ayarları ve oturum tercihleri.",
+ "sessionWithName": "Oturum Ayarları · {{name}}",
+ "title": "Ayarlar"
+ },
+ "llm": {
+ "aesGcm": "Anahtarınız ve vekil adresiniz <1>AES-GCM1> şifreleme algoritması kullanılarak şifrelenecektir",
+ "apiKey": {
+ "desc": "Lütfen {{name}} API Anahtarınızı girin",
+ "placeholder": "{{name}} API Anahtarı",
+ "title": "API Anahtarı"
+ },
+ "checker": {
+ "button": "Kontrol Et",
+ "desc": "API Anahtarı ve vekil adresinin doğru şekilde doldurulup doldurulmadığını test eder",
+ "pass": "Kontrol Başarılı",
+ "title": "Bağlantı Kontrolü"
+ },
+ "customModelCards": {
+ "addNew": "{{id}} modelini oluştur ve ekle",
+ "config": "Modeli Yapılandır",
+ "confirmDelete": "Özel modeli silmek üzeresiniz, silindikten sonra geri alınamaz, lütfen dikkatli olun.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Azure OpenAI'da gerçekleştirilen istek alanı",
+ "placeholder": "Azure'daki model dağıtım adını girin",
+ "title": "Model Dağıtım Adı"
+ },
+ "displayName": {
+ "placeholder": "ChatGPT, GPT-4 vb. gibi modelin görüntü adını girin",
+ "title": "Model Görüntü Adı"
+ },
+ "files": {
+ "extra": "Mevcut dosya yükleme uygulaması yalnızca bir Hack çözümüdür ve yalnızca kendi denemeleriniz için geçerlidir. Tam dosya yükleme yeteneği için lütfen sonraki uygulamaları bekleyin.",
+ "title": "Dosya Yükleme Desteği"
+ },
+ "functionCall": {
+ "extra": "Bu yapılandırma yalnızca uygulamadaki işlev çağırma yeteneğini açacaktır; işlev çağırmanın desteklenip desteklenmediği tamamen modele bağlıdır, lütfen bu modelin işlev çağırma yeteneğini kendiniz test edin.",
+ "title": "Fonksiyon Çağrısını Destekle"
+ },
+ "id": {
+ "extra": "Model etiketi olarak görüntülenecektir",
+ "placeholder": "Örneğin gpt-4-turbo-preview veya claude-2.1 gibi bir model kimliği girin",
+ "title": "Model Kimliği"
+ },
+ "modalTitle": "Özel Model Yapılandırması",
+ "tokens": {
+ "title": "Maksimum token sayısı",
+ "unlimited": "Sınırsız"
+ },
+ "vision": {
+ "extra": "Bu yapılandırma yalnızca uygulamadaki resim yükleme yapılandırmasını açacaktır; tanıma desteği tamamen modele bağlıdır, lütfen bu modelin görsel tanıma yeteneğini kendiniz test edin.",
+ "title": "Görüntü Tanımayı Destekle"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "İstemci tarafından alınan veri: Tarayıcı, doğrudan oturum isteği gönderir ve yanıt hızını artırabilir.",
+ "title": "İstemci Tarafından Veri Alımı"
+ },
+ "fetcher": {
+ "fetch": "Modelleri Al",
+ "fetching": "Modelleri alınıyor...",
+ "latestTime": "Son güncelleme zamanı: {{time}}",
+ "noLatestTime": "Liste henüz alınamadı"
+ },
+ "helpDoc": "Yardım Belgeleri",
+ "modelList": {
+ "desc": "Görüntülenecek modeli seçin, seçilen model model listesinde görüntülenecektir",
+ "placeholder": "Lütfen listeden bir model seçin",
+ "title": "Model Listesi",
+ "total": "Toplam {{count}} model kullanılabilir"
+ },
+ "proxyUrl": {
+ "desc": "Varsayılan adres dışında, http(s):// içermelidir",
+ "title": "API Proxy Adresi"
+ },
+ "waitingForMore": "Daha fazla model eklenmesi planlanıyor"
+ },
+ "plugin": {
+ "addTooltip": "Eklenti Ekle",
+ "clearDeprecated": "Kullanım Dışı Eklentileri Kaldır",
+ "empty": "Henüz eklenti yok, <1>Eklenti Mağazası1>'nı keşfetmekten çekinmeyin",
+ "installStatus": {
+ "deprecated": "Kaldırıldı"
+ },
+ "settings": {
+ "hint": "Açıklamaya dayalı olarak aşağıdaki yapılandırmaları doldurun",
+ "title": "{{id}} Eklenti Yapılandırması",
+ "tooltip": "Eklenti Yapılandırması"
+ },
+ "store": "Eklenti Mağazası"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "backgroundColor": {
+ "title": "Arka Plan Rengi"
+ },
+ "description": {
+ "placeholder": "Asistan açıklamasını girin",
+ "title": "Asistan Açıklaması"
+ },
+ "name": {
+ "placeholder": "Asistan adını girin",
+ "title": "Ad"
+ },
+ "prompt": {
+ "placeholder": "Prompt girin",
+ "title": "Rol Ayarı"
+ },
+ "tag": {
+ "placeholder": "Etiket girin",
+ "title": "Etiket"
+ },
+ "title": "Asistan Bilgileri"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Mevcut mesaj sayısı bu değeri aştığında otomatik olarak bir konu oluşturulur",
+ "title": "Mesaj Sınırı"
+ },
+ "chatStyleType": {
+ "title": "Sohbet Pencere Stili",
+ "type": {
+ "chat": "Konuşma Modu",
+ "docs": "Belge Modu"
+ }
+ },
+ "compressThreshold": {
+ "desc": "Sıkıştırılmamış geçmiş mesajlar bu değeri aştığında sıkıştırma uygulanır",
+ "title": "Geçmiş Mesaj Uzunluğu Sıkıştırma Eşiği"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Sohbet sırasında otomatik olarak bir konu oluşturup oluşturmayacağınız, yalnızca geçici konularda etkilidir",
+ "title": "Otomatik Konu Oluştur"
+ },
+ "enableCompressThreshold": {
+ "title": "Geçmiş Mesaj Uzunluğu Sıkıştırma Eşiği Kullan"
+ },
+ "enableHistoryCount": {
+ "alias": "Sınırsız",
+ "limited": "Yalnızca {{number}} konuşma mesajını içerir",
+ "setlimited": "Kullanılan mesaj sayısı",
+ "title": "Geçmiş Mesaj Sayısı Sınırlama",
+ "unlimited": "Sınırsız geçmiş mesaj sayısı"
+ },
+ "historyCount": {
+ "desc": "Her istekle taşınan tarihsel mesaj sayısı",
+ "title": "Eklenen Geçmiş Mesaj Sayısı"
+ },
+ "inputTemplate": {
+ "desc": "Kullanıcının son mesajı bu şablona doldurulur",
+ "placeholder": "Ön işleme şablonu {{text}}, gerçek zamanlı giriş bilgileri ile değiştirilir",
+ "title": "Kullanıcı Girişi Ön İşleme"
+ },
+ "title": "Sohbet Ayarları"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Max Token Sınırlamasını Etkinleştir"
+ },
+ "frequencyPenalty": {
+ "desc": "Değer ne kadar yüksekse, tekrarlayan kelimeleri azaltma olasılığı o kadar yüksektir",
+ "title": "Frequency Penalty"
+ },
+ "maxTokens": {
+ "desc": "Her etkileşim için kullanılan maksimum token sayısı",
+ "title": "Max Token Sınırlaması"
+ },
+ "model": {
+ "desc": "{{provider}} Model",
+ "title": "Model"
+ },
+ "presencePenalty": {
+ "desc": "Değer ne kadar yüksekse, yeni konulara genişleme olasılığı o kadar yüksektir",
+ "title": "Presence Penalty"
+ },
+ "temperature": {
+ "desc": "Değer ne kadar yüksekse, yanıt o kadar rastgele olur",
+ "title": "Randomness",
+ "titleWithValue": "temperature {{value}}"
+ },
+ "title": "Model Ayarları",
+ "topP": {
+ "desc": "temperature gibi, ancak temperature ile birlikte değişmez",
+ "title": "Top P"
+ }
+ },
+ "settingPlugin": {
+ "title": "Eklenti Listesi"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "Yönetici tarafından şifreleme erişimi etkinleştirildi",
+ "placeholder": "Erişim şifresini girin",
+ "title": "Erişim Şifresi"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Oturum açıldı",
+ "title": "Hesap Bilgisi"
+ },
+ "signin": {
+ "action": "Oturum aç",
+ "desc": "Uygulamayı kilidini açmak için SSO ile oturum açın",
+ "title": "Hesaba Giriş Yap"
+ },
+ "signout": {
+ "action": "Oturumu kapat",
+ "confirm": "Çıkış yapmak istediğinize emin misiniz?",
+ "success": "Oturum kapatma başarılı"
+ }
+ },
+ "title": "Sistem Ayarları"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "OpenAI Konuşmadan Metne Modeli",
+ "title": "OpenAI",
+ "ttsModel": "OpenAI Metin Seslendirme Modeli"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Kapalıysa, yalnızca mevcut dildeki sesler görüntülenir",
+ "title": "Tüm Yerel Sesleri Göster"
+ },
+ "stt": "Konuşmadan Metne Ayarlar",
+ "sttAutoStop": {
+ "desc": "Kapalıysa, konuşmadan metni otomatik olarak sona ermez ve manuel olarak durdurmak için tıklamanız gerekir",
+ "title": "Otomatik Durdur Konuşmadan Metin"
+ },
+ "sttLocale": {
+ "desc": "Konuşmadan metin dilini, bu seçenek konuşmadan metin tanıma doğruluğunu artırabilir",
+ "title": "Konuşmadan Metin Dil"
+ },
+ "sttService": {
+ "desc": "'Tarayıcı' yerel konuşmadan metin hizmeti olduğundan",
+ "title": "Konuşmadan Metin Hizmeti"
+ },
+ "title": "Konuşma Hizmeti",
+ "tts": "Metin Seslendirme Ayarlar",
+ "ttsService": {
+ "desc": "OpenAI metin seslendirme hizmetini kullanıyorsanız, OpenAI model hizmetinin etkin olduğundan emin olun",
+ "title": "Metin Seslendirme Hizmeti"
+ },
+ "voice": {
+ "desc": "Mevcut asistan için bir ses seçin, farklı TTS hizmetleri farklı sesleri destekler",
+ "preview": "Ses Önizlemesi",
+ "title": "Metin Seslendirme"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Avatar"
+ },
+ "fontSize": {
+ "desc": "Sohbet içeriği için yazı boyutu",
+ "marks": {
+ "normal": "Normal"
+ },
+ "title": "Yazı Boyutu"
+ },
+ "lang": {
+ "autoMode": "Sistem Takibi",
+ "title": "Dil"
+ },
+ "neutralColor": {
+ "desc": "Farklı renk eğilimleri için özel nötr renk",
+ "title": "Nötr Renk"
+ },
+ "primaryColor": {
+ "desc": "Özel ana tema rengi",
+ "title": "Ana Renk"
+ },
+ "themeMode": {
+ "auto": "Oto",
+ "dark": "Karanlık",
+ "light": "Açık",
+ "title": "Tema"
+ },
+ "title": "Tema Ayarları"
+ },
+ "submitAgentModal": {
+ "button": "Asistan Gönder",
+ "identifier": "Asistan Kimliği",
+ "metaMiss": "Lütfen göndermeden önce asistan bilgilerini tamamlayın. Bu, ad, açıklama ve etiketleri içermelidir.",
+ "placeholder": "Asistan için benzersiz bir kimlik girin, örneğin web-geliştirme",
+ "tooltips": "Asistan pazarına paylaşın"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Kolay tanımlama için isim ekleyin",
+ "placeholder": "Lütfen cihaz adını girin",
+ "title": "Cihaz Adı"
+ },
+ "title": "Cihaz Bilgisi",
+ "unknownBrowser": "Bilinmeyen Tarayıcı",
+ "unknownOS": "Bilinmeyen Sistem"
+ },
+ "warning": {
+ "tip": "WebRTC'nin uzun bir topluluk beta testinden sonra, genel veri senkronizasyon ihtiyaçlarını kararlı bir şekilde karşılayamayabileceği uyarısı. Lütfen <1> sinyal sunucusunu dağıtın 1> ve ardından kullanın."
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC bu adı kullanarak senkronizasyon kanalı oluşturacak, kanal adının benzersiz olduğundan emin olun",
+ "placeholder": "Lütfen senkronizasyon kanalı adını girin",
+ "shuffle": "Rastgele Oluştur",
+ "title": "Senkronizasyon Kanalı Adı"
+ },
+ "channelPassword": {
+ "desc": "Kanalın gizliliğini sağlamak için şifre ekleyin, sadece doğru şifre girildiğinde cihaz kanala katılabilir",
+ "placeholder": "Lütfen senkronizasyon kanalı şifresini girin",
+ "title": "Senkronizasyon Kanalı Şifresi"
+ },
+ "desc": "Gerçek zamanlı, noktadan noktaya veri iletişimi, senkronizasyon için cihazların aynı anda çevrimiçi olması gerekir",
+ "enabled": {
+ "invalid": "Lütfen sinyal sunucusu ve senkronizasyon kanal adını girerek etkinleştirin",
+ "title": "Senkronizasyonu Etkinleştir"
+ },
+ "signaling": {
+ "desc": "WebRTC senkronizasyon için bu adresi kullanacak",
+ "placeholder": "Lütfen sinyal sunucusu adresini girin",
+ "title": "Sinyal Sunucusu"
+ },
+ "title": "WebRTC Senkronizasyonu"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Asistan Meta Veri Oluşturma Modeli",
+ "modelDesc": "Asistan adı, açıklaması, avatar ve etiket oluşturmak için belirlenen model",
+ "title": "Asistan Bilgilerini Otomatik Oluştur"
+ },
+ "queryRewrite": {
+ "label": "Soru Yeniden Yazım Modeli",
+ "modelDesc": "Kullanıcı sorularını optimize etmek için kullanılan model",
+ "title": "Bilgi Tabanı"
+ },
+ "title": "Sistem Asistanı",
+ "topic": {
+ "label": "Konu Adlandırma Modeli",
+ "modelDesc": "Konuların otomatik olarak yeniden adlandırılması için belirlenen model",
+ "title": "Konu Otomatik Adlandırma"
+ },
+ "translation": {
+ "label": "Çeviri Modeli",
+ "modelDesc": "Çeviri için belirlenen model",
+ "title": "Çeviri Asistanı Ayarları"
+ }
+ },
+ "tab": {
+ "about": "Hakkında",
+ "agent": "Varsayılan Asistan",
+ "common": "Genel Ayarlar",
+ "experiment": "Deney",
+ "llm": "Modeller",
+ "sync": "Bulut Senkronizasyonu",
+ "system-agent": "Sistem Asistanı",
+ "tts": "Metin Seslendirme"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Dahili Araçlar"
+ },
+ "disabled": "Mevcut model fonksiyon çağrılarını desteklemez, eklenti kullanılamaz",
+ "plugins": {
+ "enabled": "Etkin: {{num}}",
+ "groupName": "Eklentiler",
+ "noEnabled": "Etkin eklenti yok",
+ "store": "Eklenti Mağazası"
+ },
+ "title": "Uzantı Araçları"
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/tool.json b/DigitalHumanWeb/locales/tr-TR/tool.json
new file mode 100644
index 0000000..4a5b096
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Otomatik Oluştur",
+ "downloading": "DallE3 tarafından oluşturulan resim bağlantıları sadece 1 saat geçerlidir, resim yerel olarak önbelleğe alınıyor...",
+ "generate": "Oluştur",
+ "generating": "Oluşturuluyor...",
+ "images": "Görseller:",
+ "prompt": "İpucu"
+ }
+}
diff --git a/DigitalHumanWeb/locales/tr-TR/welcome.json b/DigitalHumanWeb/locales/tr-TR/welcome.json
new file mode 100644
index 0000000..1cfd864
--- /dev/null
+++ b/DigitalHumanWeb/locales/tr-TR/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "İçe Aktar",
+ "market": "Pazara Göz At",
+ "start": "Başla"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Başka bir grup",
+ "title": "Yeni Asistan Önerileri:"
+ },
+ "defaultMessage": "Ben sizin özel akıllı asistanınız {{appName}}. Şu anda size nasıl yardımcı olabilirim?\nDaha profesyonel veya özel bir asistan istiyorsanız, özel asistan oluşturmak için `+` butonuna tıklayabilirsiniz.",
+ "defaultMessageWithoutCreate": "Ben sizin özel akıllı asistanınız {{appName}}. Şu anda size nasıl yardımcı olabilirim?",
+ "qa": {
+ "q01": "LobeHub nedir?",
+ "q02": "{{appName}} nedir?",
+ "q03": "{{appName}}'in topluluk desteği var mı?",
+ "q04": "{{appName}} hangi özellikleri destekliyor?",
+ "q05": "{{appName}} nasıl dağıtılır ve kullanılır?",
+ "q06": "{{appName}}'in fiyatlandırması nasıldır?",
+ "q07": "{{appName}} ücretsiz mi?",
+ "q08": "Bulut hizmeti versiyonu var mı?",
+ "q09": "Yerel dil modellerini destekliyor mu?",
+ "q10": "Görüntü tanıma ve oluşturma destekleniyor mu?",
+ "q11": "Ses sentezi ve ses tanıma destekleniyor mu?",
+ "q12": "Eklenti sistemi destekleniyor mu?",
+ "q13": "GPT'leri almak için kendi pazarımız var mı?",
+ "q14": "Birden fazla AI hizmet sağlayıcısını destekliyor mu?",
+ "q15": "Kullanım sırasında sorun yaşarsam ne yapmalıyım?"
+ },
+ "questions": {
+ "moreBtn": "Daha Fazla Bilgi",
+ "title": "Herkesin Sorduğu Sorular:"
+ },
+ "welcome": {
+ "afternoon": "İyi akşamlar",
+ "morning": "Günaydın",
+ "night": "İyi geceler",
+ "noon": "Tünaydın"
+ }
+ },
+ "header": "Hoş geldiniz",
+ "pickAgent": "Veya aşağıdaki asistan şablonlarından birini seçin",
+ "skip": "Atla",
+ "slogan": {
+ "desc1": "Düşünmenin ve yaratmanın yeni çağının öncüsü. Size, Süper Birey'e özel olarak oluşturuldu.",
+ "desc2": "İlk asistanınızı oluşturun ve başlayalım~",
+ "title": "Beyninizin süper gücünü açığa çıkarın"
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/auth.json b/DigitalHumanWeb/locales/vi-VN/auth.json
new file mode 100644
index 0000000..0848222
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "Đăng nhập",
+ "loginOrSignup": "Đăng nhập / Đăng ký",
+ "profile": "Hồ sơ cá nhân",
+ "security": "Bảo mật",
+ "signout": "Đăng xuất",
+ "signup": "Đăng ký"
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/chat.json b/DigitalHumanWeb/locales/vi-VN/chat.json
new file mode 100644
index 0000000..f36945b
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/chat.json
@@ -0,0 +1,188 @@
+{
+ "ModelSwitch": {
+ "title": "Mô hình"
+ },
+ "agentDefaultMessage": "Xin chào, tôi là **{{name}}**, bạn có thể bắt đầu trò chuyện với tôi ngay bây giờ, hoặc bạn có thể đến [Cài đặt trợ lý]({{url}}) để hoàn thiện thông tin của tôi.",
+ "agentDefaultMessageWithSystemRole": "Xin chào, tôi là **{{name}}**, {{systemRole}}. Hãy bắt đầu trò chuyện ngay!",
+ "agentDefaultMessageWithoutEdit": "Xin chào, tôi là **{{name}}**, chúng ta hãy bắt đầu trò chuyện nào!",
+ "agents": "Trợ lý",
+ "artifact": {
+ "generating": "Đang tạo",
+ "thinking": "Đang suy nghĩ",
+ "thought": "Quá trình suy nghĩ",
+ "unknownTitle": "Tác phẩm chưa được đặt tên"
+ },
+ "backToBottom": "Quay về dưới cùng",
+ "chatList": {
+ "longMessageDetail": "Xem chi tiết"
+ },
+ "clearCurrentMessages": "Xóa tin nhắn hiện tại",
+ "confirmClearCurrentMessages": "Bạn sắp xóa tin nhắn hiện tại. Hành động này không thể hoàn tác, vui lòng xác nhận.",
+ "confirmRemoveSessionItemAlert": "Bạn sắp xóa trợ lý này. Hành động này không thể hoàn tác, vui lòng xác nhận.",
+ "confirmRemoveSessionSuccess": "Xóa trợ lý thành công",
+ "defaultAgent": "Trợ lý mặc định",
+ "defaultList": "Danh sách mặc định",
+ "defaultSession": "Trợ lý mặc định",
+ "duplicateSession": {
+ "loading": "Đang sao chép...",
+ "success": "Sao chép thành công",
+ "title": "{{title}} Bản sao"
+ },
+ "duplicateTitle": "{{title}} Bản sao",
+ "emptyAgent": "Không có trợ lý",
+ "historyRange": "Phạm vi lịch sử",
+ "inbox": {
+ "desc": "Kích hoạt cụm não, khơi dậy tia lửa tư duy. Trợ lý thông minh của bạn, ở đây để trò chuyện với bạn về mọi thứ.",
+ "title": "Chuyện phiếm"
+ },
+ "input": {
+ "addAi": "Thêm một tin nhắn AI",
+ "addUser": "Thêm một tin nhắn người dùng",
+ "more": "Thêm",
+ "send": "Gửi",
+ "sendWithCmdEnter": "Nhấn {{meta}} + Enter để gửi",
+ "sendWithEnter": "Nhấn Enter để gửi",
+ "stop": "Dừng",
+ "warp": "Xuống dòng"
+ },
+ "knowledgeBase": {
+ "all": "Tất cả nội dung",
+ "allFiles": "Tất cả tệp",
+ "allKnowledgeBases": "Tất cả kho kiến thức",
+ "disabled": "Chế độ triển khai hiện tại không hỗ trợ đối thoại với cơ sở kiến thức. Nếu bạn muốn sử dụng, hãy chuyển sang triển khai cơ sở dữ liệu máy chủ hoặc sử dụng dịch vụ {{cloud}}.",
+ "library": {
+ "action": {
+ "add": "Thêm",
+ "detail": "Chi tiết",
+ "remove": "Xóa"
+ },
+ "title": "Tệp/Kho kiến thức"
+ },
+ "relativeFilesOrKnowledgeBases": "Tệp/Kho kiến thức liên quan",
+ "title": "Kho kiến thức",
+ "uploadGuide": "Các tệp đã tải lên có thể được xem trong 'Kho kiến thức'",
+ "viewMore": "Xem thêm"
+ },
+ "messageAction": {
+ "delAndRegenerate": "Xóa và tạo lại",
+ "regenerate": "Tạo lại"
+ },
+ "newAgent": "Tạo trợ lý mới",
+ "pin": "Ghim",
+ "pinOff": "Bỏ ghim",
+ "rag": {
+ "referenceChunks": "Trích dẫn nguồn",
+ "userQuery": {
+ "actions": {
+ "delete": "Xóa truy vấn",
+ "regenerate": "Tạo lại truy vấn"
+ }
+ }
+ },
+ "regenerate": "Tạo lại",
+ "roleAndArchive": "Vai trò và lưu trữ",
+ "searchAgentPlaceholder": "Trợ lý tìm kiếm...",
+ "sendPlaceholder": "Nhập nội dung trò chuyện...",
+ "sessionGroup": {
+ "config": "Quản lý nhóm",
+ "confirmRemoveGroupAlert": "Bạn sẽ xóa nhóm này, sau khi xóa, trợ lý của nhóm sẽ được di chuyển vào danh sách mặc định, vui lòng xác nhận hành động của bạn",
+ "createAgentSuccess": "Tạo trợ lý thành công",
+ "createGroup": "Thêm nhóm mới",
+ "createSuccess": "Tạo thành công",
+ "creatingAgent": "Đang tạo trợ lý...",
+ "inputPlaceholder": "Vui lòng nhập tên nhóm...",
+ "moveGroup": "Di chuyển vào nhóm",
+ "newGroup": "Nhóm mới",
+ "rename": "Đổi tên nhóm",
+ "renameSuccess": "Đổi tên thành công",
+ "sortSuccess": "Sắp xếp lại thành công",
+ "sorting": "Đang cập nhật sắp xếp nhóm...",
+ "tooLong": "Tên nhóm phải có độ dài từ 1-20 ký tự"
+ },
+ "shareModal": {
+ "download": "Tải xuống ảnh chụp màn hình",
+ "imageType": "Định dạng ảnh",
+ "screenshot": "Ảnh chụp màn hình",
+ "settings": "Cài đặt xuất",
+ "shareToShareGPT": "Tạo liên kết chia sẻ ShareGPT",
+ "withBackground": "Bao gồm hình nền",
+ "withFooter": "Bao gồm chân trang",
+ "withPluginInfo": "Bao gồm thông tin plugin",
+ "withSystemRole": "Bao gồm thiết lập vai trò trợ lý"
+ },
+ "stt": {
+ "action": "Nhập bằng giọng nói",
+ "loading": "Đang nhận dạng...",
+ "prettifying": "Đang tinh chỉnh..."
+ },
+ "temp": "Tạm thời",
+ "tokenDetails": {
+ "chats": "Tin nhắn trò chuyện",
+ "rest": "Còn lại",
+ "systemRole": "Vai trò hệ thống",
+ "title": "Chi tiết Ngữ cảnh",
+ "tools": "Công cụ",
+ "total": "Tổng cộng",
+ "used": "Đã sử dụng"
+ },
+ "tokenTag": {
+ "overload": "Vượt quá giới hạn",
+ "remained": "Còn lại",
+ "used": "Đã sử dụng"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "Đổi tên tự động",
+ "duplicate": "Tạo bản sao",
+ "export": "Xuất chủ đề"
+ },
+ "checkOpenNewTopic": "Có muốn mở chủ đề mới không?",
+ "checkSaveCurrentMessages": "Bạn có muốn lưu cuộc trò chuyện hiện tại thành chủ đề không?",
+ "confirmRemoveAll": "Bạn sắp xóa tất cả chủ đề. Hành động này không thể hoàn tác, vui lòng xác nhận.",
+ "confirmRemoveTopic": "Bạn sắp xóa chủ đề này. Hành động này không thể hoàn tác, vui lòng xác nhận.",
+ "confirmRemoveUnstarred": "Bạn sắp xóa các chủ đề chưa được đánh dấu. Hành động này không thể hoàn tác, vui lòng xác nhận.",
+ "defaultTitle": "Chủ đề mặc định",
+ "duplicateLoading": "Đang sao chép chủ đề...",
+ "duplicateSuccess": "Chủ đề đã được sao chép thành công",
+ "guide": {
+ "desc": "Nhấn vào nút bên trái để lưu cuộc trò chuyện hiện tại như một chủ đề lịch sử và bắt đầu một cuộc trò chuyện mới",
+ "title": "Danh sách chủ đề"
+ },
+ "openNewTopic": "Mở chủ đề mới",
+ "removeAll": "Xóa tất cả chủ đề",
+ "removeUnstarred": "Xóa chủ đề chưa được đánh dấu",
+ "saveCurrentMessages": "Lưu cuộc trò chuyện hiện tại thành chủ đề",
+ "searchPlaceholder": "Tìm kiếm chủ đề...",
+ "title": "Danh sách chủ đề"
+ },
+ "translate": {
+ "action": "Dịch",
+ "clear": "Xóa dịch"
+ },
+ "tts": {
+ "action": "Đọc bằng giọng nói",
+ "clear": "Xóa giọng nói"
+ },
+ "updateAgent": "Cập nhật thông tin trợ lý",
+ "upload": {
+ "action": {
+ "fileUpload": "Tải lên tệp",
+ "folderUpload": "Tải lên thư mục",
+ "imageDisabled": "Mô hình hiện tại không hỗ trợ nhận diện hình ảnh, vui lòng chuyển đổi mô hình để sử dụng",
+ "imageUpload": "Tải lên hình ảnh",
+ "tooltip": "Tải lên"
+ },
+ "clientMode": {
+ "actionFiletip": "Tải lên tệp",
+ "actionTooltip": "Tải lên",
+ "disabled": "Mô hình hiện tại không hỗ trợ nhận diện hình ảnh và phân tích tệp, vui lòng chuyển đổi mô hình để sử dụng"
+ },
+ "preview": {
+ "prepareTasks": "Chuẩn bị phân đoạn...",
+ "status": {
+ "pending": "Đang chuẩn bị tải lên...",
+ "processing": "Đang xử lý tệp..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/clerk.json b/DigitalHumanWeb/locales/vi-VN/clerk.json
new file mode 100644
index 0000000..c4f656d
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "Quay lại",
+ "badge__default": "Mặc định",
+ "badge__otherImpersonatorDevice": "Thiết bị giả mạo khác",
+ "badge__primary": "Chính",
+ "badge__requiresAction": "Yêu cầu hành động",
+ "badge__thisDevice": "Thiết bị này",
+ "badge__unverified": "Chưa xác minh",
+ "badge__userDevice": "Thiết bị người dùng",
+ "badge__you": "Bạn",
+ "createOrganization": {
+ "formButtonSubmit": "Tạo tổ chức",
+ "invitePage": {
+ "formButtonReset": "Bỏ qua"
+ },
+ "title": "Tạo tổ chức"
+ },
+ "dates": {
+ "lastDay": "Hôm qua vào {{ date | timeString('vi-VN') }}",
+ "next6Days": "{{ date | weekday('vi-VN','long') }} vào {{ date | timeString('vi-VN') }}",
+ "nextDay": "Ngày mai vào {{ date | timeString('vi-VN') }}",
+ "numeric": "{{ date | numeric('vi-VN') }}",
+ "previous6Days": "Vào {{ date | weekday('vi-VN','long') }} trước vào {{ date | timeString('vi-VN') }}",
+ "sameDay": "Hôm nay vào {{ date | timeString('vi-VN') }}"
+ },
+ "dividerText": "hoặc",
+ "footerActionLink__useAnotherMethod": "Sử dụng phương pháp khác",
+ "footerPageLink__help": "Trợ giúp",
+ "footerPageLink__privacy": "Quyền riêng tư",
+ "footerPageLink__terms": "Điều khoản",
+ "formButtonPrimary": "Tiếp tục",
+ "formButtonPrimary__verify": "Xác minh",
+ "formFieldAction__forgotPassword": "Quên mật khẩu?",
+ "formFieldError__matchingPasswords": "Mật khẩu khớp.",
+ "formFieldError__notMatchingPasswords": "Mật khẩu không khớp.",
+ "formFieldError__verificationLinkExpired": "Liên kết xác minh đã hết hạn. Vui lòng yêu cầu một liên kết mới.",
+ "formFieldHintText__optional": "Tùy chọn",
+ "formFieldHintText__slug": "Slug là một ID có thể đọc được cho con người phải là duy nhất. Thường được sử dụng trong URL.",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "Xóa tài khoản",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "ví dụ@email.com, ví dụ2@email.com",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "tổ-chức-của-tôi",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "Bật mời tự động cho miền này",
+ "formFieldLabel__backupCode": "Mã sao lưu",
+ "formFieldLabel__confirmDeletion": "Xác nhận",
+ "formFieldLabel__confirmPassword": "Xác nhận mật khẩu",
+ "formFieldLabel__currentPassword": "Mật khẩu hiện tại",
+ "formFieldLabel__emailAddress": "Địa chỉ email",
+ "formFieldLabel__emailAddress_username": "Địa chỉ email hoặc tên người dùng",
+ "formFieldLabel__emailAddresses": "Địa chỉ email",
+ "formFieldLabel__firstName": "Tên",
+ "formFieldLabel__lastName": "Họ",
+ "formFieldLabel__newPassword": "Mật khẩu mới",
+ "formFieldLabel__organizationDomain": "Miền",
+ "formFieldLabel__organizationDomainDeletePending": "Xóa lời mời và gợi ý đang chờ",
+ "formFieldLabel__organizationDomainEmailAddress": "Địa chỉ email xác minh",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "Nhập một địa chỉ email dưới miền này để nhận mã xác minh và xác minh miền này.",
+ "formFieldLabel__organizationName": "Tên",
+ "formFieldLabel__organizationSlug": "Slug",
+ "formFieldLabel__passkeyName": "Tên của passkey",
+ "formFieldLabel__password": "Mật khẩu",
+ "formFieldLabel__phoneNumber": "Số điện thoại",
+ "formFieldLabel__role": "Vai trò",
+ "formFieldLabel__signOutOfOtherSessions": "Đăng xuất khỏi tất cả các thiết bị khác",
+ "formFieldLabel__username": "Tên người dùng",
+ "impersonationFab": {
+ "action__signOut": "Đăng xuất",
+ "title": "Đã đăng nhập với tư cách {{identifier}}"
+ },
+ "locale": "vi-VN",
+  "maintenanceMode": "Chúng tôi đang tiến hành bảo trì, nhưng đừng lo, việc này sẽ không mất quá vài phút.",
+ "membershipRole__admin": "Quản trị viên",
+ "membershipRole__basicMember": "Thành viên",
+ "membershipRole__guestMember": "Khách",
+ "organizationList": {
+ "action__createOrganization": "Tạo tổ chức",
+ "action__invitationAccept": "Tham gia",
+ "action__suggestionsAccept": "Yêu cầu tham gia",
+ "createOrganization": "Tạo Tổ chức",
+ "invitationAcceptedLabel": "Đã tham gia",
+    "subtitle": "để tiếp tục đến {{applicationName}}",
+ "suggestionsAcceptedLabel": "Chờ phê duyệt",
+ "title": "Chọn một tài khoản",
+ "titleWithoutPersonal": "Chọn một tổ chức"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "Mời tự động",
+ "badge__automaticSuggestion": "Đề xuất tự động",
+ "badge__manualInvitation": "Không tự động",
+ "badge__unverified": "Chưa xác minh",
+ "createDomainPage": {
+ "subtitle": "Thêm miền để xác minh. Người dùng có địa chỉ email tại miền này có thể tham gia tự động hoặc yêu cầu tham gia tổ chức.",
+ "title": "Thêm miền"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "Không thể gửi lời mời. Đã có lời mời đang chờ xử lý cho các địa chỉ email sau: {{email_addresses}}.",
+ "formButtonPrimary__continue": "Gửi lời mời",
+ "selectDropdown__role": "Chọn vai trò",
+ "subtitle": "Nhập hoặc dán một hoặc nhiều địa chỉ email, cách nhau bằng dấu cách hoặc dấu phẩy.",
+ "successMessage": "Lời mời đã được gửi thành công",
+ "title": "Mời thành viên mới"
+ },
+ "membersPage": {
+ "action__invite": "Mời",
+ "activeMembersTab": {
+ "menuAction__remove": "Xóa thành viên",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "Đã tham gia",
+ "tableHeader__role": "Vai trò",
+ "tableHeader__user": "Người dùng"
+ },
+ "detailsTitle__emptyRow": "Không có thành viên để hiển thị",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "Mời người dùng bằng cách kết nối miền email với tổ chức của bạn. Bất kỳ ai đăng ký với miền email phù hợp sẽ có thể tham gia tổ chức bất cứ lúc nào.",
+ "headerTitle": "Mời tự động",
+ "primaryButton": "Quản lý miền đã xác minh"
+ },
+ "table__emptyRow": "Không có lời mời để hiển thị"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "Thu hồi lời mời",
+ "tableHeader__invited": "Đã mời"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+        "headerSubtitle": "Người dùng đăng ký với miền email phù hợp sẽ thấy một đề xuất để yêu cầu tham gia tổ chức của bạn.",
+ "headerTitle": "Đề xuất tự động",
+ "primaryButton": "Quản lý miền đã xác minh"
+ },
+ "menuAction__approve": "Phê duyệt",
+ "menuAction__reject": "Từ chối",
+ "tableHeader__requested": "Yêu cầu truy cập",
+ "table__emptyRow": "Không có yêu cầu để hiển thị"
+ },
+ "start": {
+ "headerTitle__invitations": "Lời mời",
+ "headerTitle__members": "Thành viên",
+ "headerTitle__requests": "Yêu cầu"
+ }
+ },
+ "navbar": {
+ "description": "Quản lý tổ chức của bạn",
+ "general": "Chung",
+ "members": "Thành viên",
+ "title": "Tổ chức"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "Nhập \"{{organizationName}}\" bên dưới để tiếp tục.",
+ "messageLine1": "Bạn có chắc chắn muốn xóa tổ chức này không?",
+ "messageLine2": "Hành động này là vĩnh viễn và không thể hoàn tác.",
+ "successMessage": "Bạn đã xóa tổ chức.",
+ "title": "Xóa tổ chức"
+ },
+ "leaveOrganization": {
+ "actionDescription": "Nhập \"{{organizationName}}\" bên dưới để tiếp tục.",
+ "messageLine1": "Bạn có chắc chắn muốn rời khỏi tổ chức này không? Bạn sẽ mất quyền truy cập vào tổ chức và các ứng dụng của nó.",
+ "messageLine2": "Hành động này là vĩnh viễn và không thể hoàn tác.",
+ "successMessage": "Bạn đã rời khỏi tổ chức.",
+ "title": "Rời khỏi tổ chức"
+ },
+ "title": "Nguy hiểm"
+ },
+ "domainSection": {
+ "menuAction__manage": "Quản lý",
+ "menuAction__remove": "Xóa",
+ "menuAction__verify": "Xác minh",
+ "primaryButton": "Thêm miền",
+ "subtitle": "Cho phép người dùng tham gia tự động hoặc yêu cầu tham gia dựa trên miền email đã xác minh.",
+ "title": "Miền đã xác minh"
+ },
+ "successMessage": "Tổ chức đã được cập nhật.",
+ "title": "Cập nhật hồ sơ"
+ },
+ "removeDomainPage": {
+ "messageLine1": "Miền email {{domain}} sẽ bị xóa.",
+    "messageLine2": "Người dùng sẽ không thể tự động tham gia tổ chức sau này.",
+ "successMessage": "{{domain}} đã được xóa.",
+ "title": "Xóa miền"
+ },
+ "start": {
+ "headerTitle__general": "Chung",
+ "headerTitle__members": "Thành viên",
+ "profileSection": {
+ "primaryButton": "Cập nhật hồ sơ",
+ "title": "Hồ sơ tổ chức",
+ "uploadAction__title": "Logo"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "Việc xóa miền này sẽ ảnh hưởng đến người dùng đã được mời.",
+ "removeDomainActionLabel__remove": "Xóa miền",
+ "removeDomainSubtitle": "Xóa miền này khỏi các miền đã xác minh của bạn",
+ "removeDomainTitle": "Xóa miền"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "Người dùng được mời tự động tham gia tổ chức khi đăng ký và có thể tham gia bất kỳ lúc nào.",
+ "automaticInvitationOption__label": "Mời tự động",
+ "automaticSuggestionOption__description": "Người dùng nhận được đề xuất để yêu cầu tham gia, nhưng phải được quản trị viên phê duyệt trước khi họ có thể tham gia tổ chức.",
+ "automaticSuggestionOption__label": "Đề xuất tự động",
+      "calloutInfoLabel": "Thay đổi chế độ tham gia chỉ ảnh hưởng đến người dùng mới.",
+ "calloutInvitationCountLabel": "Lời mời đang chờ xử lý: {{count}}",
+ "calloutSuggestionCountLabel": "Đề xuất đang chờ xử lý: {{count}}",
+ "manualInvitationOption__description": "Người dùng chỉ có thể được mời thủ công vào tổ chức.",
+ "manualInvitationOption__label": "Không tự động",
+ "subtitle": "Chọn cách người dùng từ miền này có thể tham gia tổ chức."
+ },
+ "start": {
+ "headerTitle__danger": "Nguy hiểm",
+      "headerTitle__enrollment": "Tùy chọn tham gia"
+ },
+    "subtitle": "Miền {{domain}} đã được xác minh. Tiếp tục bằng cách chọn chế độ tham gia.",
+ "title": "Cập nhật {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "Nhập mã xác minh được gửi đến địa chỉ email của bạn",
+ "formTitle": "Mã xác minh",
+ "resendButton": "Không nhận được mã? Gửi lại",
+ "subtitle": "Miền {{domainName}} cần được xác minh qua email.",
+ "subtitleVerificationCodeScreen": "Một mã xác minh đã được gửi đến {{emailAddress}}. Nhập mã để tiếp tục.",
+ "title": "Xác minh miền"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "Tạo tổ chức",
+ "action__invitationAccept": "Tham gia",
+ "action__manageOrganization": "Quản lý",
+ "action__suggestionsAccept": "Yêu cầu tham gia",
+ "notSelected": "Không có tổ chức nào được chọn",
+ "personalWorkspace": "Tài khoản cá nhân",
+ "suggestionsAcceptedLabel": "Chờ phê duyệt"
+ },
+ "paginationButton__next": "Tiếp theo",
+ "paginationButton__previous": "Trước",
+ "paginationRowText__displaying": "Hiển thị",
+ "paginationRowText__of": "của",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "Thêm tài khoản",
+ "action__signOutAll": "Đăng xuất khỏi tất cả các tài khoản",
+ "subtitle": "Chọn tài khoản mà bạn muốn tiếp tục.",
+ "title": "Chọn một tài khoản"
+ },
+ "alternativeMethods": {
+ "actionLink": "Nhận trợ giúp",
+    "actionText": "Không có phương thức nào trong số này?",
+ "blockButton__backupCode": "Sử dụng mã sao lưu",
+ "blockButton__emailCode": "Gửi mã qua email đến {{identifier}}",
+ "blockButton__emailLink": "Gửi liên kết qua email đến {{identifier}}",
+ "blockButton__passkey": "Đăng nhập bằng passkey của bạn",
+ "blockButton__password": "Đăng nhập bằng mật khẩu của bạn",
+ "blockButton__phoneCode": "Gửi mã qua tin nhắn SMS đến {{identifier}}",
+ "blockButton__totp": "Sử dụng ứng dụng xác thực của bạn",
+ "getHelp": {
+ "blockButton__emailSupport": "Hỗ trợ qua email",
+ "content": "Nếu bạn gặp khó khăn khi đăng nhập vào tài khoản của mình, hãy gửi email cho chúng tôi và chúng tôi sẽ làm việc với bạn để khôi phục truy cập càng sớm càng tốt.",
+ "title": "Nhận trợ giúp"
+ },
+    "subtitle": "Gặp vấn đề? Bạn có thể sử dụng bất kỳ phương pháp nào sau đây để đăng nhập.",
+ "title": "Sử dụng phương pháp khác"
+ },
+ "backupCodeMfa": {
+ "subtitle": "Mã sao lưu của bạn là mã bạn nhận được khi thiết lập xác thực hai bước.",
+ "title": "Nhập mã sao lưu"
+ },
+ "emailCode": {
+ "formTitle": "Mã xác minh",
+ "resendButton": "Không nhận được mã? Gửi lại",
+ "subtitle": "để tiếp tục đến {{applicationName}}",
+ "title": "Kiểm tra email của bạn"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "Quay trở lại tab ban đầu để tiếp tục.",
+ "title": "Liên kết xác minh này đã hết hạn"
+ },
+ "failed": {
+ "subtitle": "Quay trở lại tab ban đầu để tiếp tục.",
+ "title": "Liên kết xác minh này không hợp lệ"
+ },
+ "formSubtitle": "Sử dụng liên kết xác minh được gửi đến email của bạn",
+ "formTitle": "Liên kết xác minh",
+ "loading": {
+ "subtitle": "Bạn sẽ được chuyển hướng sớm.",
+ "title": "Đang đăng nhập..."
+ },
+ "resendButton": "Không nhận được liên kết? Gửi lại",
+ "subtitle": "để tiếp tục đến {{applicationName}}",
+ "title": "Kiểm tra email của bạn",
+ "unusedTab": {
+ "title": "Bạn có thể đóng tab này"
+ },
+ "verified": {
+ "subtitle": "Bạn sẽ được chuyển hướng sớm.",
+ "title": "Đăng nhập thành công"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Quay trở lại tab ban đầu để tiếp tục",
+ "subtitleNewTab": "Quay lại tab mới mở để tiếp tục",
+ "titleNewTab": "Đã đăng nhập trên tab khác"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "Mã đặt lại mật khẩu",
+ "resendButton": "Không nhận được mã? Gửi lại",
+ "subtitle": "để đặt lại mật khẩu của bạn",
+ "subtitle_email": "Đầu tiên, nhập mã được gửi đến địa chỉ email của bạn",
+ "subtitle_phone": "Đầu tiên, nhập mã được gửi đến điện thoại của bạn",
+ "title": "Đặt lại mật khẩu"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "Đặt lại mật khẩu của bạn",
+ "label__alternativeMethods": "Hoặc, đăng nhập bằng phương pháp khác",
+ "title": "Quên mật khẩu?"
+ },
+ "noAvailableMethods": {
+ "message": "Không thể tiếp tục đăng nhập. Không có yếu tố xác thực nào khả dụng.",
+ "subtitle": "Đã xảy ra lỗi",
+ "title": "Không thể đăng nhập"
+ },
+ "passkey": {
+    "subtitle": "Sử dụng passkey của bạn để xác nhận đó là bạn. Thiết bị của bạn có thể yêu cầu vân tay, khuôn mặt hoặc khóa màn hình.",
+ "title": "Sử dụng passkey của bạn"
+ },
+ "password": {
+ "actionLink": "Sử dụng phương pháp khác",
+ "subtitle": "Nhập mật khẩu liên kết với tài khoản của bạn",
+ "title": "Nhập mật khẩu của bạn"
+ },
+ "passwordPwned": {
+ "title": "Mật khẩu đã bị đánh cắp"
+ },
+ "phoneCode": {
+ "formTitle": "Mã xác minh",
+ "resendButton": "Không nhận được mã? Gửi lại",
+ "subtitle": "để tiếp tục đến {{applicationName}}",
+ "title": "Kiểm tra điện thoại của bạn"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "Mã xác minh",
+ "resendButton": "Không nhận được mã? Gửi lại",
+ "subtitle": "Để tiếp tục, vui lòng nhập mã xác minh được gửi đến điện thoại của bạn",
+ "title": "Kiểm tra điện thoại của bạn"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "Đặt lại mật khẩu",
+ "requiredMessage": "Vì lý do bảo mật, cần phải đặt lại mật khẩu của bạn.",
+ "successMessage": "Mật khẩu của bạn đã được thay đổi thành công. Đang đăng nhập, vui lòng đợi một chút.",
+ "title": "Thiết lập mật khẩu mới"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "Chúng tôi cần xác minh danh tính của bạn trước khi đặt lại mật khẩu của bạn."
+ },
+ "start": {
+ "actionLink": "Đăng ký",
+ "actionLink__use_email": "Sử dụng email",
+ "actionLink__use_email_username": "Sử dụng email hoặc tên người dùng",
+ "actionLink__use_passkey": "Sử dụng passkey thay thế",
+ "actionLink__use_phone": "Sử dụng điện thoại",
+ "actionLink__use_username": "Sử dụng tên người dùng",
+ "actionText": "Chưa có tài khoản?",
+ "subtitle": "Chào mừng trở lại! Vui lòng đăng nhập để tiếp tục",
+ "title": "Đăng nhập vào {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "Mã xác minh",
+ "subtitle": "Để tiếp tục, vui lòng nhập mã xác minh được tạo bởi ứng dụng xác thực của bạn",
+ "title": "Xác thực hai bước"
+ }
+ },
+ "signInEnterPasswordTitle": "Nhập mật khẩu của bạn",
+ "signUp": {
+ "continue": {
+ "actionLink": "Đăng nhập",
+ "actionText": "Đã có tài khoản?",
+ "subtitle": "Vui lòng điền thông tin còn thiếu để tiếp tục.",
+ "title": "Điền vào các trường còn thiếu"
+ },
+ "emailCode": {
+ "formSubtitle": "Nhập mã xác minh được gửi đến địa chỉ email của bạn",
+ "formTitle": "Mã xác minh",
+ "resendButton": "Không nhận được mã? Gửi lại",
+ "subtitle": "Nhập mã xác minh được gửi đến email của bạn",
+ "title": "Xác minh email của bạn"
+ },
+ "emailLink": {
+ "formSubtitle": "Sử dụng liên kết xác minh được gửi đến địa chỉ email của bạn",
+ "formTitle": "Liên kết xác minh",
+ "loading": {
+ "title": "Đang đăng ký..."
+ },
+ "resendButton": "Không nhận được liên kết? Gửi lại",
+ "subtitle": "để tiếp tục đến {{applicationName}}",
+ "title": "Xác minh email của bạn",
+ "verified": {
+ "title": "Đăng ký thành công"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "Quay lại tab mới mở để tiếp tục",
+ "subtitleNewTab": "Quay lại tab trước để tiếp tục",
+ "title": "Đã xác minh email thành công"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "Nhập mã xác minh được gửi đến số điện thoại của bạn",
+ "formTitle": "Mã xác minh",
+ "resendButton": "Không nhận được mã? Gửi lại",
+ "subtitle": "Nhập mã xác minh được gửi đến số điện thoại của bạn",
+ "title": "Xác minh số điện thoại của bạn"
+ },
+ "start": {
+ "actionLink": "Đăng nhập",
+ "actionText": "Đã có tài khoản?",
+ "subtitle": "Chào mừng! Vui lòng điền thông tin để bắt đầu.",
+ "title": "Tạo tài khoản của bạn"
+ }
+ },
+ "socialButtonsBlockButton": "Tiếp tục với {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "Đăng ký không thành công do việc xác thực bảo mật không thành công. Vui lòng làm mới trang để thử lại hoặc liên hệ với bộ phận hỗ trợ để được hỗ trợ thêm.",
+ "captcha_unavailable": "Đăng ký không thành công do việc xác thực bot không thành công. Vui lòng làm mới trang để thử lại hoặc liên hệ với bộ phận hỗ trợ để được hỗ trợ thêm.",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "Địa chỉ email này đã được sử dụng. Vui lòng thử lại.",
+ "form_identifier_exists__phone_number": "Số điện thoại này đã được sử dụng. Vui lòng thử lại.",
+ "form_identifier_exists__username": "Tên người dùng này đã được sử dụng. Vui lòng thử lại.",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "Địa chỉ email phải là một địa chỉ email hợp lệ.",
+ "form_param_format_invalid__phone_number": "Số điện thoại phải ở định dạng quốc tế hợp lệ.",
+ "form_param_max_length_exceeded__first_name": "Tên không được vượt quá 256 ký tự.",
+ "form_param_max_length_exceeded__last_name": "Họ không được vượt quá 256 ký tự.",
+ "form_param_max_length_exceeded__name": "Tên không được vượt quá 256 ký tự.",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "Mật khẩu của bạn không đủ mạnh.",
+ "form_password_pwned": "Mật khẩu này đã được tìm thấy trong một vụ vi phạm và không thể sử dụng, vui lòng thử mật khẩu khác.",
+ "form_password_pwned__sign_in": "Mật khẩu này đã được tìm thấy trong một vụ vi phạm và không thể sử dụng, vui lòng đặt lại mật khẩu của bạn.",
+ "form_password_size_in_bytes_exceeded": "Mật khẩu của bạn đã vượt quá số lượng byte tối đa cho phép, vui lòng rút ngắn hoặc loại bỏ một số ký tự đặc biệt.",
+ "form_password_validation_failed": "Mật khẩu không chính xác",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "Bạn không thể xóa thông tin nhận dạng cuối cùng của mình.",
+ "not_allowed_access": "",
+ "passkey_already_exists": "Một passkey đã được đăng ký với thiết bị này.",
+ "passkey_not_supported": "Passkeys không được hỗ trợ trên thiết bị này.",
+ "passkey_pa_not_supported": "Đăng ký yêu cầu một bộ xác thực nền tảng nhưng thiết bị không hỗ trợ.",
+ "passkey_registration_cancelled": "Đăng ký passkey đã bị hủy bỏ hoặc hết thời gian.",
+ "passkey_retrieval_cancelled": "Xác minh passkey đã bị hủy bỏ hoặc hết thời gian.",
+ "passwordComplexity": {
+ "maximumLength": "ít hơn {{length}} ký tự",
+    "minimumLength": "{{length}} ký tự trở lên",
+ "requireLowercase": "một chữ thường",
+ "requireNumbers": "một số",
+ "requireSpecialCharacter": "một ký tự đặc biệt",
+ "requireUppercase": "một chữ hoa",
+ "sentencePrefix": "Mật khẩu của bạn phải chứa"
+ },
+ "phone_number_exists": "Số điện thoại này đã được sử dụng. Vui lòng thử lại.",
+ "zxcvbn": {
+ "couldBeStronger": "Mật khẩu của bạn hoạt động, nhưng có thể mạnh hơn. Hãy thêm nhiều ký tự hơn.",
+ "goodPassword": "Mật khẩu của bạn đáp ứng tất cả các yêu cầu cần thiết.",
+ "notEnough": "Mật khẩu của bạn không đủ mạnh.",
+ "suggestions": {
+ "allUppercase": "Viết hoa một số chữ, nhưng không phải tất cả.",
+ "anotherWord": "Thêm nhiều từ ít phổ biến hơn.",
+ "associatedYears": "Tránh các năm mà bạn liên kết với mình.",
+      "capitalization": "Viết hoa nhiều hơn chữ cái đầu tiên.",
+ "dates": "Tránh các ngày và năm mà bạn liên kết với mình.",
+ "l33t": "Tránh việc thay thế chữ dễ đoán như '@' cho 'a'.",
+ "longerKeyboardPattern": "Sử dụng các mẫu bàn phím dài hơn và thay đổi hướng gõ nhiều lần.",
+ "noNeed": "Bạn có thể tạo mật khẩu mạnh mà không cần sử dụng ký tự đặc biệt, số hoặc chữ hoa.",
+ "pwned": "Nếu bạn sử dụng mật khẩu này ở nơi khác, bạn nên thay đổi nó.",
+ "recentYears": "Tránh các năm gần đây.",
+ "repeated": "Tránh các từ và ký tự lặp lại.",
+ "reverseWords": "Tránh việc đảo ngược chính tả của các từ thông thường.",
+ "sequences": "Tránh các chuỗi ký tự phổ biến.",
+ "useWords": "Sử dụng nhiều từ, nhưng tránh các cụm từ phổ biến."
+ },
+ "warnings": {
+ "common": "Đây là một mật khẩu phổ biến.",
+ "commonNames": "Tên và họ phổ biến dễ đoán.",
+ "dates": "Ngày tháng dễ đoán.",
+ "extendedRepeat": "Mẫu ký tự lặp lại như \"abcabcabc\" dễ đoán.",
+ "keyPattern": "Mẫu bàn phím ngắn dễ đoán.",
+ "namesByThemselves": "Tên đơn hoặc họ dễ đoán.",
+ "pwned": "Mật khẩu của bạn đã bị tiết lộ trong một vụ vi phạm trên Internet.",
+ "recentYears": "Các năm gần đây dễ đoán.",
+ "sequences": "Các chuỗi ký tự phổ biến như \"abc\" dễ đoán.",
+ "similarToCommon": "Đây giống với một mật khẩu phổ biến.",
+ "simpleRepeat": "Các ký tự lặp lại như \"aaa\" dễ đoán.",
+ "straightRow": "Các hàng ký tự thẳng trên bàn phím dễ đoán.",
+ "topHundred": "Đây là một mật khẩu được sử dụng thường xuyên.",
+ "topTen": "Đây là một mật khẩu được sử dụng nhiều.",
+ "userInputs": "Không nên có bất kỳ dữ liệu cá nhân hoặc liên quan đến trang nào.",
+ "wordByItself": "Các từ đơn dễ đoán."
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "Thêm tài khoản",
+ "action__manageAccount": "Quản lý tài khoản",
+ "action__signOut": "Đăng xuất",
+ "action__signOutAll": "Đăng xuất khỏi tất cả các tài khoản"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "Đã sao chép!",
+ "actionLabel__copy": "Sao chép tất cả",
+ "actionLabel__download": "Tải xuống .txt",
+ "actionLabel__print": "In",
+ "infoText1": "Mã sao lưu sẽ được kích hoạt cho tài khoản này.",
+ "infoText2": "Giữ bí mật mã sao lưu và lưu trữ chúng một cách an toàn. Bạn có thể tạo lại mã sao lưu nếu nghi ngờ rằng chúng đã bị xâm phạm.",
+ "subtitle__codelist": "Lưu trữ chúng một cách an toàn và giữ chúng bí mật.",
+ "successMessage": "Mã sao lưu đã được kích hoạt. Bạn có thể sử dụng một trong số chúng để đăng nhập vào tài khoản của mình, nếu bạn mất quyền truy cập vào thiết bị xác thực của mình. Mỗi mã chỉ có thể sử dụng một lần.",
+ "successSubtitle": "Bạn có thể sử dụng một trong số chúng để đăng nhập vào tài khoản của mình, nếu bạn mất quyền truy cập vào thiết bị xác thực của mình.",
+ "title": "Thêm xác minh mã sao lưu",
+ "title__codelist": "Mã sao lưu"
+ },
+ "connectedAccountPage": {
+ "formHint": "Chọn một nhà cung cấp để kết nối tài khoản của bạn.",
+ "formHint__noAccounts": "Không có nhà cung cấp tài khoản bên ngoài nào có sẵn.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} sẽ bị xóa khỏi tài khoản này.",
+ "messageLine2": "Bạn sẽ không còn có khả năng sử dụng tài khoản kết nối này và bất kỳ tính năng phụ thuộc nào cũng sẽ không hoạt động nữa.",
+ "successMessage": "{{connectedAccount}} đã bị xóa khỏi tài khoản của bạn.",
+ "title": "Xóa tài khoản kết nối"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "Nhà cung cấp đã được thêm vào tài khoản của bạn",
+ "title": "Thêm tài khoản kết nối"
+ },
+ "deletePage": {
+ "actionDescription": "Nhập \"Xóa tài khoản\" bên dưới để tiếp tục.",
+ "confirm": "Xóa tài khoản",
+ "messageLine1": "Bạn có chắc chắn muốn xóa tài khoản của mình không?",
+ "messageLine2": "Hành động này là vĩnh viễn và không thể đảo ngược.",
+ "title": "Xóa tài khoản"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "Một email chứa mã xác minh sẽ được gửi đến địa chỉ email này.",
+ "formSubtitle": "Nhập mã xác minh được gửi đến {{identifier}}",
+ "formTitle": "Mã xác minh",
+ "resendButton": "Không nhận được mã? Gửi lại",
+ "successMessage": "Email {{identifier}} đã được thêm vào tài khoản của bạn."
+ },
+ "emailLink": {
+ "formHint": "Một email chứa liên kết xác minh sẽ được gửi đến địa chỉ email này.",
+ "formSubtitle": "Nhấp vào liên kết xác minh trong email được gửi đến {{identifier}}",
+ "formTitle": "Liên kết xác minh",
+ "resendButton": "Không nhận được liên kết? Gửi lại",
+ "successMessage": "Email {{identifier}} đã được thêm vào tài khoản của bạn."
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} sẽ bị xóa khỏi tài khoản này.",
+ "messageLine2": "Bạn sẽ không còn có khả năng đăng nhập bằng địa chỉ email này nữa.",
+ "successMessage": "{{emailAddress}} đã bị xóa khỏi tài khoản của bạn.",
+ "title": "Xóa địa chỉ email"
+ },
+ "title": "Thêm địa chỉ email",
+ "verifyTitle": "Xác minh địa chỉ email"
+ },
+ "formButtonPrimary__add": "Thêm",
+ "formButtonPrimary__continue": "Tiếp tục",
+ "formButtonPrimary__finish": "Hoàn tất",
+ "formButtonPrimary__remove": "Xóa",
+ "formButtonPrimary__save": "Lưu",
+ "formButtonReset": "Hủy",
+ "mfaPage": {
+ "formHint": "Chọn một phương pháp để thêm.",
+ "title": "Thêm xác minh hai bước"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "Sử dụng số hiện có",
+ "primaryButton__addPhoneNumber": "Thêm số điện thoại",
+ "removeResource": {
+ "messageLine1": "{{identifier}} sẽ không còn nhận được mã xác minh khi đăng nhập nữa.",
+ "messageLine2": "Tài khoản của bạn có thể không an toàn. Bạn có chắc chắn muốn tiếp tục không?",
+ "successMessage": "Xác minh hai bước qua mã SMS đã bị xóa cho {{mfaPhoneCode}}",
+ "title": "Xóa xác minh hai bước"
+ },
+ "subtitle__availablePhoneNumbers": "Chọn một số điện thoại hiện có để đăng ký xác minh hai bước qua mã SMS hoặc thêm một số mới.",
+ "subtitle__unavailablePhoneNumbers": "Không có số điện thoại nào có sẵn để đăng ký xác minh hai bước qua mã SMS, vui lòng thêm một số mới.",
+ "successMessage1": "Khi đăng nhập, bạn sẽ cần nhập mã xác minh được gửi đến số điện thoại này như một bước bổ sung.",
+ "successMessage2": "Lưu các mã sao lưu này và lưu trữ chúng một cách an toàn. Nếu bạn mất quyền truy cập vào thiết bị xác thực của mình, bạn có thể sử dụng mã sao lưu để đăng nhập.",
+ "successTitle": "Xác minh qua mã SMS đã được kích hoạt",
+ "title": "Thêm xác minh qua mã SMS"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+    "buttonAbleToScan__nonPrimary": "Quét mã QR thay thế",
+ "buttonUnableToScan__nonPrimary": "Không thể quét mã QR?",
+ "infoText__ableToScan": "Thiết lập một phương pháp đăng nhập mới trong ứng dụng xác thực của bạn và quét mã QR sau để liên kết nó với tài khoản của bạn.",
+ "infoText__unableToScan": "Thiết lập một phương pháp đăng nhập mới trong ứng dụng xác thực của bạn và nhập Khóa được cung cấp dưới đây.",
+ "inputLabel__unableToScan1": "Đảm bảo rằng Mật khẩu dựa trên thời gian hoặc Mật khẩu một lần đã được kích hoạt, sau đó hoàn tất việc liên kết tài khoản của bạn.",
+ "inputLabel__unableToScan2": "Hoặc nếu ứng dụng xác thực của bạn hỗ trợ URI TOTP, bạn cũng có thể sao chép toàn bộ URI."
+ },
+ "removeResource": {
+ "messageLine1": "Mã xác minh từ ứng dụng xác thực này sẽ không còn được yêu cầu khi đăng nhập nữa.",
+ "messageLine2": "Tài khoản của bạn có thể không an toàn. Bạn có chắc chắn muốn tiếp tục không?",
+ "successMessage": "Xác minh hai bước qua ứng dụng xác thực đã bị xóa.",
+ "title": "Xóa xác minh hai bước"
+ },
+ "successMessage": "Xác minh hai bước hiện đã được kích hoạt. Khi đăng nhập, bạn sẽ cần nhập mã xác minh từ ứng dụng xác thực này như một bước bổ sung.",
+ "title": "Thêm ứng dụng xác thực",
+ "verifySubtitle": "Nhập mã xác minh được tạo bởi ứng dụng xác thực của bạn",
+ "verifyTitle": "Mã xác minh"
+ },
+ "mobileButton__menu": "Menu",
+ "navbar": {
+ "account": "Hồ sơ",
+ "description": "Quản lý thông tin tài khoản của bạn.",
+ "security": "Bảo mật",
+ "title": "Tài khoản"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} sẽ được xóa khỏi tài khoản này.",
+ "title": "Xóa passkey"
+ },
+ "subtitle__rename": "Bạn có thể đổi tên passkey để dễ dàng tìm kiếm.",
+ "title__rename": "Đổi tên Passkey"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "Đề nghị đăng xuất khỏi tất cả các thiết bị khác có thể đã sử dụng mật khẩu cũ của bạn.",
+ "readonly": "Mật khẩu của bạn hiện không thể chỉnh sửa vì bạn chỉ có thể đăng nhập qua kết nối doanh nghiệp.",
+ "successMessage__set": "Mật khẩu của bạn đã được thiết lập.",
+ "successMessage__signOutOfOtherSessions": "Tất cả các thiết bị khác đã được đăng xuất.",
+ "successMessage__update": "Mật khẩu của bạn đã được cập nhật.",
+ "title__set": "Thiết lập mật khẩu",
+ "title__update": "Cập nhật mật khẩu"
+ },
+ "phoneNumberPage": {
+ "infoText": "Một tin nhắn chứa mã xác minh sẽ được gửi đến số điện thoại này. Có thể áp dụng cước phí tin nhắn và dữ liệu.",
+ "removeResource": {
+ "messageLine1": "{{identifier}} sẽ được xóa khỏi tài khoản này.",
+ "messageLine2": "Bạn sẽ không thể đăng nhập bằng số điện thoại này nữa.",
+ "successMessage": "{{phoneNumber}} đã được xóa khỏi tài khoản của bạn.",
+ "title": "Xóa số điện thoại"
+ },
+ "successMessage": "{{identifier}} đã được thêm vào tài khoản của bạn.",
+ "title": "Thêm số điện thoại",
+ "verifySubtitle": "Nhập mã xác minh được gửi đến {{identifier}}",
+ "verifyTitle": "Xác minh số điện thoại"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "Kích thước khuyến nghị 1:1, tối đa 10MB.",
+ "imageFormDestructiveActionSubtitle": "Xóa",
+ "imageFormSubtitle": "Tải lên",
+ "imageFormTitle": "Ảnh hồ sơ",
+ "readonly": "Thông tin hồ sơ của bạn đã được cung cấp bởi kết nối doanh nghiệp và không thể chỉnh sửa.",
+ "successMessage": "Hồ sơ của bạn đã được cập nhật.",
+ "title": "Cập nhật hồ sơ"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "Đăng xuất khỏi thiết bị",
+ "title": "Thiết bị hoạt động"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "Thử lại",
+ "actionLabel__reauthorize": "Ủy quyền ngay",
+ "destructiveActionTitle": "Xóa",
+ "primaryButton": "Kết nối tài khoản",
+    "subtitle__reauthorize": "Phạm vi yêu cầu đã được cập nhật, và bạn có thể gặp hạn chế về chức năng. Vui lòng ủy quyền lại ứng dụng này để tránh bất kỳ vấn đề nào.",
+ "title": "Tài khoản đã kết nối"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "Xóa tài khoản",
+ "title": "Xóa tài khoản"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "Xóa email",
+ "detailsAction__nonPrimary": "Đặt làm chính",
+ "detailsAction__primary": "Hoàn tất xác minh",
+ "detailsAction__unverified": "Xác minh",
+ "primaryButton": "Thêm địa chỉ email",
+ "title": "Địa chỉ email"
+ },
+ "enterpriseAccountsSection": {
+ "title": "Tài khoản doanh nghiệp"
+ },
+ "headerTitle__account": "Chi tiết hồ sơ",
+ "headerTitle__security": "Bảo mật",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "Tạo lại",
+ "headerTitle": "Mã dự phòng",
+ "subtitle__regenerate": "Nhận một bộ mã dự phòng an toàn mới. Các mã dự phòng trước sẽ bị xóa và không thể sử dụng.",
+ "title__regenerate": "Tạo lại mã dự phòng"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "Đặt làm mặc định",
+ "destructiveActionLabel": "Xóa"
+ },
+ "primaryButton": "Thêm xác minh hai bước",
+ "title": "Xác minh hai bước",
+ "totp": {
+ "destructiveActionTitle": "Xóa",
+ "headerTitle": "Ứng dụng xác thực"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "Xóa",
+ "menuAction__rename": "Đổi tên",
+ "title": "Passkeys"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "Đặt mật khẩu",
+ "primaryButton__updatePassword": "Cập nhật mật khẩu",
+ "title": "Mật khẩu"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "Xóa số điện thoại",
+ "detailsAction__nonPrimary": "Đặt làm chính",
+ "detailsAction__primary": "Hoàn tất xác minh",
+ "detailsAction__unverified": "Xác minh số điện thoại",
+ "primaryButton": "Thêm số điện thoại",
+ "title": "Số điện thoại"
+ },
+ "profileSection": {
+ "primaryButton": "Cập nhật hồ sơ",
+ "title": "Hồ sơ"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "Đặt tên người dùng",
+ "primaryButton__updateUsername": "Cập nhật tên người dùng",
+ "title": "Tên người dùng"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "Xóa ví",
+ "primaryButton": "Ví Web3",
+ "title": "Ví Web3"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "Tên người dùng của bạn đã được cập nhật.",
+ "title__set": "Thiết lập tên người dùng",
+ "title__update": "Cập nhật tên người dùng"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} sẽ được xóa khỏi tài khoản này.",
+ "messageLine2": "Bạn sẽ không thể đăng nhập bằng ví web3 này nữa.",
+ "successMessage": "{{web3Wallet}} đã được xóa khỏi tài khoản của bạn.",
+ "title": "Xóa ví web3"
+ },
+ "subtitle__availableWallets": "Chọn một ví web3 để kết nối với tài khoản của bạn.",
+ "subtitle__unavailableWallets": "Không có ví web3 nào khả dụng.",
+ "successMessage": "Ví đã được thêm vào tài khoản của bạn.",
+ "title": "Thêm ví web3"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/common.json b/DigitalHumanWeb/locales/vi-VN/common.json
new file mode 100644
index 0000000..2f27fdf
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "Giới thiệu",
+ "advanceSettings": "Cài đặt nâng cao",
+ "alert": {
+ "cloud": {
+ "action": "Trải nghiệm miễn phí",
+ "desc": "Chúng tôi cung cấp {{credit}} điểm tính toán miễn phí cho tất cả người dùng đăng ký, không cần cấu hình phức tạp, sẵn sàng sử dụng ngay, hỗ trợ lưu trữ vô hạn lịch sử trò chuyện và đồng bộ toàn cầu, cùng nhiều tính năng cao cấp khác đang chờ bạn khám phá.",
+ "descOnMobile": "Chúng tôi cung cấp {{credit}} điểm tính toán miễn phí cho tất cả người dùng đăng ký, không cần cấu hình phức tạp, sẵn sàng sử dụng ngay.",
+ "title": "Chào mừng bạn trải nghiệm {{name}}"
+ }
+ },
+ "appInitializing": "Đang khởi động ứng dụng...",
+ "autoGenerate": "Tự động tạo",
+ "autoGenerateTooltip": "Tự động hoàn thành mô tả trợ lý dựa trên từ gợi ý",
+ "autoGenerateTooltipDisabled": "Vui lòng nhập từ gợi ý trước khi sử dụng tính năng tự động hoàn thành",
+ "back": "Quay lại",
+ "batchDelete": "Xóa hàng loạt",
+ "blog": "Blog sản phẩm",
+ "cancel": "Hủy",
+ "changelog": "Nhật ký cập nhật",
+ "close": "Đóng",
+ "contact": "Liên hệ chúng tôi",
+ "copy": "Sao chép",
+ "copyFail": "Sao chép thất bại",
+ "copySuccess": "Sao chép thành công",
+ "dataStatistics": {
+ "messages": "Tin nhắn",
+ "sessions": "Phiên làm việc",
+ "today": "Hôm nay",
+ "topics": "Chủ đề"
+ },
+ "defaultAgent": "Trợ lý mặc định",
+ "defaultSession": "Phiên mặc định",
+ "delete": "Xóa",
+ "document": "Tài Liệu Sử Dụng",
+ "download": "Tải xuống",
+ "duplicate": "Tạo bản sao",
+ "edit": "Chỉnh sửa",
+ "export": "Xuất cấu hình",
+ "exportType": {
+ "agent": "Xuất cấu hình trợ lý",
+ "agentWithMessage": "Xuất trợ lý và tin nhắn",
+ "all": "Xuất cài đặt toàn cầu và tất cả dữ liệu trợ lý",
+ "allAgent": "Xuất tất cả cấu hình trợ lý",
+ "allAgentWithMessage": "Xuất tất cả trợ lý và tin nhắn",
+ "globalSetting": "Xuất cài đặt toàn cầu"
+ },
+ "feedback": "Phản hồi và đề xuất",
+ "follow": "Theo dõi chúng tôi trên {{name}}",
+ "footer": {
+ "action": {
+ "feedback": "Chia sẻ ý kiến quý báu của bạn",
+ "star": "Đánh giá sao trên GitHub"
+ },
+ "and": "và",
+ "feedback": {
+ "action": "Chia sẻ phản hồi",
+ "desc": "Mỗi ý tưởng và đề xuất của bạn đều rất quý giá đối với chúng tôi, chúng tôi rất mong muốn biết ý kiến của bạn! Hãy liên hệ với chúng tôi để cung cấp phản hồi về tính năng và trải nghiệm sử dụng sản phẩm, giúp chúng tôi phát triển LobeChat tốt hơn.",
+ "title": "Chia sẻ phản hồi quý báu của bạn trên GitHub"
+ },
+ "later": "Sau",
+ "star": {
+ "action": "Đánh giá sao",
+ "desc": "Nếu bạn yêu thích sản phẩm của chúng tôi và muốn ủng hộ chúng tôi, hãy đánh giá sao cho chúng tôi trên GitHub nhé? Hành động nhỏ này có ý nghĩa lớn đối với chúng tôi, giúp chúng tôi tiếp tục cung cấp trải nghiệm sản phẩm tốt cho bạn.",
+ "title": "Đánh giá sao cho chúng tôi trên GitHub"
+ },
+ "title": "Yêu thích sản phẩm của chúng tôi?"
+ },
+ "fullscreen": "Chế độ toàn màn hình",
+ "historyRange": "Phạm vi lịch sử",
+ "import": "Nhập cấu hình",
+ "importModal": {
+ "error": {
+ "desc": "Xin lỗi vì quá trình nhập dữ liệu gặp sự cố. Vui lòng thử nhập lại hoặc <1>gửi vấn đề</1>, chúng tôi sẽ kiểm tra vấn đề ngay lập tức.",
+ "title": "Nhập dữ liệu thất bại"
+ },
+ "finish": {
+ "onlySettings": "Nhập cài đặt hệ thống thành công",
+ "start": "Bắt đầu sử dụng",
+ "subTitle": "Dữ liệu đã được nhập thành công, mất {{duration}} giây. Chi tiết nhập như sau:",
+ "title": "Hoàn tất nhập dữ liệu"
+ },
+ "loading": "Đang nhập dữ liệu, vui lòng chờ...",
+ "preparing": "Đang chuẩn bị mô-đun nhập dữ liệu...",
+ "result": {
+ "added": "Nhập thành công",
+ "errors": "Lỗi nhập",
+ "messages": "Tin nhắn",
+ "sessionGroups": "Nhóm phiên",
+ "sessions": "Trợ lý",
+ "skips": "Bỏ qua trùng lặp",
+ "topics": "Chủ đề",
+ "type": "Loại dữ liệu"
+ },
+ "title": "Nhập dữ liệu",
+ "uploading": {
+ "desc": "Tập tin hiện tại quá lớn, đang cố gắng tải lên...",
+ "restTime": "Thời gian còn lại",
+ "speed": "Tốc độ tải lên"
+ }
+ },
+ "information": "Cộng đồng và Thông tin",
+ "installPWA": "Cài đặt ứng dụng trình duyệt",
+ "lang": {
+ "ar": "Tiếng Ả Rập",
+ "bg-BG": "Tiếng Bun-ga-ri",
+ "bn": "Tiếng Bengal",
+ "cs-CZ": "Tiếng Séc",
+ "da-DK": "Tiếng Đan Mạch",
+ "de-DE": "Tiếng Đức",
+ "el-GR": "Tiếng Hy Lạp",
+ "en": "Tiếng Anh",
+ "en-US": "Tiếng Anh (Mỹ)",
+ "es-ES": "Tiếng Tây Ban Nha",
+ "fi-FI": "Tiếng Phần Lan",
+ "fr-FR": "Tiếng Pháp",
+ "hi-IN": "Tiếng Hin-ddi",
+ "hu-HU": "Tiếng Hungary",
+ "id-ID": "Tiếng Indonesia",
+ "it-IT": "Tiếng Ý",
+ "ja-JP": "Tiếng Nhật",
+ "ko-KR": "Tiếng Hàn",
+ "nl-NL": "Tiếng Hà Lan",
+ "no-NO": "Tiếng Na Uy",
+ "pl-PL": "Tiếng Ba Lan",
+ "pt-BR": "Tiếng Bồ Đào Nha (Braxin)",
+ "pt-PT": "Tiếng Bồ Đào Nha (Bồ Đào Nha)",
+ "ro-RO": "Tiếng Romania",
+ "ru-RU": "Tiếng Nga",
+ "sk-SK": "Tiếng Slovak",
+ "sr-RS": "Tiếng Serbia",
+ "sv-SE": "Tiếng Thụy Điển",
+ "th-TH": "Tiếng Thái",
+ "tr-TR": "Tiếng Thổ Nhĩ Kỳ",
+ "uk-UA": "Tiếng Ukraina",
+ "vi-VN": "Tiếng Việt",
+ "zh": "Tiếng Trung",
+ "zh-CN": "Tiếng Trung (giản thể)",
+ "zh-TW": "Tiếng Trung (phồn thể)"
+ },
+ "layoutInitializing": "Đang tải bố cục...",
+ "legal": "Tuyên bố về pháp lý",
+ "loading": "Đang tải...",
+ "mail": {
+ "business": "Hợp tác kinh doanh",
+ "support": "Hỗ trợ qua email"
+ },
+ "oauth": "Đăng nhập SSO",
+ "officialSite": "Trang web chính thức",
+ "ok": "Đồng ý",
+ "password": "Mật khẩu",
+ "pin": "Ghim",
+ "pinOff": "Bỏ ghim",
+ "privacy": "Chính sách bảo mật",
+ "regenerate": "Tạo lại",
+ "rename": "Đổi tên",
+ "reset": "Đặt lại",
+ "retry": "Thử lại",
+ "send": "Gửi",
+ "setting": "Cài đặt",
+ "share": "Chia sẻ",
+ "stop": "Dừng",
+ "sync": {
+ "actions": {
+ "settings": "Cài đặt đồng bộ hóa",
+ "sync": "Đồng bộ ngay"
+ },
+ "awareness": {
+ "current": "Thiết bị hiện tại"
+ },
+ "channel": "Kênh",
+ "disabled": {
+ "actions": {
+ "enable": "Bật đồng bộ hóa đám mây",
+ "settings": "Cấu hình tham số đồng bộ hóa"
+ },
+ "desc": "Dữ liệu phiên hiện tại chỉ lưu trữ trong trình duyệt này. Nếu bạn cần đồng bộ dữ liệu qua nhiều thiết bị, vui lòng cấu hình và bật đồng bộ hóa đám mây.",
+ "title": "Dữ liệu chưa được đồng bộ hóa"
+ },
+ "enabled": {
+ "title": "Đồng bộ dữ liệu"
+ },
+ "status": {
+ "connecting": "Đang kết nối",
+ "disabled": "Đồng bộ hóa chưa được bật",
+ "ready": "Đã kết nối",
+ "synced": "Đã đồng bộ",
+ "syncing": "Đang đồng bộ",
+ "unconnected": "Kết nối không thành công"
+ },
+ "title": "Trạng thái đồng bộ hóa",
+ "unconnected": {
+ "tip": "Kết nối đến máy chủ tín hiệu thất bại, không thể thiết lập kênh truyền thông điểm-điểm, vui lòng kiểm tra lại mạng và thử lại"
+ }
+ },
+ "tab": {
+ "chat": "Trò chuyện",
+ "discover": "Khám phá",
+ "files": "Tệp",
+ "me": "Tôi",
+ "setting": "Cài đặt"
+ },
+ "telemetry": {
+ "allow": "Cho phép",
+ "deny": "Từ chối",
+ "desc": "Chúng tôi muốn thu thập thông tin về cách bạn sử dụng một cách ẩn danh để giúp chúng tôi cải thiện LobeChat và cung cấp trải nghiệm sản phẩm tốt hơn cho bạn. Bạn có thể tắt tính năng này bất kỳ lúc nào trong \"Cài đặt\" - \"Về\".",
+ "learnMore": "Tìm hiểu thêm",
+ "title": "Hỗ trợ LobeChat hoạt động tốt hơn"
+ },
+ "temp": "Tạm thời",
+ "terms": "Điều khoản dịch vụ",
+ "updateAgent": "Cập nhật thông tin trợ lý",
+ "upgradeVersion": {
+ "action": "Nâng cấp",
+ "hasNew": "Có bản cập nhật mới",
+ "newVersion": "Có phiên bản mới: {{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "Người dùng ẩn danh",
+ "billing": "Quản lý hóa đơn",
+ "cloud": "Trải nghiệm {{name}}",
+ "data": "Lưu trữ dữ liệu",
+ "defaultNickname": "Người dùng phiên bản cộng đồng",
+ "discord": "Hỗ trợ cộng đồng",
+ "docs": "Tài liệu sử dụng",
+ "email": "Hỗ trợ qua email",
+ "feedback": "Phản hồi và đề xuất",
+ "help": "Trung tâm trợ giúp",
+ "moveGuide": "Đã di chuyển nút cài đặt đến đây",
+ "plans": "Kế hoạch đăng ký",
+ "preview": "Phiên bản xem trước",
+ "profile": "Quản lý tài khoản",
+ "setting": "Cài đặt ứng dụng",
+ "usages": "Thống kê sử dụng"
+ },
+ "version": "Phiên bản"
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/components.json b/DigitalHumanWeb/locales/vi-VN/components.json
new file mode 100644
index 0000000..d3353d8
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "Kéo và thả tệp vào đây, hỗ trợ tải lên nhiều hình ảnh.",
+ "dragFileDesc": "Kéo và thả hình ảnh và tệp vào đây, hỗ trợ tải lên nhiều hình ảnh và tệp.",
+ "dragFileTitle": "Tải lên tệp",
+ "dragTitle": "Tải lên hình ảnh"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "Thêm vào kho tri thức",
+ "addToOtherKnowledgeBase": "Thêm vào kho tri thức khác",
+ "batchChunking": "Chia nhỏ theo lô",
+ "chunking": "Chia nhỏ",
+ "chunkingTooltip": "Chia tách tệp thành nhiều khối văn bản và vector hóa, có thể sử dụng cho tìm kiếm ngữ nghĩa và đối thoại tệp",
+ "confirmDelete": "Bạn sắp xóa tệp này, sau khi xóa sẽ không thể khôi phục, vui lòng xác nhận hành động của bạn",
+ "confirmDeleteMultiFiles": "Bạn sắp xóa {{count}} tệp đã chọn, sau khi xóa sẽ không thể khôi phục, vui lòng xác nhận hành động của bạn",
+ "confirmRemoveFromKnowledgeBase": "Bạn sắp xóa {{count}} tệp đã chọn khỏi kho tri thức, sau khi xóa tệp vẫn có thể xem trong tất cả các tệp, vui lòng xác nhận hành động của bạn",
+ "copyUrl": "Sao chép liên kết",
+ "copyUrlSuccess": "Địa chỉ tệp đã được sao chép thành công",
+ "createChunkingTask": "Đang chuẩn bị...",
+ "deleteSuccess": "Tệp đã được xóa thành công",
+ "downloading": "Đang tải tệp...",
+ "removeFromKnowledgeBase": "Xóa khỏi kho tri thức",
+ "removeFromKnowledgeBaseSuccess": "Tệp đã được xóa thành công"
+ },
+ "bottom": "Đã đến cuối rồi",
+ "config": {
+ "showFilesInKnowledgeBase": "Hiển thị nội dung trong kho tri thức"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "Tải lên tệp",
+ "folder": "Tải lên thư mục",
+ "knowledgeBase": "Tạo kho tri thức mới"
+ },
+ "or": "hoặc",
+ "title": "Kéo tệp hoặc thư mục vào đây"
+ },
+ "title": {
+ "createdAt": "Thời gian tạo",
+ "size": "Kích thước",
+ "title": "Tệp"
+ },
+ "total": {
+ "fileCount": "Tổng cộng {{count}} mục",
+ "selectedCount": "Đã chọn {{count}} mục"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "Các khối văn bản chưa được vector hóa hoàn toàn, sẽ dẫn đến chức năng tìm kiếm ngữ nghĩa không khả dụng, để nâng cao chất lượng tìm kiếm, vui lòng vector hóa các khối văn bản",
+ "error": "Lỗi vector hóa",
+ "errorResult": "Lỗi vector hóa, vui lòng kiểm tra và thử lại. Nguyên nhân thất bại:",
+ "processing": "Các khối văn bản đang được vector hóa, vui lòng chờ",
+ "success": "Hiện tại tất cả các khối văn bản đã được vector hóa"
+ },
+ "embeddings": "Vector hóa",
+ "status": {
+ "error": "Chia nhỏ thất bại",
+ "errorResult": "Chia nhỏ thất bại, vui lòng kiểm tra và thử lại. Nguyên nhân thất bại:",
+ "processing": "Đang chia nhỏ",
+ "processingTip": "Máy chủ đang chia tách các khối văn bản, đóng trang không ảnh hưởng đến tiến trình chia nhỏ"
+ }
+ }
+ },
+ "GoBack": {
+ "back": "Quay lại"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "Mô hình tùy chỉnh, mặc định hỗ trợ cả cuộc gọi hàm và nhận diện hình ảnh, vui lòng xác minh khả năng sử dụng của chúng theo tình hình cụ thể",
+ "file": "Mô hình này hỗ trợ tải lên và nhận diện tệp",
+ "functionCall": "Mô hình này hỗ trợ cuộc gọi hàm (Function Call)",
+ "tokens": "Mỗi phiên của mô hình này hỗ trợ tối đa {{tokens}} Tokens",
+ "vision": "Mô hình này hỗ trợ nhận diện hình ảnh"
+ },
+ "removed": "Mô hình này không còn trong danh sách, nếu bỏ chọn sẽ tự động xóa"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "Không có mô hình nào được kích hoạt, vui lòng điều chỉnh trong cài đặt",
+ "provider": "Nhà cung cấp"
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/discover.json b/DigitalHumanWeb/locales/vi-VN/discover.json
new file mode 100644
index 0000000..804488c
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/discover.json
@@ -0,0 +1,204 @@
+{
+ "assistants": {
+ "addAgent": "Thêm trợ lý",
+ "addAgentAndConverse": "Thêm trợ lý và trò chuyện",
+ "addAgentSuccess": "Thêm thành công",
+ "conversation": {
+ "l1": "Xin chào, tôi là **{{name}}**, bạn có thể hỏi tôi bất kỳ câu hỏi nào, tôi sẽ cố gắng trả lời bạn ~",
+ "l2": "Dưới đây là giới thiệu về khả năng của tôi: ",
+ "l3": "Hãy bắt đầu cuộc trò chuyện nào!"
+ },
+ "description": "Giới thiệu trợ lý",
+ "detail": "Chi tiết",
+ "list": "Danh sách trợ lý",
+ "more": "Thêm",
+ "plugins": "Tích hợp plugin",
+ "recentSubmits": "Cập nhật gần đây",
+ "suggestions": "Gợi ý liên quan",
+ "systemRole": "Cài đặt trợ lý",
+ "try": "Thử ngay"
+ },
+ "back": "Quay lại khám phá",
+ "category": {
+ "assistant": {
+ "academic": "Học thuật",
+ "all": "Tất cả",
+ "career": "Nghề nghiệp",
+ "copywriting": "Viết nội dung",
+ "design": "Thiết kế",
+ "education": "Giáo dục",
+ "emotions": "Cảm xúc",
+ "entertainment": "Giải trí",
+ "games": "Trò chơi",
+ "general": "Chung",
+ "life": "Cuộc sống",
+ "marketing": "Tiếp thị",
+ "office": "Văn phòng",
+ "programming": "Lập trình",
+ "translation": "Dịch thuật"
+ },
+ "plugin": {
+ "all": "Tất cả",
+ "gaming-entertainment": "Giải trí trò chơi",
+ "life-style": "Phong cách sống",
+ "media-generate": "Tạo nội dung truyền thông",
+ "science-education": "Khoa học và giáo dục",
+ "social": "Mạng xã hội",
+ "stocks-finance": "Chứng khoán và tài chính",
+ "tools": "Công cụ hữu ích",
+ "web-search": "Tìm kiếm trên web"
+ }
+ },
+ "cleanFilter": "Xóa bộ lọc",
+ "create": "Tạo mới",
+ "createGuide": {
+ "func1": {
+ "desc1": "Vào trang cài đặt của trợ lý bạn muốn gửi thông qua biểu tượng cài đặt ở góc trên bên phải trong cửa sổ trò chuyện;",
+ "desc2": "Nhấn nút gửi đến thị trường trợ lý ở góc trên bên phải.",
+ "tag": "Phương pháp một",
+ "title": "Gửi qua LobeChat"
+ },
+ "func2": {
+ "button": "Đi đến kho trợ lý trên Github",
+ "desc": "Nếu bạn muốn thêm trợ lý vào chỉ mục, hãy sử dụng agent-template.json hoặc agent-template-full.json để tạo một mục trong thư mục plugins, viết mô tả ngắn gọn và gán thẻ phù hợp, sau đó tạo một yêu cầu kéo.",
+ "tag": "Phương pháp hai",
+ "title": "Gửi qua Github"
+ }
+ },
+ "dislike": "Không thích",
+ "filter": "Bộ lọc",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "Tất cả tác giả",
+ "followed": "Tác giả đã theo dõi",
+ "title": "Phạm vi tác giả"
+ },
+ "contentLength": "Độ dài ngữ cảnh tối thiểu",
+ "maxToken": {
+ "title": "Đặt độ dài tối đa (Token)",
+ "unlimited": "Không giới hạn"
+ },
+ "other": {
+ "functionCall": "Hỗ trợ gọi hàm",
+ "title": "Khác",
+ "vision": "Hỗ trợ nhận diện hình ảnh",
+ "withKnowledge": "Kèm theo kho kiến thức",
+ "withTool": "Kèm theo plugin"
+ },
+ "pricing": "Giá mô hình",
+ "timePeriod": {
+ "all": "Tất cả thời gian",
+ "day": "24 giờ qua",
+ "month": "30 ngày qua",
+ "title": "Phạm vi thời gian",
+ "week": "7 ngày qua",
+ "year": "1 năm qua"
+ }
+ },
+ "home": {
+ "featuredAssistants": "Trợ lý nổi bật",
+ "featuredModels": "Mô hình nổi bật",
+ "featuredProviders": "Nhà cung cấp mô hình nổi bật",
+ "featuredTools": "Plugin nổi bật",
+ "more": "Khám phá thêm"
+ },
+ "like": "Thích",
+ "models": {
+ "chat": "Bắt đầu cuộc trò chuyện",
+ "contentLength": "Độ dài ngữ cảnh tối đa",
+ "free": "Miễn phí",
+ "guide": "Hướng dẫn cấu hình",
+ "list": "Danh sách mô hình",
+ "more": "Thêm",
+ "parameterList": {
+ "defaultValue": "Giá trị mặc định",
+ "docs": "Xem tài liệu",
+ "frequency_penalty": {
+ "desc": "Cài đặt này điều chỉnh tần suất mà mô hình lặp lại các từ cụ thể đã xuất hiện trong đầu vào. Giá trị cao hơn làm giảm khả năng lặp lại này, trong khi giá trị âm tạo ra hiệu ứng ngược lại. Hình phạt từ vựng không tăng theo số lần xuất hiện. Giá trị âm sẽ khuyến khích việc lặp lại từ vựng.",
+ "title": "Hình phạt tần suất"
+ },
+ "max_tokens": {
+ "desc": "Cài đặt này xác định độ dài tối đa mà mô hình có thể tạo ra trong một lần phản hồi. Việc đặt giá trị cao hơn cho phép mô hình tạo ra những phản hồi dài hơn, trong khi giá trị thấp hơn sẽ giới hạn độ dài của phản hồi, giúp nó ngắn gọn hơn. Tùy thuộc vào các tình huống ứng dụng khác nhau, điều chỉnh giá trị này một cách hợp lý có thể giúp đạt được độ dài và mức độ chi tiết mong muốn của phản hồi.",
+ "title": "Giới hạn phản hồi một lần"
+ },
+ "presence_penalty": {
+ "desc": "Cài đặt này nhằm kiểm soát việc lặp lại từ vựng dựa trên tần suất xuất hiện của từ trong đầu vào. Nó cố gắng sử dụng ít hơn những từ đã xuất hiện nhiều trong đầu vào, với tần suất sử dụng tỷ lệ thuận với tần suất xuất hiện. Hình phạt từ vựng tăng theo số lần xuất hiện. Giá trị âm sẽ khuyến khích việc lặp lại từ vựng.",
+ "title": "Độ mới của chủ đề"
+ },
+ "range": "Phạm vi",
+ "temperature": {
+ "desc": "Cài đặt này ảnh hưởng đến sự đa dạng trong phản hồi của mô hình. Giá trị thấp hơn dẫn đến phản hồi dễ đoán và điển hình hơn, trong khi giá trị cao hơn khuyến khích phản hồi đa dạng và không thường gặp. Khi giá trị được đặt là 0, mô hình sẽ luôn đưa ra cùng một phản hồi cho đầu vào nhất định.",
+ "title": "Ngẫu nhiên"
+ },
+ "title": "Tham số mô hình",
+ "top_p": {
+ "desc": "Cài đặt này giới hạn lựa chọn của mô hình chỉ trong một tỷ lệ từ có khả năng cao nhất: chỉ chọn những từ hàng đầu có xác suất tích lũy đạt P. Giá trị thấp hơn làm cho phản hồi của mô hình dễ đoán hơn, trong khi cài đặt mặc định cho phép mô hình chọn từ toàn bộ phạm vi từ vựng.",
+ "title": "Lấy mẫu hạt nhân"
+ },
+ "type": "Loại"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat hỗ trợ sử dụng khóa API tùy chỉnh cho nhà cung cấp này.",
+ "input": "Giá đầu vào",
+ "inputTooltip": "Chi phí cho mỗi triệu Token",
+ "latency": "Độ trễ",
+ "latencyTooltip": "Thời gian phản hồi trung bình để nhà cung cấp gửi Token đầu tiên",
+ "maxOutput": "Độ dài đầu ra tối đa",
+ "maxOutputTooltip": "Số Token tối đa mà điểm cuối này có thể tạo ra",
+ "officialTooltip": "Dịch vụ chính thức của LobeHub",
+ "output": "Giá đầu ra",
+ "outputTooltip": "Chi phí cho mỗi triệu Token",
+ "streamCancellationTooltip": "Nhà cung cấp này hỗ trợ chức năng hủy luồng.",
+ "throughput": "Thông lượng",
+ "throughputTooltip": "Số Token trung bình được truyền trong mỗi yêu cầu luồng mỗi giây"
+ },
+ "suggestions": "Mô hình liên quan",
+ "supportedProviders": "Nhà cung cấp hỗ trợ mô hình này"
+ },
+ "plugins": {
+ "community": "Plugin cộng đồng",
+ "install": "Cài đặt plugin",
+ "installed": "Đã cài đặt",
+ "list": "Danh sách plugin",
+ "meta": {
+ "description": "Mô tả",
+ "parameter": "Tham số",
+ "title": "Tham số công cụ",
+ "type": "Loại"
+ },
+ "more": "Thêm",
+ "official": "Plugin chính thức",
+ "recentSubmits": "Cập nhật gần đây",
+ "suggestions": "Gợi ý liên quan"
+ },
+ "providers": {
+ "config": "Cấu hình nhà cung cấp",
+ "list": "Danh sách nhà cung cấp mô hình",
+ "modelCount": "{{count}} mô hình",
+ "modelSite": "Tài liệu mô hình",
+ "more": "Thêm",
+ "officialSite": "Trang web chính thức",
+ "showAllModels": "Hiển thị tất cả các mô hình",
+ "suggestions": "Nhà cung cấp liên quan",
+ "supportedModels": "Mô hình được hỗ trợ"
+ },
+ "search": {
+ "placeholder": "Tìm kiếm tên, mô tả hoặc từ khóa...",
+ "result": "{{count}} kết quả tìm kiếm về {{keyword}}",
+ "searching": "Đang tìm kiếm..."
+ },
+ "sort": {
+ "mostLiked": "Nhiều người thích nhất",
+ "mostUsed": "Sử dụng nhiều nhất",
+ "newest": "Mới nhất",
+ "oldest": "Cũ nhất",
+ "recommended": "Được đề xuất"
+ },
+ "tab": {
+ "assistants": "Trợ lý",
+ "home": "Trang chủ",
+ "models": "Mô hình",
+ "plugins": "Plugin",
+ "providers": "Nhà cung cấp mô hình"
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/error.json b/DigitalHumanWeb/locales/vi-VN/error.json
new file mode 100644
index 0000000..329e4d6
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "Tiếp tục cuộc trò chuyện",
+ "desc": "{{greeting}}, rất vui được tiếp tục phục vụ bạn. Hãy tiếp tục cuộc trò chuyện chúng ta vừa mới bắt đầu nhé",
+ "title": "Chào mừng trở lại, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "Quay về Trang chủ",
+ "desc": "Hãy thử lại sau, hoặc quay về thế giới đã biết",
+ "retry": "Thử lại",
+ "title": "Trang gặp một chút vấn đề.."
+ },
+ "fetchError": "Yêu cầu thất bại",
+ "fetchErrorDetail": "Chi tiết lỗi",
+ "notFound": {
+ "backHome": "Quay về Trang chủ",
+ "check": "Vui lòng kiểm tra xem URL của bạn có đúng không",
+ "desc": "Chúng tôi không thể tìm thấy trang bạn đang tìm kiếm",
+ "title": "Bước vào vùng đất chưa biết?"
+ },
+ "pluginSettings": {
+ "desc": "Hoàn thành cấu hình sau để bắt đầu sử dụng plugin này",
+ "title": "Cấu hình Plugin {{name}}"
+ },
+ "response": {
+ "400": "Xin lỗi, máy chủ không hiểu yêu cầu của bạn, vui lòng xác nhận tham số yêu cầu của bạn có đúng không",
+ "401": "Xin lỗi, máy chủ từ chối yêu cầu của bạn, có thể do quyền hạn của bạn không đủ hoặc không cung cấp xác thực danh tính hợp lệ",
+ "403": "Xin lỗi, máy chủ từ chối yêu cầu của bạn, bạn không có quyền truy cập nội dung này",
+ "404": "Xin lỗi, máy chủ không tìm thấy trang hoặc tài nguyên bạn yêu cầu, vui lòng xác nhận URL của bạn có đúng không",
+ "405": "Xin lỗi, máy chủ không hỗ trợ phương thức yêu cầu bạn đang sử dụng, vui lòng xác nhận phương thức yêu cầu của bạn có đúng không",
+ "406": "Xin lỗi, máy chủ không thể hoàn thành yêu cầu dựa trên đặc tính nội dung bạn yêu cầu",
+ "407": "Xin lỗi, bạn cần xác thực proxy trước khi tiếp tục yêu cầu này",
+ "408": "Xin lỗi, máy chủ đã vượt quá thời gian chờ khi đang chờ đợi yêu cầu, vui lòng kiểm tra kết nối mạng của bạn và thử lại",
+ "409": "Xin lỗi, yêu cầu gặp xung đột và không thể xử lý, có thể do trạng thái tài nguyên không tương thích với yêu cầu",
+ "410": "Xin lỗi, tài nguyên bạn yêu cầu đã bị xóa vĩnh viễn và không thể tìm thấy",
+ "411": "Xin lỗi, máy chủ không thể xử lý yêu cầu không chứa độ dài nội dung hợp lệ",
+ "412": "Xin lỗi, yêu cầu của bạn không đáp ứng điều kiện của máy chủ và không thể hoàn thành",
+ "413": "Xin lỗi, lượng dữ liệu yêu cầu của bạn quá lớn, máy chủ không thể xử lý",
+ "414": "Xin lỗi, URI của yêu cầu của bạn quá dài, máy chủ không thể xử lý",
+ "415": "Xin lỗi, máy chủ không thể xử lý định dạng phương tiện đi kèm với yêu cầu",
+ "416": "Xin lỗi, máy chủ không thể đáp ứng phạm vi yêu cầu của bạn",
+ "417": "Xin lỗi, máy chủ không thể đáp ứng giá trị kỳ vọng của bạn",
+ "422": "Xin lỗi, định dạng yêu cầu của bạn đúng, nhưng do chứa lỗi ngữ nghĩa nên không thể phản hồi",
+ "423": "Xin lỗi, tài nguyên bạn yêu cầu đã bị khóa",
+ "424": "Xin lỗi, yêu cầu hiện tại không thể hoàn thành do yêu cầu trước đó thất bại",
+ "426": "Xin lỗi, máy chủ yêu cầu bạn nâng cấp phiên bản giao thức của khách hàng lên cao hơn",
+ "428": "Xin lỗi, máy chủ yêu cầu điều kiện tiên quyết, yêu cầu của bạn phải chứa tiêu đề điều kiện chính xác",
+ "429": "Xin lỗi, yêu cầu của bạn quá nhiều, máy chủ hơi mệt, vui lòng thử lại sau",
+ "431": "Xin lỗi, trường tiêu đề yêu cầu của bạn quá lớn, máy chủ không thể xử lý",
+ "451": "Xin lỗi, do lý do pháp lý, máy chủ từ chối cung cấp tài nguyên này",
+ "500": "Xin lỗi, máy chủ có vẻ gặp một số khó khăn, tạm thời không thể hoàn thành yêu cầu của bạn, vui lòng thử lại sau",
+ "502": "Xin lỗi, máy chủ có vẻ lạc đường, tạm thời không thể cung cấp dịch vụ, vui lòng thử lại sau",
+ "503": "Xin lỗi, máy chủ hiện không thể xử lý yêu cầu của bạn, có thể do quá tải hoặc đang bảo trì, vui lòng thử lại sau",
+ "504": "Xin lỗi, máy chủ không đợi được phản hồi từ máy chủ upstream, vui lòng thử lại sau",
+ "AgentRuntimeError": "Lobe mô hình ngôn ngữ thực thi gặp lỗi, vui lòng kiểm tra và thử lại dựa trên thông tin dưới đây",
+ "FreePlanLimit": "Hiện tại bạn đang sử dụng tài khoản miễn phí, không thể sử dụng tính năng này. Vui lòng nâng cấp lên gói trả phí để tiếp tục sử dụng.",
+ "InvalidAccessCode": "Mật khẩu truy cập không hợp lệ hoặc trống, vui lòng nhập mật khẩu truy cập đúng hoặc thêm Khóa API tùy chỉnh",
+ "InvalidBedrockCredentials": "Xác thực Bedrock không thành công, vui lòng kiểm tra AccessKeyId/SecretAccessKey và thử lại",
+ "InvalidClerkUser": "Xin lỗi, bạn chưa đăng nhập. Vui lòng đăng nhập hoặc đăng ký tài khoản trước khi tiếp tục.",
+ "InvalidGithubToken": "Mã truy cập cá nhân Github không chính xác hoặc để trống, vui lòng kiểm tra lại Mã truy cập cá nhân Github và thử lại",
+ "InvalidOllamaArgs": "Cấu hình Ollama không hợp lệ, vui lòng kiểm tra lại cấu hình Ollama và thử lại",
+ "InvalidProviderAPIKey": "{{provider}} API Key không hợp lệ hoặc trống, vui lòng kiểm tra và thử lại",
+ "LocationNotSupportError": "Xin lỗi, vị trí của bạn không hỗ trợ dịch vụ mô hình này, có thể do hạn chế vùng miền hoặc dịch vụ chưa được mở. Vui lòng xác nhận xem vị trí hiện tại có hỗ trợ sử dụng dịch vụ này không, hoặc thử sử dụng thông tin vị trí khác.",
+ "NoOpenAIAPIKey": "Khóa API OpenAI trống, vui lòng thêm Khóa API OpenAI tùy chỉnh",
+ "OllamaBizError": "Yêu cầu dịch vụ Ollama gặp lỗi, vui lòng kiểm tra thông tin dưới đây hoặc thử lại",
+ "OllamaServiceUnavailable": "Dịch vụ Ollama không khả dụng, vui lòng kiểm tra xem Ollama có hoạt động bình thường không, hoặc xem xét cấu hình chéo đúng của Ollama",
+ "OpenAIBizError": "Yêu cầu dịch vụ OpenAI gặp sự cố, vui lòng kiểm tra thông tin dưới đây hoặc thử lại",
+ "PluginApiNotFound": "Xin lỗi, không có API nào trong tệp mô tả plugin, vui lòng kiểm tra phương thức yêu cầu của bạn có khớp với API mô tả plugin không",
+ "PluginApiParamsError": "Xin lỗi, kiểm tra tham số đầu vào yêu cầu của plugin không thông qua, vui lòng kiểm tra tham số đầu vào có khớp với thông tin mô tả API không",
+ "PluginFailToTransformArguments": "Xin lỗi, không thể chuyển đổi đối số của plugin, vui lòng thử tạo lại tin nhắn trợ giúp hoặc thay đổi mô hình AI có khả năng gọi Tools mạnh hơn và thử lại",
+ "PluginGatewayError": "Xin lỗi, cổng plugin gặp lỗi, vui lòng kiểm tra cấu hình cổng plugin có đúng không",
+ "PluginManifestInvalid": "Xin lỗi, kiểm tra mô tả plugin không thông qua, vui lòng kiểm tra định dạng mô tả có đúng không",
+ "PluginManifestNotFound": "Xin lỗi, máy chủ không tìm thấy tệp mô tả plugin (manifest.json), vui lòng kiểm tra địa chỉ tệp mô tả plugin có đúng không",
+ "PluginMarketIndexInvalid": "Xin lỗi, kiểm tra chỉ mục plugin không thông qua, vui lòng kiểm tra định dạng tệp chỉ mục có đúng không",
+ "PluginMarketIndexNotFound": "Xin lỗi, máy chủ không tìm thấy chỉ mục plugin, vui lòng kiểm tra xem địa chỉ chỉ mục có đúng không",
+ "PluginMetaInvalid": "Xin lỗi, kiểm tra thông tin cấu hình plugin không thông qua, vui lòng kiểm tra định dạng thông tin cấu hình có đúng không",
+ "PluginMetaNotFound": "Xin lỗi, không tìm thấy thông tin cấu hình plugin trong chỉ mục",
+ "PluginOpenApiInitError": "Xin lỗi, khởi tạo khách hàng OpenAPI thất bại, vui lòng kiểm tra thông tin cấu hình OpenAPI có đúng không",
+ "PluginServerError": "Lỗi trả về từ máy chủ plugin, vui lòng kiểm tra tệp mô tả plugin, cấu hình plugin hoặc triển khai máy chủ theo thông tin lỗi dưới đây",
+ "PluginSettingsInvalid": "Plugin cần phải được cấu hình đúng trước khi sử dụng, vui lòng kiểm tra cấu hình của bạn có đúng không",
+ "ProviderBizError": "Yêu cầu dịch vụ {{provider}} gặp sự cố, vui lòng kiểm tra thông tin dưới đây hoặc thử lại",
+ "StreamChunkError": "Lỗi phân tích khối tin nhắn yêu cầu luồng, vui lòng kiểm tra xem API hiện tại có tuân thủ tiêu chuẩn hay không, hoặc liên hệ với nhà cung cấp API của bạn để được tư vấn.",
+ "SubscriptionPlanLimit": "Số lượng đăng ký của bạn đã hết, không thể sử dụng tính năng này. Vui lòng nâng cấp lên gói cao hơn hoặc mua gói tài nguyên để tiếp tục sử dụng.",
+ "UnknownChatFetchError": "Xin lỗi, đã xảy ra lỗi yêu cầu không xác định. Vui lòng kiểm tra hoặc thử lại theo thông tin dưới đây."
+ },
+ "stt": {
+ "responseError": "Yêu cầu dịch vụ thất bại, vui lòng kiểm tra cấu hình hoặc thử lại"
+ },
+ "tts": {
+ "responseError": "Yêu cầu dịch vụ thất bại, vui lòng kiểm tra cấu hình hoặc thử lại"
+ },
+ "unlock": {
+ "addProxyUrl": "Thêm URL proxy OpenAI (tùy chọn)",
+ "apiKey": {
+ "description": "Nhập {{name}} API Key của bạn để bắt đầu phiên làm việc",
+ "title": "Sử dụng {{name}} API Key tùy chỉnh"
+ },
+ "closeMessage": "Đóng thông báo",
+ "confirm": "Xác nhận và thử lại",
+ "oauth": {
+ "description": "Quản trị viên đã mở tính năng xác thực đăng nhập thống nhất. Nhấn vào nút bên dưới để đăng nhập và mở khóa ứng dụng",
+ "success": "Đăng nhập thành công",
+ "title": "Đăng nhập tài khoản",
+ "welcome": "Chào mừng bạn!"
+ },
+ "password": {
+ "description": "Quản trị viên đã kích hoạt mã hóa ứng dụng. Nhập mật khẩu ứng dụng để mở khóa. Chỉ cần nhập mật khẩu một lần",
+ "placeholder": "Nhập mật khẩu",
+ "title": "Nhập mật khẩu để mở khóa ứng dụng"
+ },
+ "tabs": {
+ "apiKey": "Khóa API tùy chỉnh",
+ "password": "Mật khẩu"
+ }
+ },
+ "upload": {
+ "desc": "Chi tiết: {{detail}}",
+ "fileOnlySupportInServerMode": "Chế độ triển khai hiện tại không hỗ trợ tải lên các tệp không phải hình ảnh. Nếu bạn muốn tải lên định dạng {{ext}}, vui lòng chuyển sang triển khai cơ sở dữ liệu trên máy chủ hoặc sử dụng dịch vụ {{cloud}}.",
+ "networkError": "Vui lòng kiểm tra xem mạng của bạn có hoạt động bình thường không và kiểm tra cấu hình chia sẻ tệp giữa các miền có đúng không",
+ "title": "Tải lên tệp thất bại, vui lòng kiểm tra kết nối mạng hoặc thử lại sau",
+ "unknownError": "Lỗi: {{reason}}",
+ "uploadFailed": "Tải tệp lên không thành công"
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/file.json b/DigitalHumanWeb/locales/vi-VN/file.json
new file mode 100644
index 0000000..334a5fd
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "Quản lý tệp và kho tri thức của bạn",
+ "detail": {
+ "basic": {
+ "createdAt": "Thời gian tạo",
+ "filename": "Tên tệp",
+ "size": "Kích thước tệp",
+ "title": "Thông tin cơ bản",
+ "type": "Định dạng",
+ "updatedAt": "Thời gian cập nhật"
+ },
+ "data": {
+ "chunkCount": "Số lượng phân đoạn",
+ "embedding": {
+ "default": "Chưa được vector hóa",
+ "error": "Thất bại",
+ "pending": "Đang chờ khởi động",
+ "processing": "Đang xử lý",
+ "success": "Đã hoàn thành"
+ },
+ "embeddingStatus": "Trạng thái vector hóa"
+ }
+ },
+ "empty": "Chưa có tệp/tệp tin nào được tải lên",
+ "header": {
+ "actions": {
+ "newFolder": "Tạo thư mục mới",
+ "uploadFile": "Tải tệp lên",
+ "uploadFolder": "Tải thư mục lên"
+ },
+ "uploadButton": "Tải lên"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "Bạn sắp xóa kho tri thức này, các tệp trong đó sẽ không bị xóa mà sẽ được chuyển vào tất cả tệp. Kho tri thức sau khi xóa sẽ không thể khôi phục, vui lòng cẩn thận khi thực hiện.",
+ "empty": "Nhấp vào <1>+</1> để bắt đầu tạo kho tri thức"
+ },
+ "new": "Tạo kho tri thức mới",
+ "title": "Kho tri thức"
+ },
+ "networkError": "Không thể lấy kho tri thức, vui lòng kiểm tra kết nối mạng và thử lại",
+ "notSupportGuide": {
+ "desc": "Phiên bản triển khai hiện tại là chế độ cơ sở dữ liệu khách hàng, không thể sử dụng chức năng quản lý tệp. Vui lòng chuyển sang <1>chế độ triển khai cơ sở dữ liệu máy chủ</1>, hoặc sử dụng trực tiếp <3>LobeChat Cloud</3>",
+ "features": {
+ "allKind": {
+ "desc": "Hỗ trợ các loại tệp phổ biến, bao gồm các định dạng tài liệu như Word, PPT, Excel, PDF, TXT, cũng như các tệp mã nguồn phổ biến như JS, Python",
+ "title": "Phân tích nhiều loại tệp"
+ },
+ "embeddings": {
+ "desc": "Sử dụng mô hình vector hiệu suất cao để vector hóa các phân đoạn văn bản, thực hiện tìm kiếm ngữ nghĩa nội dung tệp",
+ "title": "Ngữ nghĩa hóa vector"
+ },
+ "repos": {
+ "desc": "Hỗ trợ tạo kho tri thức và cho phép thêm các loại tệp khác nhau, xây dựng kiến thức thuộc lĩnh vực của bạn",
+ "title": "Kho tri thức"
+ }
+ },
+ "title": "Chế độ triển khai hiện tại không hỗ trợ quản lý tệp"
+ },
+ "preview": {
+ "downloadFile": "Tải tệp",
+ "unsupportedFileAndContact": "Định dạng tệp này hiện không hỗ trợ xem trước trực tuyến. Nếu bạn có yêu cầu xem trước, vui lòng <1>phản hồi cho chúng tôi</1>"
+ },
+ "searchFilePlaceholder": "Tìm kiếm tệp",
+ "tab": {
+ "all": "Tất cả tệp",
+ "audios": "Âm thanh",
+ "documents": "Tài liệu",
+ "images": "Hình ảnh",
+ "videos": "Video",
+ "websites": "Trang web"
+ },
+ "title": "Tệp",
+ "uploadDock": {
+ "body": {
+ "collapse": "Thu gọn",
+ "item": {
+ "done": "Đã tải lên",
+ "error": "Tải lên thất bại, vui lòng thử lại",
+ "pending": "Chuẩn bị tải lên...",
+ "processing": "Đang xử lý tệp...",
+ "restTime": "Thời gian còn lại {{time}}"
+ }
+ },
+ "totalCount": "Tổng cộng {{count}} mục",
+ "uploadStatus": {
+ "error": "Lỗi tải lên",
+ "pending": "Đang chờ tải lên",
+ "processing": "Đang tải lên",
+ "success": "Tải lên hoàn tất",
+ "uploading": "Đang tải lên"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/knowledgeBase.json b/DigitalHumanWeb/locales/vi-VN/knowledgeBase.json
new file mode 100644
index 0000000..955d924
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "Tài liệu đã được thêm thành công, <1>xem ngay</1>",
+ "confirm": "Thêm",
+ "id": {
+ "placeholder": "Vui lòng chọn kho kiến thức để thêm",
+ "required": "Vui lòng chọn kho kiến thức",
+ "title": "Kho kiến thức mục tiêu"
+ },
+ "title": "Thêm vào kho kiến thức",
+ "totalFiles": "Đã chọn {{count}} tệp"
+ },
+ "createNew": {
+ "confirm": "Tạo mới",
+ "description": {
+ "placeholder": "Mô tả kho kiến thức (tùy chọn)"
+ },
+ "formTitle": "Thông tin cơ bản",
+ "name": {
+ "placeholder": "Tên kho kiến thức",
+ "required": "Vui lòng điền tên kho kiến thức"
+ },
+ "title": "Tạo kho kiến thức mới"
+ },
+ "tab": {
+ "evals": "Đánh giá",
+ "files": "Tài liệu",
+ "settings": "Cài đặt",
+ "testing": "Kiểm tra hồi phục"
+ },
+ "title": "Kho kiến thức"
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/market.json b/DigitalHumanWeb/locales/vi-VN/market.json
new file mode 100644
index 0000000..94dcb7d
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "Thêm trợ lý",
+ "addAgentAndConverse": "Thêm trợ lý và trò chuyện",
+ "addAgentSuccess": "Thêm thành công",
+ "guide": {
+ "func1": {
+ "desc1": "Trong cửa sổ trò chuyện, nhấp vào cài đặt ở góc trên bên phải để vào trang cài đặt trợ lý bạn muốn gửi;",
+ "desc2": "Nhấp vào nút gửi đến thị trường trợ lý ở góc trên bên phải.",
+ "tag": "Phương pháp một",
+ "title": "Gửi thông qua LobeChat"
+ },
+ "func2": {
+ "button": "Đi đến kho trợ lý trên Github",
+ "desc": "Nếu bạn muốn thêm trợ lý vào chỉ mục, hãy sử dụng agent-template.json hoặc agent-template-full.json để tạo mục nhập trong thư mục plugins, viết mô tả ngắn gọn và đánh dấu phù hợp, sau đó tạo một yêu cầu kéo.",
+ "tag": "Phương pháp hai",
+ "title": "Gửi thông qua Github"
+ }
+ },
+ "search": {
+ "placeholder": "Tìm kiếm tên trợ lý, giới thiệu hoặc từ khóa..."
+ },
+ "sidebar": {
+ "comment": "Diễn đàn",
+ "prompt": "Gợi ý",
+ "title": "Chi tiết trợ lý"
+ },
+ "submitAgent": "Gửi trợ lý",
+ "title": {
+ "allAgents": "Tất cả trợ lý",
+ "recentSubmits": "Gần đây thêm mới"
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/metadata.json b/DigitalHumanWeb/locales/vi-VN/metadata.json
new file mode 100644
index 0000000..365a60c
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} mang đến cho bạn trải nghiệm tốt nhất với ChatGPT, Claude, Gemini, OLLaMA WebUI",
+ "title": "{{appName}}: Công cụ AI cá nhân, giúp bạn có một bộ não thông minh hơn"
+ },
+ "discover": {
+ "assistants": {
+ "description": "Sáng tạo nội dung, viết quảng cáo, hỏi đáp, tạo hình ảnh, tạo video, tạo giọng nói, Agent thông minh, quy trình tự động hóa, tùy chỉnh trợ lý AI / GPTs / OLLaMA của riêng bạn",
+ "title": "Trợ lý AI"
+ },
+ "description": "Sáng tạo nội dung, viết quảng cáo, hỏi đáp, tạo hình ảnh, tạo video, tạo giọng nói, Agent thông minh, quy trình tự động hóa, ứng dụng AI tùy chỉnh, tùy chỉnh bảng điều khiển ứng dụng AI của riêng bạn",
+ "models": {
+ "description": "Khám phá các mô hình AI phổ biến như OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "Mô hình AI"
+ },
+ "plugins": {
+ "description": "Tìm kiếm biểu đồ, học thuật, tạo hình ảnh, tạo video, tạo giọng nói, tự động hóa quy trình làm việc, tích hợp khả năng phong phú của các plugin cho trợ lý của bạn",
+ "title": "Plugin AI"
+ },
+ "providers": {
+ "description": "Khám phá các nhà cung cấp mô hình phổ biến như OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "Nhà cung cấp dịch vụ mô hình AI"
+ },
+ "search": "Tìm kiếm",
+ "title": "Khám Phá"
+ },
+ "plugins": {
+ "description": "Tìm kiếm, tạo biểu đồ, học thuật, tạo hình ảnh, tạo video, tạo giọng nói, quy trình tự động hóa, tùy chỉnh khả năng plugin ToolCall dành riêng cho ChatGPT / Claude",
+ "title": "Thị trường plugin"
+ },
+ "welcome": {
+ "description": "{{appName}} mang đến cho bạn trải nghiệm tốt nhất với ChatGPT, Claude, Gemini, OLLaMA WebUI",
+ "title": "Chào mừng bạn đến với {{appName}}: Công cụ AI cá nhân, giúp bạn có một bộ não thông minh hơn"
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/migration.json b/DigitalHumanWeb/locales/vi-VN/migration.json
new file mode 100644
index 0000000..8bf7e8f
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "Xóa dữ liệu cục bộ",
+ "downloadBackup": "Tải sao lưu dữ liệu",
+ "reUpgrade": "Tái nâng cấp",
+ "start": "Bắt đầu sử dụng",
+ "upgrade": "Nâng cấp ngay"
+ },
+ "clear": {
+ "confirm": "Dữ liệu cục bộ sẽ được xóa (cài đặt toàn cầu không bị ảnh hưởng), vui lòng xác nhận bạn đã tải sao lưu dữ liệu."
+ },
+ "description": "Trong phiên bản mới, việc lưu trữ dữ liệu của {{appName}} đã có bước nhảy vọt lớn. Do đó, chúng tôi cần nâng cấp dữ liệu cũ để mang đến cho bạn trải nghiệm sử dụng tốt hơn.",
+ "features": {
+ "capability": {
+ "desc": "Dựa trên công nghệ IndexedDB, đủ sức chứa tất cả các tin nhắn trò chuyện trong suốt cuộc đời bạn",
+ "title": "Dung lượng lớn"
+ },
+ "performance": {
+ "desc": "Tự động lập chỉ mục hàng triệu tin nhắn, truy vấn phản hồi trong mili giây",
+ "title": "Hiệu suất cao"
+ },
+ "use": {
+ "desc": "Hỗ trợ tìm kiếm theo tiêu đề, mô tả, thẻ, nội dung tin nhắn và cả văn bản dịch, hiệu quả tìm kiếm hàng ngày được nâng cao đáng kể",
+ "title": "Dễ sử dụng hơn"
+ }
+ },
+ "title": "Sự tiến hóa dữ liệu của {{appName}}",
+ "upgrade": {
+ "error": {
+ "subTitle": "Chúng tôi rất tiếc, đã xảy ra sự cố trong quá trình nâng cấp cơ sở dữ liệu. Vui lòng thử các giải pháp sau: A. Xóa dữ liệu cục bộ và nhập lại dữ liệu sao lưu; B. Nhấn nút 'Nâng cấp lại'.\nNếu vẫn gặp lỗi, vui lòng <1>gửi vấn đề</1>, chúng tôi sẽ giúp bạn kiểm tra ngay lập tức.",
+ "title": "Nâng cấp cơ sở dữ liệu thất bại"
+ },
+ "success": {
+ "subTitle": "Cơ sở dữ liệu của {{appName}} đã được nâng cấp lên phiên bản mới nhất, hãy bắt đầu trải nghiệm ngay!",
+ "title": "Nâng cấp cơ sở dữ liệu thành công"
+ }
+ },
+ "upgradeTip": "Việc nâng cấp sẽ mất khoảng 10~20 giây, trong quá trình nâng cấp, vui lòng không đóng {{appName}}."
+ },
+ "migrateError": {
+ "missVersion": "Dữ liệu nhập không có số phiên bản, vui lòng kiểm tra lại tệp và thử lại",
+ "noMigration": "Không tìm thấy phương án di chuyển tương ứng với phiên bản hiện tại, vui lòng kiểm tra lại số phiên bản. Nếu vẫn gặp vấn đề, vui lòng gửi phản hồi về vấn đề"
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/modelProvider.json b/DigitalHumanWeb/locales/vi-VN/modelProvider.json
new file mode 100644
index 0000000..a643f9f
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/modelProvider.json
@@ -0,0 +1,121 @@
+{
+ "azure": {
+ "azureApiVersion": {
+ "desc": "Phiên bản API của Azure, tuân theo định dạng YYYY-MM-DD, tham khảo [phiên bản mới nhất](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions)",
+ "fetch": "Lấy danh sách",
+ "title": "Phiên bản API Azure"
+ },
+ "empty": "Vui lòng nhập ID mô hình để thêm mô hình đầu tiên",
+ "endpoint": {
+ "desc": "Kiểm tra tài nguyên từ cổng Azure, bạn có thể tìm thấy giá trị này trong phần 'Khóa và điểm cuối'",
+ "placeholder": "https://docs-test-001.openai.azure.com",
+ "title": "Địa chỉ API Azure"
+ },
+ "modelListPlaceholder": "Chọn hoặc thêm mô hình OpenAI bạn đã triển khai",
+ "title": "Azure OpenAI",
+ "token": {
+ "desc": "Kiểm tra tài nguyên từ cổng Azure, bạn có thể tìm thấy giá trị này trong phần 'Khóa và điểm cuối'. Có thể sử dụng KEY1 hoặc KEY2",
+ "placeholder": "Azure API Key",
+ "title": "API Key"
+ }
+ },
+ "bedrock": {
+ "accessKeyId": {
+ "desc": "Nhập AWS Access Key Id",
+ "placeholder": "AWS Access Key Id",
+ "title": "AWS Access Key Id"
+ },
+ "checker": {
+ "desc": "Kiểm tra AccessKeyId / SecretAccessKey có được nhập chính xác không"
+ },
+ "region": {
+ "desc": "Nhập AWS Region",
+ "placeholder": "AWS Region",
+ "title": "AWS Region"
+ },
+ "secretAccessKey": {
+ "desc": "Nhập AWS Secret Access Key",
+ "placeholder": "AWS Secret Access Key",
+ "title": "AWS Secret Access Key"
+ },
+ "sessionToken": {
+ "desc": "Nếu bạn đang sử dụng AWS SSO/STS, hãy nhập AWS Session Token của bạn",
+ "placeholder": "AWS Session Token",
+ "title": "AWS Session Token (tùy chọn)"
+ },
+ "title": "Bedrock",
+ "unlock": {
+ "customRegion": "Vùng Dịch vụ Tùy chỉnh",
+ "customSessionToken": "Mã thông báo phiên tùy chỉnh",
+ "description": "Nhập AWS AccessKeyId / SecretAccessKey của bạn để bắt đầu phiên làm việc. Ứng dụng sẽ không lưu trữ cấu hình xác thực của bạn",
+ "title": "Sử dụng Thông tin Xác thực Bedrock tùy chỉnh"
+ }
+ },
+ "github": {
+ "personalAccessToken": {
+ "desc": "Nhập mã truy cập cá nhân Github của bạn, nhấp vào [đây](https://github.com/settings/tokens) để tạo",
+ "placeholder": "ghp_xxxxxx",
+ "title": "GitHub PAT"
+ }
+ },
+ "ollama": {
+ "checker": {
+ "desc": "Kiểm tra địa chỉ proxy có được nhập chính xác không",
+ "title": "Kiểm tra tính liên thông"
+ },
+ "customModelName": {
+ "desc": "Thêm mô hình tùy chỉnh, sử dụng dấu phẩy (,) để tách biệt nhiều mô hình",
+ "placeholder": "vicuna,llava,codellama,llama2:13b-text",
+ "title": "Tên mô hình tùy chỉnh"
+ },
+ "download": {
+ "desc": "Ollama đang tải xuống mô hình này, vui lòng không đóng trang này. Quá trình tải xuống sẽ tiếp tục từ nơi đã bị gián đoạn khi tải lại",
+ "remainingTime": "Thời gian còn lại",
+ "speed": "Tốc độ tải xuống",
+ "title": "Đang tải mô hình {{model}}"
+ },
+ "endpoint": {
+ "desc": "Nhập địa chỉ proxy API của Ollama, có thể để trống nếu không chỉ định cụ thể",
+ "title": "Địa chỉ proxy API"
+ },
+ "setup": {
+ "cors": {
+ "description": "Do vấn đề về an ninh trình duyệt, bạn cần cấu hình CORS cho Ollama trước khi có thể sử dụng bình thường.",
+ "linux": {
+ "env": "Trong phần [Service], thêm `Environment`, thêm biến môi trường OLLAMA_ORIGINS:",
+ "reboot": "Tải lại systemd và khởi động lại Ollama",
+ "systemd": "Gọi systemd để chỉnh sửa dịch vụ ollama:"
+ },
+ "macos": "Vui lòng mở ứng dụng «Terminal», dán lệnh sau và nhấn Enter để chạy",
+ "reboot": "Vui lòng khởi động lại dịch vụ Ollama sau khi hoàn thành",
+ "title": "Cấu hình Ollama cho phép truy cập từ xa",
+ "windows": "Trên Windows, nhấp vào «Control Panel», vào chỉnh sửa biến môi trường hệ thống. Tạo biến môi trường tên là «OLLAMA_ORIGINS» cho tài khoản người dùng của bạn, giá trị là * , nhấp vào «OK/Áp dụng» để lưu lại"
+ },
+ "install": {
+ "description": "Vui lòng xác nhận rằng bạn đã bật Ollama. Nếu chưa tải Ollama, vui lòng truy cập trang web chính thức để <1>tải xuống</1>",
+ "docker": "Nếu bạn muốn sử dụng Docker, Ollama cũng cung cấp hình ảnh Docker chính thức, bạn có thể kéo theo lệnh sau:",
+ "linux": {
+ "command": "Cài đặt bằng lệnh sau:",
+ "manual": "Hoặc bạn cũng có thể tham khảo <1>Hướng dẫn cài đặt thủ công trên Linux</1> để tự cài đặt"
+ },
+ "title": "Cài đặt và mở Ollama ứng dụng trên máy cục bộ",
+ "windowsTab": "Windows (Bản xem trước)"
+ }
+ },
+ "title": "Ollama",
+ "unlock": {
+ "cancel": "Hủy tải xuống",
+ "confirm": "Tải xuống",
+ "description": "Nhập nhãn mô hình Ollama của bạn để tiếp tục phiên làm việc",
+ "downloaded": "{{completed}} / {{total}}",
+ "starting": "Bắt đầu tải xuống...",
+ "title": "Tải xuống mô hình Ollama đã chỉ định"
+ }
+ },
+ "zeroone": {
+ "title": "01.AI Zero One"
+ },
+ "zhipu": {
+ "title": "Zhipu"
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/models.json b/DigitalHumanWeb/locales/vi-VN/models.json
new file mode 100644
index 0000000..96bab9d
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/models.json
@@ -0,0 +1,928 @@
+{
+ "01-ai/Yi-1.5-34B-Chat-16K": {
+ "description": "Yi-1.5 34B, với mẫu huấn luyện phong phú, cung cấp hiệu suất vượt trội trong ứng dụng ngành."
+ },
+ "01-ai/Yi-1.5-9B-Chat-16K": {
+ "description": "Yi-1.5 9B hỗ trợ 16K Tokens, cung cấp khả năng tạo ngôn ngữ hiệu quả và mượt mà."
+ },
+ "360gpt-pro": {
+ "description": "360GPT Pro là thành viên quan trọng trong dòng mô hình AI của 360, đáp ứng nhu cầu đa dạng của các ứng dụng ngôn ngữ tự nhiên với khả năng xử lý văn bản hiệu quả, hỗ trợ hiểu văn bản dài và đối thoại nhiều vòng."
+ },
+ "360gpt-turbo": {
+ "description": "360GPT Turbo cung cấp khả năng tính toán và đối thoại mạnh mẽ, có khả năng hiểu ngữ nghĩa và hiệu suất tạo ra xuất sắc, là giải pháp trợ lý thông minh lý tưởng cho doanh nghiệp và nhà phát triển."
+ },
+ "360gpt-turbo-responsibility-8k": {
+ "description": "360GPT Turbo Responsibility 8K nhấn mạnh an toàn ngữ nghĩa và định hướng trách nhiệm, được thiết kế đặc biệt cho các tình huống ứng dụng có yêu cầu cao về an toàn nội dung, đảm bảo độ chính xác và độ ổn định trong trải nghiệm người dùng."
+ },
+ "360gpt2-pro": {
+ "description": "360GPT2 Pro là mô hình xử lý ngôn ngữ tự nhiên cao cấp do công ty 360 phát hành, có khả năng tạo và hiểu văn bản xuất sắc, đặc biệt trong lĩnh vực tạo ra và sáng tạo, có thể xử lý các nhiệm vụ chuyển đổi ngôn ngữ phức tạp và diễn xuất vai trò."
+ },
+ "4.0Ultra": {
+ "description": "Spark4.0 Ultra là phiên bản mạnh mẽ nhất trong dòng mô hình lớn Xinghuo, nâng cao khả năng hiểu và tóm tắt nội dung văn bản trong khi nâng cấp liên kết tìm kiếm trực tuyến. Đây là giải pháp toàn diện nhằm nâng cao năng suất văn phòng và đáp ứng chính xác nhu cầu, là sản phẩm thông minh dẫn đầu ngành."
+ },
+ "Baichuan2-Turbo": {
+ "description": "Sử dụng công nghệ tăng cường tìm kiếm để kết nối toàn diện giữa mô hình lớn và kiến thức lĩnh vực, kiến thức toàn cầu. Hỗ trợ tải lên nhiều loại tài liệu như PDF, Word và nhập URL, thông tin được thu thập kịp thời và toàn diện, kết quả đầu ra chính xác và chuyên nghiệp."
+ },
+ "Baichuan3-Turbo": {
+ "description": "Tối ưu hóa cho các tình huống doanh nghiệp thường xuyên, hiệu quả được cải thiện đáng kể, chi phí hiệu quả cao. So với mô hình Baichuan2, sáng tạo nội dung tăng 20%, trả lời câu hỏi kiến thức tăng 17%, khả năng đóng vai tăng 40%. Hiệu quả tổng thể tốt hơn GPT3.5."
+ },
+ "Baichuan3-Turbo-128k": {
+ "description": "Có cửa sổ ngữ cảnh siêu dài 128K, tối ưu hóa cho các tình huống doanh nghiệp thường xuyên, hiệu quả được cải thiện đáng kể, chi phí hiệu quả cao. So với mô hình Baichuan2, sáng tạo nội dung tăng 20%, trả lời câu hỏi kiến thức tăng 17%, khả năng đóng vai tăng 40%. Hiệu quả tổng thể tốt hơn GPT3.5."
+ },
+ "Baichuan4": {
+ "description": "Mô hình có khả năng hàng đầu trong nước, vượt trội hơn các mô hình chính thống nước ngoài trong các nhiệm vụ tiếng Trung như bách khoa toàn thư, văn bản dài, sáng tạo nội dung. Cũng có khả năng đa phương tiện hàng đầu trong ngành, thể hiện xuất sắc trong nhiều tiêu chuẩn đánh giá uy tín."
+ },
+ "Gryphe/MythoMax-L2-13b": {
+ "description": "MythoMax-L2 (13B) là một mô hình sáng tạo, phù hợp cho nhiều lĩnh vực ứng dụng và nhiệm vụ phức tạp."
+ },
+ "Max-32k": {
+ "description": "Spark Max 32K được cấu hình với khả năng xử lý ngữ cảnh lớn, khả năng hiểu ngữ cảnh và lý luận logic mạnh mẽ hơn, hỗ trợ đầu vào văn bản 32K token, phù hợp cho việc đọc tài liệu dài, hỏi đáp kiến thức riêng tư và các tình huống khác."
+ },
+ "Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Hermes 2 Mixtral 8x7B DPO là một mô hình kết hợp đa dạng, nhằm cung cấp trải nghiệm sáng tạo xuất sắc."
+ },
+ "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO": {
+ "description": "Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) là mô hình chỉ dẫn chính xác cao, phù hợp cho tính toán phức tạp."
+ },
+ "NousResearch/Nous-Hermes-2-Yi-34B": {
+ "description": "Nous Hermes-2 Yi (34B) cung cấp đầu ra ngôn ngữ tối ưu và khả năng ứng dụng đa dạng."
+ },
+ "Phi-3-5-mini-instruct": {
+ "description": "Cập nhật mô hình Phi-3-mini."
+ },
+ "Phi-3-medium-128k-instruct": {
+ "description": "Mô hình Phi-3-medium giống nhau, nhưng với kích thước ngữ cảnh lớn hơn cho RAG hoặc gợi ý ít."
+ },
+ "Phi-3-medium-4k-instruct": {
+ "description": "Mô hình 14B tham số, chứng minh chất lượng tốt hơn Phi-3-mini, tập trung vào dữ liệu dày đặc lý luận chất lượng cao."
+ },
+ "Phi-3-mini-128k-instruct": {
+ "description": "Mô hình Phi-3-mini giống nhau, nhưng với kích thước ngữ cảnh lớn hơn cho RAG hoặc gợi ý ít."
+ },
+ "Phi-3-mini-4k-instruct": {
+ "description": "Thành viên nhỏ nhất của gia đình Phi-3. Tối ưu hóa cho cả chất lượng và độ trễ thấp."
+ },
+ "Phi-3-small-128k-instruct": {
+ "description": "Mô hình Phi-3-small giống nhau, nhưng với kích thước ngữ cảnh lớn hơn cho RAG hoặc gợi ý ít."
+ },
+ "Phi-3-small-8k-instruct": {
+ "description": "Mô hình 7B tham số, chứng minh chất lượng tốt hơn Phi-3-mini, tập trung vào dữ liệu dày đặc lý luận chất lượng cao."
+ },
+ "Pro-128k": {
+ "description": "Spark Pro-128K được cấu hình với khả năng xử lý ngữ cảnh cực lớn, có thể xử lý tới 128K thông tin ngữ cảnh, đặc biệt phù hợp cho việc phân tích toàn bộ và xử lý mối liên hệ logic lâu dài trong nội dung văn bản dài, có thể cung cấp logic mạch lạc và hỗ trợ trích dẫn đa dạng trong giao tiếp văn bản phức tạp."
+ },
+ "Qwen/Qwen1.5-110B-Chat": {
+ "description": "Là phiên bản thử nghiệm của Qwen2, Qwen1.5 sử dụng dữ liệu quy mô lớn để đạt được chức năng đối thoại chính xác hơn."
+ },
+ "Qwen/Qwen1.5-72B-Chat": {
+ "description": "Qwen 1.5 Chat (72B) cung cấp phản hồi nhanh và khả năng đối thoại tự nhiên, phù hợp cho môi trường đa ngôn ngữ."
+ },
+ "Qwen/Qwen2-72B-Instruct": {
+ "description": "Qwen2 là mô hình ngôn ngữ tổng quát tiên tiến, hỗ trợ nhiều loại chỉ dẫn."
+ },
+ "Qwen/Qwen2.5-14B-Instruct": {
+ "description": "Qwen2.5 là một loạt mô hình ngôn ngữ lớn hoàn toàn mới, nhằm tối ưu hóa việc xử lý các nhiệm vụ theo hướng dẫn."
+ },
+ "Qwen/Qwen2.5-32B-Instruct": {
+ "description": "Qwen2.5 là một loạt mô hình ngôn ngữ lớn hoàn toàn mới, nhằm tối ưu hóa việc xử lý các nhiệm vụ theo hướng dẫn."
+ },
+ "Qwen/Qwen2.5-72B-Instruct": {
+ "description": "Qwen2.5 là một loạt mô hình ngôn ngữ lớn hoàn toàn mới, có khả năng hiểu và tạo ra mạnh mẽ hơn."
+ },
+ "Qwen/Qwen2.5-7B-Instruct": {
+ "description": "Qwen2.5 là một loạt mô hình ngôn ngữ lớn hoàn toàn mới, nhằm tối ưu hóa việc xử lý các nhiệm vụ theo hướng dẫn."
+ },
+ "Qwen/Qwen2.5-Coder-7B-Instruct": {
+ "description": "Qwen2.5-Coder tập trung vào việc viết mã."
+ },
+ "Qwen/Qwen2.5-Math-72B-Instruct": {
+ "description": "Qwen2.5-Math tập trung vào việc giải quyết các vấn đề trong lĩnh vực toán học, cung cấp giải pháp chuyên nghiệp cho các bài toán khó."
+ },
+ "THUDM/glm-4-9b-chat": {
+ "description": "GLM-4 9B là phiên bản mã nguồn mở, cung cấp trải nghiệm đối thoại tối ưu cho các ứng dụng hội thoại."
+ },
+ "abab5.5-chat": {
+ "description": "Hướng đến các tình huống sản xuất, hỗ trợ xử lý nhiệm vụ phức tạp và sinh văn bản hiệu quả, phù hợp cho các ứng dụng trong lĩnh vực chuyên môn."
+ },
+ "abab5.5s-chat": {
+ "description": "Được thiết kế đặc biệt cho các tình huống đối thoại bằng tiếng Trung, cung cấp khả năng sinh đối thoại chất lượng cao bằng tiếng Trung, phù hợp cho nhiều tình huống ứng dụng."
+ },
+ "abab6.5g-chat": {
+ "description": "Được thiết kế đặc biệt cho các cuộc đối thoại đa ngôn ngữ, hỗ trợ sinh đối thoại chất lượng cao bằng tiếng Anh và nhiều ngôn ngữ khác."
+ },
+ "abab6.5s-chat": {
+ "description": "Phù hợp cho nhiều nhiệm vụ xử lý ngôn ngữ tự nhiên, bao gồm sinh văn bản, hệ thống đối thoại, v.v."
+ },
+ "abab6.5t-chat": {
+ "description": "Tối ưu hóa cho các tình huống đối thoại bằng tiếng Trung, cung cấp khả năng sinh đối thoại mượt mà và phù hợp với thói quen diễn đạt tiếng Trung."
+ },
+ "accounts/fireworks/models/firefunction-v1": {
+ "description": "Mô hình gọi hàm mã nguồn mở của Fireworks, cung cấp khả năng thực hiện chỉ dẫn xuất sắc và tính năng tùy chỉnh mở."
+ },
+ "accounts/fireworks/models/firefunction-v2": {
+ "description": "Firefunction-v2 mới nhất của công ty Fireworks là một mô hình gọi hàm hiệu suất cao, được phát triển dựa trên Llama-3 và được tối ưu hóa nhiều, đặc biệt phù hợp cho các tình huống gọi hàm, đối thoại và theo dõi chỉ dẫn."
+ },
+ "accounts/fireworks/models/firellava-13b": {
+ "description": "fireworks-ai/FireLLaVA-13b là một mô hình ngôn ngữ hình ảnh, có thể nhận cả hình ảnh và văn bản đầu vào, được huấn luyện bằng dữ liệu chất lượng cao, phù hợp cho các nhiệm vụ đa mô hình."
+ },
+ "accounts/fireworks/models/gemma2-9b-it": {
+ "description": "Mô hình chỉ dẫn Gemma 2 9B, dựa trên công nghệ trước đó của Google, phù hợp cho nhiều nhiệm vụ tạo văn bản như trả lời câu hỏi, tóm tắt và suy luận."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct": {
+ "description": "Mô hình chỉ dẫn Llama 3 70B, được tối ưu hóa cho đối thoại đa ngôn ngữ và hiểu ngôn ngữ tự nhiên, hiệu suất vượt trội hơn nhiều mô hình cạnh tranh."
+ },
+ "accounts/fireworks/models/llama-v3-70b-instruct-hf": {
+ "description": "Mô hình chỉ dẫn Llama 3 70B (phiên bản HF), giữ nguyên kết quả với thực hiện chính thức, phù hợp cho các nhiệm vụ theo dõi chỉ dẫn chất lượng cao."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct": {
+ "description": "Mô hình chỉ dẫn Llama 3 8B, được tối ưu hóa cho đối thoại và các nhiệm vụ đa ngôn ngữ, thể hiện hiệu suất xuất sắc và hiệu quả."
+ },
+ "accounts/fireworks/models/llama-v3-8b-instruct-hf": {
+ "description": "Mô hình chỉ dẫn Llama 3 8B (phiên bản HF), kết quả nhất quán với thực hiện chính thức, có tính nhất quán cao và tương thích đa nền tảng."
+ },
+ "accounts/fireworks/models/llama-v3p1-405b-instruct": {
+ "description": "Mô hình chỉ dẫn Llama 3.1 405B, có số lượng tham số cực lớn, phù hợp cho các nhiệm vụ phức tạp và theo dõi chỉ dẫn trong các tình huống tải cao."
+ },
+ "accounts/fireworks/models/llama-v3p1-70b-instruct": {
+ "description": "Mô hình chỉ dẫn Llama 3.1 70B, cung cấp khả năng hiểu và sinh ngôn ngữ tự nhiên xuất sắc, là lựa chọn lý tưởng cho các nhiệm vụ đối thoại và phân tích."
+ },
+ "accounts/fireworks/models/llama-v3p1-8b-instruct": {
+ "description": "Mô hình chỉ dẫn Llama 3.1 8B, được tối ưu hóa cho đối thoại đa ngôn ngữ, có thể vượt qua hầu hết các mô hình mã nguồn mở và đóng trong các tiêu chuẩn ngành phổ biến."
+ },
+ "accounts/fireworks/models/mixtral-8x22b-instruct": {
+ "description": "Mô hình chỉ dẫn Mixtral MoE 8x22B, với số lượng tham số lớn và kiến trúc nhiều chuyên gia, hỗ trợ toàn diện cho việc xử lý hiệu quả các nhiệm vụ phức tạp."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct": {
+ "description": "Mô hình chỉ dẫn Mixtral MoE 8x7B, kiến trúc nhiều chuyên gia cung cấp khả năng theo dõi và thực hiện chỉ dẫn hiệu quả."
+ },
+ "accounts/fireworks/models/mixtral-8x7b-instruct-hf": {
+ "description": "Mô hình chỉ dẫn Mixtral MoE 8x7B (phiên bản HF), hiệu suất nhất quán với thực hiện chính thức, phù hợp cho nhiều tình huống nhiệm vụ hiệu quả."
+ },
+ "accounts/fireworks/models/mythomax-l2-13b": {
+ "description": "Mô hình MythoMax L2 13B, kết hợp công nghệ hợp nhất mới, xuất sắc trong việc kể chuyện và đóng vai."
+ },
+ "accounts/fireworks/models/phi-3-vision-128k-instruct": {
+ "description": "Mô hình chỉ dẫn Phi 3 Vision, mô hình đa mô hình nhẹ, có khả năng xử lý thông tin hình ảnh và văn bản phức tạp, với khả năng suy luận mạnh mẽ."
+ },
+ "accounts/fireworks/models/starcoder-16b": {
+ "description": "Mô hình StarCoder 15.5B, hỗ trợ các nhiệm vụ lập trình nâng cao, khả năng đa ngôn ngữ được cải thiện, phù hợp cho việc tạo và hiểu mã phức tạp."
+ },
+ "accounts/fireworks/models/starcoder-7b": {
+ "description": "Mô hình StarCoder 7B, được huấn luyện cho hơn 80 ngôn ngữ lập trình, có khả năng điền mã và hiểu ngữ cảnh xuất sắc."
+ },
+ "accounts/yi-01-ai/models/yi-large": {
+ "description": "Mô hình Yi-Large, có khả năng xử lý đa ngôn ngữ xuất sắc, có thể được sử dụng cho nhiều nhiệm vụ sinh và hiểu ngôn ngữ."
+ },
+ "ai21-jamba-1.5-large": {
+ "description": "Mô hình đa ngôn ngữ với 398B tham số (94B hoạt động), cung cấp cửa sổ ngữ cảnh dài 256K, gọi hàm, đầu ra có cấu trúc và tạo ra nội dung có căn cứ."
+ },
+ "ai21-jamba-1.5-mini": {
+ "description": "Mô hình đa ngôn ngữ với 52B tham số (12B hoạt động), cung cấp cửa sổ ngữ cảnh dài 256K, gọi hàm, đầu ra có cấu trúc và tạo ra nội dung có căn cứ."
+ },
+ "ai21-jamba-instruct": {
+ "description": "Mô hình LLM dựa trên Mamba đạt hiệu suất, chất lượng và hiệu quả chi phí tốt nhất trong ngành."
+ },
+ "anthropic.claude-3-5-sonnet-20240620-v1:0": {
+ "description": "Claude 3.5 Sonnet nâng cao tiêu chuẩn ngành, hiệu suất vượt trội hơn các mô hình cạnh tranh và Claude 3 Opus, thể hiện xuất sắc trong nhiều đánh giá, đồng thời có tốc độ và chi phí của mô hình tầm trung của chúng tôi."
+ },
+ "anthropic.claude-3-haiku-20240307-v1:0": {
+ "description": "Claude 3 Haiku là mô hình nhanh nhất và gọn nhẹ nhất của Anthropic, cung cấp tốc độ phản hồi gần như ngay lập tức. Nó có thể nhanh chóng trả lời các truy vấn và yêu cầu đơn giản. Khách hàng sẽ có thể xây dựng trải nghiệm AI liền mạch mô phỏng tương tác của con người. Claude 3 Haiku có thể xử lý hình ảnh và trả về đầu ra văn bản, với cửa sổ ngữ cảnh 200K."
+ },
+ "anthropic.claude-3-opus-20240229-v1:0": {
+ "description": "Claude 3 Opus là mô hình AI mạnh nhất của Anthropic, có hiệu suất tiên tiến trong các nhiệm vụ phức tạp. Nó có thể xử lý các gợi ý mở và các tình huống chưa thấy, với độ trôi chảy và khả năng hiểu giống con người xuất sắc. Claude 3 Opus thể hiện những khả năng tiên tiến của AI sinh. Claude 3 Opus có thể xử lý hình ảnh và trả về đầu ra văn bản, với cửa sổ ngữ cảnh 200K."
+ },
+ "anthropic.claude-3-sonnet-20240229-v1:0": {
+ "description": "Claude 3 Sonnet của Anthropic đạt được sự cân bằng lý tưởng giữa trí thông minh và tốc độ - đặc biệt phù hợp cho khối lượng công việc doanh nghiệp. Nó cung cấp hiệu quả tối đa với giá thấp hơn đối thủ, được thiết kế để trở thành một máy chủ đáng tin cậy và bền bỉ, phù hợp cho triển khai AI quy mô lớn. Claude 3 Sonnet có thể xử lý hình ảnh và trả về đầu ra văn bản, với cửa sổ ngữ cảnh 200K."
+ },
+ "anthropic.claude-instant-v1": {
+ "description": "Một mô hình nhanh chóng, kinh tế nhưng vẫn rất mạnh mẽ, có thể xử lý một loạt các nhiệm vụ bao gồm đối thoại hàng ngày, phân tích văn bản, tóm tắt và hỏi đáp tài liệu."
+ },
+ "anthropic.claude-v2": {
+ "description": "Mô hình của Anthropic thể hiện khả năng cao trong nhiều nhiệm vụ từ đối thoại phức tạp và sinh nội dung sáng tạo đến tuân thủ chỉ dẫn chi tiết."
+ },
+ "anthropic.claude-v2:1": {
+ "description": "Phiên bản cập nhật của Claude 2, có cửa sổ ngữ cảnh gấp đôi, cùng với độ tin cậy, tỷ lệ ảo giác và độ chính xác dựa trên bằng chứng được cải thiện trong các tài liệu dài và ngữ cảnh RAG."
+ },
+ "anthropic/claude-3-haiku": {
+ "description": "Claude 3 Haiku là mô hình nhanh nhất và nhỏ gọn nhất của Anthropic, được thiết kế để đạt được phản hồi gần như ngay lập tức. Nó có hiệu suất định hướng nhanh chóng và chính xác."
+ },
+ "anthropic/claude-3-opus": {
+ "description": "Claude 3 Opus là mô hình mạnh mẽ nhất của Anthropic, được sử dụng để xử lý các nhiệm vụ phức tạp cao. Nó thể hiện xuất sắc về hiệu suất, trí thông minh, sự trôi chảy và khả năng hiểu biết."
+ },
+ "anthropic/claude-3.5-sonnet": {
+ "description": "Claude 3.5 Sonnet cung cấp khả năng vượt trội hơn Opus và tốc độ nhanh hơn Sonnet, trong khi vẫn giữ giá tương tự. Sonnet đặc biệt xuất sắc trong lập trình, khoa học dữ liệu, xử lý hình ảnh và các nhiệm vụ đại lý."
+ },
+ "aya": {
+ "description": "Aya 23 là mô hình đa ngôn ngữ do Cohere phát hành, hỗ trợ 23 ngôn ngữ, tạo điều kiện thuận lợi cho các ứng dụng ngôn ngữ đa dạng."
+ },
+ "aya:35b": {
+ "description": "Aya 23 là mô hình đa ngôn ngữ do Cohere phát hành, hỗ trợ 23 ngôn ngữ, tạo điều kiện thuận lợi cho các ứng dụng ngôn ngữ đa dạng."
+ },
+ "charglm-3": {
+ "description": "CharGLM-3 được thiết kế đặc biệt cho vai trò và đồng hành cảm xúc, hỗ trợ trí nhớ nhiều vòng siêu dài và đối thoại cá nhân hóa, ứng dụng rộng rãi."
+ },
+ "chatgpt-4o-latest": {
+ "description": "ChatGPT-4o là một mô hình động, được cập nhật theo thời gian thực để giữ phiên bản mới nhất. Nó kết hợp khả năng hiểu và sinh ngôn ngữ mạnh mẽ, phù hợp cho các ứng dụng quy mô lớn, bao gồm dịch vụ khách hàng, giáo dục và hỗ trợ kỹ thuật."
+ },
+ "claude-2.0": {
+ "description": "Claude 2 cung cấp những tiến bộ quan trọng trong khả năng cho doanh nghiệp, bao gồm ngữ cảnh 200K token hàng đầu trong ngành, giảm đáng kể tỷ lệ ảo giác của mô hình, nhắc nhở hệ thống và một tính năng kiểm tra mới: gọi công cụ."
+ },
+ "claude-2.1": {
+ "description": "Claude 2 cung cấp những tiến bộ quan trọng trong khả năng cho doanh nghiệp, bao gồm ngữ cảnh 200K token hàng đầu trong ngành, giảm đáng kể tỷ lệ ảo giác của mô hình, nhắc nhở hệ thống và một tính năng kiểm tra mới: gọi công cụ."
+ },
+ "claude-3-5-sonnet-20240620": {
+ "description": "Claude 3.5 Sonnet cung cấp khả năng vượt trội so với Opus và tốc độ nhanh hơn Sonnet, đồng thời giữ nguyên mức giá như Sonnet. Sonnet đặc biệt xuất sắc trong lập trình, khoa học dữ liệu, xử lý hình ảnh và các nhiệm vụ đại lý."
+ },
+ "claude-3-haiku-20240307": {
+ "description": "Claude 3 Haiku là mô hình nhanh nhất và gọn nhẹ nhất của Anthropic, được thiết kế để đạt được phản hồi gần như ngay lập tức. Nó có hiệu suất định hướng nhanh và chính xác."
+ },
+ "claude-3-opus-20240229": {
+ "description": "Claude 3 Opus là mô hình mạnh mẽ nhất của Anthropic để xử lý các nhiệm vụ phức tạp. Nó thể hiện xuất sắc về hiệu suất, trí thông minh, sự trôi chảy và khả năng hiểu biết."
+ },
+ "claude-3-sonnet-20240229": {
+ "description": "Claude 3 Sonnet cung cấp sự cân bằng lý tưởng giữa trí thông minh và tốc độ cho khối lượng công việc doanh nghiệp. Nó cung cấp hiệu suất tối đa với mức giá thấp hơn, đáng tin cậy và phù hợp cho triển khai quy mô lớn."
+ },
+ "claude-instant-1.2": {
+ "description": "Mô hình của Anthropic được sử dụng cho việc sinh văn bản với độ trễ thấp và thông lượng cao, hỗ trợ sinh hàng trăm trang văn bản."
+ },
+ "codegeex-4": {
+ "description": "CodeGeeX-4 là trợ lý lập trình AI mạnh mẽ, hỗ trợ nhiều ngôn ngữ lập trình với câu hỏi thông minh và hoàn thành mã, nâng cao hiệu suất phát triển."
+ },
+ "codegemma": {
+ "description": "CodeGemma là mô hình ngôn ngữ nhẹ chuyên dụng cho các nhiệm vụ lập trình khác nhau, hỗ trợ lặp lại và tích hợp nhanh chóng."
+ },
+ "codegemma:2b": {
+ "description": "CodeGemma là mô hình ngôn ngữ nhẹ chuyên dụng cho các nhiệm vụ lập trình khác nhau, hỗ trợ lặp lại và tích hợp nhanh chóng."
+ },
+ "codellama": {
+ "description": "Code Llama là một LLM tập trung vào việc sinh và thảo luận mã, kết hợp hỗ trợ cho nhiều ngôn ngữ lập trình, phù hợp cho môi trường phát triển."
+ },
+ "codellama:13b": {
+ "description": "Code Llama là một LLM tập trung vào việc sinh và thảo luận mã, kết hợp hỗ trợ cho nhiều ngôn ngữ lập trình, phù hợp cho môi trường phát triển."
+ },
+ "codellama:34b": {
+ "description": "Code Llama là một LLM tập trung vào việc sinh và thảo luận mã, kết hợp hỗ trợ cho nhiều ngôn ngữ lập trình, phù hợp cho môi trường phát triển."
+ },
+ "codellama:70b": {
+ "description": "Code Llama là một LLM tập trung vào việc sinh và thảo luận mã, kết hợp hỗ trợ cho nhiều ngôn ngữ lập trình, phù hợp cho môi trường phát triển."
+ },
+ "codeqwen": {
+ "description": "CodeQwen1.5 là mô hình ngôn ngữ quy mô lớn được đào tạo trên một lượng lớn dữ liệu mã, chuyên giải quyết các nhiệm vụ lập trình phức tạp."
+ },
+ "codestral": {
+ "description": "Codestral là mô hình mã đầu tiên của Mistral AI, cung cấp hỗ trợ xuất sắc cho các nhiệm vụ sinh mã."
+ },
+ "codestral-latest": {
+ "description": "Codestral là mô hình sinh mã tiên tiến tập trung vào việc sinh mã, tối ưu hóa cho các nhiệm vụ điền vào khoảng trống và hoàn thiện mã."
+ },
+ "cognitivecomputations/dolphin-mixtral-8x22b": {
+ "description": "Dolphin Mixtral 8x22B là mô hình được thiết kế cho việc tuân thủ hướng dẫn, đối thoại và lập trình."
+ },
+ "cohere-command-r": {
+ "description": "Command R là một mô hình sinh tạo có thể mở rộng, nhắm đến RAG và Sử dụng Công cụ để cho phép AI quy mô sản xuất cho doanh nghiệp."
+ },
+ "cohere-command-r-plus": {
+ "description": "Command R+ là mô hình tối ưu hóa RAG hiện đại, được thiết kế để xử lý khối lượng công việc cấp doanh nghiệp."
+ },
+ "command-r": {
+ "description": "Command R là LLM được tối ưu hóa cho các nhiệm vụ đối thoại và ngữ cảnh dài, đặc biệt phù hợp cho tương tác động và quản lý kiến thức."
+ },
+ "command-r-plus": {
+ "description": "Command R+ là một mô hình ngôn ngữ lớn hiệu suất cao, được thiết kế cho các tình huống doanh nghiệp thực tế và ứng dụng phức tạp."
+ },
+ "databricks/dbrx-instruct": {
+ "description": "DBRX Instruct cung cấp khả năng xử lý chỉ dẫn đáng tin cậy, hỗ trợ nhiều ứng dụng trong ngành."
+ },
+ "deepseek-ai/DeepSeek-V2.5": {
+ "description": "DeepSeek V2.5 kết hợp các đặc điểm xuất sắc của các phiên bản trước, tăng cường khả năng tổng quát và mã hóa."
+ },
+ "deepseek-ai/deepseek-llm-67b-chat": {
+ "description": "DeepSeek 67B là mô hình tiên tiến được huấn luyện cho các cuộc đối thoại phức tạp."
+ },
+ "deepseek-chat": {
+ "description": "Mô hình mã nguồn mở mới kết hợp khả năng tổng quát và mã, không chỉ giữ lại khả năng đối thoại tổng quát của mô hình Chat ban đầu và khả năng xử lý mã mạnh mẽ của mô hình Coder, mà còn tốt hơn trong việc phù hợp với sở thích của con người. Hơn nữa, DeepSeek-V2.5 cũng đã đạt được sự cải thiện lớn trong nhiều khía cạnh như nhiệm vụ viết, theo dõi chỉ dẫn."
+ },
+ "deepseek-coder-v2": {
+ "description": "DeepSeek Coder V2 là mô hình mã nguồn mở hỗn hợp chuyên gia, thể hiện xuất sắc trong các nhiệm vụ mã, tương đương với GPT4-Turbo."
+ },
+ "deepseek-coder-v2:236b": {
+ "description": "DeepSeek Coder V2 là mô hình mã nguồn mở hỗn hợp chuyên gia, thể hiện xuất sắc trong các nhiệm vụ mã, tương đương với GPT4-Turbo."
+ },
+ "deepseek-v2": {
+ "description": "DeepSeek V2 là mô hình ngôn ngữ Mixture-of-Experts hiệu quả, phù hợp cho các nhu cầu xử lý tiết kiệm."
+ },
+ "deepseek-v2:236b": {
+ "description": "DeepSeek V2 236B là mô hình mã thiết kế của DeepSeek, cung cấp khả năng sinh mã mạnh mẽ."
+ },
+ "deepseek/deepseek-chat": {
+ "description": "Mô hình mã nguồn mở mới kết hợp khả năng tổng quát và mã, không chỉ giữ lại khả năng đối thoại tổng quát của mô hình Chat ban đầu và khả năng xử lý mã mạnh mẽ của mô hình Coder, mà còn tốt hơn trong việc phù hợp với sở thích của con người. Hơn nữa, DeepSeek-V2.5 cũng đã đạt được sự cải thiện lớn trong nhiều lĩnh vực như nhiệm vụ viết, theo dõi chỉ dẫn."
+ },
+ "emohaa": {
+ "description": "Emohaa là mô hình tâm lý, có khả năng tư vấn chuyên nghiệp, giúp người dùng hiểu các vấn đề cảm xúc."
+ },
+ "gemini-1.0-pro-001": {
+ "description": "Gemini 1.0 Pro 001 (Tuning) cung cấp hiệu suất ổn định và có thể điều chỉnh, là lựa chọn lý tưởng cho các giải pháp nhiệm vụ phức tạp."
+ },
+ "gemini-1.0-pro-002": {
+ "description": "Gemini 1.0 Pro 002 (Tuning) cung cấp hỗ trợ đa phương thức xuất sắc, tập trung vào việc giải quyết hiệu quả các nhiệm vụ phức tạp."
+ },
+ "gemini-1.0-pro-latest": {
+ "description": "Gemini 1.0 Pro là mô hình AI hiệu suất cao của Google, được thiết kế để mở rộng cho nhiều nhiệm vụ."
+ },
+ "gemini-1.5-flash-001": {
+ "description": "Gemini 1.5 Flash 001 là một mô hình đa phương thức hiệu quả, hỗ trợ mở rộng cho nhiều ứng dụng."
+ },
+ "gemini-1.5-flash-002": {
+ "description": "Gemini 1.5 Flash 002 là một mô hình đa phương thức hiệu quả, hỗ trợ mở rộng cho nhiều ứng dụng."
+ },
+ "gemini-1.5-flash-8b-exp-0827": {
+ "description": "Gemini 1.5 Flash 8B 0827 được thiết kế để xử lý các tình huống nhiệm vụ quy mô lớn, cung cấp tốc độ xử lý vô song."
+ },
+ "gemini-1.5-flash-8b-exp-0924": {
+ "description": "Gemini 1.5 Flash 8B 0924 là mô hình thử nghiệm mới nhất, có sự cải thiện đáng kể về hiệu suất trong các trường hợp sử dụng văn bản và đa phương thức."
+ },
+ "gemini-1.5-flash-exp-0827": {
+ "description": "Gemini 1.5 Flash 0827 cung cấp khả năng xử lý đa phương thức được tối ưu hóa, phù hợp cho nhiều tình huống nhiệm vụ phức tạp."
+ },
+ "gemini-1.5-flash-latest": {
+ "description": "Gemini 1.5 Flash là mô hình AI đa phương thức mới nhất của Google, có khả năng xử lý nhanh, hỗ trợ đầu vào văn bản, hình ảnh và video, phù hợp cho việc mở rộng hiệu quả cho nhiều nhiệm vụ."
+ },
+ "gemini-1.5-pro-001": {
+ "description": "Gemini 1.5 Pro 001 là giải pháp AI đa phương thức có thể mở rộng, hỗ trợ nhiều nhiệm vụ phức tạp."
+ },
+ "gemini-1.5-pro-002": {
+ "description": "Gemini 1.5 Pro 002 là mô hình sẵn sàng cho sản xuất mới nhất, cung cấp đầu ra chất lượng cao hơn, đặc biệt là trong các nhiệm vụ toán học, ngữ cảnh dài và thị giác."
+ },
+ "gemini-1.5-pro-exp-0801": {
+ "description": "Gemini 1.5 Pro 0801 cung cấp khả năng xử lý đa phương thức xuất sắc, mang lại sự linh hoạt lớn hơn cho phát triển ứng dụng."
+ },
+ "gemini-1.5-pro-exp-0827": {
+ "description": "Gemini 1.5 Pro 0827 kết hợp công nghệ tối ưu hóa mới nhất, mang lại khả năng xử lý dữ liệu đa phương thức hiệu quả hơn."
+ },
+ "gemini-1.5-pro-latest": {
+ "description": "Gemini 1.5 Pro hỗ trợ lên đến 2 triệu tokens, là lựa chọn lý tưởng cho mô hình đa phương thức trung bình, phù hợp cho hỗ trợ đa diện cho các nhiệm vụ phức tạp."
+ },
+ "gemma-7b-it": {
+ "description": "Gemma 7B phù hợp cho việc xử lý các nhiệm vụ quy mô vừa và nhỏ, đồng thời mang lại hiệu quả chi phí."
+ },
+ "gemma2": {
+ "description": "Gemma 2 là mô hình hiệu quả do Google phát hành, bao gồm nhiều ứng dụng từ nhỏ đến xử lý dữ liệu phức tạp."
+ },
+ "gemma2-9b-it": {
+ "description": "Gemma 2 9B là một mô hình được tối ưu hóa cho các nhiệm vụ cụ thể và tích hợp công cụ."
+ },
+ "gemma2:27b": {
+ "description": "Gemma 2 là mô hình hiệu quả do Google phát hành, bao gồm nhiều ứng dụng từ nhỏ đến xử lý dữ liệu phức tạp."
+ },
+ "gemma2:2b": {
+ "description": "Gemma 2 là mô hình hiệu quả do Google phát hành, bao gồm nhiều ứng dụng từ nhỏ đến xử lý dữ liệu phức tạp."
+ },
+ "general": {
+ "description": "Spark Lite là một mô hình ngôn ngữ lớn nhẹ, có độ trễ cực thấp và khả năng xử lý hiệu quả, hoàn toàn miễn phí và mở, hỗ trợ chức năng tìm kiếm trực tuyến theo thời gian thực. Đặc điểm phản hồi nhanh giúp nó thể hiện xuất sắc trong các ứng dụng suy luận trên thiết bị có công suất thấp và tinh chỉnh mô hình, mang lại hiệu quả chi phí và trải nghiệm thông minh xuất sắc cho người dùng, đặc biệt trong các tình huống hỏi đáp kiến thức, tạo nội dung và tìm kiếm."
+ },
+ "generalv3": {
+ "description": "Spark Pro là một mô hình ngôn ngữ lớn hiệu suất cao được tối ưu hóa cho các lĩnh vực chuyên môn, tập trung vào toán học, lập trình, y tế, giáo dục và nhiều lĩnh vực khác, đồng thời hỗ trợ tìm kiếm trực tuyến và các plugin tích hợp như thời tiết, ngày tháng. Mô hình đã được tối ưu hóa thể hiện xuất sắc và hiệu suất cao trong các nhiệm vụ hỏi đáp kiến thức phức tạp, hiểu ngôn ngữ và sáng tạo văn bản cấp cao, là lựa chọn lý tưởng cho các tình huống ứng dụng chuyên nghiệp."
+ },
+ "generalv3.5": {
+ "description": "Spark3.5 Max là phiên bản toàn diện nhất, hỗ trợ tìm kiếm trực tuyến và nhiều plugin tích hợp. Khả năng cốt lõi đã được tối ưu hóa toàn diện cùng với thiết lập vai trò hệ thống và chức năng gọi hàm, giúp nó thể hiện xuất sắc và nổi bật trong nhiều tình huống ứng dụng phức tạp."
+ },
+ "glm-4": {
+ "description": "GLM-4 là phiên bản flagship cũ phát hành vào tháng 1 năm 2024, hiện đã được GLM-4-0520 mạnh mẽ hơn thay thế."
+ },
+ "glm-4-0520": {
+ "description": "GLM-4-0520 là phiên bản mô hình mới nhất, được thiết kế cho các nhiệm vụ phức tạp và đa dạng, thể hiện xuất sắc."
+ },
+ "glm-4-air": {
+ "description": "GLM-4-Air là phiên bản có giá trị sử dụng cao, hiệu suất gần giống GLM-4, cung cấp tốc độ nhanh và giá cả phải chăng."
+ },
+ "glm-4-airx": {
+ "description": "GLM-4-AirX cung cấp phiên bản hiệu quả của GLM-4-Air, tốc độ suy luận có thể đạt 2.6 lần."
+ },
+ "glm-4-alltools": {
+ "description": "GLM-4-AllTools là một mô hình tác nhân đa chức năng, được tối ưu hóa để hỗ trợ lập kế hoạch chỉ dẫn phức tạp và gọi công cụ, như duyệt web, giải thích mã và sinh văn bản, phù hợp cho thực hiện nhiều nhiệm vụ."
+ },
+ "glm-4-flash": {
+ "description": "GLM-4-Flash là lựa chọn lý tưởng cho các nhiệm vụ đơn giản, tốc độ nhanh nhất và giá cả phải chăng nhất."
+ },
+ "glm-4-long": {
+ "description": "GLM-4-Long hỗ trợ đầu vào văn bản siêu dài, phù hợp cho các nhiệm vụ ghi nhớ và xử lý tài liệu quy mô lớn."
+ },
+ "glm-4-plus": {
+ "description": "GLM-4-Plus là mô hình flagship thông minh cao, có khả năng xử lý văn bản dài và nhiệm vụ phức tạp, hiệu suất được nâng cao toàn diện."
+ },
+ "glm-4v": {
+ "description": "GLM-4V cung cấp khả năng hiểu và suy luận hình ảnh mạnh mẽ, hỗ trợ nhiều nhiệm vụ hình ảnh."
+ },
+ "glm-4v-plus": {
+ "description": "GLM-4V-Plus có khả năng hiểu nội dung video và nhiều hình ảnh, phù hợp cho các nhiệm vụ đa phương tiện."
+ },
+ "google/gemini-flash-1.5-exp": {
+ "description": "Gemini 1.5 Flash 0827 cung cấp khả năng xử lý đa phương thức tối ưu, phù hợp cho nhiều tình huống nhiệm vụ phức tạp."
+ },
+ "google/gemini-pro-1.5-exp": {
+ "description": "Gemini 1.5 Pro 0827 kết hợp công nghệ tối ưu mới nhất, mang lại khả năng xử lý dữ liệu đa phương thức hiệu quả hơn."
+ },
+ "google/gemma-2-27b-it": {
+ "description": "Gemma 2 tiếp tục triết lý thiết kế nhẹ và hiệu quả."
+ },
+ "google/gemma-2-9b-it": {
+ "description": "Gemma 2 là một loạt mô hình văn bản mã nguồn mở nhẹ của Google."
+ },
+ "google/gemma-2-9b-it:free": {
+ "description": "Gemma 2 là loạt mô hình văn bản mã nguồn mở nhẹ của Google."
+ },
+ "google/gemma-2b-it": {
+ "description": "Gemma Instruct (2B) cung cấp khả năng xử lý chỉ dẫn cơ bản, phù hợp cho các ứng dụng nhẹ."
+ },
+ "gpt-3.5-turbo": {
+ "description": "GPT 3.5 Turbo, phù hợp cho nhiều nhiệm vụ sinh và hiểu văn bản, hiện tại trỏ đến gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-0125": {
+ "description": "GPT 3.5 Turbo, phù hợp cho nhiều nhiệm vụ sinh và hiểu văn bản, hiện tại trỏ đến gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-1106": {
+ "description": "GPT 3.5 Turbo, phù hợp cho nhiều nhiệm vụ sinh và hiểu văn bản, hiện tại trỏ đến gpt-3.5-turbo-0125."
+ },
+ "gpt-3.5-turbo-instruct": {
+ "description": "GPT 3.5 Turbo, phù hợp cho nhiều nhiệm vụ sinh và hiểu văn bản, hiện tại trỏ đến gpt-3.5-turbo-0125."
+ },
+ "gpt-4": {
+ "description": "GPT-4 cung cấp một cửa sổ ngữ cảnh lớn hơn, có khả năng xử lý các đầu vào văn bản dài hơn, phù hợp cho các tình huống cần tích hợp thông tin rộng rãi và phân tích dữ liệu."
+ },
+ "gpt-4-0125-preview": {
+ "description": "Mô hình GPT-4 Turbo mới nhất có chức năng hình ảnh. Hiện tại, các yêu cầu hình ảnh có thể sử dụng chế độ JSON và gọi hàm. GPT-4 Turbo là một phiên bản nâng cao, cung cấp hỗ trợ chi phí hiệu quả cho các nhiệm vụ đa phương tiện. Nó tìm thấy sự cân bằng giữa độ chính xác và hiệu quả, phù hợp cho các ứng dụng cần tương tác theo thời gian thực."
+ },
+ "gpt-4-0613": {
+ "description": "GPT-4 cung cấp một cửa sổ ngữ cảnh lớn hơn, có khả năng xử lý các đầu vào văn bản dài hơn, phù hợp cho các tình huống cần tích hợp thông tin rộng rãi và phân tích dữ liệu."
+ },
+ "gpt-4-1106-preview": {
+ "description": "Mô hình GPT-4 Turbo mới nhất có chức năng hình ảnh. Hiện tại, các yêu cầu hình ảnh có thể sử dụng chế độ JSON và gọi hàm. GPT-4 Turbo là một phiên bản nâng cao, cung cấp hỗ trợ chi phí hiệu quả cho các nhiệm vụ đa phương tiện. Nó tìm thấy sự cân bằng giữa độ chính xác và hiệu quả, phù hợp cho các ứng dụng cần tương tác theo thời gian thực."
+ },
+ "gpt-4-1106-vision-preview": {
+ "description": "Mô hình GPT-4 Turbo mới nhất có chức năng hình ảnh. Hiện tại, các yêu cầu hình ảnh có thể sử dụng chế độ JSON và gọi hàm. GPT-4 Turbo là một phiên bản nâng cao, cung cấp hỗ trợ chi phí hiệu quả cho các nhiệm vụ đa phương tiện. Nó tìm thấy sự cân bằng giữa độ chính xác và hiệu quả, phù hợp cho các ứng dụng cần tương tác theo thời gian thực."
+ },
+ "gpt-4-32k": {
+ "description": "GPT-4 cung cấp một cửa sổ ngữ cảnh lớn hơn, có khả năng xử lý các đầu vào văn bản dài hơn, phù hợp cho các tình huống cần tích hợp thông tin rộng rãi và phân tích dữ liệu."
+ },
+ "gpt-4-32k-0613": {
+ "description": "GPT-4 cung cấp một cửa sổ ngữ cảnh lớn hơn, có khả năng xử lý các đầu vào văn bản dài hơn, phù hợp cho các tình huống cần tích hợp thông tin rộng rãi và phân tích dữ liệu."
+ },
+ "gpt-4-turbo": {
+ "description": "Mô hình GPT-4 Turbo mới nhất có chức năng hình ảnh. Hiện tại, các yêu cầu hình ảnh có thể sử dụng chế độ JSON và gọi hàm. GPT-4 Turbo là một phiên bản nâng cao, cung cấp hỗ trợ chi phí hiệu quả cho các nhiệm vụ đa phương tiện. Nó tìm thấy sự cân bằng giữa độ chính xác và hiệu quả, phù hợp cho các ứng dụng cần tương tác theo thời gian thực."
+ },
+ "gpt-4-turbo-2024-04-09": {
+ "description": "Mô hình GPT-4 Turbo mới nhất có chức năng hình ảnh. Hiện tại, các yêu cầu hình ảnh có thể sử dụng chế độ JSON và gọi hàm. GPT-4 Turbo là một phiên bản nâng cao, cung cấp hỗ trợ chi phí hiệu quả cho các nhiệm vụ đa phương tiện. Nó tìm thấy sự cân bằng giữa độ chính xác và hiệu quả, phù hợp cho các ứng dụng cần tương tác theo thời gian thực."
+ },
+ "gpt-4-turbo-preview": {
+ "description": "Mô hình GPT-4 Turbo mới nhất có chức năng hình ảnh. Hiện tại, các yêu cầu hình ảnh có thể sử dụng chế độ JSON và gọi hàm. GPT-4 Turbo là một phiên bản nâng cao, cung cấp hỗ trợ chi phí hiệu quả cho các nhiệm vụ đa phương tiện. Nó tìm thấy sự cân bằng giữa độ chính xác và hiệu quả, phù hợp cho các ứng dụng cần tương tác theo thời gian thực."
+ },
+ "gpt-4-vision-preview": {
+ "description": "Mô hình GPT-4 Turbo mới nhất có chức năng hình ảnh. Hiện tại, các yêu cầu hình ảnh có thể sử dụng chế độ JSON và gọi hàm. GPT-4 Turbo là một phiên bản nâng cao, cung cấp hỗ trợ chi phí hiệu quả cho các nhiệm vụ đa phương tiện. Nó tìm thấy sự cân bằng giữa độ chính xác và hiệu quả, phù hợp cho các ứng dụng cần tương tác theo thời gian thực."
+ },
+ "gpt-4o": {
+ "description": "ChatGPT-4o là một mô hình động, được cập nhật theo thời gian thực để giữ phiên bản mới nhất. Nó kết hợp khả năng hiểu và sinh ngôn ngữ mạnh mẽ, phù hợp cho các ứng dụng quy mô lớn, bao gồm dịch vụ khách hàng, giáo dục và hỗ trợ kỹ thuật."
+ },
+ "gpt-4o-2024-05-13": {
+ "description": "ChatGPT-4o là một mô hình động, được cập nhật theo thời gian thực để giữ phiên bản mới nhất. Nó kết hợp khả năng hiểu và sinh ngôn ngữ mạnh mẽ, phù hợp cho các ứng dụng quy mô lớn, bao gồm dịch vụ khách hàng, giáo dục và hỗ trợ kỹ thuật."
+ },
+ "gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o là một mô hình động, được cập nhật theo thời gian thực để giữ phiên bản mới nhất. Nó kết hợp khả năng hiểu và sinh ngôn ngữ mạnh mẽ, phù hợp cho các ứng dụng quy mô lớn, bao gồm dịch vụ khách hàng, giáo dục và hỗ trợ kỹ thuật."
+ },
+ "gpt-4o-mini": {
+ "description": "GPT-4o mini là mô hình mới nhất do OpenAI phát hành sau GPT-4 Omni, hỗ trợ đầu vào hình ảnh và đầu ra văn bản. Là mô hình nhỏ gọn tiên tiến nhất của họ, nó rẻ hơn nhiều so với các mô hình tiên tiến gần đây khác và rẻ hơn hơn 60% so với GPT-3.5 Turbo. Nó giữ lại trí thông minh tiên tiến nhất trong khi có giá trị sử dụng đáng kể. GPT-4o mini đạt 82% điểm trong bài kiểm tra MMLU và hiện đứng cao hơn GPT-4 về sở thích trò chuyện."
+ },
+ "gryphe/mythomax-l2-13b": {
+ "description": "MythoMax l2 13B là mô hình ngôn ngữ kết hợp giữa sáng tạo và trí thông minh, kết hợp nhiều mô hình hàng đầu."
+ },
+ "internlm/internlm2_5-20b-chat": {
+ "description": "Mô hình mã nguồn mở sáng tạo InternLM2.5, thông qua số lượng tham số lớn, nâng cao trí thông minh trong đối thoại."
+ },
+ "internlm/internlm2_5-7b-chat": {
+ "description": "InternLM2.5 cung cấp giải pháp đối thoại thông minh cho nhiều tình huống."
+ },
+ "jamba-1.5-large": {},
+ "jamba-1.5-mini": {},
+ "llama-3.1-70b-instruct": {
+ "description": "Mô hình Llama 3.1 70B Instruct, có 70B tham số, có thể cung cấp hiệu suất xuất sắc trong các nhiệm vụ sinh văn bản và chỉ dẫn lớn."
+ },
+ "llama-3.1-70b-versatile": {
+ "description": "Llama 3.1 70B cung cấp khả năng suy luận AI mạnh mẽ hơn, phù hợp cho các ứng dụng phức tạp, hỗ trợ xử lý tính toán cực lớn và đảm bảo hiệu quả và độ chính xác cao."
+ },
+ "llama-3.1-8b-instant": {
+ "description": "Llama 3.1 8B là một mô hình hiệu suất cao, cung cấp khả năng sinh văn bản nhanh chóng, rất phù hợp cho các tình huống ứng dụng cần hiệu quả quy mô lớn và tiết kiệm chi phí."
+ },
+ "llama-3.1-8b-instruct": {
+ "description": "Mô hình Llama 3.1 8B Instruct, có 8B tham số, hỗ trợ thực hiện nhiệm vụ chỉ dẫn hình ảnh hiệu quả, cung cấp khả năng sinh văn bản chất lượng."
+ },
+ "llama-3.1-sonar-huge-128k-online": {
+ "description": "Mô hình Llama 3.1 Sonar Huge Online, có 405B tham số, hỗ trợ độ dài ngữ cảnh khoảng 127,000 mã, được thiết kế cho các ứng dụng trò chuyện trực tuyến phức tạp."
+ },
+ "llama-3.1-sonar-large-128k-chat": {
+ "description": "Mô hình Llama 3.1 Sonar Large Chat, có 70B tham số, hỗ trợ độ dài ngữ cảnh khoảng 127,000 mã, phù hợp cho các nhiệm vụ trò chuyện ngoại tuyến phức tạp."
+ },
+ "llama-3.1-sonar-large-128k-online": {
+ "description": "Mô hình Llama 3.1 Sonar Large Online, có 70B tham số, hỗ trợ độ dài ngữ cảnh khoảng 127,000 mã, phù hợp cho các nhiệm vụ trò chuyện có dung lượng lớn và đa dạng."
+ },
+ "llama-3.1-sonar-small-128k-chat": {
+ "description": "Mô hình Llama 3.1 Sonar Small Chat, có 8B tham số, được thiết kế cho trò chuyện ngoại tuyến, hỗ trợ độ dài ngữ cảnh khoảng 127,000 mã."
+ },
+ "llama-3.1-sonar-small-128k-online": {
+ "description": "Mô hình Llama 3.1 Sonar Small Online, có 8B tham số, hỗ trợ độ dài ngữ cảnh khoảng 127,000 mã, được thiết kế cho trò chuyện trực tuyến, có khả năng xử lý hiệu quả các tương tác văn bản khác nhau."
+ },
+ "llama3-70b-8192": {
+ "description": "Meta Llama 3 70B cung cấp khả năng xử lý phức tạp vô song, được thiết kế riêng cho các dự án yêu cầu cao."
+ },
+ "llama3-8b-8192": {
+ "description": "Meta Llama 3 8B mang lại hiệu suất suy luận chất lượng cao, phù hợp cho nhu cầu ứng dụng đa dạng."
+ },
+ "llama3-groq-70b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 70B Tool Use cung cấp khả năng gọi công cụ mạnh mẽ, hỗ trợ xử lý hiệu quả cho các nhiệm vụ phức tạp."
+ },
+ "llama3-groq-8b-8192-tool-use-preview": {
+ "description": "Llama 3 Groq 8B Tool Use là mô hình được tối ưu hóa cho việc sử dụng công cụ hiệu quả, hỗ trợ tính toán song song nhanh chóng."
+ },
+ "llama3.1": {
+ "description": "Llama 3.1 là mô hình tiên tiến do Meta phát hành, hỗ trợ lên đến 405B tham số, có thể áp dụng cho các cuộc đối thoại phức tạp, dịch đa ngôn ngữ và phân tích dữ liệu."
+ },
+ "llama3.1:405b": {
+ "description": "Llama 3.1 là mô hình tiên tiến do Meta phát hành, hỗ trợ lên đến 405B tham số, có thể áp dụng cho các cuộc đối thoại phức tạp, dịch đa ngôn ngữ và phân tích dữ liệu."
+ },
+ "llama3.1:70b": {
+ "description": "Llama 3.1 là mô hình tiên tiến do Meta phát hành, hỗ trợ lên đến 405B tham số, có thể áp dụng cho các cuộc đối thoại phức tạp, dịch đa ngôn ngữ và phân tích dữ liệu."
+ },
+ "llava": {
+ "description": "LLaVA là mô hình đa phương thức kết hợp bộ mã hóa hình ảnh và Vicuna, phục vụ cho việc hiểu biết mạnh mẽ về hình ảnh và ngôn ngữ."
+ },
+ "llava-v1.5-7b-4096-preview": {
+ "description": "LLaVA 1.5 7B cung cấp khả năng xử lý hình ảnh tích hợp, tạo ra đầu ra phức tạp thông qua đầu vào thông tin hình ảnh."
+ },
+ "llava:13b": {
+ "description": "LLaVA là mô hình đa phương thức kết hợp bộ mã hóa hình ảnh và Vicuna, phục vụ cho việc hiểu biết mạnh mẽ về hình ảnh và ngôn ngữ."
+ },
+ "llava:34b": {
+ "description": "LLaVA là mô hình đa phương thức kết hợp bộ mã hóa hình ảnh và Vicuna, phục vụ cho việc hiểu biết mạnh mẽ về hình ảnh và ngôn ngữ."
+ },
+ "mathstral": {
+ "description": "MathΣtral được thiết kế cho nghiên cứu khoa học và suy luận toán học, cung cấp khả năng tính toán hiệu quả và giải thích kết quả."
+ },
+ "meta-llama-3-70b-instruct": {
+ "description": "Mô hình 70 tỷ tham số mạnh mẽ, xuất sắc trong lý luận, lập trình và các ứng dụng ngôn ngữ rộng lớn."
+ },
+ "meta-llama-3-8b-instruct": {
+ "description": "Mô hình 8 tỷ tham số đa năng, tối ưu hóa cho các tác vụ đối thoại và tạo văn bản."
+ },
+ "meta-llama-3.1-405b-instruct": {
+ "description": "Các mô hình văn bản chỉ được tinh chỉnh theo hướng dẫn Llama 3.1 được tối ưu hóa cho các trường hợp sử dụng đối thoại đa ngôn ngữ và vượt trội hơn nhiều mô hình trò chuyện mã nguồn mở và đóng có sẵn trên các tiêu chuẩn ngành phổ biến."
+ },
+ "meta-llama-3.1-70b-instruct": {
+ "description": "Các mô hình văn bản chỉ được tinh chỉnh theo hướng dẫn Llama 3.1 được tối ưu hóa cho các trường hợp sử dụng đối thoại đa ngôn ngữ và vượt trội hơn nhiều mô hình trò chuyện mã nguồn mở và đóng có sẵn trên các tiêu chuẩn ngành phổ biến."
+ },
+ "meta-llama-3.1-8b-instruct": {
+ "description": "Các mô hình văn bản chỉ được tinh chỉnh theo hướng dẫn Llama 3.1 được tối ưu hóa cho các trường hợp sử dụng đối thoại đa ngôn ngữ và vượt trội hơn nhiều mô hình trò chuyện mã nguồn mở và đóng có sẵn trên các tiêu chuẩn ngành phổ biến."
+ },
+ "meta-llama/Llama-2-13b-chat-hf": {
+ "description": "LLaMA-2 Chat (13B) cung cấp khả năng xử lý ngôn ngữ xuất sắc và trải nghiệm tương tác tuyệt vời."
+ },
+ "meta-llama/Llama-3-70b-chat-hf": {
+ "description": "LLaMA-3 Chat (70B) là mô hình trò chuyện mạnh mẽ, hỗ trợ các nhu cầu đối thoại phức tạp."
+ },
+ "meta-llama/Llama-3-8b-chat-hf": {
+ "description": "LLaMA-3 Chat (8B) cung cấp hỗ trợ đa ngôn ngữ, bao gồm nhiều lĩnh vực kiến thức phong phú."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Lite": {
+ "description": "Llama 3 70B Instruct Lite phù hợp cho các môi trường cần hiệu suất cao và độ trễ thấp."
+ },
+ "meta-llama/Meta-Llama-3-70B-Instruct-Turbo": {
+ "description": "Llama 3 70B Instruct Turbo cung cấp khả năng hiểu và sinh ngôn ngữ xuất sắc, phù hợp cho các nhiệm vụ tính toán khắt khe nhất."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Lite": {
+ "description": "Llama 3 8B Instruct Lite phù hợp cho các môi trường hạn chế tài nguyên, cung cấp hiệu suất cân bằng xuất sắc."
+ },
+ "meta-llama/Meta-Llama-3-8B-Instruct-Turbo": {
+ "description": "Llama 3 8B Instruct Turbo là một mô hình ngôn ngữ lớn hiệu suất cao, hỗ trợ nhiều tình huống ứng dụng."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct": {
+ "description": "LLaMA 3.1 405B là mô hình mạnh mẽ cho việc đào tạo trước và điều chỉnh theo hướng dẫn."
+ },
+ "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo": {
+ "description": "Mô hình Llama 3.1 Turbo 405B cung cấp hỗ trợ ngữ cảnh dung lượng lớn cho xử lý dữ liệu lớn, thể hiện xuất sắc trong các ứng dụng trí tuệ nhân tạo quy mô lớn."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct": {
+ "description": "LLaMA 3.1 70B cung cấp hỗ trợ đối thoại hiệu quả đa ngôn ngữ."
+ },
+ "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": {
+ "description": "Mô hình Llama 3.1 70B được tinh chỉnh để phù hợp với các ứng dụng tải cao, định lượng đến FP8 cung cấp khả năng tính toán và độ chính xác hiệu quả hơn, đảm bảo hiệu suất xuất sắc trong các tình huống phức tạp."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct": {
+ "description": "LLaMA 3.1 cung cấp hỗ trợ đa ngôn ngữ, là một trong những mô hình sinh nổi bật trong ngành."
+ },
+ "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo": {
+ "description": "Mô hình Llama 3.1 8B sử dụng định lượng FP8, hỗ trợ lên đến 131,072 mã ngữ cảnh, là một trong những mô hình mã nguồn mở hàng đầu, phù hợp cho các nhiệm vụ phức tạp, vượt trội hơn nhiều tiêu chuẩn ngành."
+ },
+ "meta-llama/llama-3-70b-instruct": {
+ "description": "Llama 3 70B Instruct được tối ưu hóa cho các tình huống đối thoại chất lượng cao, thể hiện xuất sắc trong nhiều đánh giá của con người."
+ },
+ "meta-llama/llama-3-8b-instruct": {
+ "description": "Llama 3 8B Instruct tối ưu hóa cho các tình huống đối thoại chất lượng cao, hiệu suất vượt trội hơn nhiều mô hình đóng nguồn."
+ },
+ "meta-llama/llama-3.1-405b-instruct": {
+ "description": "Llama 3.1 405B Instruct là phiên bản mới nhất do Meta phát hành, tối ưu hóa cho việc tạo ra các cuộc đối thoại chất lượng cao, vượt qua nhiều mô hình đóng nguồn hàng đầu."
+ },
+ "meta-llama/llama-3.1-70b-instruct": {
+ "description": "Llama 3.1 70B Instruct được thiết kế đặc biệt cho các cuộc đối thoại chất lượng cao, thể hiện xuất sắc trong các đánh giá của con người, đặc biệt phù hợp cho các tình huống tương tác cao."
+ },
+ "meta-llama/llama-3.1-8b-instruct": {
+ "description": "Llama 3.1 8B Instruct là phiên bản mới nhất do Meta phát hành, tối ưu hóa cho các tình huống đối thoại chất lượng cao, vượt trội hơn nhiều mô hình đóng nguồn hàng đầu."
+ },
+ "meta-llama/llama-3.1-8b-instruct:free": {
+ "description": "LLaMA 3.1 cung cấp hỗ trợ đa ngôn ngữ, là một trong những mô hình sinh hàng đầu trong ngành."
+ },
+ "meta.llama3-1-405b-instruct-v1:0": {
+ "description": "Meta Llama 3.1 405B Instruct là mô hình lớn nhất và mạnh mẽ nhất trong mô hình Llama 3.1 Instruct, là một mô hình sinh dữ liệu và suy luận đối thoại tiên tiến, cũng có thể được sử dụng làm nền tảng cho việc tiền huấn luyện hoặc tinh chỉnh chuyên sâu trong các lĩnh vực cụ thể. Các mô hình ngôn ngữ lớn đa ngôn ngữ (LLMs) mà Llama 3.1 cung cấp là một tập hợp các mô hình sinh đã được tiền huấn luyện và điều chỉnh theo chỉ dẫn, bao gồm kích thước 8B, 70B và 405B (đầu vào/đầu ra văn bản). Các mô hình văn bản điều chỉnh theo chỉ dẫn của Llama 3.1 (8B, 70B, 405B) được tối ưu hóa cho các trường hợp đối thoại đa ngôn ngữ và đã vượt qua nhiều mô hình trò chuyện mã nguồn mở có sẵn trong các bài kiểm tra chuẩn ngành phổ biến. Llama 3.1 được thiết kế để sử dụng cho nhiều mục đích thương mại và nghiên cứu bằng nhiều ngôn ngữ. Các mô hình văn bản điều chỉnh theo chỉ dẫn phù hợp cho các cuộc trò chuyện giống như trợ lý, trong khi các mô hình đã được tiền huấn luyện có thể thích ứng với nhiều nhiệm vụ sinh ngôn ngữ tự nhiên khác nhau. Mô hình Llama 3.1 cũng hỗ trợ việc cải thiện các mô hình khác bằng cách sử dụng đầu ra của nó, bao gồm sinh dữ liệu tổng hợp và tinh chỉnh. Llama 3.1 là một mô hình ngôn ngữ tự hồi quy sử dụng kiến trúc biến áp tối ưu. Phiên bản điều chỉnh sử dụng tinh chỉnh có giám sát (SFT) và học tăng cường có phản hồi từ con người (RLHF) để phù hợp với sở thích của con người về tính hữu ích và an toàn."
+ },
+ "meta.llama3-1-70b-instruct-v1:0": {
+ "description": "Phiên bản cập nhật của Meta Llama 3.1 70B Instruct, bao gồm độ dài ngữ cảnh mở rộng 128K, tính đa ngôn ngữ và khả năng suy luận cải tiến. Các mô hình ngôn ngữ lớn (LLMs) đa ngôn ngữ do Llama 3.1 cung cấp là một tập hợp các mô hình sinh đã được huấn luyện trước và điều chỉnh theo chỉ dẫn, bao gồm kích thước 8B, 70B và 405B (đầu vào/đầu ra văn bản). Các mô hình văn bản điều chỉnh theo chỉ dẫn của Llama 3.1 (8B, 70B, 405B) được tối ưu hóa cho các trường hợp đối thoại đa ngôn ngữ và đã vượt qua nhiều mô hình trò chuyện mã nguồn mở có sẵn trong các bài kiểm tra chuẩn ngành phổ biến. Llama 3.1 được thiết kế cho các mục đích thương mại và nghiên cứu đa ngôn ngữ. Các mô hình văn bản điều chỉnh theo chỉ dẫn phù hợp cho các cuộc trò chuyện giống như trợ lý, trong khi các mô hình đã được huấn luyện trước có thể thích ứng với nhiều nhiệm vụ sinh ngôn ngữ tự nhiên khác nhau. Mô hình Llama 3.1 cũng hỗ trợ việc sử dụng đầu ra của mô hình để cải thiện các mô hình khác, bao gồm tạo dữ liệu tổng hợp và tinh chỉnh. Llama 3.1 là mô hình ngôn ngữ tự hồi quy sử dụng kiến trúc biến áp được tối ưu hóa. Phiên bản điều chỉnh sử dụng tinh chỉnh giám sát (SFT) và học tăng cường có phản hồi của con người (RLHF) để phù hợp với sở thích của con người về tính hữu ích và an toàn."
+ },
+ "meta.llama3-1-8b-instruct-v1:0": {
+ "description": "Phiên bản cập nhật của Meta Llama 3.1 8B Instruct, bao gồm độ dài ngữ cảnh mở rộng 128K, tính đa ngôn ngữ và khả năng suy luận cải tiến. Các mô hình ngôn ngữ lớn (LLMs) đa ngôn ngữ do Llama 3.1 cung cấp là một tập hợp các mô hình sinh đã được huấn luyện trước và điều chỉnh theo chỉ dẫn, bao gồm kích thước 8B, 70B và 405B (đầu vào/đầu ra văn bản). Các mô hình văn bản điều chỉnh theo chỉ dẫn của Llama 3.1 (8B, 70B, 405B) được tối ưu hóa cho các trường hợp đối thoại đa ngôn ngữ và đã vượt qua nhiều mô hình trò chuyện mã nguồn mở có sẵn trong các bài kiểm tra chuẩn ngành phổ biến. Llama 3.1 được thiết kế cho các mục đích thương mại và nghiên cứu đa ngôn ngữ. Các mô hình văn bản điều chỉnh theo chỉ dẫn phù hợp cho các cuộc trò chuyện giống như trợ lý, trong khi các mô hình đã được huấn luyện trước có thể thích ứng với nhiều nhiệm vụ sinh ngôn ngữ tự nhiên khác nhau. Mô hình Llama 3.1 cũng hỗ trợ việc sử dụng đầu ra của mô hình để cải thiện các mô hình khác, bao gồm tạo dữ liệu tổng hợp và tinh chỉnh. Llama 3.1 là mô hình ngôn ngữ tự hồi quy sử dụng kiến trúc biến áp được tối ưu hóa. Phiên bản điều chỉnh sử dụng tinh chỉnh giám sát (SFT) và học tăng cường có phản hồi của con người (RLHF) để phù hợp với sở thích của con người về tính hữu ích và an toàn."
+ },
+ "meta.llama3-70b-instruct-v1:0": {
+ "description": "Meta Llama 3 là một mô hình ngôn ngữ lớn (LLM) mở dành cho các nhà phát triển, nhà nghiên cứu và doanh nghiệp, nhằm giúp họ xây dựng, thử nghiệm và mở rộng ý tưởng AI sinh một cách có trách nhiệm. Là một phần của hệ thống cơ sở hạ tầng đổi mới toàn cầu, nó rất phù hợp cho việc tạo nội dung, AI đối thoại, hiểu ngôn ngữ, nghiên cứu và ứng dụng doanh nghiệp."
+ },
+ "meta.llama3-8b-instruct-v1:0": {
+ "description": "Meta Llama 3 là một mô hình ngôn ngữ lớn (LLM) mở dành cho các nhà phát triển, nhà nghiên cứu và doanh nghiệp, nhằm giúp họ xây dựng, thử nghiệm và mở rộng ý tưởng AI sinh một cách có trách nhiệm. Là một phần của hệ thống cơ sở hạ tầng đổi mới toàn cầu, nó rất phù hợp cho các thiết bị biên và thời gian huấn luyện nhanh hơn với khả năng tính toán và tài nguyên hạn chế."
+ },
+ "microsoft/wizardlm 2-7b": {
+ "description": "WizardLM 2 7B là mô hình nhẹ và nhanh mới nhất của Microsoft AI, hiệu suất gần gấp 10 lần so với các mô hình mở nguồn hiện có."
+ },
+ "microsoft/wizardlm-2-8x22b": {
+ "description": "WizardLM-2 8x22B là mô hình Wizard tiên tiến nhất của Microsoft AI, thể hiện hiệu suất cực kỳ cạnh tranh."
+ },
+ "minicpm-v": {
+ "description": "MiniCPM-V là mô hình đa phương thức thế hệ mới do OpenBMB phát triển, có khả năng nhận diện OCR xuất sắc và hiểu biết đa phương thức, hỗ trợ nhiều ứng dụng khác nhau."
+ },
+ "mistral": {
+ "description": "Mistral là mô hình 7B do Mistral AI phát hành, phù hợp cho các nhu cầu xử lý ngôn ngữ đa dạng."
+ },
+ "mistral-large": {
+ "description": "Mixtral Large là mô hình hàng đầu của Mistral, kết hợp khả năng sinh mã, toán học và suy luận, hỗ trợ cửa sổ ngữ cảnh 128k."
+ },
+ "mistral-large-2407": {
+ "description": "Mistral Large (2407) là một Mô hình Ngôn ngữ Lớn (LLM) tiên tiến với khả năng lý luận, kiến thức và lập trình hiện đại."
+ },
+ "mistral-large-latest": {
+ "description": "Mistral Large là mô hình lớn hàng đầu, chuyên về các nhiệm vụ đa ngôn ngữ, suy luận phức tạp và sinh mã, là lựa chọn lý tưởng cho các ứng dụng cao cấp."
+ },
+ "mistral-nemo": {
+ "description": "Mistral Nemo được phát triển hợp tác giữa Mistral AI và NVIDIA, là mô hình 12B hiệu suất cao."
+ },
+ "mistral-small": {
+ "description": "Mistral Small có thể được sử dụng cho bất kỳ nhiệm vụ nào dựa trên ngôn ngữ yêu cầu hiệu suất cao và độ trễ thấp."
+ },
+ "mistral-small-latest": {
+ "description": "Mistral Small là lựa chọn hiệu quả về chi phí, nhanh chóng và đáng tin cậy, phù hợp cho các trường hợp như dịch thuật, tóm tắt và phân tích cảm xúc."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.1": {
+ "description": "Mistral (7B) Instruct nổi bật với hiệu suất cao, phù hợp cho nhiều nhiệm vụ ngôn ngữ."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.2": {
+ "description": "Mistral 7B là mô hình fine-tuning theo yêu cầu, cung cấp giải pháp tối ưu cho các nhiệm vụ."
+ },
+ "mistralai/Mistral-7B-Instruct-v0.3": {
+ "description": "Mistral (7B) Instruct v0.3 cung cấp khả năng tính toán hiệu quả và hiểu ngôn ngữ tự nhiên, phù hợp cho nhiều ứng dụng."
+ },
+ "mistralai/Mixtral-8x22B-Instruct-v0.1": {
+ "description": "Mixtral-8x22B Instruct (141B) là một mô hình ngôn ngữ lớn siêu cấp, hỗ trợ nhu cầu xử lý cực cao."
+ },
+ "mistralai/Mixtral-8x7B-Instruct-v0.1": {
+ "description": "Mixtral 8x7B là mô hình chuyên gia hỗn hợp thưa được tiền huấn luyện, dùng cho các nhiệm vụ văn bản tổng quát."
+ },
+ "mistralai/mistral-7b-instruct": {
+ "description": "Mistral 7B Instruct là mô hình tiêu chuẩn ngành với tốc độ tối ưu hóa và hỗ trợ ngữ cảnh dài."
+ },
+ "mistralai/mistral-nemo": {
+ "description": "Mistral Nemo là mô hình 7.3B tham số hỗ trợ đa ngôn ngữ và lập trình hiệu suất cao."
+ },
+ "mixtral": {
+ "description": "Mixtral là mô hình chuyên gia của Mistral AI, có trọng số mã nguồn mở và cung cấp hỗ trợ cho việc sinh mã và hiểu ngôn ngữ."
+ },
+ "mixtral-8x7b-32768": {
+ "description": "Mixtral 8x7B cung cấp khả năng tính toán song song có độ dung sai cao, phù hợp cho các nhiệm vụ phức tạp."
+ },
+ "mixtral:8x22b": {
+ "description": "Mixtral là mô hình chuyên gia của Mistral AI, có trọng số mã nguồn mở và cung cấp hỗ trợ cho việc sinh mã và hiểu ngôn ngữ."
+ },
+ "moonshot-v1-128k": {
+ "description": "Moonshot V1 128K là một mô hình có khả năng xử lý ngữ cảnh siêu dài, phù hợp cho việc sinh văn bản siêu dài, đáp ứng nhu cầu nhiệm vụ sinh phức tạp, có thể xử lý nội dung lên đến 128.000 tokens, rất phù hợp cho nghiên cứu, học thuật và sinh tài liệu lớn."
+ },
+ "moonshot-v1-32k": {
+ "description": "Moonshot V1 32K cung cấp khả năng xử lý ngữ cảnh độ dài trung bình, có thể xử lý 32.768 tokens, đặc biệt phù hợp cho việc sinh các tài liệu dài và đối thoại phức tạp, ứng dụng trong sáng tạo nội dung, sinh báo cáo và hệ thống đối thoại."
+ },
+ "moonshot-v1-8k": {
+ "description": "Moonshot V1 8K được thiết kế đặc biệt cho các nhiệm vụ sinh văn bản ngắn, có hiệu suất xử lý cao, có thể xử lý 8.192 tokens, rất phù hợp cho các cuộc đối thoại ngắn, ghi chú nhanh và sinh nội dung nhanh chóng."
+ },
+ "nousresearch/hermes-2-pro-llama-3-8b": {
+ "description": "Hermes 2 Pro Llama 3 8B là phiên bản nâng cấp của Nous Hermes 2, bao gồm bộ dữ liệu phát triển nội bộ mới nhất."
+ },
+ "o1-mini": {
+ "description": "o1-mini là một mô hình suy diễn nhanh chóng và tiết kiệm chi phí, được thiết kế cho các ứng dụng lập trình, toán học và khoa học. Mô hình này có ngữ cảnh 128K và thời điểm cắt kiến thức vào tháng 10 năm 2023."
+ },
+ "o1-preview": {
+ "description": "o1 là mô hình suy diễn mới của OpenAI, phù hợp cho các nhiệm vụ phức tạp cần kiến thức tổng quát rộng rãi. Mô hình này có ngữ cảnh 128K và thời điểm cắt kiến thức vào tháng 10 năm 2023."
+ },
+ "open-codestral-mamba": {
+ "description": "Codestral Mamba là mô hình ngôn ngữ Mamba 2 tập trung vào sinh mã, cung cấp hỗ trợ mạnh mẽ cho các nhiệm vụ mã và suy luận tiên tiến."
+ },
+ "open-mistral-7b": {
+ "description": "Mistral 7B là một mô hình nhỏ gọn nhưng hiệu suất cao, chuyên về xử lý hàng loạt và các nhiệm vụ đơn giản như phân loại và sinh văn bản, có khả năng suy luận tốt."
+ },
+ "open-mistral-nemo": {
+ "description": "Mistral Nemo là một mô hình 12B được phát triển hợp tác với Nvidia, cung cấp hiệu suất suy luận và mã hóa xuất sắc, dễ dàng tích hợp và thay thế."
+ },
+ "open-mixtral-8x22b": {
+ "description": "Mixtral 8x22B là một mô hình chuyên gia lớn hơn, tập trung vào các nhiệm vụ phức tạp, cung cấp khả năng suy luận xuất sắc và thông lượng cao hơn."
+ },
+ "open-mixtral-8x7b": {
+ "description": "Mixtral 8x7B là một mô hình chuyên gia thưa thớt, sử dụng nhiều tham số để tăng tốc độ suy luận, phù hợp cho việc xử lý đa ngôn ngữ và sinh mã."
+ },
+ "openai/gpt-4o-2024-08-06": {
+ "description": "ChatGPT-4o là một mô hình động, được cập nhật theo thời gian để giữ phiên bản mới nhất. Nó kết hợp khả năng hiểu và sinh ngôn ngữ mạnh mẽ, phù hợp cho các ứng dụng quy mô lớn, bao gồm dịch vụ khách hàng, giáo dục và hỗ trợ kỹ thuật."
+ },
+ "openai/gpt-4o-mini": {
+ "description": "GPT-4o mini là mô hình mới nhất của OpenAI, được phát hành sau GPT-4 Omni, hỗ trợ đầu vào hình ảnh và văn bản, và đầu ra văn bản. Là mô hình nhỏ tiên tiến nhất của họ, nó rẻ hơn nhiều so với các mô hình tiên tiến gần đây khác và rẻ hơn hơn 60% so với GPT-3.5 Turbo. Nó giữ lại trí thông minh tiên tiến nhất trong khi có giá trị sử dụng đáng kể. GPT-4o mini đạt 82% điểm trong bài kiểm tra MMLU và hiện đứng đầu về sở thích trò chuyện so với GPT-4."
+ },
+ "openai/o1-mini": {
+ "description": "o1-mini là một mô hình suy diễn nhanh chóng và tiết kiệm chi phí, được thiết kế cho các ứng dụng lập trình, toán học và khoa học. Mô hình này có ngữ cảnh 128K và thời điểm cắt kiến thức vào tháng 10 năm 2023."
+ },
+ "openai/o1-preview": {
+ "description": "o1 là mô hình suy diễn mới của OpenAI, phù hợp cho các nhiệm vụ phức tạp cần kiến thức tổng quát rộng rãi. Mô hình này có ngữ cảnh 128K và thời điểm cắt kiến thức vào tháng 10 năm 2023."
+ },
+ "openchat/openchat-7b": {
+ "description": "OpenChat 7B là thư viện mô hình ngôn ngữ mã nguồn mở được tinh chỉnh bằng chiến lược 'C-RLFT (tinh chỉnh tăng cường có điều kiện)'."
+ },
+ "openrouter/auto": {
+ "description": "Dựa trên độ dài ngữ cảnh, chủ đề và độ phức tạp, yêu cầu của bạn sẽ được gửi đến Llama 3 70B Instruct, Claude 3.5 Sonnet (tự điều chỉnh) hoặc GPT-4o."
+ },
+ "phi3": {
+ "description": "Phi-3 là mô hình mở nhẹ do Microsoft phát hành, phù hợp cho việc tích hợp hiệu quả và suy luận kiến thức quy mô lớn."
+ },
+ "phi3:14b": {
+ "description": "Phi-3 là mô hình mở nhẹ do Microsoft phát hành, phù hợp cho việc tích hợp hiệu quả và suy luận kiến thức quy mô lớn."
+ },
+ "pixtral-12b-2409": {
+ "description": "Mô hình Pixtral thể hiện khả năng mạnh mẽ trong các nhiệm vụ như hiểu biểu đồ và hình ảnh, hỏi đáp tài liệu, suy luận đa phương tiện và tuân thủ hướng dẫn, có khả năng tiếp nhận hình ảnh với độ phân giải và tỷ lệ khung hình tự nhiên, cũng như xử lý bất kỳ số lượng hình ảnh nào trong cửa sổ ngữ cảnh dài lên đến 128K token."
+ },
+ "qwen-coder-turbo-latest": {
+ "description": "Mô hình mã Qwen."
+ },
+ "qwen-long": {
+ "description": "Mô hình ngôn ngữ quy mô lớn Qwen, hỗ trợ ngữ cảnh văn bản dài và chức năng đối thoại dựa trên tài liệu dài, nhiều tài liệu."
+ },
+ "qwen-math-plus-latest": {
+ "description": "Mô hình toán học Qwen được thiết kế đặc biệt để giải quyết các bài toán toán học."
+ },
+ "qwen-math-turbo-latest": {
+ "description": "Mô hình toán học Qwen được thiết kế đặc biệt để giải quyết các bài toán toán học."
+ },
+ "qwen-max-latest": {
+ "description": "Mô hình ngôn ngữ quy mô lớn Qwen với hàng trăm tỷ tham số, hỗ trợ đầu vào bằng tiếng Trung, tiếng Anh và nhiều ngôn ngữ khác, là mô hình API đứng sau phiên bản sản phẩm Qwen 2.5 hiện tại."
+ },
+ "qwen-plus-latest": {
+ "description": "Phiên bản nâng cao của mô hình ngôn ngữ quy mô lớn Qwen, hỗ trợ đầu vào bằng tiếng Trung, tiếng Anh và nhiều ngôn ngữ khác."
+ },
+ "qwen-turbo-latest": {
+ "description": "Mô hình ngôn ngữ quy mô lớn Qwen, hỗ trợ đầu vào bằng tiếng Trung, tiếng Anh và nhiều ngôn ngữ khác."
+ },
+ "qwen-vl-chat-v1": {
+ "description": "Mô hình Qwen VL hỗ trợ các phương thức tương tác linh hoạt, bao gồm nhiều hình ảnh, nhiều vòng hỏi đáp, sáng tạo, v.v."
+ },
+ "qwen-vl-max": {
+ "description": "Mô hình ngôn ngữ hình ảnh quy mô lớn Qwen. So với phiên bản nâng cao, nâng cao khả năng suy luận hình ảnh và tuân thủ chỉ dẫn, cung cấp mức độ nhận thức và nhận thức hình ảnh cao hơn."
+ },
+ "qwen-vl-plus": {
+ "description": "Mô hình ngôn ngữ hình ảnh quy mô lớn Qwen phiên bản nâng cao. Nâng cao khả năng nhận diện chi tiết và nhận diện văn bản, hỗ trợ độ phân giải hình ảnh trên một triệu pixel và tỷ lệ khung hình tùy ý."
+ },
+ "qwen-vl-v1": {
+ "description": "Mô hình được khởi tạo bằng mô hình ngôn ngữ Qwen-7B, thêm mô hình hình ảnh, mô hình được huấn luyện trước với độ phân giải đầu vào hình ảnh là 448."
+ },
+ "qwen/qwen-2-7b-instruct:free": {
+ "description": "Qwen2 là một loạt mô hình ngôn ngữ lớn hoàn toàn mới, có khả năng hiểu và sinh mạnh mẽ hơn."
+ },
+ "qwen2": {
+ "description": "Qwen2 là mô hình ngôn ngữ quy mô lớn thế hệ mới của Alibaba, hỗ trợ các nhu cầu ứng dụng đa dạng với hiệu suất xuất sắc."
+ },
+ "qwen2.5-14b-instruct": {
+ "description": "Mô hình 14B quy mô mở nguồn của Qwen 2.5."
+ },
+ "qwen2.5-32b-instruct": {
+ "description": "Mô hình 32B quy mô mở nguồn của Qwen 2.5."
+ },
+ "qwen2.5-72b-instruct": {
+ "description": "Mô hình 72B quy mô mở nguồn của Qwen 2.5."
+ },
+ "qwen2.5-7b-instruct": {
+ "description": "Mô hình 7B quy mô mở nguồn của Qwen 2.5."
+ },
+ "qwen2.5-coder-1.5b-instruct": {
+ "description": "Phiên bản mã nguồn mở của mô hình mã Qwen."
+ },
+ "qwen2.5-coder-7b-instruct": {
+ "description": "Phiên bản mã nguồn mở của mô hình mã Qwen."
+ },
+ "qwen2.5-math-1.5b-instruct": {
+ "description": "Mô hình Qwen-Math có khả năng giải quyết bài toán toán học mạnh mẽ."
+ },
+ "qwen2.5-math-72b-instruct": {
+ "description": "Mô hình Qwen-Math có khả năng giải quyết bài toán toán học mạnh mẽ."
+ },
+ "qwen2.5-math-7b-instruct": {
+ "description": "Mô hình Qwen-Math có khả năng giải quyết bài toán toán học mạnh mẽ."
+ },
+ "qwen2:0.5b": {
+ "description": "Qwen2 là mô hình ngôn ngữ quy mô lớn thế hệ mới của Alibaba, hỗ trợ các nhu cầu ứng dụng đa dạng với hiệu suất xuất sắc."
+ },
+ "qwen2:1.5b": {
+ "description": "Qwen2 là mô hình ngôn ngữ quy mô lớn thế hệ mới của Alibaba, hỗ trợ các nhu cầu ứng dụng đa dạng với hiệu suất xuất sắc."
+ },
+ "qwen2:72b": {
+ "description": "Qwen2 là mô hình ngôn ngữ quy mô lớn thế hệ mới của Alibaba, hỗ trợ các nhu cầu ứng dụng đa dạng với hiệu suất xuất sắc."
+ },
+ "solar-1-mini-chat": {
+ "description": "Solar Mini là một LLM dạng nhỏ gọn, hiệu suất vượt trội hơn GPT-3.5, có khả năng đa ngôn ngữ mạnh mẽ, hỗ trợ tiếng Anh và tiếng Hàn, cung cấp giải pháp hiệu quả và nhỏ gọn."
+ },
+ "solar-1-mini-chat-ja": {
+ "description": "Solar Mini (Ja) mở rộng khả năng của Solar Mini, tập trung vào tiếng Nhật, đồng thời duy trì hiệu suất cao và xuất sắc trong việc sử dụng tiếng Anh và tiếng Hàn."
+ },
+ "solar-pro": {
+ "description": "Solar Pro là một LLM thông minh cao do Upstage phát hành, tập trung vào khả năng tuân theo hướng dẫn trên một GPU, đạt điểm IFEval trên 80. Hiện tại hỗ trợ tiếng Anh, phiên bản chính thức dự kiến ra mắt vào tháng 11 năm 2024, sẽ mở rộng hỗ trợ ngôn ngữ và độ dài ngữ cảnh."
+ },
+ "step-1-128k": {
+ "description": "Cân bằng hiệu suất và chi phí, phù hợp cho các tình huống chung."
+ },
+ "step-1-256k": {
+ "description": "Có khả năng xử lý ngữ cảnh siêu dài, đặc biệt phù hợp cho phân tích tài liệu dài."
+ },
+ "step-1-32k": {
+ "description": "Hỗ trợ đối thoại có độ dài trung bình, phù hợp cho nhiều tình huống ứng dụng."
+ },
+ "step-1-8k": {
+ "description": "Mô hình nhỏ, phù hợp cho các nhiệm vụ nhẹ."
+ },
+ "step-1-flash": {
+ "description": "Mô hình tốc độ cao, phù hợp cho đối thoại thời gian thực."
+ },
+ "step-1v-32k": {
+ "description": "Hỗ trợ đầu vào hình ảnh, tăng cường trải nghiệm tương tác đa mô hình."
+ },
+ "step-1v-8k": {
+ "description": "Mô hình thị giác nhỏ, phù hợp cho các nhiệm vụ cơ bản về văn bản và hình ảnh."
+ },
+ "step-2-16k": {
+ "description": "Hỗ trợ tương tác ngữ cảnh quy mô lớn, phù hợp cho các tình huống đối thoại phức tạp."
+ },
+ "taichu_llm": {
+ "description": "Mô hình ngôn ngữ lớn Taichu có khả năng hiểu ngôn ngữ mạnh mẽ và các khả năng như sáng tạo văn bản, trả lời câu hỏi kiến thức, lập trình mã, tính toán toán học, suy luận logic, phân tích cảm xúc, tóm tắt văn bản. Đổi mới kết hợp giữa đào tạo trước với dữ liệu phong phú từ nhiều nguồn, thông qua việc liên tục cải tiến công nghệ thuật toán và hấp thụ kiến thức mới từ dữ liệu văn bản khổng lồ, giúp mô hình ngày càng hoàn thiện. Cung cấp thông tin và dịch vụ tiện lợi hơn cho người dùng cùng trải nghiệm thông minh hơn."
+ },
+ "taichu_vqa": {
+ "description": "Taichu 2.0V kết hợp khả năng hiểu hình ảnh, chuyển giao kiến thức, suy luận logic, v.v., thể hiện xuất sắc trong lĩnh vực hỏi đáp hình ảnh và văn bản."
+ },
+ "togethercomputer/StripedHyena-Nous-7B": {
+ "description": "StripedHyena Nous (7B) cung cấp khả năng tính toán nâng cao thông qua chiến lược và kiến trúc mô hình hiệu quả."
+ },
+ "upstage/SOLAR-10.7B-Instruct-v1.0": {
+ "description": "Upstage SOLAR Instruct v1 (11B) phù hợp cho các nhiệm vụ chỉ dẫn tinh vi, cung cấp khả năng xử lý ngôn ngữ xuất sắc."
+ },
+ "wizardlm2": {
+ "description": "WizardLM 2 là mô hình ngôn ngữ do Microsoft AI cung cấp, đặc biệt xuất sắc trong các lĩnh vực đối thoại phức tạp, đa ngôn ngữ, suy luận và trợ lý thông minh."
+ },
+ "wizardlm2:8x22b": {
+ "description": "WizardLM 2 là mô hình ngôn ngữ do Microsoft AI cung cấp, đặc biệt xuất sắc trong các lĩnh vực đối thoại phức tạp, đa ngôn ngữ, suy luận và trợ lý thông minh."
+ },
+ "yi-large": {
+ "description": "Mô hình với hàng trăm tỷ tham số mới, cung cấp khả năng hỏi đáp và sinh văn bản mạnh mẽ."
+ },
+ "yi-large-fc": {
+ "description": "Hỗ trợ và tăng cường khả năng gọi công cụ trên cơ sở mô hình yi-large, phù hợp cho nhiều tình huống kinh doanh cần xây dựng agent hoặc workflow."
+ },
+ "yi-large-preview": {
+ "description": "Phiên bản ban đầu, khuyến nghị sử dụng yi-large (phiên bản mới)."
+ },
+ "yi-large-rag": {
+ "description": "Dịch vụ cao cấp dựa trên mô hình yi-large mạnh mẽ, kết hợp công nghệ tìm kiếm và sinh để cung cấp câu trả lời chính xác, dịch vụ tìm kiếm thông tin toàn mạng theo thời gian thực."
+ },
+ "yi-large-turbo": {
+ "description": "Hiệu suất vượt trội với chi phí hợp lý. Tối ưu hóa độ chính xác cao dựa trên hiệu suất, tốc độ suy luận và chi phí."
+ },
+ "yi-medium": {
+ "description": "Mô hình kích thước trung bình được nâng cấp và tinh chỉnh, khả năng cân bằng, chi phí hiệu quả cao. Tối ưu hóa sâu khả năng tuân theo chỉ dẫn."
+ },
+ "yi-medium-200k": {
+ "description": "Cửa sổ ngữ cảnh siêu dài 200K, cung cấp khả năng hiểu và sinh văn bản sâu cho các văn bản dài."
+ },
+ "yi-spark": {
+ "description": "Mô hình nhỏ gọn và nhanh chóng. Cung cấp khả năng tính toán toán học và viết mã được tăng cường."
+ },
+ "yi-vision": {
+ "description": "Mô hình cho các nhiệm vụ hình ảnh phức tạp, cung cấp khả năng hiểu và phân tích hình ảnh hiệu suất cao."
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/plugin.json b/DigitalHumanWeb/locales/vi-VN/plugin.json
new file mode 100644
index 0000000..b8631e0
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/plugin.json
@@ -0,0 +1,166 @@
+{
+ "debug": {
+ "arguments": "Tham số gọi",
+ "function_call": "Gọi hàm",
+ "off": "Tắt gỡ lỗi",
+ "on": "Xem thông tin gọi plugin",
+ "payload": "Dữ liệu cắm",
+ "response": "Kết quả trả về",
+ "tool_call": "Yêu cầu công cụ"
+ },
+ "detailModal": {
+ "info": {
+ "description": "Mô tả API",
+ "name": "Tên API"
+ },
+ "tabs": {
+ "info": "Khả năng plugin",
+ "manifest": "Tệp cài đặt",
+ "settings": "Cài đặt"
+ },
+ "title": "Chi tiết plugin"
+ },
+ "dev": {
+ "confirmDeleteDevPlugin": "Bạn sắp xóa plugin cục bộ này, sau khi xóa sẽ không thể khôi phục, bạn có muốn xóa plugin này không?",
+ "customParams": {
+ "useProxy": {
+ "label": "Cài đặt thông qua proxy (nếu gặp lỗi truy cập qua các miền, hãy thử bật tùy chọn này và cài đặt lại)"
+ }
+ },
+ "deleteSuccess": "Xóa plugin thành công",
+ "manifest": {
+ "identifier": {
+ "desc": "Định danh duy nhất của plugin",
+ "label": "Định danh"
+ },
+ "mode": {
+ "local": "Cấu hình trực quan",
+ "local-tooltip": "Tạm thời không hỗ trợ cấu hình trực quan",
+ "url": "Liên kết trực tuyến"
+ },
+ "name": {
+ "desc": "Tiêu đề plugin",
+ "label": "Tiêu đề",
+ "placeholder": "Tìm kiếm công cụ tìm kiếm"
+ }
+ },
+ "meta": {
+ "author": {
+ "desc": "Tác giả của plugin",
+ "label": "Tác giả"
+ },
+ "avatar": {
+ "desc": "Biểu tượng của plugin, có thể sử dụng Emoji hoặc URL",
+ "label": "Biểu tượng"
+ },
+ "description": {
+ "desc": "Mô tả plugin",
+ "label": "Mô tả",
+ "placeholder": "Tìm kiếm công cụ tìm kiếm để lấy thông tin"
+ },
+ "formFieldRequired": "Trường này là bắt buộc",
+ "homepage": {
+ "desc": "Trang chủ của plugin",
+ "label": "Trang chủ"
+ },
+ "identifier": {
+ "desc": "Định danh duy nhất của plugin, sẽ tự động nhận dạng từ manifest",
+ "errorDuplicate": "Định danh trùng với plugin đã có, vui lòng sửa đổi định danh",
+ "label": "Định danh",
+ "pattenErrorMessage": "Chỉ có thể nhập ký tự tiếng Anh, số, - và _"
+ },
+ "manifest": {
+ "desc": "{{appName}} sẽ cài đặt plugin qua liên kết này",
+ "label": "Tệp mô tả plugin (Manifest) URL",
+ "preview": "Xem trước Manifest",
+ "refresh": "Làm mới"
+ },
+ "title": {
+ "desc": "Tiêu đề plugin",
+ "label": "Tiêu đề",
+ "placeholder": "Tìm kiếm công cụ tìm kiếm"
+ }
+ },
+ "metaConfig": "Cấu hình thông tin plugin",
+ "modalDesc": "Sau khi thêm plugin tùy chỉnh, có thể sử dụng để xác minh phát triển plugin, cũng có thể sử dụng trực tiếp trong cuộc trò chuyện. Vui lòng tham khảo<1>tài liệu phát triển↗>",
+ "openai": {
+ "importUrl": "Nhập từ liên kết URL",
+ "schema": "Schema"
+ },
+ "preview": {
+ "card": "Xem trước hiệu ứng plugin",
+ "desc": "Xem trước mô tả plugin",
+ "title": "Xem trước tên plugin"
+ },
+ "save": "Cài đặt plugin",
+ "saveSuccess": "Lưu cài đặt plugin thành công",
+ "tabs": {
+ "manifest": "Danh sách mô tả chức năng (Manifest)",
+ "meta": "Thông tin plugin"
+ },
+ "title": {
+ "create": "Thêm plugin tùy chỉnh",
+ "edit": "Chỉnh sửa plugin tùy chỉnh"
+ },
+ "type": {
+ "lobe": "Plugin LobeChat",
+ "openai": "Plugin OpenAI"
+ },
+ "update": "Cập nhật",
+ "updateSuccess": "Cập nhật cài đặt plugin thành công"
+ },
+ "error": {
+ "fetchError": "Lỗi khi yêu cầu liên kết manifest, vui lòng đảm bảo tính hợp lệ của liên kết và kiểm tra xem liên kết có cho phép truy cập qua tên miền khác không",
+ "installError": "Cài đặt plugin {{name}} thất bại",
+ "manifestInvalid": "Manifest không tuân theo quy tắc, kết quả kiểm tra: \n\n {{error}}",
+ "noManifest": "Tệp mô tả không tồn tại",
+ "openAPIInvalid": "OpenAPI phân tích thất bại, lỗi: \n\n {{error}}",
+ "reinstallError": "Làm mới plugin {{name}} thất bại",
+ "urlError": "Liên kết này không trả về nội dung dạng JSON, vui lòng đảm bảo rằng đó là một liên kết hợp lệ"
+ },
+ "list": {
+ "item": {
+ "deprecated.title": "Đã loại bỏ",
+ "local.config": "Cấu hình",
+ "local.title": "Tùy chỉnh"
+ }
+ },
+ "loading": {
+ "content": "Đang gọi plugin...",
+ "plugin": "Plugin đang chạy..."
+ },
+ "pluginList": "Danh sách plugin",
+ "setting": "Cài đặt plugin",
+ "settings": {
+ "indexUrl": {
+ "title": "Chỉ mục thị trường",
+ "tooltip": "Hiện không hỗ trợ chỉnh sửa trực tuyến, vui lòng thiết lập thông qua biến môi trường khi triển khai"
+ },
+ "modalDesc": "Sau khi cấu hình địa chỉ thị trường plugin, bạn có thể sử dụng thị trường plugin tùy chỉnh",
+ "title": "Cài đặt thị trường plugin"
+ },
+ "showInPortal": "Vui lòng xem chi tiết trong khu vực làm việc",
+ "store": {
+ "actions": {
+ "confirmUninstall": "Sắp gỡ bỏ plugin này, sau khi gỡ bỏ sẽ xóa cấu hình của plugin này, vui lòng xác nhận hành động của bạn",
+ "detail": "Chi tiết",
+ "install": "Cài đặt",
+ "manifest": "Chỉnh sửa tệp cài đặt",
+ "settings": "Cài đặt",
+ "uninstall": "Gỡ bỏ"
+ },
+ "communityPlugin": "Cộng đồng bên thứ ba",
+ "customPlugin": "Tùy chỉnh",
+ "empty": "Hiện chưa có plugin được cài đặt",
+ "installAllPlugins": "Cài đặt tất cả",
+ "networkError": "Lấy cửa hàng plugin thất bại, vui lòng kiểm tra kết nối mạng và thử lại",
+ "placeholder": "Tìm kiếm tên hoặc mô tả plugin...",
+ "releasedAt": "Đã phát hành vào {{createdAt}}",
+ "tabs": {
+ "all": "Tất cả",
+ "installed": "Đã cài đặt"
+ },
+ "title": "Cửa hàng plugin"
+ },
+ "unknownPlugin": "Plugin không xác định"
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/portal.json b/DigitalHumanWeb/locales/vi-VN/portal.json
new file mode 100644
index 0000000..bfb400c
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/portal.json
@@ -0,0 +1,35 @@
+{
+ "Artifacts": "Tác Phẩm",
+ "FilePreview": {
+ "tabs": {
+ "chunk": "Phân đoạn",
+ "file": "Tập tin"
+ }
+ },
+ "Plugins": "Tiện ích",
+ "actions": {
+ "genAiMessage": "Tạo tin nhắn trợ giúp",
+ "summary": "Tóm tắt",
+ "summaryTooltip": "Tóm tắt nội dung hiện tại"
+ },
+ "artifacts": {
+ "display": {
+ "code": "Mã",
+ "preview": "Xem trước"
+ },
+ "svg": {
+ "copyAsImage": "Sao chép dưới dạng hình ảnh",
+ "copyFail": "Sao chép thất bại, lý do lỗi: {{error}}",
+ "copySuccess": "Sao chép hình ảnh thành công",
+ "download": {
+ "png": "Tải xuống dưới dạng PNG",
+ "svg": "Tải xuống dưới dạng SVG"
+ }
+ }
+ },
+ "emptyArtifactList": "Danh sách Tác Phẩm hiện tại đang trống, vui lòng sử dụng các plugin trong cuộc trò chuyện trước khi xem lại",
+ "emptyKnowledgeList": "Danh sách kiến thức hiện tại trống, vui lòng mở kho kiến thức khi cần trong cuộc trò chuyện trước khi xem",
+ "files": "Tập tin",
+ "messageDetail": "Chi tiết tin nhắn",
+ "title": "Cửa sổ mở rộng"
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/providers.json b/DigitalHumanWeb/locales/vi-VN/providers.json
new file mode 100644
index 0000000..be0e51f
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/providers.json
@@ -0,0 +1,84 @@
+{
+ "ai21": {},
+ "ai360": {
+ "description": "360 AI là nền tảng mô hình và dịch vụ AI do công ty 360 phát hành, cung cấp nhiều mô hình xử lý ngôn ngữ tự nhiên tiên tiến, bao gồm 360GPT2 Pro, 360GPT Pro, 360GPT Turbo và 360GPT Turbo Responsibility 8K. Những mô hình này kết hợp giữa tham số quy mô lớn và khả năng đa phương thức, được ứng dụng rộng rãi trong tạo văn bản, hiểu ngữ nghĩa, hệ thống đối thoại và tạo mã. Thông qua chiến lược giá linh hoạt, 360 AI đáp ứng nhu cầu đa dạng của người dùng, hỗ trợ nhà phát triển tích hợp, thúc đẩy sự đổi mới và phát triển ứng dụng thông minh."
+ },
+ "anthropic": {
+ "description": "Anthropic là một công ty tập trung vào nghiên cứu và phát triển trí tuệ nhân tạo, cung cấp một loạt các mô hình ngôn ngữ tiên tiến như Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus và Claude 3 Haiku. Những mô hình này đạt được sự cân bằng lý tưởng giữa trí thông minh, tốc độ và chi phí, phù hợp cho nhiều ứng dụng từ khối lượng công việc doanh nghiệp đến phản hồi nhanh. Claude 3.5 Sonnet, là mô hình mới nhất của họ, thể hiện xuất sắc trong nhiều đánh giá, đồng thời duy trì tỷ lệ hiệu suất cao."
+ },
+ "azure": {
+ "description": "Azure cung cấp nhiều mô hình AI tiên tiến, bao gồm GPT-3.5 và dòng GPT-4 mới nhất, hỗ trợ nhiều loại dữ liệu và nhiệm vụ phức tạp, cam kết cung cấp các giải pháp AI an toàn, đáng tin cậy và bền vững."
+ },
+ "baichuan": {
+ "description": "Baichuan Intelligent là công ty tập trung vào nghiên cứu phát triển mô hình ngôn ngữ lớn AI, mô hình của họ thể hiện xuất sắc trong các nhiệm vụ tiếng Trung như bách khoa toàn thư, xử lý văn bản dài và sáng tác, vượt trội hơn so với các mô hình chính thống quốc tế. Baichuan Intelligent còn có khả năng đa phương thức hàng đầu trong ngành, thể hiện xuất sắc trong nhiều bài kiểm tra uy tín. Các mô hình của họ bao gồm Baichuan 4, Baichuan 3 Turbo và Baichuan 3 Turbo 128k, được tối ưu hóa cho các tình huống ứng dụng khác nhau, cung cấp giải pháp hiệu quả về chi phí."
+ },
+ "bedrock": {
+ "description": "Bedrock là dịch vụ do Amazon AWS cung cấp, tập trung vào việc cung cấp các mô hình ngôn ngữ AI và mô hình hình ảnh tiên tiến cho doanh nghiệp. Gia đình mô hình của nó bao gồm dòng Claude của Anthropic, dòng Llama 3.1 của Meta, v.v., bao quát nhiều lựa chọn từ nhẹ đến hiệu suất cao, hỗ trợ nhiều nhiệm vụ như tạo văn bản, đối thoại, xử lý hình ảnh, phù hợp cho các ứng dụng doanh nghiệp với quy mô và nhu cầu khác nhau."
+ },
+ "deepseek": {
+ "description": "DeepSeek là một công ty tập trung vào nghiên cứu và ứng dụng công nghệ trí tuệ nhân tạo, mô hình mới nhất của họ, DeepSeek-V2.5, kết hợp khả năng đối thoại chung và xử lý mã, đồng thời đạt được sự cải thiện đáng kể trong việc căn chỉnh sở thích của con người, nhiệm vụ viết và tuân theo chỉ dẫn."
+ },
+ "fireworksai": {
+ "description": "Fireworks AI là nhà cung cấp dịch vụ mô hình ngôn ngữ cao cấp hàng đầu, tập trung vào gọi chức năng và xử lý đa phương thức. Mô hình mới nhất của họ, Firefunction V2, dựa trên Llama-3, được tối ưu hóa cho gọi chức năng, đối thoại và tuân theo chỉ dẫn. Mô hình ngôn ngữ hình ảnh FireLLaVA-13B hỗ trợ đầu vào hỗn hợp hình ảnh và văn bản. Các mô hình đáng chú ý khác bao gồm dòng Llama và dòng Mixtral, cung cấp hỗ trợ cho việc tuân theo và tạo ra chỉ dẫn đa ngôn ngữ hiệu quả."
+ },
+ "github": {
+ "description": "Với GitHub Models, các nhà phát triển có thể trở thành kỹ sư AI và xây dựng với các mô hình AI hàng đầu trong ngành."
+ },
+ "google": {
+ "description": "Dòng Gemini của Google là mô hình AI tiên tiến và đa năng nhất của họ, được phát triển bởi Google DeepMind, được thiết kế cho đa phương thức, hỗ trợ hiểu và xử lý liền mạch văn bản, mã, hình ảnh, âm thanh và video. Phù hợp cho nhiều môi trường từ trung tâm dữ liệu đến thiết bị di động, nâng cao đáng kể hiệu quả và tính ứng dụng của mô hình AI."
+ },
+ "groq": {
+ "description": "Bộ máy suy diễn LPU của Groq thể hiện xuất sắc trong các bài kiểm tra chuẩn mô hình ngôn ngữ lớn (LLM) độc lập mới nhất, định nghĩa lại tiêu chuẩn cho các giải pháp AI với tốc độ và hiệu quả đáng kinh ngạc. Groq là đại diện cho tốc độ suy diễn tức thì, thể hiện hiệu suất tốt trong triển khai dựa trên đám mây."
+ },
+ "minimax": {
+ "description": "MiniMax là công ty công nghệ trí tuệ nhân tạo tổng quát được thành lập vào năm 2021, cam kết cùng người dùng sáng tạo trí thông minh. MiniMax đã tự phát triển nhiều mô hình lớn đa phương thức, bao gồm mô hình văn bản MoE với một triệu tham số, mô hình giọng nói và mô hình hình ảnh. Họ cũng đã phát hành các ứng dụng như AI Hải Lý."
+ },
+ "mistral": {
+ "description": "Mistral cung cấp các mô hình tiên tiến cho mục đích chung, chuyên nghiệp và nghiên cứu, được ứng dụng rộng rãi trong suy diễn phức tạp, nhiệm vụ đa ngôn ngữ, tạo mã, v.v. Thông qua giao diện gọi chức năng, người dùng có thể tích hợp các chức năng tùy chỉnh để thực hiện các ứng dụng cụ thể."
+ },
+ "moonshot": {
+ "description": "Moonshot là nền tảng mã nguồn mở do Công ty TNHH Công nghệ Mặt Trăng Bắc Kinh phát hành, cung cấp nhiều mô hình xử lý ngôn ngữ tự nhiên, ứng dụng rộng rãi trong nhiều lĩnh vực, bao gồm nhưng không giới hạn ở sáng tác nội dung, nghiên cứu học thuật, gợi ý thông minh, chẩn đoán y tế, v.v., hỗ trợ xử lý văn bản dài và nhiệm vụ tạo phức tạp."
+ },
+ "novita": {
+ "description": "Novita AI là một nền tảng cung cấp dịch vụ API cho nhiều mô hình ngôn ngữ lớn và tạo hình ảnh AI, linh hoạt, đáng tin cậy và hiệu quả về chi phí. Nó hỗ trợ các mô hình mã nguồn mở mới nhất như Llama3, Mistral, và cung cấp giải pháp API toàn diện, thân thiện với người dùng và tự động mở rộng cho phát triển ứng dụng AI, phù hợp cho sự phát triển nhanh chóng của các công ty khởi nghiệp AI."
+ },
+ "ollama": {
+ "description": "Mô hình do Ollama cung cấp bao quát rộng rãi các lĩnh vực như tạo mã, tính toán toán học, xử lý đa ngôn ngữ và tương tác đối thoại, hỗ trợ nhu cầu đa dạng cho triển khai doanh nghiệp và địa phương."
+ },
+ "openai": {
+ "description": "OpenAI là tổ chức nghiên cứu trí tuệ nhân tạo hàng đầu thế giới, với các mô hình như dòng GPT đã thúc đẩy ranh giới của xử lý ngôn ngữ tự nhiên. OpenAI cam kết thay đổi nhiều ngành công nghiệp thông qua các giải pháp AI sáng tạo và hiệu quả. Sản phẩm của họ có hiệu suất và tính kinh tế nổi bật, được sử dụng rộng rãi trong nghiên cứu, thương mại và ứng dụng đổi mới."
+ },
+ "openrouter": {
+ "description": "OpenRouter là một nền tảng dịch vụ cung cấp nhiều giao diện mô hình lớn tiên tiến, hỗ trợ OpenAI, Anthropic, LLaMA và nhiều hơn nữa, phù hợp cho nhu cầu phát triển và ứng dụng đa dạng. Người dùng có thể linh hoạt chọn mô hình và giá cả tối ưu theo nhu cầu của mình, giúp nâng cao trải nghiệm AI."
+ },
+ "perplexity": {
+ "description": "Perplexity là nhà cung cấp mô hình tạo đối thoại hàng đầu, cung cấp nhiều mô hình Llama 3.1 tiên tiến, hỗ trợ ứng dụng trực tuyến và ngoại tuyến, đặc biệt phù hợp cho các nhiệm vụ xử lý ngôn ngữ tự nhiên phức tạp."
+ },
+ "qwen": {
+ "description": "Qwen là mô hình ngôn ngữ quy mô lớn tự phát triển của Alibaba Cloud, có khả năng hiểu và tạo ngôn ngữ tự nhiên mạnh mẽ. Nó có thể trả lời nhiều câu hỏi, sáng tác nội dung văn bản, bày tỏ quan điểm, viết mã, v.v., hoạt động trong nhiều lĩnh vực."
+ },
+ "siliconcloud": {
+ "description": "SiliconFlow cam kết tăng tốc AGI để mang lại lợi ích cho nhân loại, nâng cao hiệu quả AI quy mô lớn thông qua một ngăn xếp GenAI dễ sử dụng và chi phí thấp."
+ },
+ "spark": {
+ "description": "Mô hình lớn Xinghuo của iFlytek cung cấp khả năng AI mạnh mẽ cho nhiều lĩnh vực và ngôn ngữ, sử dụng công nghệ xử lý ngôn ngữ tự nhiên tiên tiến để xây dựng các ứng dụng đổi mới phù hợp cho các lĩnh vực như phần cứng thông minh, y tế thông minh, tài chính thông minh, v.v."
+ },
+ "stepfun": {
+ "description": "Mô hình lớn Star Class có khả năng đa phương thức và suy diễn phức tạp hàng đầu trong ngành, hỗ trợ hiểu văn bản siêu dài và chức năng tìm kiếm tự động mạnh mẽ."
+ },
+ "taichu": {
+ "description": "Viện Nghiên cứu Tự động hóa Trung Quốc và Viện Nghiên cứu Trí tuệ Nhân tạo Vũ Hán đã phát hành mô hình lớn đa phương thức thế hệ mới, hỗ trợ các nhiệm vụ hỏi đáp toàn diện như hỏi đáp nhiều vòng, sáng tác văn bản, tạo hình ảnh, hiểu 3D, phân tích tín hiệu, v.v., với khả năng nhận thức, hiểu biết và sáng tác mạnh mẽ hơn, mang đến trải nghiệm tương tác hoàn toàn mới."
+ },
+ "togetherai": {
+ "description": "Together AI cam kết đạt được hiệu suất hàng đầu thông qua các mô hình AI sáng tạo, cung cấp khả năng tùy chỉnh rộng rãi, bao gồm hỗ trợ mở rộng nhanh chóng và quy trình triển khai trực quan, đáp ứng nhiều nhu cầu của doanh nghiệp."
+ },
+ "upstage": {
+ "description": "Upstage tập trung vào việc phát triển các mô hình AI cho nhiều nhu cầu thương mại khác nhau, bao gồm Solar LLM và AI tài liệu, nhằm đạt được trí thông minh nhân tạo tổng quát (AGI) cho công việc. Tạo ra các đại lý đối thoại đơn giản thông qua Chat API, và hỗ trợ gọi chức năng, dịch thuật, nhúng và ứng dụng trong các lĩnh vực cụ thể."
+ },
+ "zeroone": {
+ "description": "01.AI tập trung vào công nghệ trí tuệ nhân tạo trong kỷ nguyên AI 2.0, thúc đẩy mạnh mẽ sự đổi mới và ứng dụng của \"người + trí tuệ nhân tạo\", sử dụng các mô hình mạnh mẽ và công nghệ AI tiên tiến để nâng cao năng suất của con người và thực hiện sự trao quyền công nghệ."
+ },
+ "zhipu": {
+ "description": "Zhipu AI cung cấp nền tảng mở cho mô hình đa phương thức và ngôn ngữ, hỗ trợ nhiều tình huống ứng dụng AI, bao gồm xử lý văn bản, hiểu hình ảnh và hỗ trợ lập trình."
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/ragEval.json b/DigitalHumanWeb/locales/vi-VN/ragEval.json
new file mode 100644
index 0000000..4ef87dc
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/ragEval.json
@@ -0,0 +1,91 @@
+{
+ "addDataset": {
+ "confirm": "Tạo mới",
+ "description": {
+ "placeholder": "Mô tả bộ dữ liệu (tùy chọn)"
+ },
+ "name": {
+ "placeholder": "Tên bộ dữ liệu",
+ "required": "Vui lòng điền tên bộ dữ liệu"
+ },
+ "title": "Thêm bộ dữ liệu"
+ },
+ "dataset": {
+ "addNewButton": "Tạo bộ dữ liệu",
+ "emptyGuide": "Hiện tại không có bộ dữ liệu nào, vui lòng tạo một bộ dữ liệu.",
+ "list": {
+ "table": {
+ "actions": {
+ "importData": "Nhập dữ liệu"
+ },
+ "columns": {
+ "actions": "Hành động",
+ "ideal": {
+ "title": "Câu trả lời mong đợi"
+ },
+ "question": {
+ "title": "Câu hỏi"
+ },
+ "referenceFiles": {
+ "title": "Tài liệu tham khảo"
+ }
+ },
+ "notSelected": "Vui lòng chọn bộ dữ liệu ở bên trái",
+ "title": "Chi tiết bộ dữ liệu"
+ },
+ "title": "Bộ dữ liệu"
+ }
+ },
+ "evaluation": {
+ "addEvaluation": {
+ "confirm": "Tạo mới",
+ "datasetId": {
+ "placeholder": "Vui lòng chọn bộ dữ liệu đánh giá của bạn",
+ "required": "Vui lòng chọn bộ dữ liệu đánh giá"
+ },
+ "description": {
+ "placeholder": "Mô tả nhiệm vụ đánh giá (tùy chọn)"
+ },
+ "name": {
+ "placeholder": "Tên nhiệm vụ đánh giá",
+ "required": "Vui lòng điền tên nhiệm vụ đánh giá"
+ },
+ "title": "Thêm nhiệm vụ đánh giá"
+ },
+ "addNewButton": "Tạo đánh giá",
+ "emptyGuide": "Hiện tại không có nhiệm vụ đánh giá nào, hãy bắt đầu tạo đánh giá.",
+ "table": {
+ "columns": {
+ "actions": {
+ "checkStatus": "Kiểm tra trạng thái",
+ "confirmDelete": "Có chắc chắn muốn xóa nhiệm vụ đánh giá này không?",
+ "confirmRun": "Có chắc chắn muốn bắt đầu chạy không? Sau khi bắt đầu, nhiệm vụ đánh giá sẽ được thực hiện bất đồng bộ ở phía sau, đóng trang không ảnh hưởng đến việc thực hiện nhiệm vụ bất đồng bộ.",
+ "downloadRecords": "Tải xuống đánh giá",
+ "retry": "Thử lại",
+ "run": "Chạy",
+ "title": "Hành động"
+ },
+ "datasetId": {
+ "title": "Bộ dữ liệu"
+ },
+ "name": {
+ "title": "Tên nhiệm vụ đánh giá"
+ },
+ "records": {
+ "title": "Số lượng bản ghi đánh giá"
+ },
+ "referenceFiles": {
+ "title": "Tài liệu tham khảo"
+ },
+ "status": {
+ "error": "Có lỗi trong quá trình thực hiện",
+ "pending": "Chờ chạy",
+ "processing": "Đang chạy",
+ "success": "Thực hiện thành công",
+ "title": "Trạng thái"
+ }
+ },
+ "title": "Danh sách nhiệm vụ đánh giá"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/setting.json b/DigitalHumanWeb/locales/vi-VN/setting.json
new file mode 100644
index 0000000..3e5189d
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/setting.json
@@ -0,0 +1,415 @@
+{
+ "about": {
+ "title": "Về chúng tôi"
+ },
+ "agentTab": {
+ "chat": "Tùy chọn Trò chuyện",
+ "meta": "Thông tin Trợ lý",
+ "modal": "Cài đặt Mô hình",
+ "plugin": "Cài đặt Plugin",
+ "prompt": "Thiết lập Nhân vật",
+ "tts": "Dịch vụ Giọng nói"
+ },
+ "analytics": {
+ "telemetry": {
+ "desc": "Bằng cách chọn gửi dữ liệu telemetry, bạn có thể giúp chúng tôi cải thiện trải nghiệm người dùng tổng thể của {{appName}}",
+ "title": "Gửi dữ liệu sử dụng ẩn danh"
+ },
+ "title": "Thống kê dữ liệu"
+ },
+ "danger": {
+ "clear": {
+ "action": "Xóa ngay",
+ "confirm": "Xác nhận xóa tất cả dữ liệu trò chuyện?",
+ "desc": "Dữ liệu cuộc trò chuyện, bao gồm trợ lý, tệp tin, tin nhắn, plugin, sẽ bị xóa",
+ "success": "Đã xóa tất cả tin nhắn cuộc trò chuyện",
+ "title": "Xóa tất cả tin nhắn cuộc trò chuyện"
+ },
+ "reset": {
+ "action": "Đặt lại ngay",
+ "confirm": "Xác nhận đặt lại tất cả cài đặt?",
+ "currentVersion": "Phiên bản hiện tại",
+ "desc": "Đặt lại tất cả cài đặt về giá trị mặc định",
+ "success": "Đã đặt lại tất cả cài đặt",
+ "title": "Đặt lại tất cả cài đặt"
+ }
+ },
+ "header": {
+ "desc": "Thiết lập Ưu tiên và Mô hình.",
+ "global": "Cài đặt toàn cầu",
+ "session": "Cài đặt cuộc trò chuyện",
+ "sessionDesc": "Thiết lập nhân vật và ưu tiên phiên hội thoại.",
+ "sessionWithName": "Cài đặt cuộc trò chuyện · {{name}}",
+ "title": "Cài đặt"
+ },
+ "llm": {
+ "aesGcm": "Khóa và địa chỉ proxy của bạn sẽ được mã hóa bằng thuật toán mã hóa <1>AES-GCM1>",
+ "apiKey": {
+ "desc": "Vui lòng nhập {{name}} API Key của bạn",
+ "placeholder": "{{name}} API Key",
+ "title": "API Key"
+ },
+ "checker": {
+ "button": "Kiểm tra",
+ "desc": "Kiểm tra xem API Key và địa chỉ proxy đã được điền đúng chưa",
+ "pass": "Kiểm tra thành công",
+ "title": "Kiểm tra kết nối"
+ },
+ "customModelCards": {
+ "addNew": "Tạo và thêm mô hình {{id}}",
+ "config": "Cấu hình mô hình",
+ "confirmDelete": "Bạn sắp xóa mô hình tùy chỉnh này, sau khi xóa sẽ không thể khôi phục, hãy thận trọng.",
+ "modelConfig": {
+ "azureDeployName": {
+ "extra": "Trường thực tế được yêu cầu trong Azure OpenAI",
+ "placeholder": "Nhập tên triển khai mô hình trong Azure",
+ "title": "Tên triển khai mô hình"
+ },
+ "displayName": {
+ "placeholder": "Nhập tên hiển thị của mô hình, ví dụ ChatGPT, GPT-4, v.v.",
+ "title": "Tên hiển thị mô hình"
+ },
+ "files": {
+ "extra": "Hiện tại, việc tải lên tệp chỉ là một giải pháp Hack, chỉ dành cho việc thử nghiệm cá nhân. Vui lòng chờ đợi khả năng tải lên tệp đầy đủ trong các phiên bản sau.",
+ "title": "Hỗ trợ tải lên tệp"
+ },
+ "functionCall": {
+ "extra": "Cấu hình này chỉ mở khả năng gọi hàm trong ứng dụng, việc hỗ trợ gọi hàm hoàn toàn phụ thuộc vào chính mô hình, vui lòng tự kiểm tra khả năng gọi hàm của mô hình đó.",
+ "title": "Hỗ trợ gọi hàm"
+ },
+ "id": {
+ "extra": "Sẽ được hiển thị như một nhãn mô hình",
+ "placeholder": "Nhập ID mô hình, ví dụ gpt-4-turbo-preview hoặc claude-2.1",
+ "title": "ID mô hình"
+ },
+ "modalTitle": "Cấu hình mô hình tùy chỉnh",
+ "tokens": {
+ "title": "Số lượng token tối đa",
+ "unlimited": "vô hạn"
+ },
+ "vision": {
+ "extra": "Cấu hình này chỉ mở khả năng tải lên hình ảnh trong ứng dụng, việc hỗ trợ nhận diện hoàn toàn phụ thuộc vào chính mô hình, vui lòng tự kiểm tra khả năng nhận diện hình ảnh của mô hình đó.",
+ "title": "Hỗ trợ nhận diện hình ảnh"
+ }
+ }
+ },
+ "fetchOnClient": {
+ "desc": "Chế độ yêu cầu từ khách hàng sẽ khởi động yêu cầu phiên trực tiếp từ trình duyệt, có thể cải thiện tốc độ phản hồi",
+ "title": "Sử dụng chế độ yêu cầu từ khách hàng"
+ },
+ "fetcher": {
+ "fetch": "Lấy danh sách mô hình",
+ "fetching": "Đang lấy danh sách mô hình...",
+ "latestTime": "Thời gian cập nhật lần cuối: {{time}}",
+ "noLatestTime": "Chưa có danh sách nào được lấy"
+ },
+ "helpDoc": "Hướng dẫn cấu hình",
+ "modelList": {
+ "desc": "Chọn mô hình hiển thị trong cuộc trò chuyện, mô hình đã chọn sẽ được hiển thị trong danh sách mô hình",
+ "placeholder": "Vui lòng chọn mô hình từ danh sách",
+ "title": "Danh sách mô hình",
+ "total": "Tổng cộng có {{count}} mô hình có sẵn"
+ },
+ "proxyUrl": {
+ "desc": "Ngoài địa chỉ mặc định, phải bao gồm http(s)://",
+ "title": "Địa chỉ Proxy API"
+ },
+ "waitingForMore": "Có thêm mô hình đang <1>được lên kế hoạch tích hợp1>, hãy chờ đợi"
+ },
+ "plugin": {
+ "addTooltip": "Thêm tiện ích",
+ "clearDeprecated": "Xóa tiện ích không còn hỗ trợ",
+ "empty": "Hiện chưa có tiện ích nào được cài đặt, hãy truy cập <1>cửa hàng tiện ích1> để khám phá",
+ "installStatus": {
+ "deprecated": "Đã gỡ bỏ"
+ },
+ "settings": {
+ "hint": "Vui lòng điền cấu hình dựa trên mô tả",
+ "title": "Cấu hình tiện ích {{id}}",
+ "tooltip": "Cấu hình tiện ích"
+ },
+ "store": "Cửa hàng tiện ích"
+ },
+ "settingAgent": {
+ "avatar": {
+ "title": "Hình đại diện"
+ },
+ "backgroundColor": {
+ "title": "Màu nền"
+ },
+ "description": {
+ "placeholder": "Vui lòng nhập mô tả trợ lý",
+ "title": "Mô tả trợ lý"
+ },
+ "name": {
+ "placeholder": "Vui lòng nhập tên trợ lý",
+ "title": "Tên"
+ },
+ "prompt": {
+ "placeholder": "Vui lòng nhập từ khóa Prompt cho vai diễn",
+ "title": "Thiết lập vai diễn"
+ },
+ "tag": {
+ "placeholder": "Vui lòng nhập nhãn",
+ "title": "Nhãn"
+ },
+ "title": "Thông tin trợ lý"
+ },
+ "settingChat": {
+ "autoCreateTopicThreshold": {
+ "desc": "Khi số tin nhắn hiện tại vượt quá giá trị này, chủ đề sẽ tự động được tạo",
+ "title": "Ngưỡng tự động tạo chủ đề"
+ },
+ "chatStyleType": {
+ "title": "Kiểu cửa sổ trò chuyện",
+ "type": {
+ "chat": "Chế độ trò chuyện",
+ "docs": "Chế độ tài liệu"
+ }
+ },
+ "compressThreshold": {
+ "desc": "Khi số tin nhắn lịch sử chưa được nén vượt quá giá trị này, sẽ thực hiện nén",
+ "title": "Ngưỡng nén độ dài lịch sử"
+ },
+ "enableAutoCreateTopic": {
+ "desc": "Có tự động tạo chủ đề trong quá trình trò chuyện hay không, chỉ áp dụng trong chủ đề tạm thời",
+ "title": "Tự động tạo chủ đề"
+ },
+ "enableCompressThreshold": {
+ "title": "Bật ngưỡng nén độ dài lịch sử"
+ },
+ "enableHistoryCount": {
+ "alias": "Không giới hạn",
+ "limited": "Chỉ chứa {{number}} tin nhắn trò chuyện",
+ "setlimited": "Thiết lập số lượng tin nhắn lịch sử",
+ "title": "Giới hạn số lượng tin nhắn lịch sử",
+ "unlimited": "Không giới hạn số lượng tin nhắn lịch sử"
+ },
+ "historyCount": {
+ "desc": "Số lượng tin nhắn được gửi mỗi lần yêu cầu (bao gồm cả câu hỏi mới nhất. Mỗi câu hỏi và câu trả lời đều tính là 1)",
+ "title": "Số lượng tin nhắn đi kèm"
+ },
+ "inputTemplate": {
+ "desc": "Tin nhắn mới nhất của người dùng sẽ được điền vào mẫu này",
+ "placeholder": "Mẫu xử lý trước {{text}} sẽ được thay thế bằng thông tin nhập thời gian thực",
+ "title": "Mẫu xử lý đầu vào của người dùng"
+ },
+ "title": "Cài đặt trò chuyện"
+ },
+ "settingModel": {
+ "enableMaxTokens": {
+ "title": "Bật giới hạn phản hồi một lần"
+ },
+ "frequencyPenalty": {
+ "desc": "Giá trị càng cao, càng có khả năng giảm sự lặp lại của từ/cụm từ",
+ "title": "Hình phạt tần suất"
+ },
+ "maxTokens": {
+ "desc": "Số lượng Token tối đa được sử dụng trong mỗi tương tác",
+ "title": "Giới hạn phản hồi một lần"
+ },
+ "model": {
+ "desc": "Mô hình {{provider}}",
+ "title": "Mô hình"
+ },
+ "presencePenalty": {
+ "desc": "Giá trị càng cao, càng có khả năng mở rộng đến chủ đề mới",
+ "title": "Độ mới của chủ đề"
+ },
+ "temperature": {
+ "desc": "Giá trị càng cao, phản hồi càng ngẫu nhiên",
+ "title": "Độ ngẫu nhiên",
+ "titleWithValue": "Độ ngẫu nhiên {{value}}"
+ },
+ "title": "Cài đặt mô hình",
+ "topP": {
+ "desc": "Tương tự như độ ngẫu nhiên, nhưng không nên thay đổi cùng lúc với độ ngẫu nhiên",
+ "title": "Lấy mẫu cốt lõi"
+ }
+ },
+ "settingPlugin": {
+ "title": "Danh sách plugin"
+ },
+ "settingSystem": {
+ "accessCode": {
+ "desc": "Quản trị viên đã bật mã hóa truy cập",
+ "placeholder": "Nhập mật khẩu truy cập",
+ "title": "Mật khẩu truy cập"
+ },
+ "oauth": {
+ "info": {
+ "desc": "Đã đăng nhập",
+ "title": "Thông tin tài khoản"
+ },
+ "signin": {
+ "action": "Đăng nhập",
+ "desc": "Đăng nhập bằng SSO để mở khóa ứng dụng",
+ "title": "Đăng nhập tài khoản"
+ },
+ "signout": {
+ "action": "Đăng xuất",
+ "confirm": "Xác nhận đăng xuất?",
+ "success": "Đăng xuất thành công"
+ }
+ },
+ "title": "Cài đặt hệ thống"
+ },
+ "settingTTS": {
+ "openai": {
+ "sttModel": "Mô hình nhận dạng giọng nói OpenAI",
+ "title": "OpenAI",
+ "ttsModel": "Mô hình tổng hợp giọng nói OpenAI"
+ },
+ "showAllLocaleVoice": {
+ "desc": "Tắt sẽ chỉ hiển thị nguồn âm thanh của ngôn ngữ hiện tại",
+ "title": "Hiển thị tất cả nguồn âm thanh ngôn ngữ"
+ },
+ "stt": "Cài đặt nhận dạng giọng nói",
+ "sttAutoStop": {
+ "desc": "Tắt sẽ không tự động dừng nhận dạng giọng nói, cần phải bấm nút dừng thủ công",
+ "title": "Tự động dừng nhận dạng giọng nói"
+ },
+ "sttLocale": {
+ "desc": "Ngôn ngữ đầu vào cho giọng nói, tùy chọn này có thể cải thiện độ chính xác của nhận dạng giọng nói",
+ "title": "Ngôn ngữ nhận dạng giọng nói"
+ },
+ "sttService": {
+ "desc": "Trong đó, trình duyệt là dịch vụ nhận dạng giọng nói nguyên bản của trình duyệt",
+ "title": "Dịch vụ nhận dạng giọng nói"
+ },
+ "title": "Dịch vụ giọng nói",
+ "tts": "Cài đặt tổng hợp giọng nói",
+ "ttsService": {
+ "desc": "Nếu sử dụng dịch vụ tổng hợp giọng nói OpenAI, cần đảm bảo dịch vụ mô hình OpenAI đã được bật",
+ "title": "Dịch vụ tổng hợp giọng nói"
+ },
+ "voice": {
+ "desc": "Chọn một giọng nói cho trợ lý hiện tại, các dịch vụ TTS khác nhau hỗ trợ các nguồn âm thanh khác nhau",
+ "preview": "Xem trước âm thanh",
+ "title": "Nguồn âm thanh tổng hợp giọng nói"
+ }
+ },
+ "settingTheme": {
+ "avatar": {
+ "title": "Hình đại diện"
+ },
+ "fontSize": {
+ "desc": "Kích cỡ chữ của nội dung trò chuyện",
+ "marks": {
+ "normal": "Bình thường"
+ },
+ "title": "Kích cỡ chữ"
+ },
+ "lang": {
+ "autoMode": "Theo hệ thống",
+ "title": "Ngôn ngữ"
+ },
+ "neutralColor": {
+ "desc": "Tùy chỉnh mức xám theo xu hướng màu sắc khác nhau",
+ "title": "Màu trung tính"
+ },
+ "primaryColor": {
+ "desc": "Tùy chỉnh màu chủ đề",
+ "title": "Màu chủ đề"
+ },
+ "themeMode": {
+ "auto": "Tự động",
+ "dark": "Tối",
+ "light": "Sáng",
+ "title": "Chủ đề"
+ },
+ "title": "Cài đặt chủ đề"
+ },
+ "submitAgentModal": {
+ "button": "Gửi trợ lý",
+ "identifier": "Nhận dạng trợ lý",
+ "metaMiss": "Vui lòng điền đầy đủ thông tin trợ lý trước khi gửi, cần bao gồm tên, mô tả và nhãn",
+ "placeholder": "Vui lòng nhập nhận dạng trợ lý, cần phải duy nhất, ví dụ như phát triển web",
+ "tooltips": "Chia sẻ lên thị trường trợ lý"
+ },
+ "sync": {
+ "device": {
+ "deviceName": {
+ "hint": "Thêm tên để dễ nhận biết",
+ "placeholder": "Nhập tên thiết bị",
+ "title": "Tên thiết bị"
+ },
+ "title": "Thông tin thiết bị",
+ "unknownBrowser": "Trình duyệt không xác định",
+ "unknownOS": "Hệ điều hành không xác định"
+ },
+ "warning": {
+ "tip": "经过较长一段时间社区公测,WebRTC 同步可能无法稳定满足通用的数据同步诉求。请自行 <1>部署信令服务器1> 后使用。"
+ },
+ "webrtc": {
+ "channelName": {
+ "desc": "WebRTC sẽ sử dụng tên này để tạo kênh đồng bộ, đảm bảo tên kênh là duy nhất",
+ "placeholder": "Nhập tên kênh đồng bộ",
+ "shuffle": "Tạo ngẫu nhiên",
+ "title": "Tên kênh đồng bộ"
+ },
+ "channelPassword": {
+ "desc": "Thêm mật khẩu để đảm bảo tính riêng tư của kênh, chỉ khi mật khẩu đúng, thiết bị mới có thể tham gia kênh",
+ "placeholder": "Nhập mật khẩu kênh đồng bộ",
+ "title": "Mật khẩu kênh đồng bộ"
+ },
+ "desc": "Truyền thông dữ liệu thời gian thực, điểm-điểm, cần thiết bị cùng online mới có thể đồng bộ",
+ "enabled": {
+ "invalid": "Vui lòng nhập địa chỉ máy chủ tín hiệu và tên kênh đồng bộ trước khi bật",
+ "title": "Bật đồng bộ"
+ },
+ "signaling": {
+ "desc": "WebRTC sẽ sử dụng địa chỉ này để đồng bộ",
+ "placeholder": "Vui lòng nhập địa chỉ máy chủ tín hiệu",
+ "title": "Máy chủ tín hiệu"
+ },
+ "title": "WebRTC Đồng bộ"
+ }
+ },
+ "systemAgent": {
+ "agentMeta": {
+ "label": "Mô hình tạo siêu dữ liệu trợ lý",
+ "modelDesc": "Xác định mô hình được sử dụng để tạo tên, mô tả, hình đại diện, nhãn cho trợ lý",
+ "title": "Tự động tạo thông tin trợ lý"
+ },
+ "queryRewrite": {
+ "label": "Mô hình viết lại câu hỏi",
+ "modelDesc": "Mô hình được chỉ định để tối ưu hóa câu hỏi của người dùng",
+ "title": "Kho tri thức"
+ },
+ "title": "Trợ lý hệ thống",
+ "topic": {
+ "label": "Mô hình đặt tên chủ đề",
+ "modelDesc": "Mô hình được chỉ định để tự động đặt tên chủ đề",
+ "title": "Tự động đặt tên chủ đề"
+ },
+ "translation": {
+ "label": "Mô hình dịch",
+ "modelDesc": "Chọn mô hình để dịch",
+ "title": "Cài đặt trợ lý dịch"
+ }
+ },
+ "tab": {
+ "about": "Về chúng tôi",
+ "agent": "Trợ lý mặc định",
+ "common": "Cài đặt chung",
+ "experiment": "Thử nghiệm",
+ "llm": "Mô hình ngôn ngữ",
+ "sync": "Đồng bộ trên đám mây",
+ "system-agent": "Trợ lý hệ thống",
+ "tts": "Dịch vụ giọng nói"
+ },
+ "tools": {
+ "builtins": {
+ "groupName": "Mở rộng tích hợp sẵn"
+ },
+ "disabled": "Mô hình hiện tại không hỗ trợ gọi hàm, không thể sử dụng plugin",
+ "plugins": {
+ "enabled": "Đã kích hoạt {{num}}",
+ "groupName": "Tiện ích",
+ "noEnabled": "Chưa có tiện ích nào được kích hoạt",
+ "store": "Cửa hàng tiện ích"
+ },
+ "title": "Công cụ mở rộng"
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/tool.json b/DigitalHumanWeb/locales/vi-VN/tool.json
new file mode 100644
index 0000000..38e997b
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/tool.json
@@ -0,0 +1,10 @@
+{
+ "dalle": {
+ "autoGenerate": "Tự động tạo",
+ "downloading": "Liên kết hình ảnh được tạo bởi DallE3 chỉ có hiệu lực trong 1 giờ, đang tải hình ảnh xuống máy...",
+ "generate": "Tạo",
+ "generating": "Đang tạo...",
+ "images": "Hình ảnh:",
+ "prompt": "Từ khóa"
+ }
+}
diff --git a/DigitalHumanWeb/locales/vi-VN/welcome.json b/DigitalHumanWeb/locales/vi-VN/welcome.json
new file mode 100644
index 0000000..366675d
--- /dev/null
+++ b/DigitalHumanWeb/locales/vi-VN/welcome.json
@@ -0,0 +1,50 @@
+{
+ "button": {
+ "import": "Nhập Cấu Hình",
+ "market": "Thăm Thị trường",
+ "start": "Bắt Đầu Ngay"
+ },
+ "guide": {
+ "agents": {
+ "replaceBtn": "Thay đổi",
+ "title": "Đề xuất trợ lý mới: "
+ },
+ "defaultMessage": "Tôi là trợ lý thông minh cá nhân của bạn {{appName}}. Xin hỏi tôi có thể giúp gì cho bạn ngay bây giờ?\nNếu bạn cần một trợ lý chuyên nghiệp hoặc tùy chỉnh hơn, hãy nhấp vào `+` để tạo trợ lý tùy chỉnh.",
+ "defaultMessageWithoutCreate": "Tôi là trợ lý thông minh cá nhân của bạn {{appName}}. Xin hỏi tôi có thể giúp gì cho bạn ngay bây giờ?",
+ "qa": {
+ "q01": "LobeHub là gì?",
+ "q02": "{{appName}} là gì?",
+ "q03": "{{appName}} có hỗ trợ cộng đồng không?",
+ "q04": "{{appName}} hỗ trợ những tính năng nào?",
+ "q05": "{{appName}} được triển khai và sử dụng như thế nào?",
+ "q06": "Giá cả của {{appName}} như thế nào?",
+ "q07": "{{appName}} có miễn phí không?",
+ "q08": "Có phiên bản dịch vụ đám mây không?",
+ "q09": "Có hỗ trợ mô hình ngôn ngữ địa phương không?",
+ "q10": "Có hỗ trợ nhận diện và tạo hình ảnh không?",
+ "q11": "Có hỗ trợ tổng hợp giọng nói và nhận diện giọng nói không?",
+ "q12": "Có hỗ trợ hệ thống plugin không?",
+ "q13": "Có thị trường riêng để lấy GPTs không?",
+ "q14": "Có hỗ trợ nhiều nhà cung cấp dịch vụ AI không?",
+ "q15": "Tôi nên làm gì nếu gặp vấn đề khi sử dụng?"
+ },
+ "questions": {
+ "moreBtn": "Tìm hiểu thêm",
+ "title": "Mọi người đều đặt câu hỏi: "
+ },
+ "welcome": {
+ "afternoon": "Chào buổi chiều",
+ "morning": "Chào buổi sáng",
+ "night": "Chào buổi tối",
+ "noon": "Chào buổi trưa"
+ }
+ },
+ "header": "Chào Mừng",
+ "pickAgent": "Hoặc chọn từ các mẫu đại lý sau",
+ "skip": "Bỏ Qua Tạo",
+ "slogan": {
+ "desc1": "Tiên phong trong kỷ nguyên mới của tư duy và sáng tạo. Được xây dựng cho bạn, Siêu Cá Nhân.",
+ "desc2": "Tạo đại lý đầu tiên và bắt đầu nào~",
+ "title": "Khám phá siêu năng lực của bộ não bạn"
+ }
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/auth.json b/DigitalHumanWeb/locales/zh-CN/auth.json
new file mode 100644
index 0000000..08711d7
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/auth.json
@@ -0,0 +1,8 @@
+{
+ "login": "登录",
+ "loginOrSignup": "登录 / 注册",
+ "profile": "个人资料",
+ "security": "安全",
+ "signout": "退出登录",
+ "signup": "注册"
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/chat.json b/DigitalHumanWeb/locales/zh-CN/chat.json
new file mode 100644
index 0000000..651d793
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/chat.json
@@ -0,0 +1,189 @@
+{
+ "ModelSwitch": {
+ "title": "模型"
+ },
+ "agentDefaultMessage": "你好,我是 **{{name}}**,你可以立即与我开始对话,也可以前往 [助手设置]({{url}}) 完善我的信息。",
+ "agentDefaultMessageWithSystemRole": "你好,我是 **{{name}}**,{{systemRole}},让我们开始对话吧!",
+ "agentDefaultMessageWithoutEdit": "你好,我是 **{{name}}**,让我们开始对话吧!",
+ "agents": "助手",
+ "artifact": {
+ "generating": "生成中",
+ "thinking": "思考中",
+ "thought": "思考过程",
+ "unknownTitle": "未命名作品"
+ },
+ "backToBottom": "跳转至当前",
+ "chatList": {
+ "longMessageDetail": "查看详情"
+ },
+ "clearCurrentMessages": "清空当前会话消息",
+ "confirmClearCurrentMessages": "即将清空当前会话消息,清空后将无法找回,请确认你的操作",
+ "confirmRemoveSessionItemAlert": "即将删除该助手,删除后该将无法找回,请确认你的操作",
+ "confirmRemoveSessionSuccess": "助手删除成功",
+ "defaultAgent": "自定义助手",
+ "defaultList": "助手会话列表",
+ "defaultSession": "自定义助手",
+ "duplicateSession": {
+ "loading": "复制中...",
+ "success": "复制成功",
+ "title": "{{title}} 副本"
+ },
+ "duplicateTitle": "{{title}} 副本",
+ "emptyAgent": "暂无助手",
+ "historyRange": "历史范围",
+ "inbox": {
+ "desc": "开启大脑集群,激发思维火花。你的智能助理,在这里与你交流一切",
+ "title": "随便聊聊"
+ },
+ "input": {
+ "addAi": "添加一条 AI 消息",
+ "addUser": "添加一条用户消息",
+ "more": "更多",
+ "send": "发送",
+ "clear": "清空内容",
+ "sendWithCmdEnter": "按 {{meta}} + Enter 键发送",
+ "sendWithEnter": "按 Enter 键发送",
+ "stop": "停止",
+ "warp": "换行"
+ },
+ "knowledgeBase": {
+ "all": "所有内容",
+ "allFiles": "所有文件",
+ "allKnowledgeBases": "所有知识库",
+ "disabled": "当前部署模式不支持知识库对话,如需使用,请切换到服务端数据库部署或使用 {{cloud}} 服务",
+ "library": {
+ "action": {
+ "add": "添加",
+ "detail": "详情",
+ "remove": "移除"
+ },
+ "title": "文件/知识库"
+ },
+ "relativeFilesOrKnowledgeBases": "关联文件/知识库",
+ "title": "知识库",
+ "uploadGuide": "上传过的文件可以在「知识库」中查看哦",
+ "viewMore": "查看更多"
+ },
+ "messageAction": {
+ "delAndRegenerate": "删除并重新生成",
+ "regenerate": "重新生成"
+ },
+ "newAgent": "新建助手",
+ "pin": "置顶",
+ "pinOff": "取消置顶",
+ "rag": {
+ "referenceChunks": "引用源",
+ "userQuery": {
+ "actions": {
+ "delete": "删除 Query 重写",
+ "regenerate": "重新生成 Query"
+ }
+ }
+ },
+ "regenerate": "重新生成",
+ "roleAndArchive": "角色与记录",
+ "searchAgentPlaceholder": "搜索助手...",
+ "sendPlaceholder": "输入聊天内容...",
+ "sessionGroup": {
+ "config": "分组管理",
+ "confirmRemoveGroupAlert": "即将删除该分组,删除后该分组的助手将移动到默认列表,请确认你的操作",
+ "createAgentSuccess": "助手创建成功",
+ "createGroup": "添加新分组",
+ "createSuccess": "分组创建成功",
+ "creatingAgent": "助手创建中...",
+ "inputPlaceholder": "请输入分组名称...",
+ "moveGroup": "移动到分组",
+ "newGroup": "新分组",
+ "rename": "重命名分组",
+ "renameSuccess": "重命名成功",
+ "sortSuccess": "重新排序成功",
+ "sorting": "分组排序更新中...",
+ "tooLong": "分组名称长度需在 1-20 之内"
+ },
+ "shareModal": {
+ "download": "下载截图",
+ "imageType": "图片格式",
+ "screenshot": "截图",
+ "settings": "导出设置",
+ "shareToShareGPT": "生成 ShareGPT 分享链接",
+ "withBackground": "包含背景图片",
+ "withFooter": "包含页脚",
+ "withPluginInfo": "包含插件信息",
+ "withSystemRole": "包含助手角色设定"
+ },
+ "stt": {
+ "action": "语音输入",
+ "loading": "识别中...",
+ "prettifying": "润色中..."
+ },
+ "temp": "临时",
+ "tokenDetails": {
+ "chats": "会话消息",
+ "rest": "剩余可用",
+ "systemRole": "角色设定",
+ "title": "上下文明细",
+ "tools": "插件设定",
+ "total": "总共可用",
+ "used": "总计使用"
+ },
+ "tokenTag": {
+ "overload": "超过限制",
+ "remained": "剩余",
+ "used": "使用"
+ },
+ "topic": {
+ "actions": {
+ "autoRename": "智能重命名",
+ "duplicate": "创建副本",
+ "export": "导出话题"
+ },
+ "checkOpenNewTopic": "是否开启新话题?",
+ "checkSaveCurrentMessages": "是否保存当前会话为话题?",
+ "confirmRemoveAll": "即将删除全部话题,删除后将不可恢复,请谨慎操作。",
+ "confirmRemoveTopic": "即将删除该话题,删除后将不可恢复,请谨慎操作。",
+ "confirmRemoveUnstarred": "即将删除未收藏话题,删除后将不可恢复,请谨慎操作。",
+ "defaultTitle": "默认话题",
+ "duplicateLoading": "话题复制中...",
+ "duplicateSuccess": "话题复制成功",
+ "guide": {
+ "desc": "点击发送左侧按钮可将当前会话保存为历史话题,并开启新一轮会话",
+ "title": "话题列表"
+ },
+ "openNewTopic": "开启新话题",
+ "removeAll": "删除全部话题",
+ "removeUnstarred": "删除未收藏话题",
+ "saveCurrentMessages": "将当前会话保存为话题",
+ "searchPlaceholder": "搜索话题...",
+ "title": "话题"
+ },
+ "translate": {
+ "action": "翻译",
+ "clear": "删除翻译"
+ },
+ "tts": {
+ "action": "语音朗读",
+ "clear": "删除语音"
+ },
+ "updateAgent": "更新助理信息",
+ "upload": {
+ "action": {
+ "fileUpload": "上传文件",
+ "folderUpload": "上传文件夹",
+ "imageDisabled": "当前模型不支持视觉识别,请切换模型后使用",
+ "imageUpload": "上传图片",
+ "tooltip": "上传"
+ },
+ "clientMode": {
+ "actionFiletip": "上传文件",
+ "actionTooltip": "上传",
+ "disabled": "当前模型不支持视觉识别和文件分析,请切换模型后使用"
+ },
+ "preview": {
+ "prepareTasks": "准备分块...",
+ "status": {
+ "pending": "准备上传...",
+ "processing": "文件处理中..."
+ }
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/clerk.json b/DigitalHumanWeb/locales/zh-CN/clerk.json
new file mode 100644
index 0000000..d2c2f4e
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/clerk.json
@@ -0,0 +1,769 @@
+{
+ "backButton": "返回",
+ "badge__default": "默认",
+ "badge__otherImpersonatorDevice": "其他模拟器设备",
+ "badge__primary": "主要",
+ "badge__requiresAction": "需要采取行动",
+ "badge__thisDevice": "此设备",
+ "badge__unverified": "未验证",
+ "badge__userDevice": "用户设备",
+ "badge__you": "您",
+ "createOrganization": {
+ "formButtonSubmit": "创建组织",
+ "invitePage": {
+ "formButtonReset": "跳过"
+ },
+ "title": "创建组织"
+ },
+ "dates": {
+ "lastDay": "昨天 {{ date | timeString('zh-CN') }}",
+ "next6Days": "{{ date | weekday('zh-CN','long') }} {{ date | timeString('zh-CN') }}",
+ "nextDay": "明天 {{ date | timeString('zh-CN') }}",
+ "numeric": "{{ date | numeric('zh-CN') }}",
+ "previous6Days": "上周{{ date | weekday('zh-CN','long') }} {{ date | timeString('zh-CN') }}",
+ "sameDay": "今天 {{ date | timeString('zh-CN') }}"
+ },
+ "dividerText": "或者",
+ "footerActionLink__useAnotherMethod": "使用另一种方法",
+ "footerPageLink__help": "帮助",
+ "footerPageLink__privacy": "隐私",
+ "footerPageLink__terms": "条款",
+ "formButtonPrimary": "继续",
+ "formButtonPrimary__verify": "验证",
+ "formFieldAction__forgotPassword": "忘记密码?",
+ "formFieldError__matchingPasswords": "密码匹配。",
+ "formFieldError__notMatchingPasswords": "密码不匹配。",
+ "formFieldError__verificationLinkExpired": "验证链接已过期。请申请新的链接。",
+ "formFieldHintText__optional": "选填",
+ "formFieldHintText__slug": "Slug 是一个人类可读的 ID,它必须是唯一的。它通常用于 URL 中。",
+ "formFieldInputPlaceholder__backupCode": "",
+ "formFieldInputPlaceholder__confirmDeletionUserAccount": "删除帐户",
+ "formFieldInputPlaceholder__emailAddress": "",
+ "formFieldInputPlaceholder__emailAddress_username": "",
+ "formFieldInputPlaceholder__emailAddresses": "输入或粘贴一个或多个电子邮件地址,用空格或逗号分隔",
+ "formFieldInputPlaceholder__firstName": "",
+ "formFieldInputPlaceholder__lastName": "",
+ "formFieldInputPlaceholder__organizationDomain": "",
+ "formFieldInputPlaceholder__organizationDomainEmailAddress": "",
+ "formFieldInputPlaceholder__organizationName": "",
+ "formFieldInputPlaceholder__organizationSlug": "my-org",
+ "formFieldInputPlaceholder__password": "",
+ "formFieldInputPlaceholder__phoneNumber": "",
+ "formFieldInputPlaceholder__username": "",
+ "formFieldLabel__automaticInvitations": "为此域名启用自动邀请",
+ "formFieldLabel__backupCode": "备用代码",
+ "formFieldLabel__confirmDeletion": "确认",
+ "formFieldLabel__confirmPassword": "确认密码",
+ "formFieldLabel__currentPassword": "当前密码",
+ "formFieldLabel__emailAddress": "电子邮件地址",
+ "formFieldLabel__emailAddress_username": "电子邮件地址或用户名",
+ "formFieldLabel__emailAddresses": "电子邮件地址",
+ "formFieldLabel__firstName": "名字",
+ "formFieldLabel__lastName": "姓氏",
+ "formFieldLabel__newPassword": "新密码",
+ "formFieldLabel__organizationDomain": "域名",
+ "formFieldLabel__organizationDomainDeletePending": "删除待处理的邀请和建议",
+ "formFieldLabel__organizationDomainEmailAddress": "验证邮箱地址",
+ "formFieldLabel__organizationDomainEmailAddressDescription": "输入此域名下的一个邮箱地址以接收验证码并验证此域名。",
+ "formFieldLabel__organizationName": "组织名称",
+ "formFieldLabel__organizationSlug": "URL 简称",
+ "formFieldLabel__passkeyName": "Passkey 名称",
+ "formFieldLabel__password": "密码",
+ "formFieldLabel__phoneNumber": "电话号码",
+ "formFieldLabel__role": "角色",
+ "formFieldLabel__signOutOfOtherSessions": "登出所有其他设备",
+ "formFieldLabel__username": "用户名",
+ "impersonationFab": {
+ "action__signOut": "退出登录",
+ "title": "以 {{identifier}} 的身份登录"
+ },
+ "locale": "zh-CN",
+ "maintenanceMode": "我们目前正在进行维护,但不用担心,不会超过几分钟。",
+ "membershipRole__admin": "管理员",
+ "membershipRole__basicMember": "成员",
+ "membershipRole__guestMember": "访客",
+ "organizationList": {
+ "action__createOrganization": "创建组织",
+ "action__invitationAccept": "加入",
+ "action__suggestionsAccept": "请求加入",
+ "createOrganization": "创建组织",
+ "invitationAcceptedLabel": "已加入",
+ "subtitle": "以继续使用 {{applicationName}}",
+ "suggestionsAcceptedLabel": "等待批准",
+ "title": "选择一个帐户",
+ "titleWithoutPersonal": "选择一个组织"
+ },
+ "organizationProfile": {
+ "badge__automaticInvitation": "自动邀请",
+ "badge__automaticSuggestion": "自动建议",
+ "badge__manualInvitation": "无自动注册",
+ "badge__unverified": "未验证",
+ "createDomainPage": {
+ "subtitle": "添加域名以进行验证。具有此域名电子邮件地址的用户可以自动加入组织或请求加入。",
+ "title": "添加域名"
+ },
+ "invitePage": {
+ "detailsTitle__inviteFailed": "邀请未发送。以下电子邮件地址已有待处理的邀请:{{email_addresses}}。",
+ "formButtonPrimary__continue": "发送邀请",
+ "selectDropdown__role": "选择角色",
+ "subtitle": "输入或粘贴一个或多个电子邮件地址,用空格或逗号分隔。",
+ "successMessage": "邀请已成功发送",
+ "title": "邀请新成员"
+ },
+ "membersPage": {
+ "action__invite": "邀请",
+ "activeMembersTab": {
+ "menuAction__remove": "移除成员",
+ "tableHeader__actions": "",
+ "tableHeader__joined": "加入时间",
+ "tableHeader__role": "角色",
+ "tableHeader__user": "用户"
+ },
+ "detailsTitle__emptyRow": "没有可显示的成员",
+ "invitationsTab": {
+ "autoInvitations": {
+ "headerSubtitle": "通过连接电子邮件域邀请用户加入组织。任何使用匹配电子邮件域注册的用户都可以随时加入组织。",
+ "headerTitle": "自动邀请",
+ "primaryButton": "管理已验证域名"
+ },
+ "table__emptyRow": "没有可显示的邀请"
+ },
+ "invitedMembersTab": {
+ "menuAction__revoke": "撤销邀请",
+ "tableHeader__invited": "已邀请"
+ },
+ "requestsTab": {
+ "autoSuggestions": {
+ "headerSubtitle": "使用匹配电子邮件域注册的用户可以看到请求加入组织的建议。",
+ "headerTitle": "自动建议",
+ "primaryButton": "管理已验证的域"
+ },
+ "menuAction__approve": "批准",
+ "menuAction__reject": "拒绝",
+ "tableHeader__requested": "请求访问",
+ "table__emptyRow": "没有请求显示"
+ },
+ "start": {
+ "headerTitle__invitations": "邀请",
+ "headerTitle__members": "成员",
+ "headerTitle__requests": "请求"
+ }
+ },
+ "navbar": {
+ "description": "管理您的组织",
+ "general": "常规",
+ "members": "成员",
+ "title": "组织"
+ },
+ "profilePage": {
+ "dangerSection": {
+ "deleteOrganization": {
+ "actionDescription": "在下方输入“{{organizationName}}”以继续。",
+ "messageLine1": "您确定要删除此组织吗?",
+ "messageLine2": "此操作是永久且不可逆转的。",
+ "successMessage": "您已删除组织",
+ "title": "删除组织"
+ },
+ "leaveOrganization": {
+ "actionDescription": "在下方输入“{{organizationName}}”以继续。",
+ "messageLine1": "您确定要离开此组织吗?您将失去对该组织及其应用程序的访问权限。",
+ "messageLine2": "此操作是永久且不可逆转的。",
+ "successMessage": "您已离开组织",
+ "title": "离开组织"
+ },
+ "title": "危险"
+ },
+ "domainSection": {
+ "menuAction__manage": "管理",
+ "menuAction__remove": "删除",
+ "menuAction__verify": "验证",
+ "primaryButton": "添加域",
+ "subtitle": "允许用户根据已验证的电子邮件域自动加入组织或请求加入。",
+ "title": "已验证的域"
+ },
+ "successMessage": "组织已更新",
+ "title": "更新配置文件"
+ },
+ "removeDomainPage": {
+ "messageLine1": "电子邮件域 {{domain}} 将被移除。",
+ "messageLine2": "此后用户将无法自动加入组织。",
+ "successMessage": "{{domain}} 已移除",
+ "title": "移除域"
+ },
+ "start": {
+ "headerTitle__general": "常规",
+ "headerTitle__members": "成员",
+ "profileSection": {
+ "primaryButton": "更新配置文件",
+ "title": "组织配置文件",
+ "uploadAction__title": "标识"
+ }
+ },
+ "verifiedDomainPage": {
+ "dangerTab": {
+ "calloutInfoLabel": "移除此域将影响已邀请的用户。",
+ "removeDomainActionLabel__remove": "移除域",
+ "removeDomainSubtitle": "从已验证的域中移除此域",
+ "removeDomainTitle": "移除域"
+ },
+ "enrollmentTab": {
+ "automaticInvitationOption__description": "用户注册时将自动邀请加入组织,并随时可以加入。",
+ "automaticInvitationOption__label": "自动邀请",
+ "automaticSuggestionOption__description": "用户将收到请求加入的建议,但必须由管理员批准后才能加入组织。",
+ "automaticSuggestionOption__label": "自动建议",
+ "calloutInfoLabel": "更改注册模式仅影响新用户。",
+ "calloutInvitationCountLabel": "已发送给用户的待处理邀请:{{count}}",
+ "calloutSuggestionCountLabel": "已发送给用户的待处理建议:{{count}}",
+ "manualInvitationOption__description": "用户只能手动邀请加入组织。",
+ "manualInvitationOption__label": "不自动加入",
+ "subtitle": "选择来自此域的用户如何加入组织。"
+ },
+ "start": {
+ "headerTitle__danger": "危险",
+ "headerTitle__enrollment": "注册选项"
+ },
+ "subtitle": "域 {{domain}} 已验证。继续选择注册模式。",
+ "title": "更新 {{domain}}"
+ },
+ "verifyDomainPage": {
+ "formSubtitle": "输入发送到您电子邮件地址的验证代码",
+ "formTitle": "验证代码",
+ "resendButton": "没有收到验证码?重新发送",
+ "subtitle": "需要通过电子邮件验证域 {{domainName}}。",
+ "subtitleVerificationCodeScreen": "已发送验证码至 {{emailAddress}}。输入验证码以继续。",
+ "title": "验证域"
+ }
+ },
+ "organizationSwitcher": {
+ "action__createOrganization": "创建组织",
+ "action__invitationAccept": "加入",
+ "action__manageOrganization": "管理",
+ "action__suggestionsAccept": "请求加入",
+ "notSelected": "未选择组织",
+ "personalWorkspace": "个人账户",
+ "suggestionsAcceptedLabel": "待批准"
+ },
+ "paginationButton__next": "下一页",
+ "paginationButton__previous": "上一页",
+ "paginationRowText__displaying": "显示",
+ "paginationRowText__of": "共",
+ "signIn": {
+ "accountSwitcher": {
+ "action__addAccount": "添加账户",
+ "action__signOutAll": "退出所有账户",
+ "subtitle": "选择要继续使用的账户。",
+ "title": "选择一个账户"
+ },
+ "alternativeMethods": {
+ "actionLink": "获取帮助",
+ "actionText": "没有这些?",
+ "blockButton__backupCode": "使用备用代码",
+ "blockButton__emailCode": "将验证码发送至 {{identifier}}",
+ "blockButton__emailLink": "将链接发送至 {{identifier}}",
+ "blockButton__passkey": "使用您的密钥登录",
+ "blockButton__password": "使用密码登录",
+ "blockButton__phoneCode": "将短信验证码发送至 {{identifier}}",
+ "blockButton__totp": "使用您的身份验证器应用程序",
+ "getHelp": {
+ "blockButton__emailSupport": "邮件支持",
+ "content": "如果您在登录您的帐户时遇到困难,请给我们发送电子邮件,我们将尽快与您合作恢复访问权限。",
+ "title": "获取帮助"
+ },
+ "subtitle": "遇到问题?您可以使用以下任一方法登录。",
+ "title": "使用其他方法"
+ },
+ "backupCodeMfa": {
+ "subtitle": "您的备用代码是在设置两步验证时获得的。",
+ "title": "输入备用代码"
+ },
+ "emailCode": {
+ "formTitle": "验证码",
+ "resendButton": "没有收到验证码?重新发送",
+ "subtitle": "继续访问 {{applicationName}}",
+ "title": "查看您的电子邮件"
+ },
+ "emailLink": {
+ "expired": {
+ "subtitle": "返回原始标签继续。",
+ "title": "此验证链接已过期"
+ },
+ "failed": {
+ "subtitle": "返回原始标签继续。",
+ "title": "此验证链接无效"
+ },
+ "formSubtitle": "使用发送到您电子邮件的验证链接",
+ "formTitle": "验证链接",
+ "loading": {
+ "subtitle": "您将很快被重定向",
+ "title": "登录中..."
+ },
+ "resendButton": "没有收到链接?重新发送",
+ "subtitle": "继续访问 {{applicationName}}",
+ "title": "查看您的电子邮件",
+ "unusedTab": {
+ "title": "您可以关闭此标签"
+ },
+ "verified": {
+ "subtitle": "您将很快被重定向",
+ "title": "成功登录"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "返回原始标签继续",
+ "subtitleNewTab": "返回新打开的标签继续",
+ "titleNewTab": "在其他标签上登录"
+ }
+ },
+ "forgotPassword": {
+ "formTitle": "重置密码代码",
+ "resendButton": "没有收到验证码?重新发送",
+ "subtitle": "重置您的密码",
+ "subtitle_email": "首先,输入发送到您电子邮件地址的代码",
+ "subtitle_phone": "首先,输入发送到您手机的代码",
+ "title": "重置密码"
+ },
+ "forgotPasswordAlternativeMethods": {
+ "blockButton__resetPassword": "重置您的密码",
+ "label__alternativeMethods": "或者,使用其他方法登录",
+ "title": "忘记密码?"
+ },
+ "noAvailableMethods": {
+ "message": "无法继续登录。没有可用的身份验证因素。",
+ "subtitle": "发生错误",
+ "title": "无法登录"
+ },
+ "passkey": {
+ "subtitle": "使用您的密钥确认是您本人。您的设备可能会要求您的指纹、面容或屏幕锁。",
+ "title": "使用您的密钥"
+ },
+ "password": {
+ "actionLink": "使用其他方法",
+ "subtitle": "输入与您的帐户关联的密码",
+ "title": "输入您的密码"
+ },
+ "passwordPwned": {
+ "title": "密码已泄露"
+ },
+ "phoneCode": {
+ "formTitle": "验证码",
+ "resendButton": "没有收到验证码?重新发送",
+ "subtitle": "继续访问 {{applicationName}}",
+ "title": "检查您的手机"
+ },
+ "phoneCodeMfa": {
+ "formTitle": "验证码",
+ "resendButton": "没有收到验证码?重新发送",
+ "subtitle": "请继续,输入发送到您手机的验证码",
+ "title": "检查您的手机"
+ },
+ "resetPassword": {
+ "formButtonPrimary": "重置密码",
+ "requiredMessage": "出于安全原因,需要重置您的密码。",
+ "successMessage": "您的密码已成功更改。正在登录,请稍候。",
+ "title": "设置新密码"
+ },
+ "resetPasswordMfa": {
+ "detailsLabel": "在重置密码之前,我们需要验证您的身份。"
+ },
+ "start": {
+ "actionLink": "注册",
+ "actionLink__use_email": "使用电子邮件",
+ "actionLink__use_email_username": "使用电子邮件或用户名",
+ "actionLink__use_passkey": "改用密钥",
+ "actionLink__use_phone": "使用手机",
+ "actionLink__use_username": "使用用户名",
+ "actionText": "没有帐户?",
+ "subtitle": "欢迎回来!请登录以继续",
+ "title": "登录到 {{applicationName}}"
+ },
+ "totpMfa": {
+ "formTitle": "验证码",
+ "subtitle": "请继续,输入您的身份验证器应用程序生成的验证码",
+ "title": "双重验证"
+ }
+ },
+ "signInEnterPasswordTitle": "输入您的密码",
+ "signUp": {
+ "continue": {
+ "actionLink": "登录",
+ "actionText": "已有帐户?",
+ "subtitle": "请填写剩余的细节以继续。",
+ "title": "填写缺失的字段"
+ },
+ "emailCode": {
+ "formSubtitle": "输入发送到您电子邮件地址的验证码",
+ "formTitle": "验证码",
+ "resendButton": "没有收到验证码?重新发送",
+ "subtitle": "输入发送到您电子邮件的验证码",
+ "title": "验证您的电子邮件"
+ },
+ "emailLink": {
+ "formSubtitle": "使用发送到您电子邮件地址的验证链接",
+ "formTitle": "验证链接",
+ "loading": {
+ "title": "注册中..."
+ },
+ "resendButton": "没有收到链接?重新发送",
+ "subtitle": "继续访问 {{applicationName}}",
+ "title": "验证您的电子邮件",
+ "verified": {
+ "title": "成功注册"
+ },
+ "verifiedSwitchTab": {
+ "subtitle": "返回新打开的标签继续",
+ "subtitleNewTab": "返回上一个标签继续",
+ "title": "成功验证电子邮件"
+ }
+ },
+ "phoneCode": {
+ "formSubtitle": "输入发送到您电话号码的验证码",
+ "formTitle": "验证码",
+ "resendButton": "没有收到验证码?重新发送",
+ "subtitle": "输入发送到您手机的验证码",
+ "title": "验证您的手机"
+ },
+ "start": {
+ "actionLink": "登录",
+ "actionText": "已有帐户?",
+ "subtitle": "欢迎!请填写详细信息开始。",
+ "title": "创建您的帐户"
+ }
+ },
+ "socialButtonsBlockButton": "继续使用 {{provider|titleize}}",
+ "unstable__errors": {
+ "captcha_invalid": "由于安全验证失败,注册失败。请刷新页面重试,或联系支持获取更多帮助。",
+ "captcha_unavailable": "由于机器人验证失败,注册失败。请刷新页面重试,或联系支持获取更多帮助。",
+ "form_code_incorrect": "",
+ "form_identifier_exists": "",
+ "form_identifier_exists__email_address": "此电子邮件地址已被使用。请尝试另一个。",
+ "form_identifier_exists__phone_number": "此电话号码已被使用。请尝试另一个。",
+ "form_identifier_exists__username": "此用户名已被使用。请尝试另一个。",
+ "form_identifier_not_found": "",
+ "form_param_format_invalid": "",
+ "form_param_format_invalid__email_address": "电子邮件地址必须是有效的电子邮件地址。",
+ "form_param_format_invalid__phone_number": "电话号码必须符合有效的国际格式。",
+ "form_param_max_length_exceeded__first_name": "名字不应超过256个字符。",
+ "form_param_max_length_exceeded__last_name": "姓氏不应超过256个字符。",
+ "form_param_max_length_exceeded__name": "名称不应超过256个字符。",
+ "form_param_nil": "",
+ "form_password_incorrect": "",
+ "form_password_length_too_short": "",
+ "form_password_not_strong_enough": "您的密码不够强大。",
+ "form_password_pwned": "此密码已经被发现为泄露的一部分,不能使用,请尝试其他密码。",
+ "form_password_pwned__sign_in": "此密码已经被发现为泄露的一部分,不能使用,请重置您的密码。",
+ "form_password_size_in_bytes_exceeded": "您的密码已超过允许的最大字节数,请缩短或删除一些特殊字符。",
+ "form_password_validation_failed": "密码不正确",
+ "form_username_invalid_character": "",
+ "form_username_invalid_length": "",
+ "identification_deletion_failed": "您不能删除您的最后一个身份验证。",
+ "not_allowed_access": "",
+ "passkey_already_exists": "此设备已注册过通行密钥。",
+ "passkey_not_supported": "此设备不支持通行密钥。",
+ "passkey_pa_not_supported": "注册需要平台验证器,但设备不支持。",
+ "passkey_registration_cancelled": "通行密钥注册已取消或超时。",
+ "passkey_retrieval_cancelled": "通行密钥验证已取消或超时。",
+ "passwordComplexity": {
+ "maximumLength": "少于{{length}}个字符",
+ "minimumLength": "{{length}}个或更多字符",
+ "requireLowercase": "一个小写字母",
+ "requireNumbers": "一个数字",
+ "requireSpecialCharacter": "一个特殊字符",
+ "requireUppercase": "一个大写字母",
+ "sentencePrefix": "您的密码必须包含"
+ },
+ "phone_number_exists": "此电话号码已被使用。请尝试另一个。",
+ "zxcvbn": {
+ "couldBeStronger": "您的密码可以更强大。尝试添加更多字符。",
+ "goodPassword": "您的密码符合所有必要要求。",
+ "notEnough": "您的密码不够强大。",
+ "suggestions": {
+ "allUppercase": "将一些字母大写,但不是全部。",
+ "anotherWord": "添加更少见的单词。",
+ "associatedYears": "避免与您相关的年份。",
+ "capitalization": "大写不止第一个字母。",
+ "dates": "避免与您相关的日期和年份。",
+ "l33t": "避免可预测的字母替换,如将'@'替换为'a'。",
+ "longerKeyboardPattern": "使用更长的键盘模式,多次改变输入方向。",
+ "noNeed": "您可以创建强大的密码,而无需使用符号、数字或大写字母。",
+ "pwned": "如果您在其他地方使用此密码,应该更改它。",
+ "recentYears": "避免最近的年份。",
+ "repeated": "避免重复的单词和字符。",
+ "reverseWords": "避免常见单词的反向拼写。",
+ "sequences": "避免常见的字符序列。",
+ "useWords": "使用多个单词,但避免常见短语。"
+ },
+ "warnings": {
+ "common": "这是一个常用的密码。",
+ "commonNames": "常见的名字和姓氏容易被猜到。",
+ "dates": "日期容易被猜到。",
+ "extendedRepeat": "重复的字符模式如“abcabcabc”容易被猜到。",
+ "keyPattern": "简短的键盘模式容易被猜到。",
+ "namesByThemselves": "单个名字或姓氏容易被猜到。",
+ "pwned": "您的密码在互联网上的数据泄露中曝光。",
+ "recentYears": "最近的年份容易被猜到。",
+ "sequences": "常见的字符序列如“abc”容易被猜到。",
+ "similarToCommon": "这与常用密码相似。",
+ "simpleRepeat": "重复的字符如“aaa”容易被猜到。",
+ "straightRow": "键盘上直线排列的键易被猜到。",
+ "topHundred": "这是一个常用密码。",
+ "topTen": "这是一个广泛使用的密码。",
+ "userInputs": "密码中不应包含任何个人或页面相关数据。",
+ "wordByItself": "单个单词容易被猜到。"
+ }
+ }
+ },
+ "userButton": {
+ "action__addAccount": "添加账户",
+ "action__manageAccount": "管理账户",
+ "action__signOut": "登出",
+ "action__signOutAll": "从所有账户登出"
+ },
+ "userProfile": {
+ "backupCodePage": {
+ "actionLabel__copied": "已复制!",
+ "actionLabel__copy": "复制全部",
+ "actionLabel__download": "下载 .txt",
+ "actionLabel__print": "打印",
+ "infoText1": "此帐户将启用备份代码。",
+ "infoText2": "保持备份代码的机密性并安全存储。如果怀疑备份代码已泄露,可以重新生成备份代码。",
+ "subtitle__codelist": "安全存储并保密备份代码。",
+ "successMessage": "备份代码现已启用。如果您无法访问您的身份验证设备,可以使用其中之一登录您的帐户。每个代码只能使用一次。",
+ "successSubtitle": "如果您无法访问您的身份验证设备,您可以使用其中之一登录您的帐户。",
+ "title": "添加备份代码验证",
+ "title__codelist": "备份代码"
+ },
+ "connectedAccountPage": {
+ "formHint": "选择要连接您的帐户的提供商。",
+ "formHint__noAccounts": "没有可用的外部帐户提供商。",
+ "removeResource": {
+ "messageLine1": "{{identifier}} 将从此帐户中删除。",
+ "messageLine2": "您将无法再使用此连接的帐户,并且任何依赖功能将不再起作用。",
+ "successMessage": "{{connectedAccount}} 已从您的帐户中删除。",
+ "title": "删除连接的帐户"
+ },
+ "socialButtonsBlockButton": "{{provider|titleize}}",
+ "successMessage": "提供商已添加到您的帐户",
+ "title": "添加连接的帐户"
+ },
+ "deletePage": {
+ "actionDescription": "在下方输入“删除帐户”以继续。",
+ "confirm": "删除帐户",
+ "messageLine1": "您确定要删除您的帐户吗?",
+ "messageLine2": "此操作是永久且不可逆转的。",
+ "title": "删除帐户"
+ },
+ "emailAddressPage": {
+ "emailCode": {
+ "formHint": "将发送包含验证码的电子邮件至此电子邮件地址。",
+ "formSubtitle": "输入发送至 {{identifier}} 的验证码。",
+ "formTitle": "验证码",
+ "resendButton": "没有收到验证码?重新发送",
+ "successMessage": "电子邮件 {{identifier}} 已添加到您的帐户。"
+ },
+ "emailLink": {
+ "formHint": "将发送包含验证链接的电子邮件至此电子邮件地址。",
+ "formSubtitle": "点击发送至 {{identifier}} 的电子邮件中的验证链接。",
+ "formTitle": "验证链接",
+ "resendButton": "没有收到链接?重新发送",
+ "successMessage": "电子邮件 {{identifier}} 已添加到您的帐户。"
+ },
+ "removeResource": {
+ "messageLine1": "{{identifier}} 将从此帐户中删除。",
+ "messageLine2": "您将无法再使用此电子邮件地址登录。",
+ "successMessage": "{{emailAddress}} 已从您的帐户中删除。",
+ "title": "删除电子邮件地址"
+ },
+ "title": "添加电子邮件地址",
+ "verifyTitle": "验证电子邮件地址"
+ },
+ "formButtonPrimary__add": "添加",
+ "formButtonPrimary__continue": "继续",
+ "formButtonPrimary__finish": "完成",
+ "formButtonPrimary__remove": "删除",
+ "formButtonPrimary__save": "保存",
+ "formButtonReset": "取消",
+ "mfaPage": {
+ "formHint": "选择要添加的方法。",
+ "title": "添加双重验证"
+ },
+ "mfaPhoneCodePage": {
+ "backButton": "使用现有号码",
+ "primaryButton__addPhoneNumber": "添加电话号码",
+ "removeResource": {
+ "messageLine1": "{{identifier}} 在登录时将不再接收验证代码。",
+ "messageLine2": "您的帐户可能不够安全。您确定要继续吗?",
+ "successMessage": "{{mfaPhoneCode}} 的短信代码双重验证已移除。",
+ "title": "移除双重验证"
+ },
+ "subtitle__availablePhoneNumbers": "选择现有电话号码注册短信代码双重验证,或添加新号码。",
+ "subtitle__unavailablePhoneNumbers": "没有可用的电话号码用于注册短信代码双重验证,请添加新号码。",
+ "successMessage1": "登录时,您需要输入发送至此电话号码的验证代码作为额外步骤。",
+ "successMessage2": "保存这些备份代码并将其安全存储。如果无法访问您的身份验证设备,可以使用备份代码登录。",
+ "successTitle": "短信代码验证已启用",
+ "title": "添加短信代码验证"
+ },
+ "mfaTOTPPage": {
+ "authenticatorApp": {
+ "buttonAbleToScan__nonPrimary": "扫描 QR 码",
+ "buttonUnableToScan__nonPrimary": "无法扫描 QR 码?",
+ "infoText__ableToScan": "在您的身份验证器应用中设置新的登录方法,并扫描以下 QR 码将其链接到您的帐户。",
+ "infoText__unableToScan": "在您的身份验证器中设置新的登录方法,并输入下面提供的密钥。",
+ "inputLabel__unableToScan1": "确保已启用基于时间或一次性密码,然后完成链接您的帐户。",
+ "inputLabel__unableToScan2": "或者,如果您的身份验证器支持 TOTP URI,您也可以复制完整的 URI。"
+ },
+ "removeResource": {
+ "messageLine1": "登录时将不再需要此身份验证器的验证代码。",
+ "messageLine2": "您的帐户可能不够安全。您确定要继续吗?",
+ "successMessage": "通过身份验证器应用的双重验证已移除。",
+ "title": "移除双重验证"
+ },
+ "successMessage": "双重验证现已启用。登录时,您将需要输入此身份验证器生成的验证代码作为额外步骤。",
+ "title": "添加身份验证器应用",
+ "verifySubtitle": "输入您的身份验证器生成的验证代码",
+ "verifyTitle": "验证代码"
+ },
+ "mobileButton__menu": "菜单",
+ "navbar": {
+ "account": "个人资料",
+ "description": "管理您的帐户信息",
+ "security": "安全",
+ "title": "帐户"
+ },
+ "passkeyScreen": {
+ "removeResource": {
+ "messageLine1": "{{name}} 将从此帐户中删除。",
+ "title": "删除密码"
+ },
+ "subtitle__rename": "您可以更改密码名称以便更容易找到。",
+ "title__rename": "重命名密码"
+ },
+ "passwordPage": {
+ "checkboxInfoText__signOutOfOtherSessions": "建议注销所有可能使用旧密码的其他设备。",
+ "readonly": "您当前无法编辑密码,因为只能通过企业连接登录。",
+ "successMessage__set": "您的密码已设置。",
+ "successMessage__signOutOfOtherSessions": "所有其他设备已注销。",
+ "successMessage__update": "您的密码已更新。",
+ "title__set": "设置密码",
+ "title__update": "更新密码"
+ },
+ "phoneNumberPage": {
+ "infoText": "将向此电话号码发送包含验证码的短信。可能会收取短信和数据费用。",
+ "removeResource": {
+ "messageLine1": "{{identifier}} 将从此帐户中删除。",
+ "messageLine2": "您将无法再使用此电话号码登录。",
+ "successMessage": "{{phoneNumber}} 已从您的帐户中删除。",
+ "title": "删除电话号码"
+ },
+ "successMessage": "{{identifier}} 已添加到您的帐户。",
+ "title": "添加电话号码",
+ "verifySubtitle": "输入发送至 {{identifier}} 的验证码",
+ "verifyTitle": "验证电话号码"
+ },
+ "profilePage": {
+ "fileDropAreaHint": "推荐尺寸 1:1,最多 10MB。",
+ "imageFormDestructiveActionSubtitle": "移除",
+ "imageFormSubtitle": "上传",
+ "imageFormTitle": "个人资料图片",
+ "readonly": "您的个人资料信息由企业连接提供,无法编辑。",
+ "successMessage": "您的个人资料已更新。",
+ "title": "更新个人资料"
+ },
+ "start": {
+ "activeDevicesSection": {
+ "destructiveAction": "注销设备",
+ "title": "活动设备"
+ },
+ "connectedAccountsSection": {
+ "actionLabel__connectionFailed": "重试",
+ "actionLabel__reauthorize": "立即授权",
+ "destructiveActionTitle": "删除",
+ "primaryButton": "连接帐户",
+ "subtitle__reauthorize": "所需的范围已更新,您可能体验到功能受限。请重新授权此应用以避免任何问题。",
+ "title": "连接的帐户"
+ },
+ "dangerSection": {
+ "deleteAccountButton": "删除帐户",
+ "title": "删除帐户"
+ },
+ "emailAddressesSection": {
+ "destructiveAction": "删除电子邮件",
+ "detailsAction__nonPrimary": "设为主要",
+ "detailsAction__primary": "完成验证",
+ "detailsAction__unverified": "验证",
+ "primaryButton": "添加电子邮件地址",
+ "title": "电子邮件地址"
+ },
+ "enterpriseAccountsSection": {
+ "title": "企业帐户"
+ },
+ "headerTitle__account": "个人资料详情",
+ "headerTitle__security": "安全",
+ "mfaSection": {
+ "backupCodes": {
+ "actionLabel__regenerate": "重新生成",
+ "headerTitle": "备份代码",
+ "subtitle__regenerate": "获取一组新的安全备份代码。之前的备份代码将被删除且无法使用。",
+ "title__regenerate": "重新生成备份代码"
+ },
+ "phoneCode": {
+ "actionLabel__setDefault": "设为默认",
+ "destructiveActionLabel": "删除"
+ },
+ "primaryButton": "添加双重验证",
+ "title": "双重验证",
+ "totp": {
+ "destructiveActionTitle": "删除",
+ "headerTitle": "身份验证器应用"
+ }
+ },
+ "passkeysSection": {
+ "menuAction__destructive": "删除",
+ "menuAction__rename": "重命名",
+ "title": "密码"
+ },
+ "passwordSection": {
+ "primaryButton__setPassword": "设置密码",
+ "primaryButton__updatePassword": "更新密码",
+ "title": "密码"
+ },
+ "phoneNumbersSection": {
+ "destructiveAction": "删除电话号码",
+ "detailsAction__nonPrimary": "设为主要",
+ "detailsAction__primary": "完成验证",
+ "detailsAction__unverified": "验证电话号码",
+ "primaryButton": "添加电话号码",
+ "title": "电话号码"
+ },
+ "profileSection": {
+ "primaryButton": "更新个人资料",
+ "title": "个人资料"
+ },
+ "usernameSection": {
+ "primaryButton__setUsername": "设置用户名",
+ "primaryButton__updateUsername": "更新用户名",
+ "title": "用户名"
+ },
+ "web3WalletsSection": {
+ "destructiveAction": "删除钱包",
+ "primaryButton": "Web3 钱包",
+ "title": "Web3 钱包"
+ }
+ },
+ "usernamePage": {
+ "successMessage": "您的用户名已更新。",
+ "title__set": "设置用户名",
+ "title__update": "更新用户名"
+ },
+ "web3WalletPage": {
+ "removeResource": {
+ "messageLine1": "{{identifier}} 将从此帐户中删除。",
+ "messageLine2": "您将无法再使用此 Web3 钱包登录。",
+ "successMessage": "{{web3Wallet}} 已从您的帐户中删除。",
+ "title": "删除 Web3 钱包"
+ },
+ "subtitle__availableWallets": "选择要连接到您的帐户的 Web3 钱包。",
+ "subtitle__unavailableWallets": "没有可用的 Web3 钱包。",
+ "successMessage": "钱包已添加到您的帐户。",
+ "title": "添加 Web3 钱包"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/common.json b/DigitalHumanWeb/locales/zh-CN/common.json
new file mode 100644
index 0000000..503b562
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/common.json
@@ -0,0 +1,237 @@
+{
+ "about": "关于",
+ "advanceSettings": "高级设置",
+ "alert": {
+ "cloud": {
+ "action": "立即体验",
+ "desc": "我们为所有注册用户提供了免费的 {{credit}} 额度计算积分,无需复杂配置开箱即用, 支持全局云同步与进阶联网查询,更多高级特性等你探索。",
+ "descOnMobile": "我们为所有注册用户提供了免费的 {{credit}} 额度计算积分,无需复杂配置开箱即用。",
+ "title": "{{name}} 开始公测"
+ }
+ },
+ "appInitializing": "应用启动中...",
+ "autoGenerate": "自动补全",
+ "autoGenerateTooltip": "基于提示词自动补全助手描述",
+ "autoGenerateTooltipDisabled": "请填写提示词后使用自动补全功能",
+ "back": "返回",
+ "batchDelete": "批量删除",
+ "blog": "产品博客",
+ "cancel": "取消",
+ "changelog": "更新日志",
+ "close": "关闭",
+ "contact": "联系我们",
+ "copy": "复制",
+ "copyFail": "复制失败",
+ "copySuccess": "复制成功",
+ "dataStatistics": {
+ "messages": "消息",
+ "sessions": "助手",
+ "today": "今日新增",
+ "topics": "话题"
+ },
+ "defaultAgent": "自定义助手",
+ "defaultSession": "自定义助手",
+ "delete": "删除",
+ "document": "使用文档",
+ "download": "下载",
+ "duplicate": "创建副本",
+ "edit": "编辑",
+ "export": "导出配置",
+ "exportType": {
+ "agent": "导出助手设定",
+ "agentWithMessage": "导出助手和消息",
+ "all": "导出全局设置和所有助手数据",
+ "allAgent": "导出所有助手设定",
+ "allAgentWithMessage": "导出所有助手和消息",
+ "globalSetting": "导出全局设置"
+ },
+ "feedback": "反馈与建议",
+ "follow": "在 {{name}} 上关注我们",
+ "footer": {
+ "action": {
+ "feedback": "分享您宝贵的建议",
+ "star": "在 GitHub 给添加星标"
+ },
+ "and": "并",
+ "feedback": {
+ "action": "分享反馈",
+ "desc": "您的每一个想法和建议对我们来说都弥足珍贵,我们迫不及待地想知道您的看法!欢迎联系我们提供产品功能和使用体验反馈,帮助我们将 {{appName}} 建设得更好。",
+ "title": "在 GitHub 分享您宝贵的反馈"
+ },
+ "later": "稍后",
+ "star": {
+ "action": "点亮星标",
+ "desc": "如果您喜爱我们的产品,并希望支持我们,可以去 GitHub 给我们点一颗星吗?这个小小的动作对我们来说意义重大,能激励我们为您持续提供特性体验。",
+ "title": "在 GitHub 为我们点亮星标"
+ },
+ "title": "喜欢我们的产品?"
+ },
+ "fullscreen": "全屏模式",
+ "historyRange": "历史范围",
+ "import": "导入配置",
+ "importModal": {
+ "error": {
+ "desc": "非常抱歉,数据导入过程发生异常。请尝试重新导入,或 <1>提交问题1>,我们将会第一时间帮你排查问题。",
+ "title": "数据导入失败"
+ },
+ "finish": {
+ "onlySettings": "系统设置导入成功",
+ "start": "开始使用",
+ "subTitle": "数据导入成功,耗时 {{duration}} 秒。导入明细如下:",
+ "title": "数据导入完成"
+ },
+ "loading": "数据导入中,请耐心等待...",
+ "preparing": "数据导入模块准备中...",
+ "result": {
+ "added": "导入成功",
+ "errors": "导入出错",
+ "messages": "消息",
+ "sessionGroups": "分组",
+ "sessions": "助手",
+ "skips": "重复跳过",
+ "topics": "话题",
+ "type": "数据类型"
+ },
+ "title": "导入数据",
+ "uploading": {
+ "desc": "当前文件较大,正在努力上传中...",
+ "restTime": "剩余时间",
+ "speed": "上传速度"
+ }
+ },
+ "information": "社区与资讯",
+ "installPWA": "安装浏览器应用 (PWA)",
+ "lang": {
+ "ar": "阿拉伯语",
+ "bg-BG": "保加利亚语",
+ "bn": "孟加拉语",
+ "cs-CZ": "捷克语",
+ "da-DK": "丹麦语",
+ "de-DE": "德语",
+ "el-GR": "希腊语",
+ "en": "英语",
+ "en-US": "英语",
+ "es-ES": "西班牙语",
+ "fi-FI": "芬兰语",
+ "fr-FR": "法语",
+ "hi-IN": "印地语",
+ "hu-HU": "匈牙利语",
+ "id-ID": "印尼语",
+ "it-IT": "意大利语",
+ "ja-JP": "日语",
+ "ko-KR": "韩语",
+ "nl-NL": "荷兰语",
+ "no-NO": "挪威语",
+ "pl-PL": "波兰语",
+ "pt-BR": "葡萄牙语",
+ "pt-PT": "葡萄牙语",
+ "ro-RO": "罗马尼亚语",
+ "ru-RU": "俄语",
+ "sk-SK": "斯洛伐克语",
+ "sr-RS": "塞尔维亚语",
+ "sv-SE": "瑞典语",
+ "th-TH": "泰语",
+ "tr-TR": "土耳其语",
+ "uk-UA": "乌克兰语",
+ "vi-VN": "越南语",
+ "zh": "简体中文",
+ "zh-CN": "简体中文",
+ "zh-TW": "繁体中文"
+ },
+ "layoutInitializing": "正在加载布局...",
+ "legal": "法律声明",
+ "loading": "加载中...",
+ "mail": {
+ "business": "商务合作",
+ "support": "邮件支持"
+ },
+ "oauth": "SSO 登录",
+ "officialSite": "官方网站",
+ "ok": "确定",
+ "password": "密码",
+ "pin": "置顶",
+ "pinOff": "取消置顶",
+ "privacy": "隐私政策",
+ "regenerate": "重新生成",
+ "rename": "重命名",
+ "reset": "重置",
+ "retry": "重试",
+ "send": "发送",
+ "setting": "设置",
+ "share": "分享",
+ "stop": "停止",
+ "sync": {
+ "actions": {
+ "settings": "同步设置",
+ "sync": "立即同步"
+ },
+ "awareness": {
+ "current": "当前设备"
+ },
+ "channel": "频道",
+ "disabled": {
+ "actions": {
+ "enable": "开启云端同步",
+ "settings": "配置同步参数"
+ },
+ "desc": "当前会话数据仅存储于此浏览器中。如果你需要在多个设备间同步数据,请配置并开启云端同步。",
+ "title": "数据同步未开启"
+ },
+ "enabled": {
+ "title": "数据同步"
+ },
+ "status": {
+ "connecting": "连接中",
+ "disabled": "同步未开启",
+ "ready": "已连接",
+ "synced": "已同步",
+ "syncing": "同步中",
+ "unconnected": "连接失败"
+ },
+ "title": "同步状态",
+ "unconnected": {
+ "tip": "信令服务器连接失败,将无法建立点对点通信频道,请检查网络后重试"
+ }
+ },
+ "tab": {
+ "chat": "会话",
+ "discover": "发现",
+ "files": "文件",
+ "me": "我",
+ "setting": "设置"
+ },
+ "telemetry": {
+ "allow": "允许",
+ "deny": "拒绝",
+ "desc": "我们希望匿名获取你的使用信息,进而帮助我们改进 {{appName}},并为你提供更好的产品体验。你可以在「设置」 - 「关于」随时关闭。",
+ "learnMore": "了解更多",
+ "title": "帮助 {{appName}} 做得更好"
+ },
+ "temp": "临时",
+ "terms": "服务条款",
+ "updateAgent": "更新助理信息",
+ "upgradeVersion": {
+ "action": "升级",
+ "hasNew": "有可用更新",
+ "newVersion": "有新版本可用:{{version}}"
+ },
+ "userPanel": {
+ "anonymousNickName": "匿名用户",
+ "billing": "账单管理",
+ "cloud": "体验 {{name}}",
+ "data": "数据存储",
+ "defaultNickname": "社区版用户",
+ "discord": "社区支持",
+ "docs": "使用文档",
+ "email": "邮件支持",
+ "feedback": "反馈与建议",
+ "help": "帮助中心",
+ "moveGuide": "设置按钮搬到这里啦",
+ "plans": "订阅方案",
+ "preview": "预览版",
+ "profile": "账户管理",
+ "setting": "应用设置",
+ "usages": "用量统计"
+ },
+ "version": "版本"
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/components.json b/DigitalHumanWeb/locales/zh-CN/components.json
new file mode 100644
index 0000000..03811f6
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/components.json
@@ -0,0 +1,84 @@
+{
+ "DragUpload": {
+ "dragDesc": "拖拽文件到这里,支持上传多个图片。",
+ "dragFileDesc": "拖拽图片和文件到这里,支持上传多个图片和文件。",
+ "dragFileTitle": "上传文件",
+ "dragTitle": "上传图片"
+ },
+ "FileManager": {
+ "actions": {
+ "addToKnowledgeBase": "添加到知识库",
+ "addToOtherKnowledgeBase": "添加到其他知识库",
+ "batchChunking": "批量分块",
+ "chunking": "分块",
+ "chunkingTooltip": "将文件拆分为多个文本块并向量化后,可用于语义检索和文件对话",
+ "confirmDelete": "即将删除该文件,删除后该将无法找回,请确认你的操作",
+ "confirmDeleteMultiFiles": "即将删除选中的 {{count}} 个文件,删除后该将无法找回,请确认你的操作",
+ "confirmRemoveFromKnowledgeBase": "即将从知识库中移除选中的 {{count}} 个文件,移除后文件仍然可以在全部文件中查看,请确认你的操作",
+ "copyUrl": "复制链接",
+ "copyUrlSuccess": "文件地址复制成功",
+ "createChunkingTask": "准备中...",
+ "deleteSuccess": "文件删除成功",
+ "downloading": "文件下载中...",
+ "removeFromKnowledgeBase": "从知识库中移除",
+ "removeFromKnowledgeBaseSuccess": "文件移除成功"
+ },
+ "bottom": "已经到底啦",
+ "config": {
+ "showFilesInKnowledgeBase": "显示知识库中内容"
+ },
+ "emptyStatus": {
+ "actions": {
+ "file": "上传文件",
+ "folder": "上传文件夹",
+ "knowledgeBase": "新建知识库"
+ },
+ "or": "或者",
+ "title": "将文件或文件夹拖到这里"
+ },
+ "title": {
+ "createdAt": "创建时间",
+ "size": "大小",
+ "title": "文件"
+ },
+ "total": {
+ "fileCount": "共 {{count}} 项",
+ "selectedCount": "已选 {{count}} 项"
+ }
+ },
+ "FileParsingStatus": {
+ "chunks": {
+ "embeddingStatus": {
+ "empty": "文本块尚未完全向量化,将导致语义检索功能不可用,为提升检索质量,请对文本块向量化",
+ "error": "向量化失败",
+ "errorResult": "向量化失败,请检查后重试。失败原因:",
+ "processing": "文本块正在向量化,请耐心等待",
+ "success": "当前文本块均已向量化"
+ },
+ "embeddings": "向量化",
+ "status": {
+ "error": "分块失败",
+ "errorResult": "分块失败,请检查后重试。失败原因:",
+ "processing": "分块中",
+ "processingTip": "服务端正在拆分文本块,关闭页面不影响分块进度"
+ }
+ }
+ },
+ "GoBack": {
+ "back": "返回"
+ },
+ "ModelSelect": {
+ "featureTag": {
+ "custom": "自定义模型,默认设定同时支持函数调用与视觉识别,请根据实际情况验证上述能力的可用性",
+ "file": "该模型支持上传文件读取与识别",
+ "functionCall": "该模型支持函数调用(Function Call)",
+ "tokens": "该模型单个会话最多支持 {{tokens}} Tokens",
+ "vision": "该模型支持视觉识别"
+ },
+ "removed": "该模型不在列表中,若取消选中将会自动移除"
+ },
+ "ModelSwitchPanel": {
+ "emptyModel": "没有启用的模型,请前往设置开启",
+ "provider": "提供商"
+ }
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/discover.json b/DigitalHumanWeb/locales/zh-CN/discover.json
new file mode 100644
index 0000000..b4543af
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/discover.json
@@ -0,0 +1,200 @@
+{
+ "assistants": {
+ "addAgent": "添加助手",
+ "addAgentAndConverse": "添加助手并会话",
+ "addAgentSuccess": "添加成功",
+ "conversation": {
+ "l1": "你好,我是 **{{name}}**,你可以问我任何问题,我会尽力回答你 ~",
+ "l2": "以下是我的能力介绍: ",
+ "l3": "让我们开始对话吧!"
+ },
+ "description": "助手介绍",
+ "detail": "详情",
+ "list": "助手列表",
+ "more": "更多",
+ "plugins": "集成插件",
+ "recentSubmits": "最近更新",
+ "suggestions": "相关推荐",
+ "systemRole": "助手设定",
+ "try": "试一下"
+ },
+ "back": "返回发现",
+ "category": {
+ "assistant": {
+ "all": "全部",
+ "academic": "收藏",
+ "career": "政务",
+ "copywriting": "教育",
+ "design": "营销",
+ "education": "设计",
+ "emotions": "办公",
+ "programming": "编程",
+ "games": "娱乐",
+ "life": "生活",
+ "general": "通用"
+ },
+ "plugin": {
+ "all": "全部",
+ "gaming-entertainment": "游戏娱乐",
+ "life-style": "生活方式",
+ "media-generate": "媒体生成",
+ "science-education": "科学教育",
+ "social": "社交媒体",
+ "stocks-finance": "股票金融",
+ "tools": "实用工具",
+ "web-search": "网络搜索"
+ }
+ },
+ "cleanFilter": "清除筛选",
+ "create": "创作",
+ "createGuide": {
+ "func1": {
+ "desc1": "在会话窗口中通过右上角设置进入你想提交助手的设置页面;",
+ "desc2": "点击右上角提交到助手市场按钮。",
+ "tag": "方法一",
+ "title": "通过 LobeChat 提交"
+ },
+ "func2": {
+ "button": "前往 Github 助手仓库",
+ "desc": "如果您想将助手添加到索引中,请使用 agent-template.json 或 agent-template-full.json 在 plugins 目录中创建一个条目,编写简短的描述并适当标记,然后创建一个拉取请求。",
+ "tag": "方法二",
+ "title": "通过 Github 提交"
+ }
+ },
+ "dislike": "不喜欢",
+ "filter": "筛选",
+ "filterBy": {
+ "authorRange": {
+ "everyone": "所有作者",
+ "followed": "关注的作者",
+ "title": "作者范围"
+ },
+ "contentLength": "最小上下文长度",
+ "maxToken": {
+ "title": "设定最大长度 (Token)",
+ "unlimited": "无限制"
+ },
+ "other": {
+ "functionCall": "支持函数调用",
+ "title": "其他",
+ "vision": "支持视觉识别",
+ "withKnowledge": "附带知识库",
+ "withTool": "附带插件"
+ },
+ "pricing": "模型价格",
+ "timePeriod": {
+ "all": "全部时间",
+ "day": "近 24 小时",
+ "month": "近 30 天",
+ "title": "时间范围",
+ "week": "近 7 天",
+ "year": "近一年"
+ }
+ },
+ "home": {
+ "featuredAssistants": "推荐助手",
+ "featuredModels": "推荐模型",
+ "featuredProviders": "推荐模型服务商",
+ "featuredTools": "推荐插件",
+ "more": "发现更多"
+ },
+ "like": "喜欢",
+ "models": {
+ "chat": "开始会话",
+ "contentLength": "最大上下文长度",
+ "free": "免费",
+ "guide": "配置指南",
+ "list": "模型列表",
+ "more": "更多",
+ "parameterList": {
+ "defaultValue": "默认值",
+ "docs": "查看文档",
+ "frequency_penalty": {
+ "desc": "此设置调整模型重复使用输入中已经出现的特定词汇的频率。较高的值使得这种重复出现的可能性降低,而负值则产生相反的效果。词汇惩罚不随出现次数增加而增加。负值将鼓励词汇的重复使用。",
+ "title": "频率惩罚度"
+ },
+ "max_tokens": {
+ "desc": "此设置定义了模型在单次回复中可以生成的最大长度。设置较高的值允许模型生成更长的回应,而较低的值则限制回应的长度,使其更简洁。根据不同的应用场景,合理调整此值可以帮助达到预期的回应长度和详细程度。",
+ "title": "单次回复限制"
+ },
+ "presence_penalty": {
+ "desc": "此设置旨在根据词汇在输入中出现的频率来控制词汇的重复使用。它尝试较少使用那些在输入中出现较多的词汇,其使用频率与出现频率成比例。词汇惩罚随出现次数而增加。负值将鼓励重复使用词汇。",
+ "title": "话题新鲜度"
+ },
+ "range": "范围",
+ "temperature": {
+ "desc": "此设置影响模型回应的多样性。较低的值会导致更可预测和典型的回应,而较高的值则鼓励更多样化和不常见的回应。当值设为0时,模型对于给定的输入总是给出相同的回应。",
+ "title": "随机性"
+ },
+ "title": "模型参数",
+ "top_p": {
+ "desc": "此设置将模型的选择限制为可能性最高的一定比例的词汇:只选择那些累计概率达到P的顶尖词汇。较低的值使得模型的回应更加可预测,而默认设置则允许模型从全部范围的词汇中进行选择。",
+ "title": "核采样"
+ },
+ "type": "类型"
+ },
+ "providerInfo": {
+ "apiTooltip": "LobeChat 支持为此提供商使用自定义 API 密钥。",
+ "input": "输入价格",
+ "inputTooltip": "每百万个 Token 的成本",
+ "latency": "延迟",
+ "latencyTooltip": "服务商发送第一个 Token 的平均响应时间",
+ "maxOutput": "最大输出长度",
+ "maxOutputTooltip": "此端点可以生成的最大 Token 数",
+ "officialTooltip": "LobeHub 官方服务",
+ "output": "输出价格",
+ "outputTooltip": "每百万个 Token 的成本",
+ "streamCancellationTooltip": "此服务商支持流取消功能。",
+ "throughput": "吞吐量",
+ "throughputTooltip": "流请求每秒传输的平均 Token 数"
+ },
+ "suggestions": "相关模型",
+ "supportedProviders": "支持该模型的服务商"
+ },
+ "plugins": {
+ "community": "社区插件",
+ "install": "安装插件",
+ "installed": "已安装",
+ "list": "插件列表",
+ "meta": {
+ "description": "描述",
+ "parameter": "参数",
+ "title": "工具参数",
+ "type": "类型"
+ },
+ "more": "更多",
+ "official": "官方插件",
+ "recentSubmits": "最近更新",
+ "suggestions": "相关推荐"
+ },
+ "providers": {
+ "config": "配置服务商",
+ "list": "模型服务商列表",
+ "modelCount": "{{count}} 个模型",
+ "modelSite": "模型文档",
+ "more": "更多",
+ "officialSite": "官方网站",
+ "showAllModels": "显示所有模型",
+ "suggestions": "相关服务商",
+ "supportedModels": "支持模型"
+ },
+ "search": {
+ "placeholder": "搜索名称介绍或关键词...",
+ "result": "{{count}} 个关于 {{keyword}} 的搜索结果",
+ "searching": "搜索中..."
+ },
+ "sort": {
+ "mostLiked": "最多喜欢",
+ "mostUsed": "最多使用",
+ "newest": "从新到旧",
+ "oldest": "从旧到新",
+ "recommended": "推荐"
+ },
+ "tab": {
+ "assistants": "助手",
+ "home": "首页",
+ "models": "模型",
+ "plugins": "插件",
+ "providers": "模型服务商"
+ }
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/error.json b/DigitalHumanWeb/locales/zh-CN/error.json
new file mode 100644
index 0000000..14fb419
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/error.json
@@ -0,0 +1,126 @@
+{
+ "clerkAuth": {
+ "loginSuccess": {
+ "action": "继续会话",
+ "desc": "{{greeting}},很高兴能够继续为你服务。让我们接着刚刚的话题聊下去吧",
+ "title": "欢迎回来, {{nickName}}"
+ }
+ },
+ "error": {
+ "backHome": "返回首页",
+ "desc": "待会来试试,或者回到已知的世界",
+ "retry": "重新加载",
+ "title": "页面遇到一点问题.."
+ },
+ "fetchError": "请求失败",
+ "fetchErrorDetail": "错误详情",
+ "notFound": {
+ "backHome": "返回首页",
+ "check": "请检查你的 URL 是否正确",
+ "desc": "我们找不到你寻找的页面",
+ "title": "进入了未知领域?"
+ },
+ "pluginSettings": {
+ "desc": "完成以下配置,即可开始使用该插件",
+ "title": "{{name}} 插件配置"
+ },
+ "response": {
+ "400": "很抱歉,服务器不明白您的请求,请确认您的请求参数是否正确",
+ "401": "很抱歉,服务器拒绝了您的请求,可能是因为您的权限不足或未提供有效的身份验证",
+ "403": "很抱歉,服务器拒绝了您的请求,您没有访问此内容的权限 ",
+ "404": "很抱歉,服务器找不到您请求的页面或资源,请确认您的 URL 是否正确",
+ "405": "很抱歉,服务器不支持您使用的请求方法,请确认您的请求方法是否正确",
+ "406": "很抱歉,服务器无法根据您请求的内容特性完成请求",
+ "407": "很抱歉,您需要进行代理认证后才能继续此请求",
+ "408": "很抱歉,服务器在等待请求时超时,请检查您的网络连接后再试",
+ "409": "很抱歉,请求存在冲突无法处理,可能是因为资源状态与请求不兼容",
+ "410": "很抱歉,您请求的资源已被永久移除,无法找到",
+ "411": "很抱歉,服务器无法处理不含有效内容长度的请求",
+ "412": "很抱歉,您的请求未满足服务器端的条件,无法完成请求",
+ "413": "很抱歉,您的请求数据量过大,服务器无法处理",
+ "414": "很抱歉,您的请求的 URI 过长,服务器无法处理",
+ "415": "很抱歉,服务器无法处理请求附带的媒体格式",
+ "416": "很抱歉,服务器无法满足您请求的范围",
+ "417": "很抱歉,服务器无法满足您的期望值",
+ "422": "很抱歉,您的请求格式正确,但是由于含有语义错误,无法响应",
+ "423": "很抱歉,您请求的资源被锁定",
+ "424": "很抱歉,由于之前的请求失败,导致当前请求无法完成",
+ "426": "很抱歉,服务器要求您的客户端升级到更高的协议版本",
+ "428": "很抱歉,服务器要求先决条件,要求您的请求包含正确的条件头",
+ "429": "很抱歉,您的请求太多,服务器有点累了,请稍后再试",
+ "431": "很抱歉,您的请求头字段太大,服务器无法处理",
+ "451": "很抱歉,由于法律原因,服务器拒绝提供此资源",
+ "500": "很抱歉,服务器似乎遇到了一些困难,暂时无法完成您的请求,请稍后再试",
+ "502": "很抱歉,服务器似乎迷失了方向,暂时无法提供服务,请稍后再试",
+ "503": "很抱歉,服务器当前无法处理您的请求,可能是由于过载或正在进行维护,请稍后再试",
+ "504": "很抱歉,服务器没有等到上游服务器的回应,请稍后再试",
+ "PluginMarketIndexNotFound": "很抱歉,服务器没有找到插件索引,请检查索引地址是否正确",
+ "PluginMarketIndexInvalid": "很抱歉,插件索引校验未通过,请检查索引文件格式是否规范",
+ "PluginMetaNotFound": "很抱歉,没有在索引中发现该插件,请插件在索引中的配置信息",
+ "PluginMetaInvalid": "很抱歉,该插件的元信息校验未通过,请检查插件元信息格式是否规范",
+ "PluginManifestNotFound": "很抱歉,服务器没有找到该插件的描述清单 (manifest.json),请检查插件描述文件地址是否正确",
+ "PluginManifestInvalid": "很抱歉,该插件的描述清单校验未通过,请检查描述清单格式是否规范",
+ "PluginApiNotFound": "很抱歉,插件描述清单中不存在该 API ,请检查你的请求方法与插件清单 API 是否匹配",
+ "PluginApiParamsError": "很抱歉,该插件请求的入参校验未通过,请检查入参与 Api 描述信息是否匹配",
+ "PluginSettingsInvalid": "该插件需要正确配置后才可以使用,请检查你的配置是否正确",
+ "PluginServerError": "插件服务端请求返回出错,请检查根据下面的报错信息检查你的插件描述文件、插件配置或服务端实现",
+ "PluginGatewayError": "很抱歉,插件网关出现错误,请检查插件网关配置是否正确",
+ "PluginOpenApiInitError": "很抱歉,OpenAPI 客户端初始化失败,请检查 OpenAPI 的配置信息是否正确",
+ "PluginFailToTransformArguments": "很抱歉,插件调用参数解析失败,请尝试重新生成助手消息,或更换 Tools Calling 能力更强的 AI 模型后重试",
+ "InvalidAccessCode": "密码不正确或为空,请输入正确的访问密码,或者添加自定义 API Key",
+ "InvalidClerkUser": "很抱歉,你当前尚未登录,请先登录或注册账号后继续操作",
+ "LocationNotSupportError": "很抱歉,你的所在地区不支持此模型服务,可能是由于区域限制或服务未开通。请确认当前地区是否支持使用此服务,或尝试使用切换到其他地区后重试。",
+ "InvalidProviderAPIKey": "{{provider}} API Key 不正确或为空,请检查 {{provider}} API Key 后重试",
+ "ProviderBizError": "请求 {{provider}} 服务出错,请根据以下信息排查或重试",
+ "NoOpenAIAPIKey": "OpenAI API Key 不正确或为空,请添加自定义 OpenAI API Key",
+ "OpenAIBizError": "请求 OpenAI 服务出错,请根据以下信息排查或重试",
+ "InvalidBedrockCredentials": "Bedrock 鉴权未通过,请检查 AccessKeyId/SecretAccessKey 后重试",
+ "StreamChunkError": "流式请求的消息块解析错误,请检查当前 API 接口是否符合标准规范,或联系你的 API 供应商咨询",
+ "UnknownChatFetchError": "很抱歉,遇到未知请求错误,请根据以下信息排查或重试",
+ "InvalidOllamaArgs": "Ollama 配置不正确,请检查 Ollama 配置后重试",
+ "OllamaBizError": "请求 Ollama 服务出错,请根据以下信息排查或重试",
+ "OllamaServiceUnavailable": "Ollama 服务连接失败,请检查 Ollama 是否运行正常,或是否正确设置 Ollama 的跨域配置",
+ "AgentRuntimeError": "Lobe AI Runtime 执行出错,请根据以下信息排查或重试",
+ "FreePlanLimit": "当前为免费用户,无法使用该功能,请升级到付费计划后继续使用",
+ "SubscriptionPlanLimit": "您的订阅额度已用尽,无法使用该功能,请升级到更高计划,或购买资源包后继续使用",
+ "InvalidGithubToken": "Github PAT 不正确或为空,请检查 Github PAT 后重试"
+ },
+ "stt": {
+ "responseError": "服务请求失败,请检查配置或重试"
+ },
+ "tts": {
+ "responseError": "服务请求失败,请检查配置或重试"
+ },
+ "unlock": {
+ "addProxyUrl": "添加 OpenAI 代理地址(可选)",
+ "apiKey": {
+ "description": "输入你的 {{name}} API Key 即可开始会话",
+ "title": "使用自定义 {{name}} API Key"
+ },
+ "closeMessage": "关闭提示",
+ "confirm": "确认并重试",
+ "oauth": {
+ "description": "管理员已开启统一登录认证,点击下方按钮登录,即可解锁应用",
+ "success": "登录成功",
+ "title": "登录账号",
+ "welcome": "欢迎你!"
+ },
+ "password": {
+ "description": "管理员已开启应用加密,输入应用密码后即可解锁应用。密码只需填写一次",
+ "placeholder": "请输入密码",
+ "title": "输入密码解锁应用"
+ },
+ "tabs": {
+ "apiKey": "自定义 API Key",
+ "password": "密码"
+ }
+ },
+ "upload": {
+ "desc": "详情: {{detail}}",
+ "fileOnlySupportInServerMode": "当前部署模式不支持上传非图片文件,如需上传 {{ext}} 格式,请切换到服务端数据库部署或使用 {{cloud}} 服务",
+ "networkError": "请确认你的网络是否正常,并检查文件存储服务跨域配置是否正确",
+ "title": "文件上传失败,请检查网络连接或稍后再试",
+ "unknownError": "错误原因: {{reason}}",
+ "uploadFailed": "文件上传失败"
+ }
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/file.json b/DigitalHumanWeb/locales/zh-CN/file.json
new file mode 100644
index 0000000..236c7d7
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/file.json
@@ -0,0 +1,94 @@
+{
+ "desc": "管理你的文件与知识库",
+ "detail": {
+ "basic": {
+ "createdAt": "创建时间",
+ "filename": "文件名",
+ "size": "文件大小",
+ "title": "基本信息",
+ "type": "格式",
+ "updatedAt": "更新时间"
+ },
+ "data": {
+ "chunkCount": "分块数",
+ "embedding": {
+ "default": "暂未向量化",
+ "error": "失败",
+ "pending": "待启动",
+ "processing": "进行中",
+ "success": "已完成"
+ },
+ "embeddingStatus": "向量化"
+ }
+ },
+ "empty": "暂无已上传文件/文件夹",
+ "header": {
+ "actions": {
+ "newFolder": "新建文件夹",
+ "uploadFile": "上传文件",
+ "uploadFolder": "上传文件夹"
+ },
+ "uploadButton": "上传"
+ },
+ "knowledgeBase": {
+ "list": {
+ "confirmRemoveKnowledgeBase": "即将删除该知识库,其中的文件不会删除,将移入全部文件中。知识库删除后将不可恢复,请谨慎操作。",
+ "empty": "点击 <1>+1> 开始创建知识库"
+ },
+ "new": "新建知识库",
+ "title": "知识库"
+ },
+ "networkError": "获取知识库失败,请检测网络连接后重试",
+ "notSupportGuide": {
+ "desc": "当前部署实例为客户端数据库模式,无法使用文件管理功能。请切换到<1>服务端数据库部署模式1>,或直接使用 <3>LobeChat Cloud3>",
+ "features": {
+ "allKind": {
+ "desc": "支持主流文件类型,包括 Word、PPT、Excel、PDF、TXT 等常见文档格式,以及JS、Python 等主流代码文件",
+ "title": "多种文件类型解析"
+ },
+ "embeddings": {
+ "desc": "使用高性能向量模型,对文本分块进行向量化,实现文件内容的语义化检索",
+ "title": "向量语义化"
+ },
+ "repos": {
+ "desc": "支持创建知识库,并允许添加不同类型的文件,构建属于你的领域知识",
+ "title": "知识库"
+ }
+ },
+ "title": "当前部署模式不支持文件管理"
+ },
+ "preview": {
+ "downloadFile": "下载文件",
+ "unsupportedFileAndContact": "此文件格式暂不支持在线预览,如有预览诉求,欢迎<1>反馈给我们1>"
+ },
+ "searchFilePlaceholder": "搜索文件",
+ "tab": {
+ "all": "全部文件",
+ "audios": "语音",
+ "documents": "文档",
+ "images": "图片",
+ "videos": "视频",
+ "websites": "网页"
+ },
+ "title": "文件",
+ "uploadDock": {
+ "body": {
+ "collapse": "收起",
+ "item": {
+ "done": "已上传",
+ "error": "上传失败,请重试",
+ "pending": "准备上传...",
+ "processing": "文件处理中...",
+ "restTime": "剩余 {{time}}"
+ }
+ },
+ "totalCount": "共 {{count}} 项",
+ "uploadStatus": {
+ "error": "上传出错",
+ "pending": "等待上传",
+ "processing": "正在上传",
+ "success": "上传完成",
+ "uploading": "正在上传"
+ }
+ }
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/knowledgeBase.json b/DigitalHumanWeb/locales/zh-CN/knowledgeBase.json
new file mode 100644
index 0000000..63a5c45
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/knowledgeBase.json
@@ -0,0 +1,32 @@
+{
+ "addToKnowledgeBase": {
+ "addSuccess": "文件添加成功,<1>立即查看1>",
+ "confirm": "添加",
+ "id": {
+ "placeholder": "请选择要添加的知识库",
+ "required": "请选择知识库",
+ "title": "目标知识库"
+ },
+ "title": "添加到知识库",
+ "totalFiles": "已选择 {{count}} 个文件"
+ },
+ "createNew": {
+ "confirm": "新建",
+ "description": {
+ "placeholder": "知识库简介(选填)"
+ },
+ "formTitle": "基本信息",
+ "name": {
+ "placeholder": "知识库名称",
+ "required": "请填写知识库名称"
+ },
+ "title": "新建知识库"
+ },
+ "tab": {
+ "evals": "评测",
+ "files": "文档",
+ "settings": "设置",
+ "testing": "召回测试"
+ },
+ "title": "知识库"
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/market.json b/DigitalHumanWeb/locales/zh-CN/market.json
new file mode 100644
index 0000000..1d3c35e
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/market.json
@@ -0,0 +1,32 @@
+{
+ "addAgent": "添加助手",
+ "addAgentAndConverse": "添加助手并会话",
+ "addAgentSuccess": "添加成功",
+ "guide": {
+ "func1": {
+ "desc1": "在会话窗口中通过右上角设置进入你想提交助手的设置页面;",
+ "desc2": "点击右上角提交到助手市场按钮。",
+ "tag": "方法一",
+ "title": "通过 {{appName}} 提交"
+ },
+ "func2": {
+ "button": "前往 Github 助手仓库",
+ "desc": "如果您想将助手添加到索引中,请使用 agent-template.json 或 agent-template-full.json 在 plugins 目录中创建一个条目,编写简短的描述并适当标记,然后创建一个拉取请求。",
+ "tag": "方法二",
+ "title": "通过 Github 提交"
+ }
+ },
+ "search": {
+ "placeholder": "搜索助手名称介绍或关键词..."
+ },
+ "sidebar": {
+ "comment": "讨论区",
+ "prompt": "提示词",
+ "title": "助手详情"
+ },
+ "submitAgent": "提交助手",
+ "title": {
+ "allAgents": "全部助手",
+ "recentSubmits": "最近新增"
+ }
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/metadata.json b/DigitalHumanWeb/locales/zh-CN/metadata.json
new file mode 100644
index 0000000..a6fa05b
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/metadata.json
@@ -0,0 +1,35 @@
+{
+ "chat": {
+ "description": "{{appName}} 带给你最好的 ChatGPT, Claude, Gemini, OLLaMA WebUI 使用体验",
+ "title": "{{appName}}:个人 AI 效能工具,给自己一个更聪明的大脑"
+ },
+ "discover": {
+ "assistants": {
+ "description": "内容创作、文案、问答、图像生成、视频生成、语音生成、智能 Agent、自动化工作流,定制你专属的 AI / GPTs / OLLaMA 智能助手",
+ "title": "AI助手"
+ },
+ "description": "内容创作、文案、问答、图像生成、视频生成、语音生成、智能 Agent、自动化工作流、自定义AI应用,定制你专属的 AI 应用工作台",
+ "models": {
+ "description": "探索主流 AI 模型 OpenAI / GPT / Claude 3 / Gemini / Ollama / Azure / DeepSeek",
+ "title": "AI模型"
+ },
+ "plugins": {
+ "description": "搜索、图表生成、学术、图像生成、视频生成、语音生成、自动化工作流,为你的助手集成丰富的插件能力",
+ "title": "AI插件"
+ },
+ "providers": {
+ "description": "探索主流模型供应商 OpenAI / Qwen / Ollama / Anthropic / DeepSeek / Google Gemini / OpenRouter",
+ "title": "AI模型服务商"
+ },
+ "search": "搜索",
+ "title": "发现"
+ },
+ "plugins": {
+ "description": "搜索、图表生成、学术、图像生成、视频生成、语音生成、自动化工作流,定制 ChatGPT / Claude 专属的 ToolCall 插件能力",
+ "title": "插件市场"
+ },
+ "welcome": {
+ "description": "{{appName}} 带给你最好的 ChatGPT, Claude, Gemini, OLLaMA WebUI 使用体验",
+ "title": "欢迎使用 {{appName}}:个人 AI 效能工具,给自己一个更聪明的大脑"
+ }
+}
diff --git a/DigitalHumanWeb/locales/zh-CN/migration.json b/DigitalHumanWeb/locales/zh-CN/migration.json
new file mode 100644
index 0000000..3c92fe6
--- /dev/null
+++ b/DigitalHumanWeb/locales/zh-CN/migration.json
@@ -0,0 +1,45 @@
+{
+ "dbV1": {
+ "action": {
+ "clearDB": "清空本地数据",
+ "downloadBackup": "下载数据备份",
+ "reUpgrade": "重新升级",
+ "start": "开始使用",
+ "upgrade": "一键升级"
+ },
+ "clear": {
+ "confirm": "即将清空本地数据(全局设置不受影响),请确认你已经下载了数据备份。"
+ },
+ "description": "在新版本中,{{appName}} 的数据存储有了巨大的飞跃。因此我们要对旧版数据进行升级,进而为你带来更好的使用体验。",
+ "features": {
+ "capability": {
+ "desc": "基于 IndexedDB 技术,足以装下你一生的会话消息",
+ "title": "大容量"
+ },
+ "performance": {
+ "desc": "百万条消息自动索引,检索查询毫秒级响应",
+ "title": "高性能"
+ },
+ "use": {
+ "desc": "支持标题、描述、标签、消息内容乃至翻译文本检索,日常搜索效率大大提升",
+ "title": "更易用"
+ }
+ },
+ "title": "{{appName}} 数据进化",
+ "upgrade": {
+ "error": {
+ "subTitle": "非常抱歉,数据库升级过程发生异常。请尝试以下方案:A. 清空本地数据后,重新导入备份数据;B. 点击「重新升级」按钮。