---
title: Using Local LLM in LobeChat
description: >-
  Experience groundbreaking AI support with a local LLM in LobeChat powered by
  Ollama AI. Start conversations effortlessly and enjoy unprecedented
  interaction speed!
tags:
  - Local Large Language Model
  - Ollama AI
  - LobeChat
  - AI communication
  - Natural Language Processing
  - Docker deployment
---

# Local Large Language Model (LLM) Support

<Image
  alt={'Ollama Local Large Language Model (LLM) Support'}
  borderless
  cover
  src={'https://github.com/lobehub/lobe-chat/assets/28616219/ca9a21bc-ea6c-4c90-bf4a-fa53b4fb2b5c'}
/>

<Callout>Available in >=0.127.0, currently only supports Docker deployment</Callout>

With the release of LobeChat v0.127.0, we are excited to introduce a groundbreaking feature: Ollama AI support! 🤯 With the powerful infrastructure of [Ollama AI](https://ollama.ai/) and the [community's collaborative efforts](https://github.com/lobehub/lobe-chat/pull/1265), you can now engage in conversations with a local LLM (Large Language Model) in LobeChat! 🤩

We are thrilled to introduce this revolutionary feature to all LobeChat users at this special moment. The integration of Ollama AI not only marks a significant technological leap for us but also reaffirms our commitment to pursuing more efficient and intelligent communication.

### How to Start a Conversation with a Local LLM?

The startup process is exceptionally simple! By running the following Docker command, you can experience conversations with a local LLM in LobeChat:

```bash
docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 lobehub/lobe-chat
```
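The command above assumes an Ollama server is already running on the host and reachable on its default port, 11434. If you haven't set one up yet, here is a minimal sketch, assuming Ollama is installed on the host and using `llama2` purely as an example model:

```bash
# Start the Ollama server; it listens on http://localhost:11434 by default
ollama serve

# In a second terminal, download an example model to chat with
ollama pull llama2

# Optional sanity check: list the models the server currently serves
curl http://localhost:11434/api/tags
```

Any model from the Ollama library can be substituted for `llama2`.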
Yes, it's that simple! 🤩 You don't need to go through complicated configurations or worry about a complex installation process. We have prepared everything for you. With just one command, you can engage in deep conversations with a local AI.
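One caveat: `host.docker.internal` resolves to the host machine out of the box on Docker Desktop (macOS and Windows), but on a Linux host it usually has to be mapped explicitly. A variant of the same command for Linux, assuming Docker 20.10 or newer (which supports the special `host-gateway` value), might look like this:

```bash
# Map host.docker.internal to the host's gateway so the container
# can reach the Ollama server running on the host (Linux only)
docker run -d -p 3210:3210 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 \
  lobehub/lobe-chat
```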
### Experience Unprecedented Interaction Speed

With the powerful capabilities of Ollama AI, LobeChat has greatly improved its efficiency in natural language processing. Both processing speed and response time have reached new heights, which means your conversational experience is smoother and more responsive, with minimal waiting.

### Why Choose a Local LLM?

Compared to cloud-based solutions, a local LLM provides stronger privacy and security. All your conversations are processed locally, without passing through any external servers, ensuring the security of your data. Additionally, local processing can reduce network latency, providing a more immediate communication experience.
### Embark on Your LobeChat & Ollama AI Journey

Now, let's embark on this exciting journey together! Through the collaboration of LobeChat and Ollama AI, explore the endless possibilities brought by AI. Whether you are a tech enthusiast or simply curious about AI communication, LobeChat will offer you an unprecedented experience.

<Cards>
  <Card href={'/docs/usage/providers'} title={'Using Multiple Model Providers'} />
  <Card href={'/docs/usage/providers/ollama'} title={'Using Ollama Local Model'} />
</Cards>