---
title: Using Local Qwen Model in LobeChat
description: Through the integration of LobeChat and Ollama, you can easily use Qwen in LobeChat. This article will guide you through using a locally deployed version of Qwen in LobeChat.
tags:
- Qwen
- LobeChat
- Ollama
- Local Deployment
- Large AI Model
---
# Using the Local Qwen Model
[Qwen](https://github.com/QwenLM/Qwen1.5) is a large language model (LLM) open-sourced by Alibaba Cloud. It is officially described as a continuously evolving large AI model that achieves stronger Chinese-language understanding through a larger, richer training corpus.
Now, through the integration of LobeChat and [Ollama](https://ollama.com/), you can easily use Qwen in LobeChat. This document will guide you through using a locally deployed version of Qwen in LobeChat:
## Local Installation of Ollama
First, you need to install Ollama. For the installation process, please refer to the [Ollama Usage Document](/docs/usage/providers/ollama).
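As a quick reference, installation on macOS or Linux typically looks like the sketch below (commands as published on the Ollama website; check the linked document for the current steps on your platform):

```bash
# macOS: install via Homebrew
brew install ollama

# Linux: run the official install script
curl -fsSL https://ollama.com/install.sh | sh
```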
## Pull the Qwen Model Locally with Ollama
After installing Ollama, you can pull the Qwen model with the following command, using the 14b model as an example:
```bash
ollama pull qwen:14b
```
The local version of Qwen is available in several model sizes. Please refer to [Qwen's page on the Ollama library](https://ollama.com/library/qwen) for guidance on choosing a model size.
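For example, assuming the size tags listed on that page, you could pull a smaller or larger variant and then confirm what is installed with `ollama list`:

```bash
# Pull a smaller variant if memory is limited (tag names per the Ollama library page)
ollama pull qwen:4b

# Pull a larger variant if your hardware allows
ollama pull qwen:72b

# List the models available locally
ollama list
```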
## Select the Qwen Model
On the LobeChat conversation page, open the model selection panel and select the Qwen model.
If you do not see the Ollama provider in the model selection panel, please refer to [Integration with Ollama](/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in LobeChat.
Next, you can have a conversation with the local Qwen model in LobeChat.
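If Qwen does not respond in LobeChat, it can help to first confirm that Ollama itself is serving the model correctly. A minimal check from the command line, assuming Ollama is running on its default port 11434:

```bash
# Chat with the model directly in the terminal
ollama run qwen:14b "Hello, who are you?"

# Or query Ollama's REST API directly
curl http://localhost:11434/api/generate -d '{
  "model": "qwen:14b",
  "prompt": "Hello",
  "stream": false
}'
```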