---
title: Using Google Gemma Model in LobeChat
description: >-
  Learn how to integrate and use Google Gemma, an open-source large language
  model from Google, in LobeChat with the help of Ollama. Follow these steps
  to pull and select the Gemma model for natural language processing tasks.
tags:
- Google Gemma
- LobeChat
- Ollama
- Natural Language Processing
- Language Model
---
# Using Google Gemma Model
[Gemma](https://blog.google/technology/developers/gemma-open-models/) is an open-source large language model (LLM) from Google, designed to be general and flexible across a variety of natural language processing tasks. With the integration of LobeChat and [Ollama](https://ollama.com/), you can now easily use Google Gemma in LobeChat.

This document will guide you through using Google Gemma in LobeChat:
### Install Ollama locally
First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/docs/usage/providers/ollama).
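As a quick sketch (the linked documentation is the authoritative reference for all platforms), Ollama can typically be installed like this:

```bash
# Linux: official install script from ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# macOS: via Homebrew (or download the desktop app from ollama.com)
brew install ollama

# Verify the installation
ollama --version
```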
### Pull Google Gemma model to local using Ollama
After installing Ollama, you can pull the Google Gemma model to your local machine with the following command, taking the 7B model as an example:
```bash
ollama pull gemma
```
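The plain `gemma` name pulls Ollama's default tag. Gemma is also published on Ollama under explicit size tags, and you can confirm what has been downloaded with `ollama list`:

```bash
# Pull a specific Gemma size by tag
ollama pull gemma:2b   # smaller, faster variant
ollama pull gemma:7b   # larger, more capable variant

# List the models downloaded to this machine
ollama list
```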
### Select Gemma model
On the session page, open the model selection panel and then select the Gemma model.
If you do not see the Ollama provider in the model selection panel, please refer to [Integrating with Ollama](/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in LobeChat.
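As a minimal sketch for a Docker-based self-hosted deployment (the linked guide is authoritative, and the endpoint below assumes Ollama is running on the host with its default port 11434), the Ollama provider is wired up by pointing LobeChat at your Ollama server via the `OLLAMA_PROXY_URL` environment variable:

```bash
# Run LobeChat and point it at a local Ollama server
docker run -d -p 3210:3210 \
  -e OLLAMA_PROXY_URL=http://host.docker.internal:11434 \
  lobehub/lobe-chat
```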
Now, you can start conversing with the local Gemma model using LobeChat.
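If Gemma does not respond in LobeChat, a quick way to rule out model-side issues is to query it directly through the Ollama CLI first:

```bash
# Sanity check: send a one-off prompt straight to the local Gemma model
ollama run gemma "Summarize what the Gemma model is in one sentence."
```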