---
title: Using Google Gemma Model in LobeChat
description: >-
  Learn how to integrate and utilize Google Gemma, an open-source large
  language model, in LobeChat with the help of Ollama. Follow these steps to
  pull and select the Gemma model for natural language processing tasks.
tags:
- Google Gemma
- LobeChat
- Ollama
- Natural Language Processing
- Language Model
---
# Using Google Gemma Model
<Image alt={'Using Gemma in LobeChat'} cover src={'https://github.com/lobehub/lobe-chat/assets/17870709/65d2dd2a-fdcf-4f3f-a6af-4ed5164a510d'} />
[Gemma](https://blog.google/technology/developers/gemma-open-models/) is a family of lightweight, open large language models (LLMs) from Google, designed for a wide range of natural language processing tasks. With the integration of LobeChat and [Ollama](https://ollama.com/), you can now easily use Google Gemma in LobeChat.
This document will guide you on how to use Google Gemma in LobeChat:
<Steps>
### Install Ollama locally
First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/docs/usage/providers/ollama).
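For quick reference, the commands below show two common installation paths; this is only a minimal sketch, and the linked documentation remains the authoritative guide:

```bash
# macOS (via Homebrew)
brew install ollama

# Linux (official install script from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh
```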
### Pull Google Gemma model to local using Ollama
After installing Ollama, you can pull the Google Gemma model with the following command, using the 7B model as an example:
```bash
ollama pull gemma
```
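Ollama also hosts tagged variants of Gemma. As a sketch (tag availability may change over time), you can pull a specific size and verify the download locally before switching to LobeChat:

```bash
# Pull the smaller 2B variant explicitly (tag availability may vary)
ollama pull gemma:2b

# Confirm the model was downloaded
ollama list

# Optionally test the model directly from the terminal
ollama run gemma "Summarize what the Gemma models are in one sentence."
```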
<Image alt={'Pulling Gemma model using Ollama'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'} width={791} />
### Select Gemma model
On the session page, open the model selection panel and then select the Gemma model.
<Image alt={'Selecting Gemma model in the model selection panel'} height={629} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/c91d0c18-a21f-41f6-b5cc-94d29faeb797'} width={791} />
<Callout type={'info'}>
If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
with Ollama](/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in
LobeChat.
</Callout>
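If LobeChat still cannot reach the model, you can confirm that the local Ollama server is responding by querying its REST API directly. This assumes a default installation, where Ollama listens on port 11434:

```bash
# Ollama serves a local HTTP API on port 11434 by default
curl http://localhost:11434/api/generate -d '{
  "model": "gemma",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```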
</Steps>
Now, you can start conversing with the local Gemma model using LobeChat.