---
title: Enhancing Multimodal Interaction with Visual Recognition Models
description: >-
  Explore how LobeChat integrates visual recognition capabilities into large
  language models, enabling multimodal interactions for enhanced user
  experiences.
tags:
  - Visual Recognition
  - Multimodal Interaction
  - Large Language Models
  - LobeChat
  - Custom Model Configuration
---

# Visual Model User Guide

The ecosystem of large language models with visual recognition capabilities is growing steadily richer. Starting with `gpt-4-vision`, LobeChat supports a variety of vision-capable large language models, bringing multimodal interaction to your conversations.
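To illustrate what "multimodal interaction" means at the API level, the sketch below assembles a chat-completion request in the OpenAI-compatible format that vision models such as `gpt-4-vision` accept, mixing text and an image in a single user message. This is a minimal illustration; the function name, model identifier, and image URL are placeholders, not values prescribed by LobeChat.

```python
import json

def build_vision_request(model: str, prompt: str, image_url: str) -> dict:
    """Assemble an OpenAI-style chat payload combining text and an image."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                # A multimodal message carries a list of content parts
                # instead of a plain string.
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Placeholder model name and image URL for illustration only.
payload = build_vision_request(
    "gpt-4-vision-preview",
    "What is shown in this image?",
    "https://example.com/photo.png",
)
print(json.dumps(payload, indent=2))
```

When LobeChat talks to a vision-capable model, the request it sends follows this general shape: the image travels alongside the text inside the same message, which is what lets the model ground its answer in the picture.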