<p align="center"> <img src="https://github.com/user-attachments/assets/bebd92b8-765e-4d63-bb3d-47e1bb8b51ad" width="500"> </p> <p align="center"> <img src="https://img.shields.io/badge/HACS-Custom-orange.svg?style=for-the-badge"> <img src="https://img.shields.io/badge/version-1.0.3-blue"> <a href="https://github.com/valentinfrlch/ha-llmvision/issues"> <img src="https://img.shields.io/maintenance/yes/2024.svg"> <img alt="Issues" src="https://img.shields.io/github/issues/valentinfrlch/ha-llmvision?color=0088ff"/> </a> </p> <p align="center" style="font-weight:bold"> Image and video analyzer for Home Assistant using multimodal LLMs </p> <p align="center"> <a href="#features">🌟 Features </a> · <a href="#resources">📖 Resources</a> · <a href="#installation">⬇️ Installation</a> · <a href="#roadmap">🚧 Roadmap</a> · <a href="#how-to-report-a-bug-or-request-a-feature">🪲 How to report Bugs</a> </p> <br> <br> <br>

LLM Vision is a Home Assistant integration that analyzes images, videos, and camera feeds using the vision capabilities of multimodal LLMs.
Supported providers are OpenAI, Anthropic, Google Gemini, LocalAI, and Ollama.

Features

Resources

Check the docs for detailed instructions on how to set up LLM Vision and each of the supported providers, as well as usage examples and service call parameters:

<a href="https://llm-vision.gitbook.io/getting-started"><img src="https://img.shields.io/badge/Documentation-blue?style=for-the-badge&logo=gitbook&logoColor=white&color=18bcf2"/></a>

Check the 📖 Examples to see how you can integrate LLM Vision into your Home Assistant setup, or join the 🗨️ discussion on the Home Assistant Community forum.
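As a rough sketch, a service call from an automation might look like the following. The parameter names here are illustrative, not authoritative; consult the documentation linked above for the actual schema:

```yaml
# Hypothetical action in a Home Assistant automation.
# Service name and parameters are illustrative — check the
# LLM Vision docs for the exact, current schema.
service: llmvision.image_analyzer
data:
  provider: OpenAI
  message: "Describe what you see. Is anyone at the front door?"
  image_entity:
    - camera.front_door
  max_tokens: 100
```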

Installation

Open the LLM Vision repository inside the Home Assistant Community Store (HACS) and install the integration. Then:

  1. Search for LLM Vision in Home Assistant under Settings → Devices & services
  2. Select your provider
  3. Follow the instructions to add your AI provider

Detailed instructions on how to set up LLM Vision and each of the supported providers are available here: https://llm-vision.gitbook.io/getting-started/

Debugging

To enable debugging, add the following to your configuration.yaml:

```yaml
logger:
  logs:
    custom_components.llmvision: debug
```

Roadmap

> [!NOTE]
> These are planned features and ideas. They are subject to change and may not be implemented in the order listed, or at all.

  1. New Provider: NVIDIA ChatRTX
  2. New Provider: Custom (OpenAI API compatible) Providers
  3. HACS: Include in HACS default
  4. Feature: HTTPS support for LocalAI and Ollama
  5. Feature: Support for video files
  6. Feature: Analyze Frigate recordings using Frigate's event_id

How to report a bug or request a feature

> [!IMPORTANT]
> Bugs: If you encounter any bugs and have followed the instructions carefully, feel free to file a bug report.
> Feature Requests: If you have an idea for a feature, create a feature request.

<div align="left">

<kbd><br> Create new Issue <br></kbd>

</div>