# @host.today/ht-docker-ai 🚀

Production-ready Docker images for state-of-the-art AI Vision-Language Models. Run powerful multimodal AI locally with GPU acceleration or CPU fallback—no cloud API keys required.

## Issue Reporting and Security

For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.

## 🎯 What's Included

| Model | Parameters | Best For | API |
|-------|-----------|----------|-----|
| **MiniCPM-V 4.5** | 8B | General vision understanding, image analysis, multi-image | Ollama-compatible |
| **PaddleOCR-VL** | 0.9B | Document parsing, table extraction, OCR | OpenAI-compatible |

## 📦 Available Images

```
code.foss.global/host.today/ht-docker-ai:<tag>
```

| Tag | Model | Hardware | Port |
|-----|-------|----------|------|
| `minicpm45v` / `latest` | MiniCPM-V 4.5 | NVIDIA GPU (9-18GB VRAM) | 11434 |
| `minicpm45v-cpu` | MiniCPM-V 4.5 | CPU only (8GB+ RAM) | 11434 |
| `paddleocr-vl` / `paddleocr-vl-gpu` | PaddleOCR-VL | NVIDIA GPU | 8000 |
| `paddleocr-vl-cpu` | PaddleOCR-VL | CPU only | 8000 |

---

## 🖼️ MiniCPM-V 4.5

A GPT-4o level multimodal LLM from OpenBMB—handles image understanding, OCR, multi-image analysis, and visual reasoning across 30+ languages.

### Quick Start

**GPU (Recommended):**
```bash
docker run -d \
  --name minicpm \
  --gpus all \
  -p 11434:11434 \
  -v ollama-data:/root/.ollama \
  code.foss.global/host.today/ht-docker-ai:minicpm45v
```

**CPU Only:**
```bash
docker run -d \
  --name minicpm \
  -p 11434:11434 \
  -v ollama-data:/root/.ollama \
  code.foss.global/host.today/ht-docker-ai:minicpm45v-cpu
```

> 💡 **Pro tip:** Mount the volume to persist downloaded models (~5GB). Without it, models re-download on every container start.

### API Examples

**List models:**
```bash
curl http://localhost:11434/api/tags
```

**Analyze an image:**
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "minicpm-v",
  "prompt": "What do you see in this image?",
  "images": ["<base64-encoded-image>"]
}'
```

**Chat with vision:**
```bash
curl http://localhost:11434/api/chat -d '{
  "model": "minicpm-v",
  "messages": [{
    "role": "user",
    "content": "Describe this image in detail",
    "images": ["<base64-encoded-image>"]
  }]
}'
```
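
The `<base64-encoded-image>` placeholder is a plain base64 string with no `data:` prefix. As a minimal sketch, assuming a local `photo.jpg` and GNU coreutils `base64` (`-w0` disables line wrapping), a real call looks like:

```bash
# Encode a local image and send it to the generate endpoint (non-streaming)
IMG=$(base64 -w0 photo.jpg)

curl http://localhost:11434/api/generate -d '{
  "model": "minicpm-v",
  "prompt": "What do you see in this image?",
  "stream": false,
  "images": ["'"$IMG"'"]
}'
```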

### Hardware Requirements

| Variant | VRAM/RAM | Notes |
|---------|----------|-------|
| GPU (int4 quantized) | 9GB VRAM | Recommended for most use cases |
| GPU (full precision) | 18GB VRAM | Maximum quality |
| CPU (GGUF) | 8GB+ RAM | Slower but accessible |
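
Not sure which variant fits your hardware? A quick check (the GPU query assumes the NVIDIA driver and `nvidia-smi` are installed):

```bash
# Total VRAM per GPU: int4 needs ~9GB, full precision ~18GB
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader

# Total system RAM for the CPU (GGUF) variant
free -g | awk '/^Mem:/ {print $2 " GiB total RAM"}'
```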

---

## 📄 PaddleOCR-VL

A specialized 0.9B Vision-Language Model optimized for document parsing. Native support for tables, formulas, charts, and text extraction in 109 languages.

### Quick Start

**GPU:**
```bash
docker run -d \
  --name paddleocr \
  --gpus all \
  -p 8000:8000 \
  -v hf-cache:/root/.cache/huggingface \
  code.foss.global/host.today/ht-docker-ai:paddleocr-vl
```

**CPU:**
```bash
docker run -d \
  --name paddleocr \
  -p 8000:8000 \
  -v hf-cache:/root/.cache/huggingface \
  code.foss.global/host.today/ht-docker-ai:paddleocr-vl-cpu
```

### OpenAI-Compatible API

PaddleOCR-VL exposes a fully OpenAI-compatible `/v1/chat/completions` endpoint:

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "paddleocr-vl",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,<base64>"}},
        {"type": "text", "text": "Table Recognition:"}
      ]
    }],
    "max_tokens": 8192
  }'
```
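
For example, assuming a local `invoice.png`, GNU coreutils `base64`, and `jq` on the host, and that the response follows the standard OpenAI schema, a table-extraction call looks like this (a minimal sketch):

```bash
# Encode a local scan as a data URL, ask for table recognition, print the markdown result
IMG="data:image/png;base64,$(base64 -w0 invoice.png)"

curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "paddleocr-vl",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": "'"$IMG"'"}},
        {"type": "text", "text": "Table Recognition:"}
      ]
    }],
    "max_tokens": 8192
  }' | jq -r '.choices[0].message.content'
```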

### Task Prompts

| Prompt | Output | Use Case |
|--------|--------|----------|
| `OCR:` | Plain text | General text extraction |
| `Table Recognition:` | Markdown table | Invoices, bank statements, spreadsheets |
| `Formula Recognition:` | LaTeX | Math equations, scientific notation |
| `Chart Recognition:` | Description | Graphs and visualizations |

### API Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/health` | GET | Health check with model/device info |
| `/formats` | GET | Supported image formats and input methods |
| `/v1/models` | GET | List available models |
| `/v1/chat/completions` | POST | OpenAI-compatible chat completions |
| `/ocr` | POST | Legacy OCR endpoint |
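
A quick smoke test of the read-only endpoints before sending documents (exact response bodies may vary by image version):

```bash
curl -s http://localhost:8000/health      # model and device info
curl -s http://localhost:8000/v1/models   # should list paddleocr-vl
curl -s http://localhost:8000/formats     # supported image formats and input methods
```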

### Image Input Methods

PaddleOCR-VL accepts images in multiple formats:

```javascript
// Base64 data URL
"data:image/png;base64,iVBORw0KGgo..."

// HTTP URL
"https://example.com/document.png"

// Raw base64
"iVBORw0KGgo..."
```

**Supported formats:** PNG, JPEG, WebP, BMP, GIF, TIFF

**Optimal resolution:** 1080p–2K. Images are automatically scaled for best results.

### Performance

| Mode | Speed per Page |
|------|----------------|
| GPU (CUDA) | 2–5 seconds |
| CPU | 30–60 seconds |

---

## 🐳 Docker Compose

```yaml
version: '3.8'
services:
  # General vision tasks
  minicpm:
    image: code.foss.global/host.today/ht-docker-ai:minicpm45v
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped

  # Document parsing / OCR
  paddleocr:
    image: code.foss.global/host.today/ht-docker-ai:paddleocr-vl
    ports:
      - "8000:8000"
    volumes:
      - hf-cache:/root/.cache/huggingface
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped

volumes:
  ollama-data:
  hf-cache:
```
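
To bring both services up from this file and confirm they respond (ports as mapped above):

```bash
# Start both services in the background and watch the first-run model downloads
docker compose up -d
docker compose logs -f

# Once the containers are ready
curl -s http://localhost:11434/api/tags   # MiniCPM-V (Ollama API)
curl -s http://localhost:8000/health      # PaddleOCR-VL
```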

---

## ⚙️ Environment Variables

### MiniCPM-V 4.5

| Variable | Default | Description |
|----------|---------|-------------|
| `MODEL_NAME` | `minicpm-v` | Ollama model to pull on startup |
| `OLLAMA_HOST` | `0.0.0.0` | API bind address |
| `OLLAMA_ORIGINS` | `*` | Allowed CORS origins |

### PaddleOCR-VL

| Variable | Default | Description |
|----------|---------|-------------|
| `MODEL_NAME` | `PaddlePaddle/PaddleOCR-VL` | HuggingFace model ID |
| `SERVER_HOST` | `0.0.0.0` | API bind address |
| `SERVER_PORT` | `8000` | API port |
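
All of these can be overridden with `-e` at run time. Two illustrative examples (the origin URL and the alternative port are placeholders):

```bash
# Restrict CORS for the MiniCPM/Ollama API to a single origin
docker run -d --gpus all -p 11434:11434 \
  -e OLLAMA_ORIGINS="https://app.example.com" \
  -v ollama-data:/root/.ollama \
  code.foss.global/host.today/ht-docker-ai:minicpm45v

# Serve PaddleOCR-VL on port 9000 instead of 8000
docker run -d --gpus all -p 9000:9000 \
  -e SERVER_PORT=9000 \
  -v hf-cache:/root/.cache/huggingface \
  code.foss.global/host.today/ht-docker-ai:paddleocr-vl
```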

---

## 🔧 Building from Source

```bash
# Clone the repository
git clone https://code.foss.global/host.today/ht-docker-ai.git
cd ht-docker-ai

# Build all images
./build-images.sh

# Run tests
./test-images.sh
```

---

## 🏗️ Architecture Notes

### Dual-VLM Consensus Strategy

For production document extraction, consider using both models together:

1. **Pass 1:** MiniCPM-V visual extraction (images → JSON)
2. **Pass 2:** PaddleOCR-VL table recognition (images → markdown → JSON)
3. **Consensus:** If results match → Done (fast path)
4. **Pass 3+:** Additional visual passes if needed

This dual-VLM approach catches extraction errors that single models miss.
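
As a rough sketch of one consensus pass over a single page image, assuming both containers from the Quick Starts are running and that `jq` and GNU coreutils `base64` are installed (file names are placeholders):

```bash
#!/usr/bin/env bash
# Sketch of a dual-VLM consensus pass for one page (page.png is a placeholder).
set -euo pipefail

B64=$(base64 -w0 page.png)

# Pass 1: MiniCPM-V visual extraction via the Ollama API
PASS1=$(curl -s http://localhost:11434/api/generate -d '{
  "model": "minicpm-v",
  "prompt": "Extract every table on this page as a markdown table.",
  "stream": false,
  "images": ["'"$B64"'"]
}' | jq -r '.response')

# Pass 2: PaddleOCR-VL table recognition via the OpenAI-compatible API
PASS2=$(curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "paddleocr-vl",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,'"$B64"'"}},
        {"type": "text", "text": "Table Recognition:"}
      ]
    }],
    "max_tokens": 8192
  }' | jq -r '.choices[0].message.content')

# Consensus check: identical output is the fast path, anything else gets another pass
if [ "$PASS1" = "$PASS2" ]; then
  echo "Consensus reached"
else
  echo "Mismatch: schedule an additional visual pass"
  diff <(echo "$PASS1") <(echo "$PASS2") || true
fi
```

In practice the comparison step would operate on normalized, parsed structures (for example JSON rows) rather than raw text, since two independent models rarely produce byte-identical output.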

### Why This Works

- **Different architectures:** Two independent models cross-validate each other
- **Specialized strengths:** PaddleOCR-VL excels at tables; MiniCPM-V handles general vision
- **Native processing:** Both VLMs see original images—no intermediate HTML/structure loss

---

## 🔍 Troubleshooting

### Model download hangs
```bash
docker logs -f <container-name>
```
Model downloads can take several minutes (~5GB for MiniCPM-V).

### Out of memory
- **GPU:** Use the CPU variant or a GPU with more VRAM
- **CPU:** Increase container memory: `--memory=16g`

### API not responding
1. Check container health: `docker ps`
2. Review logs: `docker logs <container>`
3. Verify port: `curl localhost:11434/api/tags` or `curl localhost:8000/health`

### Enable NVIDIA GPU support on host
```bash
# Install NVIDIA Container Toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

---

## License and Legal Information

This repository contains open-source code licensed under the MIT License. A copy of the license can be found in the [LICENSE](./LICENSE) file.

**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.

### Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH or third parties, and are not included within the scope of the MIT license granted herein.

Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines or the guidelines of the respective third-party owners, and any usage must be approved in writing. Third-party trademarks used herein are the property of their respective owners and used only in a descriptive manner, e.g. for an implementation of an API or similar.

### Company Information

Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany

For any legal inquiries or further information, please contact us via email at hello@task.vc.

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.