# @host.today/ht-docker-ai 🚀

Production-ready Docker images for state-of-the-art AI Vision-Language Models. Run powerful multimodal AI locally with GPU acceleration or CPU fallback—**no cloud API keys required**.

> 🔥 **Four VLMs, one registry.** From lightweight document OCR to GPT-4o-level vision understanding—pick the right tool for your task.

## Issue Reporting and Security
For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.

---

## 🎯 What's Included

| Model | Parameters | Best For | API | Port |
|-------|-----------|----------|-----|------|
| **MiniCPM-V 4.5** | 8B | General vision understanding, multi-image analysis | Ollama-compatible | 11434 |
| **PaddleOCR-VL** | 0.9B | Document parsing, table extraction, structured OCR | OpenAI-compatible | 8000 |
| **Nanonets-OCR-s** | ~4B | Document OCR with semantic markdown output | OpenAI-compatible | 8000 |
| **Qwen3-VL-30B** | 30B (A3B) | Advanced visual agents, code generation from images | Ollama-compatible | 11434 |

---

## 📦 Quick Reference: All Available Images

```
code.foss.global/host.today/ht-docker-ai:<tag>
```

| Tag | Model | Hardware | Port |
|-----|-------|----------|------|
| `minicpm45v-cpu` | MiniCPM-V 4.5 | CPU only (8GB+ RAM) | 11434 |
| `paddleocr-vl` / `paddleocr-vl-gpu` | PaddleOCR-VL | NVIDIA GPU | 8000 |
| `paddleocr-vl-cpu` | PaddleOCR-VL | CPU only | 8000 |
| `nanonets-ocr` | Nanonets-OCR-s | NVIDIA GPU (8-10GB VRAM) | 8000 |
| `qwen3vl` | Qwen3-VL-30B-A3B | NVIDIA GPU (~20GB VRAM) | 11434 |

---

## 🖼️ MiniCPM-V 4.5

A GPT-4o level multimodal LLM from OpenBMB—handles image understanding, OCR, multi-image analysis, and visual reasoning across **30+ languages**.

### Quick Start

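The launch pattern mirrors the other Ollama-based image in this README. A minimal sketch, assuming the GPU tag is `minicpm45v` (the tag table lists `minicpm45v-cpu` as its CPU counterpart; adjust to the tag you actually use):

```bash
# Launch the GPU variant of MiniCPM-V 4.5 (tag name assumed from the tag table)
docker run -d \
  --name minicpm \
  --gpus all \
  -p 11434:11434 \
  -v ollama-data:/root/.ollama \
  code.foss.global/host.today/ht-docker-ai:minicpm45v
```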

---

## 📄 PaddleOCR-VL

A specialized **0.9B Vision-Language Model** optimized for document parsing. Native support for tables, formulas, charts, and text extraction in **109 languages**.

### Quick Start

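A minimal launch sketch, mirroring the Nanonets example later in this README; the `paddleocr-vl` tag and port 8000 come from the image table above:

```bash
# Launch the GPU variant of PaddleOCR-VL (vLLM-based, OpenAI-compatible API)
docker run -d \
  --name paddleocr \
  --gpus all \
  -p 8000:8000 \
  -v hf-cache:/root/.cache/huggingface \
  code.foss.global/host.today/ht-docker-ai:paddleocr-vl
```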

---

## 🔍 Nanonets-OCR-s

A **Qwen2.5-VL-3B** model fine-tuned specifically for document OCR. Outputs structured markdown with semantic HTML tags—perfect for preserving document structure.

### Key Features

- 📝 **Semantic output:** Tables → HTML, equations → LaTeX, watermarks/page numbers → tagged
- 🌍 **Multilingual:** Inherits Qwen's broad language support
- ⚡ **Efficient:** ~8-10GB VRAM, runs great on consumer GPUs
- 🔌 **OpenAI-compatible:** Drop-in replacement for existing pipelines

### Quick Start

```bash
docker run -d \
  --name nanonets \
  --gpus all \
  -p 8000:8000 \
  -v hf-cache:/root/.cache/huggingface \
  code.foss.global/host.today/ht-docker-ai:nanonets-ocr
```

### API Usage

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nanonets/Nanonets-OCR-s",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,<base64>"}},
        {"type": "text", "text": "Extract the text from the above document as if you were reading it naturally. Return the tables in html format. Return the equations in LaTeX representation."}
      ]
    }],
    "temperature": 0.0,
    "max_tokens": 4096
  }'
```

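The `<base64>` placeholder above has to be filled with the raw base64 of your image. One way to assemble the request body from a file on disk, as a sketch using only coreutils (`page.png` is a stand-in for your real document scan):

```bash
# Create a stand-in file so the example is self-contained; use your real scan instead.
printf 'not-really-a-png' > page.png

# Base64-encode without line wraps, as required inside a data: URL
IMG_B64=$(base64 < page.png | tr -d '\n')

# Assemble the chat-completions body the server expects
cat > request.json <<EOF
{
  "model": "nanonets/Nanonets-OCR-s",
  "messages": [{
    "role": "user",
    "content": [
      {"type": "image_url", "image_url": {"url": "data:image/png;base64,${IMG_B64}"}},
      {"type": "text", "text": "Extract the text from the above document."}
    ]
  }],
  "temperature": 0.0,
  "max_tokens": 4096
}
EOF

# Then send it:
# curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d @request.json
```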
### Output Format

Nanonets-OCR-s returns markdown with semantic tags:

| Element | Output Format |
|---------|---------------|
| Tables | `<table>...</table>` (HTML) |
| Equations | `$...$` (LaTeX) |
| Images | `<img>description</img>` |
| Watermarks | `<watermark>OFFICIAL COPY</watermark>` |
| Page numbers | `<page_number>14</page_number>` |
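Because the tags are plain markers in the text, downstream processing can be simple string work. A sketch that pulls out the page number and strips watermark lines from a response (the sample text here is made up):

```bash
# Made-up sample of what a response might look like
OCR_OUT='<watermark>OFFICIAL COPY</watermark>
# Invoice
<table><tr><td>Total</td><td>42.00</td></tr></table>
<page_number>14</page_number>'

# Extract the page number from its tag
PAGE=$(printf '%s\n' "$OCR_OUT" | sed -n 's/.*<page_number>\(.*\)<\/page_number>.*/\1/p')
echo "page: $PAGE"

# Drop watermark lines before archiving the text
printf '%s\n' "$OCR_OUT" | grep -v '<watermark>'
```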
### Performance

| Metric | Value |
|--------|-------|
| Speed | 3–8 seconds per page |
| VRAM | ~8-10GB |

---

## 🧠 Qwen3-VL-30B-A3B

The **most powerful** Qwen vision model—30B parameters with 3B active (MoE architecture). Handles complex visual reasoning, code generation from screenshots, and visual agent capabilities.

### Key Features

- 🚀 **256K context** (expandable to 1M tokens!)
- 🤖 **Visual agent capabilities** — can plan and execute multi-step tasks
- 💻 **Code generation from images** — screenshot → working code
- 🎯 **State-of-the-art** visual reasoning

### Quick Start

```bash
docker run -d \
  --name qwen3vl \
  --gpus all \
  -p 11434:11434 \
  -v ollama-data:/root/.ollama \
  code.foss.global/host.today/ht-docker-ai:qwen3vl
```

Then pull the model (one-time, ~20GB):

```bash
docker exec qwen3vl ollama pull qwen3-vl:30b-a3b
```

### API Usage

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "qwen3-vl:30b-a3b",
  "messages": [{
    "role": "user",
    "content": "Analyze this screenshot and write the code to recreate this UI",
    "images": ["<base64-encoded-image>"]
  }]
}'
```

### Hardware Requirements

| Requirement | Value |
|-------------|-------|
| VRAM | ~20GB (Q4_K_M quantization) |
| Context | 256K tokens default |

---

## 🐳 Docker Compose

Run multiple VLMs together for maximum flexibility:

```yaml
version: '3.8'

services:
  # General vision understanding
  minicpm:
    image: code.foss.global/host.today/ht-docker-ai:minicpm45v
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped

  # Document parsing / OCR (table specialist)
  paddleocr:
    image: code.foss.global/host.today/ht-docker-ai:paddleocr-vl
    ports:
      - "8000:8000"
    volumes:
      - hf-cache:/root/.cache/huggingface
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped

  # Document OCR with semantic output
  nanonets:
    image: code.foss.global/host.today/ht-docker-ai:nanonets-ocr
    ports:
      - "8001:8000"
    volumes:
      - hf-cache:/root/.cache/huggingface
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped

volumes:
  ollama-data:
  hf-cache:
```

---

## ⚙️ Environment Variables

### MiniCPM-V 4.5 & Qwen3-VL (Ollama-based)

| Variable | Default | Description |
|----------|---------|-------------|
| `OLLAMA_HOST` | `0.0.0.0` | API bind address |
| `OLLAMA_ORIGINS` | `*` | Allowed CORS origins |

### PaddleOCR-VL & Nanonets-OCR (vLLM-based)

| Variable | Default | Description |
|----------|---------|-------------|
| `MODEL_NAME` | Model-specific | HuggingFace model ID |
| `HOST` | `0.0.0.0` | API bind address |
| `PORT` | `8000` | API port |
| `MAX_MODEL_LEN` | `8192` | Maximum sequence length |
| `GPU_MEMORY_UTILIZATION` | `0.9` | GPU memory usage (0-1) |
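These knobs are ordinary container environment variables, so they can be overridden at launch with `-e`. For example, to trade context length against memory headroom on a smaller GPU (the values here are illustrative, not recommendations):

```bash
# Override vLLM settings for PaddleOCR-VL at launch
docker run -d \
  --name paddleocr \
  --gpus all \
  -p 8000:8000 \
  -e MAX_MODEL_LEN=4096 \
  -e GPU_MEMORY_UTILIZATION=0.8 \
  code.foss.global/host.today/ht-docker-ai:paddleocr-vl
```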

---

## 🏗️ Architecture Notes

### Dual-VLM Consensus Strategy

For production document extraction, consider using multiple models together:

1. **Pass 1:** MiniCPM-V visual extraction (images → JSON)
2. **Pass 2:** PaddleOCR-VL table recognition (images → markdown → JSON)
3. **Consensus:** If results match → Done (fast path)
4. **Pass 3+:** Additional visual passes if needed
This dual-VLM approach catches extraction errors that single models miss.
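In script form, the fast path of the strategy above amounts to comparing the two extractions and only escalating on disagreement. A toy sketch with inline stand-ins for the two model outputs:

```bash
# Stand-ins for the JSON produced by Pass 1 (MiniCPM-V) and Pass 2 (PaddleOCR-VL)
PASS1='{"invoice_total":"42.00"}'
PASS2='{"invoice_total":"42.00"}'

if [ "$PASS1" = "$PASS2" ]; then
  RESULT="accept"     # consensus: fast path, done
else
  RESULT="escalate"   # disagreement: schedule Pass 3
fi
echo "$RESULT"
```

A real pipeline would normalize both outputs (field order, whitespace, number formats) before comparing, rather than relying on raw string equality.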

### Why Multi-Model Works

- **Different architectures:** Independent models cross-validate each other
- **Specialized strengths:** PaddleOCR-VL excels at tables; MiniCPM-V handles general vision
- **Native processing:** All VLMs see original images—no intermediate structure loss

### Model Selection Guide

| Task | Recommended Model |
|------|-------------------|
| General image understanding | MiniCPM-V 4.5 |
| Table extraction from documents | PaddleOCR-VL |
| Document OCR with structure preservation | Nanonets-OCR-s |
| Complex visual reasoning / code generation | Qwen3-VL-30B |
| Multi-image analysis | MiniCPM-V 4.5 |
| Visual agent tasks | Qwen3-VL-30B |

---

## 🔍 Troubleshooting
### Model download hangs

```bash
docker logs -f <container-name>
```

Model downloads can take several minutes (~5GB for MiniCPM-V, ~20GB for Qwen3-VL).
### Out of memory
- **GPU:** Use a lighter model variant or upgrade VRAM
- **CPU:** Increase container memory: `--memory=16g`
### API not responding
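First check that the server inside the container is actually up. Assuming default ports, the Ollama-based images answer on `/api/tags` and the vLLM-based ones on `/v1/models`:

```bash
# Ollama-based images (MiniCPM-V, Qwen3-VL)
curl -s http://localhost:11434/api/tags

# vLLM-based images (PaddleOCR-VL, Nanonets-OCR)
curl -s http://localhost:8000/v1/models
```

If neither responds, check `docker logs` for the container: model loading can take minutes on first start.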

If the container cannot see the GPU, reconfigure the NVIDIA container runtime and restart Docker:

```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

### GPU Memory Contention (Multi-Model)

When running multiple VLMs on a single GPU:

- vLLM and Ollama both need significant GPU memory
- **Single GPU:** Run services sequentially (stop one before starting another)
- **Multi-GPU:** Assign each service to a different GPU via `CUDA_VISIBLE_DEVICES`
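For the multi-GPU case, pinning is done per container. For example, keep Qwen3-VL on GPU 0 and Nanonets on GPU 1 (container names and tags taken from the examples above):

```bash
# Pin each service to its own device via CUDA_VISIBLE_DEVICES
docker run -d --name qwen3vl --gpus all -e CUDA_VISIBLE_DEVICES=0 \
  -p 11434:11434 code.foss.global/host.today/ht-docker-ai:qwen3vl

docker run -d --name nanonets --gpus all -e CUDA_VISIBLE_DEVICES=1 \
  -p 8000:8000 code.foss.global/host.today/ht-docker-ai:nanonets-ocr
```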

---

## License and Legal Information