feat(docker-images): add vLLM-based Nanonets-OCR2-3B image, Qwen3-VL Ollama image and refactor build/docs/tests to use new runtime/layout
readme.md (170 lines changed)
@@ -1,8 +1,8 @@
 # @host.today/ht-docker-ai 🚀
 
-Production-ready Docker images for state-of-the-art AI Vision-Language Models. Run powerful multimodal AI locally with GPU acceleration or CPU fallback—**no cloud API keys required**.
+Production-ready Docker images for state-of-the-art AI Vision-Language Models. Run powerful multimodal AI locally with GPU acceleration—**no cloud API keys required**.
 
-> 🔥 **Four VLMs, one registry.** From lightweight document OCR to GPT-4o-level vision understanding—pick the right tool for your task.
+> 🔥 **Three VLMs, one registry.** From lightweight document OCR to GPT-4o-level vision understanding—pick the right tool for your task.
 
 ## Issue Reporting and Security
 
@@ -12,12 +12,11 @@ For reporting bugs, issues, or security vulnerabilities, please visit [community
 
 ## 🎯 What's Included
 
-| Model | Parameters | Best For | API | Port |
-|-------|-----------|----------|-----|------|
-| **MiniCPM-V 4.5** | 8B | General vision understanding, multi-image analysis | Ollama-compatible | 11434 |
-| **PaddleOCR-VL** | 0.9B | Document parsing, table extraction, structured OCR | OpenAI-compatible | 8000 |
-| **Nanonets-OCR-s** | ~4B | Document OCR with semantic markdown output | OpenAI-compatible | 8000 |
-| **Qwen3-VL-30B** | 30B (A3B) | Advanced visual agents, code generation from images | Ollama-compatible | 11434 |
+| Model | Parameters | Best For | API | Port | VRAM |
+|-------|-----------|----------|-----|------|------|
+| **MiniCPM-V 4.5** | 8B | General vision understanding, multi-image analysis | Ollama-compatible | 11434 | ~9GB |
+| **Nanonets-OCR-s** | ~4B | Document OCR with semantic markdown output | OpenAI-compatible | 8000 | ~10GB |
+| **Qwen3-VL-30B** | 30B (A3B) | Advanced visual agents, code generation from images | Ollama-compatible | 11434 | ~20GB |
 
 ---
 
@@ -27,14 +26,11 @@ For reporting bugs, issues, or security vulnerabilities, please visit [community
 code.foss.global/host.today/ht-docker-ai:<tag>
 ```
 
-| Tag | Model | Hardware | Port |
-|-----|-------|----------|------|
-| `minicpm45v` / `latest` | MiniCPM-V 4.5 | NVIDIA GPU (9-18GB VRAM) | 11434 |
-| `minicpm45v-cpu` | MiniCPM-V 4.5 | CPU only (8GB+ RAM) | 11434 |
-| `paddleocr-vl` / `paddleocr-vl-gpu` | PaddleOCR-VL | NVIDIA GPU | 8000 |
-| `paddleocr-vl-cpu` | PaddleOCR-VL | CPU only | 8000 |
-| `nanonets-ocr` | Nanonets-OCR-s | NVIDIA GPU (8-10GB VRAM) | 8000 |
-| `qwen3vl` | Qwen3-VL-30B-A3B | NVIDIA GPU (~20GB VRAM) | 11434 |
+| Tag | Model | Runtime | Port | VRAM |
+|-----|-------|---------|------|------|
+| `minicpm45v` / `latest` | MiniCPM-V 4.5 | Ollama | 11434 | ~9GB |
+| `nanonets-ocr` | Nanonets-OCR-s | vLLM | 8000 | ~10GB |
+| `qwen3vl` | Qwen3-VL-30B-A3B | Ollama | 11434 | ~20GB |
 
 ---
 
@@ -44,7 +40,6 @@ A GPT-4o level multimodal LLM from OpenBMB—handles image understanding, OCR, m
 
 ### Quick Start
 
-**GPU (Recommended):**
 ```bash
 docker run -d \
   --name minicpm \
@@ -54,15 +49,6 @@ docker run -d \
   code.foss.global/host.today/ht-docker-ai:minicpm45v
 ```
 
-**CPU Only:**
-```bash
-docker run -d \
-  --name minicpm \
-  -p 11434:11434 \
-  -v ollama-data:/root/.ollama \
-  code.foss.global/host.today/ht-docker-ai:minicpm45v-cpu
-```
-
 > 💡 **Pro tip:** Mount the volume to persist downloaded models (~5GB). Without it, models re-download on every container start.
 
 ### API Examples
@@ -95,103 +81,10 @@ curl http://localhost:11434/api/chat -d '{
 
 ### Hardware Requirements
 
-| Variant | VRAM/RAM | Notes |
-|---------|----------|-------|
-| GPU (int4 quantized) | 9GB VRAM | Recommended for most use cases |
-| GPU (full precision) | 18GB VRAM | Maximum quality |
-| CPU (GGUF) | 8GB+ RAM | Slower but accessible |
-
----
-
-## 📄 PaddleOCR-VL
-
-A specialized **0.9B Vision-Language Model** optimized for document parsing. Native support for tables, formulas, charts, and text extraction in **109 languages**.
-
-### Quick Start
-
-**GPU:**
-```bash
-docker run -d \
-  --name paddleocr \
-  --gpus all \
-  -p 8000:8000 \
-  -v hf-cache:/root/.cache/huggingface \
-  code.foss.global/host.today/ht-docker-ai:paddleocr-vl
-```
-
-**CPU:**
-```bash
-docker run -d \
-  --name paddleocr \
-  -p 8000:8000 \
-  -v hf-cache:/root/.cache/huggingface \
-  code.foss.global/host.today/ht-docker-ai:paddleocr-vl-cpu
-```
-
-### OpenAI-Compatible API
-
-PaddleOCR-VL exposes a fully OpenAI-compatible `/v1/chat/completions` endpoint:
-
-```bash
-curl http://localhost:8000/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -d '{
-    "model": "paddleocr-vl",
-    "messages": [{
-      "role": "user",
-      "content": [
-        {"type": "image_url", "image_url": {"url": "data:image/png;base64,<base64>"}},
-        {"type": "text", "text": "Table Recognition:"}
-      ]
-    }],
-    "max_tokens": 8192
-  }'
-```
-
-### Task Prompts
-
-| Prompt | Output | Use Case |
-|--------|--------|----------|
-| `OCR:` | Plain text | General text extraction |
-| `Table Recognition:` | Markdown table | Invoices, bank statements, spreadsheets |
-| `Formula Recognition:` | LaTeX | Math equations, scientific notation |
-| `Chart Recognition:` | Description | Graphs and visualizations |
-
-### API Endpoints
-
-| Endpoint | Method | Description |
-|----------|--------|-------------|
-| `/health` | GET | Health check with model/device info |
-| `/formats` | GET | Supported image formats and input methods |
-| `/v1/models` | GET | List available models |
-| `/v1/chat/completions` | POST | OpenAI-compatible chat completions |
-| `/ocr` | POST | Legacy OCR endpoint |
-
-### Image Input Methods
-
-PaddleOCR-VL accepts images in multiple formats:
-
-```javascript
-// Base64 data URL
-"data:image/png;base64,iVBORw0KGgo..."
-
-// HTTP URL
-"https://example.com/document.png"
-
-// Raw base64
-"iVBORw0KGgo..."
-```
-
-**Supported formats:** PNG, JPEG, WebP, BMP, GIF, TIFF
-
-**Optimal resolution:** 1080p–2K. Images are automatically scaled for best results.
-
-### Performance
-
-| Mode | Speed per Page |
-|------|----------------|
-| GPU (CUDA) | 2–5 seconds |
-| CPU | 30–60 seconds |
+| Mode | VRAM Required |
+|------|---------------|
+| int4 quantized | 9GB |
+| Full precision (bf16) | 18GB |
 
 ---
 
@@ -203,7 +96,7 @@ A **Qwen2.5-VL-3B** model fine-tuned specifically for document OCR. Outputs stru
 
 - 📝 **Semantic output:** Tables → HTML, equations → LaTeX, watermarks/page numbers → tagged
 - 🌍 **Multilingual:** Inherits Qwen's broad language support
-- ⚡ **Efficient:** ~8-10GB VRAM, runs great on consumer GPUs
+- ⚡ **Efficient:** ~10GB VRAM, runs great on consumer GPUs
 - 🔌 **OpenAI-compatible:** Drop-in replacement for existing pipelines
 
 ### Quick Start
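An editorial aside on the "OpenAI-compatible, drop-in" bullet above: the request shape can be sketched with a minimal Node helper. The model ID, port, and endpoint path are assumptions taken from the defaults documented elsewhere in this README (`nanonets/Nanonets-OCR-s`, port 8000, `/v1/chat/completions`), not guaranteed by the image.

```javascript
// Sketch of an OpenAI-style chat-completions payload for the Nanonets-OCR-s
// container. Model name and endpoint are assumptions based on this README's
// documented defaults, not a verified contract.
function buildOcrRequest(imageBase64, prompt) {
  return {
    model: "nanonets/Nanonets-OCR-s",
    messages: [{
      role: "user",
      content: [
        { type: "image_url", image_url: { url: `data:image/png;base64,${imageBase64}` } },
        { type: "text", text: prompt },
      ],
    }],
    max_tokens: 8192,
  };
}

// Sending it requires a running container:
// await fetch("http://localhost:8000/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildOcrRequest(b64, "Extract the document as markdown.")),
// });
```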
@@ -253,7 +146,7 @@ Nanonets-OCR-s returns markdown with semantic tags:
 
 | Metric | Value |
 |--------|-------|
 | Speed | 3–8 seconds per page |
-| VRAM | ~8-10GB |
+| VRAM | ~10GB |
 
 ---
 
@@ -329,27 +222,11 @@ services:
               capabilities: [gpu]
     restart: unless-stopped
 
-  # Document parsing / OCR (table specialist)
-  paddleocr:
-    image: code.foss.global/host.today/ht-docker-ai:paddleocr-vl
-    ports:
-      - "8000:8000"
-    volumes:
-      - hf-cache:/root/.cache/huggingface
-    deploy:
-      resources:
-        reservations:
-          devices:
-            - driver: nvidia
-              count: 1
-              capabilities: [gpu]
-    restart: unless-stopped
-
   # Document OCR with semantic output
   nanonets:
     image: code.foss.global/host.today/ht-docker-ai:nanonets-ocr
     ports:
-      - "8001:8000"
+      - "8000:8000"
     volumes:
       - hf-cache:/root/.cache/huggingface
     deploy:
@@ -378,11 +255,11 @@ volumes:
 | `OLLAMA_HOST` | `0.0.0.0` | API bind address |
 | `OLLAMA_ORIGINS` | `*` | Allowed CORS origins |
 
-### PaddleOCR-VL & Nanonets-OCR (vLLM-based)
+### Nanonets-OCR (vLLM-based)
 
 | Variable | Default | Description |
 |----------|---------|-------------|
-| `MODEL_NAME` | Model-specific | HuggingFace model ID |
+| `MODEL_NAME` | `nanonets/Nanonets-OCR-s` | HuggingFace model ID |
 | `HOST` | `0.0.0.0` | API bind address |
 | `PORT` | `8000` | API port |
 | `MAX_MODEL_LEN` | `8192` | Maximum sequence length |
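Editorial note: the vLLM variables in the table above map directly onto a compose `environment:` block. A minimal sketch, reusing the service layout from this README's compose example; the values shown simply restate the documented defaults, not new overrides.

```yaml
  nanonets:
    image: code.foss.global/host.today/ht-docker-ai:nanonets-ocr
    environment:
      MODEL_NAME: nanonets/Nanonets-OCR-s  # HuggingFace model ID
      PORT: "8000"                         # API port
      MAX_MODEL_LEN: "8192"                # maximum sequence length
```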
@@ -397,7 +274,7 @@ volumes:
 For production document extraction, consider using multiple models together:
 
 1. **Pass 1:** MiniCPM-V visual extraction (images → JSON)
-2. **Pass 2:** PaddleOCR-VL table recognition (images → markdown → JSON)
+2. **Pass 2:** Nanonets-OCR semantic extraction (images → markdown → JSON)
 3. **Consensus:** If results match → Done (fast path)
 4. **Pass 3+:** Additional visual passes if needed
 
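Editorial note: the consensus step in the pipeline above can be sketched as a field-by-field comparison of the two extraction passes. This is a hedged illustration only; the normalization rule and the example field names are hypothetical, not part of either model's output contract.

```javascript
// Sketch of the consensus check: compare the JSON from Pass 1 (MiniCPM-V)
// and Pass 2 (Nanonets-OCR) field by field after light normalization.
function normalize(value) {
  return String(value).trim().toLowerCase().replace(/\s+/g, " ");
}

function consensus(passA, passB) {
  const keys = new Set([...Object.keys(passA), ...Object.keys(passB)]);
  const disagreements = [];
  for (const key of keys) {
    if (normalize(passA[key]) !== normalize(passB[key])) disagreements.push(key);
  }
  // Fast path: both passes agree on every field.
  return { agreed: disagreements.length === 0, disagreements };
}
```

When `agreed` is false, step 4 of the pipeline applies: queue additional visual passes for the disagreeing fields.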
@@ -406,7 +283,7 @@ This dual-VLM approach catches extraction errors that single models miss.
 
 ### Why Multi-Model Works
 
 - **Different architectures:** Independent models cross-validate each other
-- **Specialized strengths:** PaddleOCR-VL excels at tables; MiniCPM-V handles general vision
+- **Specialized strengths:** Nanonets-OCR-s excels at document structure; MiniCPM-V handles general vision
 - **Native processing:** All VLMs see original images—no intermediate structure loss
 
 ### Model Selection Guide
@@ -414,7 +291,6 @@ This dual-VLM approach catches extraction errors that single models miss.
 | Task | Recommended Model |
 |------|-------------------|
 | General image understanding | MiniCPM-V 4.5 |
-| Table extraction from documents | PaddleOCR-VL |
 | Document OCR with structure preservation | Nanonets-OCR-s |
 | Complex visual reasoning / code generation | Qwen3-VL-30B |
 | Multi-image analysis | MiniCPM-V 4.5 |