feat(invoices): add hybrid OCR + vision invoice/document parsing with PaddleOCR, consensus voting, and prompt/test refactors

This commit is contained in:
2026-01-16 14:24:37 +00:00
parent acded2a165
commit 82358b2d5d
4 changed files with 380 additions and 109 deletions

View File

@@ -1,5 +1,14 @@
# Changelog # Changelog
## 2026-01-16 - 1.4.0 - feat(invoices)
add hybrid OCR + vision invoice/document parsing with PaddleOCR, consensus voting, and prompt/test refactors
- Add hybrid pipeline documentation and examples (PaddleOCR + MiniCPM-V) and architecture diagram in recipes/document.md
- Integrate PaddleOCR: new OCR extraction functions and OCR-only prompt flow in test/test.node.ts
- Add consensus voting and parallel-pass optimization to improve reliability (multiple passes, hashing, and majority voting)
- Refactor prompts and tests: introduce /nothink token, OCR truncation limits, separate visual and OCR-only prompts, and improved prompt building in test/test.invoices.ts
- Update image conversion defaults (200 DPI, filename change) and add TypeScript helper functions for extraction and consensus handling
## 2026-01-16 - 1.3.0 - feat(paddleocr) ## 2026-01-16 - 1.3.0 - feat(paddleocr)
add PaddleOCR OCR service (Docker images, server, tests, docs) and CI workflows add PaddleOCR OCR service (Docker images, server, tests, docs) and CI workflows

View File

@@ -1,129 +1,250 @@
# Bank Statement Parsing with MiniCPM-V 4.5 # Document Recognition with Hybrid OCR + Vision AI
Recipe for extracting transactions from bank statement PDFs using vision-language AI. Recipe for extracting structured data from invoices and documents using a hybrid approach:
PaddleOCR for text extraction + MiniCPM-V 4.5 for intelligent parsing.
## Model ## Architecture
- **Model**: MiniCPM-V 4.5 (8B parameters) ```
- **Ollama Name**: `openbmb/minicpm-v4.5:q8_0` ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
- **Quantization**: Q8_0 (9.8GB VRAM) │ PDF/Image │ ───> │ PaddleOCR │ ───> │ Raw Text │
- **Runtime**: Ollama on GPU └──────────────┘ └──────────────┘ └──────┬───────┘
┌──────────────┐ │
│ MiniCPM-V │ <───────────┘
│ 4.5 VLM │ <─── Image
└──────┬───────┘
┌──────▼───────┐
│ Structured │
│ JSON │
└──────────────┘
```
## Why Hybrid?
| Approach | Accuracy | Speed | Best For |
|----------|----------|-------|----------|
| VLM Only | 85-90% | Fast | Simple layouts |
| OCR Only | N/A | Fast | Just text extraction |
| **Hybrid** | **91%+** | Medium | Complex invoices |
The hybrid approach provides OCR text as context to the VLM, improving accuracy on:
- Small text and numbers
- Low contrast documents
- Dense tables
## Services
| Service | Port | Purpose |
|---------|------|---------|
| PaddleOCR | 5000 | Text extraction |
| Ollama (MiniCPM-V) | 11434 | Intelligent parsing |
## Running the Containers
**Start both services:**
```bash
# PaddleOCR (CPU is sufficient for OCR)
docker run -d --name paddleocr -p 5000:5000 \
code.foss.global/host.today/ht-docker-ai:paddleocr-cpu
# MiniCPM-V 4.5 (GPU recommended)
docker run -d --name minicpm --gpus all -p 11434:11434 \
-v ollama-data:/root/.ollama \
code.foss.global/host.today/ht-docker-ai:minicpm45v
```
## Image Conversion ## Image Conversion
Convert PDF to PNG at 300 DPI for optimal OCR accuracy. Convert PDF to PNG at 200 DPI:
```bash ```bash
convert -density 300 -quality 100 input.pdf \ convert -density 200 -quality 90 input.pdf \
-background white -alpha remove \ -background white -alpha remove \
output-%d.png page-%d.png
``` ```
**Parameters:** ## Step 1: Extract OCR Text
- `-density 300`: 300 DPI resolution (critical for accuracy)
- `-quality 100`: Maximum quality
- `-background white -alpha remove`: Remove transparency
- `output-%d.png`: Outputs page-0.png, page-1.png, etc.
**Dependencies:** ```typescript
```bash async function extractOcrText(imageBase64: string): Promise<string> {
apt-get install imagemagick const response = await fetch('http://localhost:5000/ocr', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ image: imageBase64 }),
});
const data = await response.json();
if (data.success && data.results) {
return data.results.map((r: { text: string }) => r.text).join('\n');
}
return '';
}
``` ```
## Prompt ## Step 2: Build Enhanced Prompt
``` ```typescript
You are a bank statement parser. Extract EVERY transaction from the table. function buildPrompt(ocrText: string): string {
const base = `You are an invoice parser. Extract the following fields:
Read the Amount column carefully: 1. invoice_number: The invoice/receipt number
- "- 21,47 €" means DEBIT, output as: -21.47 2. invoice_date: Date in YYYY-MM-DD format
- "+ 1.000,00 €" means CREDIT, output as: 1000.00 3. vendor_name: Company that issued the invoice
- European format: comma = decimal point 4. currency: EUR, USD, etc.
5. net_amount: Amount before tax (if shown)
6. vat_amount: Tax/VAT amount (0 if reverse charge)
7. total_amount: Final amount due
For each row output: {"date":"YYYY-MM-DD","counterparty":"NAME","amount":-21.47} Return ONLY valid JSON:
{"invoice_number":"XXX","invoice_date":"YYYY-MM-DD","vendor_name":"Company","currency":"EUR","net_amount":100.00,"vat_amount":19.00,"total_amount":119.00}`;
Do not skip any rows. Return complete JSON array: if (ocrText) {
return `${base}
OCR text extracted from the invoice:
---
${ocrText}
---
Cross-reference the image with the OCR text above for accuracy.`;
}
return base;
}
``` ```
## API Call ## Step 3: Call Vision-Language Model
```python ```typescript
import base64 async function extractInvoice(images: string[], ocrText: string): Promise<Invoice> {
import requests const payload = {
model: 'openbmb/minicpm-v4.5:q8_0',
prompt: buildPrompt(ocrText),
images, // Base64 encoded
stream: false,
options: {
num_predict: 2048,
temperature: 0.1,
},
};
# Load images const response = await fetch('http://localhost:11434/api/generate', {
with open('page-0.png', 'rb') as f: method: 'POST',
page0 = base64.b64encode(f.read()).decode('utf-8') headers: { 'Content-Type': 'application/json' },
with open('page-1.png', 'rb') as f: body: JSON.stringify(payload),
page1 = base64.b64encode(f.read()).decode('utf-8') });
payload = { const result = await response.json();
"model": "openbmb/minicpm-v4.5:q8_0", return JSON.parse(result.response);
"prompt": prompt, }
"images": [page0, page1], # Multiple pages supported ```
"stream": False,
"options": { ## Consensus Voting
"num_predict": 16384,
"temperature": 0.1 For production reliability, run multiple extraction passes and require consensus:
```typescript
async function extractWithConsensus(images: string[], maxPasses: number = 5): Promise<Invoice> {
const results: Map<string, { invoice: Invoice; count: number }> = new Map();
// Optimization: Run Pass 1 (no OCR) parallel with OCR + Pass 2
const [pass1Result, ocrText] = await Promise.all([
extractInvoice(images, ''),
extractOcrText(images[0]),
]);
// Add Pass 1 result
addResult(results, pass1Result);
// Pass 2 with OCR context
const pass2Result = await extractInvoice(images, ocrText);
addResult(results, pass2Result);
// Check for consensus (2 matching results)
for (const [hash, data] of results) {
if (data.count >= 2) {
return data.invoice; // Consensus reached!
} }
}
// Continue until consensus or max passes
for (let pass = 3; pass <= maxPasses; pass++) {
const result = await extractInvoice(images, ocrText);
addResult(results, result);
// Check consensus...
}
// Return most common result
return getMostCommon(results);
} }
response = requests.post( function hashInvoice(inv: Invoice): string {
'http://localhost:11434/api/generate', return `${inv.invoice_number}|${inv.invoice_date}|${inv.total_amount.toFixed(2)}`;
json=payload, }
timeout=600
)
result = response.json()['response']
``` ```
## Output Format ## Output Format
```json ```json
[ {
{"date":"2022-04-01","counterparty":"DIGITALOCEAN.COM","amount":-21.47}, "invoice_number": "INV-2024-001234",
{"date":"2022-04-01","counterparty":"DIGITALOCEAN.COM","amount":-58.06}, "invoice_date": "2024-08-15",
{"date":"2022-04-12","counterparty":"LOSSLESS GMBH","amount":1000.00} "vendor_name": "Hetzner Online GmbH",
] "currency": "EUR",
"net_amount": 167.52,
"vat_amount": 31.83,
"total_amount": 199.35
}
``` ```
## Running the Container
**GPU (recommended):**
```bash
docker run -d --gpus all -p 11434:11434 \
-v ollama-data:/root/.ollama \
-e MODEL_NAME="openbmb/minicpm-v4.5:q8_0" \
ht-docker-ai:minicpm45v
```
**CPU (slower):**
```bash
docker run -d -p 11434:11434 \
-v ollama-data:/root/.ollama \
-e MODEL_NAME="openbmb/minicpm-v4.5:q4_0" \
ht-docker-ai:minicpm45v-cpu
```
## Hardware Requirements
| Quantization | VRAM/RAM | Speed |
|--------------|----------|-------|
| Q8_0 (GPU) | 10GB | Fast |
| Q4_0 (CPU) | 8GB | Slow |
## Test Results ## Test Results
| Statement | Pages | Transactions | Accuracy | Tested on 46 real invoices from various vendors:
|-----------|-------|--------------|----------|
| bunq-2022-04 | 2 | 26 | 100% | | Metric | Value |
| bunq-2021-06 | 3 | 28 | 100% | |--------|-------|
| **Accuracy** | 91.3% (42/46) |
| **Avg Time** | 42.7s per invoice |
| **Consensus Rate** | 85% in 2 passes |
### Per-Vendor Results
| Vendor | Invoices | Accuracy |
|--------|----------|----------|
| Hetzner | 3 | 100% |
| DigitalOcean | 4 | 100% |
| Adobe | 3 | 100% |
| Cloudflare | 1 | 100% |
| Wasabi | 4 | 100% |
| Figma | 3 | 100% |
| Google Cloud | 1 | 100% |
| MongoDB | 3 | 0% (date parsing) |
## Hardware Requirements
| Component | Minimum | Recommended |
|-----------|---------|-------------|
| PaddleOCR (CPU) | 4GB RAM | 8GB RAM |
| MiniCPM-V (GPU) | 10GB VRAM | 12GB VRAM |
| MiniCPM-V (CPU) | 16GB RAM | 32GB RAM |
## Tips ## Tips
1. **DPI matters**: 150 DPI causes missed rows; 300 DPI is optimal 1. **Use hybrid approach**: OCR text dramatically improves number/date accuracy
2. **PNG over JPEG**: PNG preserves text clarity better 2. **Consensus voting**: Run 2-5 passes to catch hallucinations
3. **Remove alpha**: Some models struggle with transparency 3. **200 DPI is optimal**: Higher doesn't help, lower loses detail
4. **Multi-page**: Pass all pages in single request for context 4. **PNG over JPEG**: Preserves text clarity
5. **Temperature 0.1**: Low temperature for consistent output 5. **Temperature 0.1**: Low temperature for consistent output
6. **European format**: Explicitly explain comma=decimal in prompt 6. **Multi-page support**: Pass all pages in single request for context
7. **Normalize for comparison**: Ignore case/whitespace when comparing invoice numbers
## Common Issues
| Issue | Cause | Solution |
|-------|-------|----------|
| Wrong date | Multiple dates on invoice | Be specific in prompt about which date |
| Wrong currency | Symbol vs code mismatch | OCR helps disambiguate |
| Missing digits | Low resolution | Increase density to 300 DPI |
| Hallucinated data | VLM uncertainty | Use consensus voting |

View File

@@ -45,7 +45,8 @@ async function extractOcrText(imageBase64: string): Promise<string> {
* Build prompt with optional OCR text * Build prompt with optional OCR text
*/ */
function buildPrompt(ocrText: string): string { function buildPrompt(ocrText: string): string {
const base = `You are an invoice parser. Extract the following fields from this invoice: const base = `/nothink
You are an invoice parser. Extract the following fields from this invoice:
1. invoice_number: The invoice/receipt number 1. invoice_number: The invoice/receipt number
2. invoice_date: Date in YYYY-MM-DD format 2. invoice_date: Date in YYYY-MM-DD format
@@ -62,11 +63,17 @@ If a field is not visible, use null for strings or 0 for numbers.
No explanation, just the JSON object.`; No explanation, just the JSON object.`;
if (ocrText) { if (ocrText) {
// Limit OCR text to prevent context overflow
const maxOcrLength = 4000;
const truncatedOcr = ocrText.length > maxOcrLength
? ocrText.substring(0, maxOcrLength) + '\n... (truncated)'
: ocrText;
return `${base} return `${base}
OCR text extracted from the invoice: OCR text extracted from the invoice (use for reference):
--- ---
${ocrText} ${truncatedOcr}
--- ---
Cross-reference the image with the OCR text above for accuracy.`; Cross-reference the image with the OCR text above for accuracy.`;

View File

@@ -6,8 +6,11 @@ import * as os from 'os';
const OLLAMA_URL = 'http://localhost:11434'; const OLLAMA_URL = 'http://localhost:11434';
const MODEL = 'openbmb/minicpm-v4.5:q8_0'; const MODEL = 'openbmb/minicpm-v4.5:q8_0';
const PADDLEOCR_URL = 'http://localhost:5000';
const EXTRACT_PROMPT = `You are a bank statement parser. Extract EVERY transaction from the table. // Prompt for visual extraction (with images)
const VISUAL_EXTRACT_PROMPT = `/nothink
You are a bank statement parser. Extract EVERY transaction from the table.
Read the Amount column carefully: Read the Amount column carefully:
- "- 21,47 €" means DEBIT, output as: -21.47 - "- 21,47 €" means DEBIT, output as: -21.47
@@ -18,6 +21,60 @@ For each row output: {"date":"YYYY-MM-DD","counterparty":"NAME","amount":-21.47}
Do not skip any rows. Return ONLY the JSON array, no explanation.`; Do not skip any rows. Return ONLY the JSON array, no explanation.`;
// Prompt for OCR-only extraction (no images)
const OCR_EXTRACT_PROMPT = `/nothink
You are a bank statement parser. Extract EVERY transaction from the OCR text below.
Read the Amount values carefully:
- "- 21,47 €" means DEBIT, output as: -21.47
- "+ 1.000,00 €" means CREDIT, output as: 1000.00
- European format: comma = decimal point
For each transaction output: {"date":"YYYY-MM-DD","counterparty":"NAME","amount":-21.47}
Do not skip any transactions. Return ONLY the JSON array, no explanation.`;
/**
* Build prompt for OCR-only extraction (no images)
*/
function buildOcrOnlyPrompt(ocrText: string): string {
// Limit OCR text to prevent context overflow
const maxOcrLength = 12000;
const truncatedOcr = ocrText.length > maxOcrLength
? ocrText.substring(0, maxOcrLength) + '\n... (truncated)'
: ocrText;
return `${OCR_EXTRACT_PROMPT}
OCR text from bank statement:
---
${truncatedOcr}
---`;
}
/**
* Extract OCR text from an image using PaddleOCR
*/
async function extractOcrText(imageBase64: string): Promise<string> {
try {
const response = await fetch(`${PADDLEOCR_URL}/ocr`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ image: imageBase64 }),
});
if (!response.ok) return '';
const data = await response.json();
if (data.success && data.results) {
return data.results.map((r: { text: string }) => r.text).join('\n');
}
} catch {
// PaddleOCR unavailable
}
return '';
}
interface ITransaction { interface ITransaction {
date: string; date: string;
counterparty: string; counterparty: string;
@@ -53,12 +110,12 @@ function convertPdfToImages(pdfPath: string): string[] {
} }
/** /**
* Single extraction pass * Visual extraction pass (with images)
*/ */
async function extractOnce(images: string[], passNum: number): Promise<ITransaction[]> { async function extractVisual(images: string[], passLabel: string): Promise<ITransaction[]> {
const payload = { const payload = {
model: MODEL, model: MODEL,
prompt: EXTRACT_PROMPT, prompt: VISUAL_EXTRACT_PROMPT,
images, images,
stream: true, stream: true,
options: { options: {
@@ -67,6 +124,31 @@ async function extractOnce(images: string[], passNum: number): Promise<ITransact
}, },
}; };
return doExtraction(payload, passLabel);
}
/**
* OCR-only extraction pass (no images, just text)
*/
async function extractFromOcr(ocrText: string, passLabel: string): Promise<ITransaction[]> {
const payload = {
model: MODEL,
prompt: buildOcrOnlyPrompt(ocrText),
stream: true,
options: {
num_predict: 16384,
temperature: 0.1,
},
};
return doExtraction(payload, passLabel);
}
/**
* Common extraction logic
*/
async function doExtraction(payload: object, passLabel: string): Promise<ITransaction[]> {
const response = await fetch(`${OLLAMA_URL}/api/generate`, { const response = await fetch(`${OLLAMA_URL}/api/generate`, {
method: 'POST', method: 'POST',
headers: { 'Content-Type': 'application/json' }, headers: { 'Content-Type': 'application/json' },
@@ -86,7 +168,7 @@ async function extractOnce(images: string[], passNum: number): Promise<ITransact
let fullText = ''; let fullText = '';
let lineBuffer = ''; let lineBuffer = '';
console.log(`[Pass ${passNum}] Extracting...`); console.log(`[${passLabel}] Extracting...`);
while (true) { while (true) {
const { done, value } = await reader.read(); const { done, value } = await reader.read();
@@ -144,30 +226,78 @@ function hashTransactions(transactions: ITransaction[]): string {
/** /**
* Extract with majority voting - run until 2 passes match * Extract with majority voting - run until 2 passes match
* Strategy: Pass 1 = Visual (images), Pass 2 = OCR-only (text), Pass 3+ = Visual
*/ */
async function extractWithConsensus(images: string[], maxPasses: number = 5): Promise<ITransaction[]> { async function extractWithConsensus(images: string[], maxPasses: number = 5): Promise<ITransaction[]> {
const results: Array<{ transactions: ITransaction[]; hash: string }> = []; const results: Array<{ transactions: ITransaction[]; hash: string }> = [];
const hashCounts: Map<string, number> = new Map(); const hashCounts: Map<string, number> = new Map();
for (let pass = 1; pass <= maxPasses; pass++) { const addResult = (transactions: ITransaction[], passLabel: string): number => {
const transactions = await extractOnce(images, pass);
const hash = hashTransactions(transactions); const hash = hashTransactions(transactions);
results.push({ transactions, hash }); results.push({ transactions, hash });
hashCounts.set(hash, (hashCounts.get(hash) || 0) + 1); hashCounts.set(hash, (hashCounts.get(hash) || 0) + 1);
console.log(`[${passLabel}] Got ${transactions.length} transactions (hash: ${hash.substring(0, 20)}...)`);
return hashCounts.get(hash)!;
};
console.log(`[Pass ${pass}] Got ${transactions.length} transactions (hash: ${hash.substring(0, 20)}...)`); // Run Pass 1 (Visual) in parallel with OCR extraction
let ocrText = '';
const pass1Promise = extractVisual(images, 'Pass 1 Visual').catch((err) => ({ error: err }));
// Check if we have consensus (2+ matching) // Extract OCR from all pages
const count = hashCounts.get(hash)!; const ocrPromise = (async () => {
if (count >= 2) { const ocrTexts: string[] = [];
console.log(`[Consensus] Reached after ${pass} passes (${count} matching results)`); for (let i = 0; i < images.length; i++) {
return transactions; const pageOcr = await extractOcrText(images[i]);
if (pageOcr) {
ocrTexts.push(`--- Page ${i + 1} ---\n${pageOcr}`);
}
} }
ocrText = ocrTexts.join('\n\n');
if (ocrText) {
console.log(`[OCR] Extracted text from ${ocrTexts.length} page(s)`);
}
return ocrText;
})();
// Wait for Pass 1 and OCR to complete
const [pass1Result] = await Promise.all([pass1Promise, ocrPromise]);
// Process Pass 1 result
if ('error' in pass1Result) {
console.log(`[Pass 1] Error: ${(pass1Result as { error: unknown }).error}`);
} else {
addResult(pass1Result as ITransaction[], 'Pass 1 Visual');
}
// Pass 2: OCR-only (no images) - faster, different approach
if (ocrText) {
try {
const pass2Result = await extractFromOcr(ocrText, 'Pass 2 OCR-only');
const count = addResult(pass2Result, 'Pass 2 OCR-only');
if (count >= 2) {
console.log(`[Consensus] Visual and OCR extractions match!`);
return pass2Result;
}
} catch (err) {
console.log(`[Pass 2 OCR-only] Error: ${err}`);
}
}
// Continue with visual passes 3+ if no consensus yet
for (let pass = 3; pass <= maxPasses; pass++) {
try {
const transactions = await extractVisual(images, `Pass ${pass} Visual`);
const count = addResult(transactions, `Pass ${pass} Visual`);
if (count >= 2) {
console.log(`[Consensus] Reached after ${pass} passes`);
return transactions;
}
// After 2 passes, if no match yet, continue
if (pass >= 2) {
console.log(`[Pass ${pass}] No consensus yet, trying again...`); console.log(`[Pass ${pass}] No consensus yet, trying again...`);
} catch (err) {
console.log(`[Pass ${pass}] Error: ${err}`);
} }
} }
@@ -181,6 +311,10 @@ async function extractWithConsensus(images: string[], maxPasses: number = 5): Pr
} }
} }
if (!bestHash) {
throw new Error('No valid results obtained');
}
const best = results.find((r) => r.hash === bestHash)!; const best = results.find((r) => r.hash === bestHash)!;
console.log(`[No consensus] Using most common result (${bestCount}/${maxPasses} passes)`); console.log(`[No consensus] Using most common result (${bestCount}/${maxPasses} passes)`);
return best.transactions; return best.transactions;