Compare commits

8 Commits

| Author | SHA1 | Date |
|---|---|---|
| | b202e024a4 | |
| | 2210611f70 | |
| | d8bdb18841 | |
| | d384c1d79b | |
| | 6bd672da61 | |
| | 44d6dc3336 | |
| | d1ff95bd94 | |
| | 09770d3177 | |
changelog.md (+26)

@@ -1,5 +1,31 @@
 # Changelog
 
+## 2026-01-20 - 1.14.3 - fix(repo)
+no changes detected in the diff; no files modified and no release required
+
+- Diff contained no changes
+- No files were added, removed, or modified
+- No code, dependency, or documentation updates to release
+
+## 2026-01-19 - 1.14.2 - fix(readme)
+update README to document Nanonets-OCR2-3B (replaces Nanonets-OCR-s), adjust VRAM and context defaults, expand feature docs, and update examples/test command
+
+- Renamed Nanonets-OCR-s -> Nanonets-OCR2-3B throughout README and examples
+- Updated Nanonets VRAM guidance from ~10GB to ~12-16GB and documented 30K context
+- Changed documented MAX_MODEL_LEN default from 8192 to 30000
+- Updated example model identifiers (model strings and curl/example snippets) to nanonets/Nanonets-OCR2-3B
+- Added MiniCPM and Qwen feature bullets (multilingual, multi-image, flowchart support, expanded context notes)
+- Replaced README test command from ./test-images.sh to pnpm test
+
+## 2026-01-19 - 1.14.1 - fix(extraction)
+improve JSON extraction prompts and model options for invoice and bank statement tests
+
+- Refactor JSON extraction prompts to be sent after the document text and add explicit 'WHERE TO FIND DATA' and 'RULES' sections for clearer extraction guidance
+- Change chat message flow to: send document, assistant acknowledgement, then the JSON extraction prompt (avoids concatenating large prompts into one message)
+- Add model options (num_ctx: 32768, temperature: 0) to give larger context windows and deterministic JSON output
+- Simplify logging to avoid printing full prompt contents; log document and prompt lengths instead
+- Increase timeouts for large documents to 600000ms (10 minutes) where applicable
+
 ## 2026-01-19 - 1.14.0 - feat(docker-images)
 add vLLM-based Nanonets-OCR2-3B image, Qwen3-VL Ollama image and refactor build/docs/tests to use new runtime/layout
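The 1.14.1 entry describes a reworked chat flow: the document goes out first, a scripted assistant acknowledgement follows, and the extraction prompt comes last, with num_ctx and temperature pinned. A minimal TypeScript sketch of that pattern against Ollama's /api/chat, condensed from the test diffs further down; EXTRACTION_PROMPT is a placeholder, and a non-streaming call is used for brevity:

```ts
// Minimal sketch (not the repository's code verbatim) of the 1.14.1 chat flow.
const EXTRACTION_PROMPT = '...'; // placeholder; the real prompts appear in the test diffs below

async function extractJson(documentText: string): Promise<string> {
  const response = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-oss:20b',
      messages: [
        // 1. Document in its own turn, instead of one concatenated mega-prompt
        { role: 'user', content: `Here is a bank statement document:\n\n${documentText}` },
        // 2. Scripted acknowledgement separates the document from the instructions
        { role: 'assistant', content: 'I have read the document. What would you like me to do with it?' },
        // 3. The extraction prompt comes last
        { role: 'user', content: EXTRACTION_PROMPT },
      ],
      stream: false, // the tests stream; non-streaming keeps this sketch short
      options: {
        num_ctx: 32768, // larger context window for long documents
        temperature: 0, // deterministic JSON output
      },
    }),
    signal: AbortSignal.timeout(600_000), // 10 minute timeout for large documents
  });
  const data = await response.json();
  return (data.message?.content ?? '').trim();
}
```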
package.json

@@ -1,6 +1,6 @@
 {
   "name": "@host.today/ht-docker-ai",
-  "version": "1.14.0",
+  "version": "1.14.3",
   "type": "module",
   "private": false,
   "description": "Docker images for AI vision-language models including MiniCPM-V 4.5",
@@ -14,7 +14,9 @@
   },
   "devDependencies": {
     "@git.zone/tsrun": "^2.0.1",
-    "@git.zone/tstest": "^3.1.5"
+    "@git.zone/tstest": "^3.1.5",
+    "@push.rocks/smartagent": "^1.2.8",
+    "@push.rocks/smartai": "^0.11.1"
   },
   "repository": {
     "type": "git",
pnpm-lock.yaml (1134 changes, generated)

File diff suppressed because it is too large.
readme.md (57 changes)

@@ -2,7 +2,7 @@
 
 Production-ready Docker images for state-of-the-art AI Vision-Language Models. Run powerful multimodal AI locally with GPU acceleration—**no cloud API keys required**.
 
-> 🔥 **Three VLMs, one registry.** From lightweight document OCR to GPT-4o-level vision understanding—pick the right tool for your task.
+> 🔥 **Three VLMs, one registry.** From high-performance document OCR to GPT-4o-level vision understanding—pick the right tool for your task.
 
 ## Issue Reporting and Security
 
@@ -15,7 +15,7 @@ For reporting bugs, issues, or security vulnerabilities, please visit [community
 | Model | Parameters | Best For | API | Port | VRAM |
 |-------|-----------|----------|-----|------|------|
 | **MiniCPM-V 4.5** | 8B | General vision understanding, multi-image analysis | Ollama-compatible | 11434 | ~9GB |
-| **Nanonets-OCR-s** | ~4B | Document OCR with semantic markdown output | OpenAI-compatible | 8000 | ~10GB |
+| **Nanonets-OCR2-3B** | ~3B | Document OCR with semantic markdown, LaTeX, flowcharts | OpenAI-compatible | 8000 | ~12-16GB |
 | **Qwen3-VL-30B** | 30B (A3B) | Advanced visual agents, code generation from images | Ollama-compatible | 11434 | ~20GB |
 
 ---
@@ -29,7 +29,7 @@ code.foss.global/host.today/ht-docker-ai:<tag>
 | Tag | Model | Runtime | Port | VRAM |
 |-----|-------|---------|------|------|
 | `minicpm45v` / `latest` | MiniCPM-V 4.5 | Ollama | 11434 | ~9GB |
-| `nanonets-ocr` | Nanonets-OCR-s | vLLM | 8000 | ~10GB |
+| `nanonets-ocr` | Nanonets-OCR2-3B | vLLM | 8000 | ~12-16GB |
 | `qwen3vl` | Qwen3-VL-30B-A3B | Ollama | 11434 | ~20GB |
 
 ---
@@ -38,6 +38,13 @@ code.foss.global/host.today/ht-docker-ai:<tag>
 
 A GPT-4o level multimodal LLM from OpenBMB—handles image understanding, OCR, multi-image analysis, and visual reasoning across **30+ languages**.
 
+### ✨ Key Features
+
+- 🌍 **Multilingual:** 30+ languages supported
+- 🖼️ **Multi-image:** Analyze multiple images in one request
+- 📊 **Versatile:** Charts, documents, photos, diagrams
+- ⚡ **Efficient:** Runs on consumer GPUs (9GB VRAM)
+
 ### Quick Start
 
 ```bash
@@ -83,21 +90,22 @@ curl http://localhost:11434/api/chat -d '{
 
 | Mode | VRAM Required |
 |------|---------------|
-| int4 quantized | 9GB |
-| Full precision (bf16) | 18GB |
+| int4 quantized | ~9GB |
+| Full precision (bf16) | ~18GB |
 
 ---
 
-## 🔍 Nanonets-OCR-s
+## 🔍 Nanonets-OCR2-3B
 
-A **Qwen2.5-VL-3B** model fine-tuned specifically for document OCR. Outputs structured markdown with semantic HTML tags—perfect for preserving document structure.
+The **latest Nanonets document OCR model** (October 2025 release)—based on Qwen2.5-VL-3B, fine-tuned specifically for document extraction with significant improvements over the original OCR-s.
 
-### Key Features
+### ✨ Key Features
 
-- 📝 **Semantic output:** Tables → HTML, equations → LaTeX, watermarks/page numbers → tagged
+- 📝 **Semantic output:** Tables → HTML, equations → LaTeX, flowcharts → structured markup
 - 🌍 **Multilingual:** Inherits Qwen's broad language support
-- ⚡ **Efficient:** ~10GB VRAM, runs great on consumer GPUs
+- 📄 **30K context:** Handle large, multi-page documents
 - 🔌 **OpenAI-compatible:** Drop-in replacement for existing pipelines
+- 🎯 **Improved accuracy:** Better semantic tagging and LaTeX equation extraction vs. OCR-s
 
 ### Quick Start
 
@@ -116,7 +124,7 @@ docker run -d \
 curl http://localhost:8000/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "nanonets/Nanonets-OCR-s",
+    "model": "nanonets/Nanonets-OCR2-3B",
     "messages": [{
       "role": "user",
       "content": [
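The hunk above only shows the changed "model" line of the curl example. For readers wiring this up from code, a sketch of the same request in TypeScript, assuming the standard OpenAI-compatible vision payload that vLLM accepts; the prompt text and imageBase64 input are placeholders:

```ts
// Sketch: calling the Nanonets-OCR2-3B container's OpenAI-compatible endpoint.
// Assumes the standard OpenAI vision message shape; imageBase64 is a placeholder.
async function ocrPage(imageBase64: string): Promise<string> {
  const res = await fetch('http://localhost:8000/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'nanonets/Nanonets-OCR2-3B',
      messages: [{
        role: 'user',
        content: [
          { type: 'image_url', image_url: { url: `data:image/png;base64,${imageBase64}` } },
          { type: 'text', text: 'Extract the text from this document as structured markdown.' },
        ],
      }],
      max_tokens: 4096,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```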
@@ -131,7 +139,7 @@ curl http://localhost:8000/v1/chat/completions \
 
 ### Output Format
 
-Nanonets-OCR-s returns markdown with semantic tags:
+Nanonets-OCR2-3B returns markdown with semantic tags:
 
 | Element | Output Format |
 |---------|---------------|
@@ -140,13 +148,14 @@ Nanonets-OCR-s returns markdown with semantic tags:
 | Images | `<img>description</img>` |
 | Watermarks | `<watermark>OFFICIAL COPY</watermark>` |
 | Page numbers | `<page_number>14</page_number>` |
+| Flowcharts | Structured markup |
 
-### Performance
+### Hardware Requirements
 
-| Metric | Value |
-|--------|-------|
-| Speed | 3–8 seconds per page |
-| VRAM | ~10GB |
+| Config | VRAM |
+|--------|------|
+| 30K context (default) | ~12-16GB |
+| Speed | ~3-8 seconds per page |
 
 ---
 
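Downstream consumers usually want to strip or reroute these tags rather than show them to users. A hypothetical cleanup helper, assuming only the tag formats listed in the table above:

```ts
// Hypothetical post-processing for Nanonets-OCR2-3B output, based on the tag formats above.
// Removes watermarks and page numbers, keeps image descriptions as plain text.
function cleanOcrMarkdown(raw: string): string {
  return raw
    .replace(/<watermark>[\s\S]*?<\/watermark>/g, '')     // drop watermarks
    .replace(/<page_number>[\s\S]*?<\/page_number>/g, '') // drop page numbers
    .replace(/<img>([\s\S]*?)<\/img>/g, '*Image: $1*')    // keep image descriptions
    .trim();
}
```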
@@ -154,7 +163,7 @@ Nanonets-OCR-s returns markdown with semantic tags:
 
 The **most powerful** Qwen vision model—30B parameters with 3B active (MoE architecture). Handles complex visual reasoning, code generation from screenshots, and visual agent capabilities.
 
-### Key Features
+### ✨ Key Features
 
 - 🚀 **256K context** (expandable to 1M tokens!)
 - 🤖 **Visual agent capabilities** — can plan and execute multi-step tasks
@@ -204,7 +213,6 @@ curl http://localhost:11434/api/chat -d '{
 Run multiple VLMs together for maximum flexibility:
 
 ```yaml
-version: '3.8'
 services:
   # General vision tasks
   minicpm:
@@ -259,10 +267,10 @@ volumes:
 
 | Variable | Default | Description |
 |----------|---------|-------------|
-| `MODEL_NAME` | `nanonets/Nanonets-OCR-s` | HuggingFace model ID |
+| `MODEL_NAME` | `nanonets/Nanonets-OCR2-3B` | HuggingFace model ID |
 | `HOST` | `0.0.0.0` | API bind address |
 | `PORT` | `8000` | API port |
-| `MAX_MODEL_LEN` | `8192` | Maximum sequence length |
+| `MAX_MODEL_LEN` | `30000` | Maximum sequence length |
 | `GPU_MEMORY_UTILIZATION` | `0.9` | GPU memory usage (0-1) |
 
 ---
@@ -283,7 +291,7 @@ This dual-VLM approach catches extraction errors that single models miss.
 ### Why Multi-Model Works
 
 - **Different architectures:** Independent models cross-validate each other
-- **Specialized strengths:** Nanonets-OCR-s excels at document structure; MiniCPM-V handles general vision
+- **Specialized strengths:** Nanonets-OCR2-3B excels at document structure; MiniCPM-V handles general vision
 - **Native processing:** All VLMs see original images—no intermediate structure loss
 
 ### Model Selection Guide
@@ -291,10 +299,11 @@ This dual-VLM approach catches extraction errors that single models miss.
 | Task | Recommended Model |
 |------|-------------------|
 | General image understanding | MiniCPM-V 4.5 |
-| Document OCR with structure preservation | Nanonets-OCR-s |
+| Document OCR with structure preservation | Nanonets-OCR2-3B |
 | Complex visual reasoning / code generation | Qwen3-VL-30B |
 | Multi-image analysis | MiniCPM-V 4.5 |
 | Visual agent tasks | Qwen3-VL-30B |
+| Large documents (30K+ tokens) | Nanonets-OCR2-3B |
 
 ---
 
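The cross-validation idea above reduces to a small harness: run the same page through two independent models and accept a value only when they agree. A sketch with hypothetical nanonetsExtract and visionExtract helpers standing in for the two model calls:

```ts
// Sketch of dual-VLM cross-validation: accept a value only when two
// independent models agree. Both helpers below are hypothetical.
declare function nanonetsExtract(imageBase64: string): Promise<number>; // Nanonets-OCR2-3B path
declare function visionExtract(imageBase64: string): Promise<number>;   // MiniCPM-V path

async function crossValidatedTotal(imageBase64: string): Promise<number | null> {
  const [a, b] = await Promise.all([
    nanonetsExtract(imageBase64),
    visionExtract(imageBase64),
  ]);
  // Tolerate rounding noise; disagreement signals an extraction error.
  return Math.abs(a - b) < 0.01 ? a : null;
}
```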
@@ -309,7 +318,7 @@ cd ht-docker-ai
 ./build-images.sh
 
 # Run tests
-./test-images.sh
+pnpm test
 ```
 
 ---
 
@@ -1,9 +1,9 @@
 /**
  * Bank statement extraction using MiniCPM-V (visual extraction)
  *
- * JSON per-page approach:
+ * JSON per-page approach with streaming output:
  * 1. Ask for structured JSON of all transactions per page
- * 2. Consensus: extract twice, compare, retry if mismatch
+ * 2. Single pass extraction (no consensus)
  */
 import { tap, expect } from '@git.zone/tstest/tapbundle';
 import * as fs from 'fs';
@@ -66,11 +66,11 @@ function convertPdfToImages(pdfPath: string): string[] {
 }
 
 /**
- * Query for JSON extraction
+ * Query for JSON extraction with streaming output
  */
 async function queryJson(image: string, queryId: string): Promise<string> {
-  console.log(`  [${queryId}] Sending request to ${MODEL}...`);
   const startTime = Date.now();
+  process.stdout.write(`  [${queryId}] `);
 
   const response = await fetch(`${OLLAMA_URL}/api/chat`, {
     method: 'POST',
@@ -82,25 +82,50 @@ async function queryJson(image: string, queryId: string): Promise<string> {
         content: JSON_PROMPT,
         images: [image],
       }],
-      stream: false,
+      stream: true,
       options: {
        num_ctx: 32768,
        num_predict: 4000,
        temperature: 0.1,
      },
     }),
   });
 
-  const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
-
   if (!response.ok) {
-    console.log(`  [${queryId}] ERROR: ${response.status} (${elapsed}s)`);
+    const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
+    process.stdout.write(`ERROR: ${response.status} (${elapsed}s)\n`);
     throw new Error(`Ollama API error: ${response.status}`);
   }
 
-  const data = await response.json();
-  const content = (data.message?.content || '').trim();
-  console.log(`  [${queryId}] Response received (${elapsed}s, ${content.length} chars)`);
-  return content;
+  let content = '';
+  const reader = response.body!.getReader();
+  const decoder = new TextDecoder();
+
+  try {
+    while (true) {
+      const { done, value } = await reader.read();
+      if (done) break;
+
+      const chunk = decoder.decode(value, { stream: true });
+      for (const line of chunk.split('\n').filter(l => l.trim())) {
+        try {
+          const json = JSON.parse(line);
+          const token = json.message?.content || '';
+          if (token) {
+            process.stdout.write(token);
+            content += token;
+          }
+        } catch {
+          // Ignore parse errors for partial chunks
+        }
+      }
+    }
+  } finally {
+    const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
+    process.stdout.write(` (${elapsed}s)\n`);
+  }
+
+  return content.trim();
 }
 
 /**
@@ -284,102 +309,29 @@ function parseAmount(value: unknown): number {
 }
 
-/**
- * Compare two transaction arrays for consensus
- */
-function transactionArraysMatch(a: ITransaction[], b: ITransaction[]): boolean {
-  if (a.length !== b.length) return false;
-
-  for (let i = 0; i < a.length; i++) {
-    const dateMatch = a[i].date === b[i].date;
-    const amountMatch = Math.abs(a[i].amount - b[i].amount) < 0.01;
-    if (!dateMatch || !amountMatch) return false;
-  }
-
-  return true;
-}
-
-/**
- * Compare two transaction arrays and log differences
- */
-function compareAndLogDifferences(txs1: ITransaction[], txs2: ITransaction[], pageNum: number): void {
-  if (txs1.length !== txs2.length) {
-    console.log(`  [Page ${pageNum}] Length mismatch: Q1=${txs1.length}, Q2=${txs2.length}`);
-    return;
-  }
-
-  for (let i = 0; i < txs1.length; i++) {
-    const dateMatch = txs1[i].date === txs2[i].date;
-    const amountMatch = Math.abs(txs1[i].amount - txs2[i].amount) < 0.01;
-
-    if (!dateMatch || !amountMatch) {
-      console.log(`  [Page ${pageNum}] Tx ${i + 1} differs:`);
-      console.log(`    Q1: ${txs1[i].date} | ${txs1[i].amount}`);
-      console.log(`    Q2: ${txs2[i].date} | ${txs2[i].amount}`);
-    }
-  }
-}
-
 /**
- * Extract transactions from a single page with consensus
+ * Extract transactions from a single page (single pass)
  */
 async function extractTransactionsFromPage(image: string, pageNum: number): Promise<ITransaction[]> {
-  const MAX_ATTEMPTS = 5;
   console.log(`\n  ======== Page ${pageNum} ========`);
+  console.log(`  [Page ${pageNum}] Starting JSON extraction...`);
 
-  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
-    console.log(`\n  [Page ${pageNum}] --- Attempt ${attempt}/${MAX_ATTEMPTS} ---`);
+  const queryId = `P${pageNum}`;
+  const response = await queryJson(image, queryId);
+  const transactions = parseJsonResponse(response, queryId);
 
-    // Extract twice in parallel
-    const q1Id = `P${pageNum}A${attempt}Q1`;
-    const q2Id = `P${pageNum}A${attempt}Q2`;
-
-    const [response1, response2] = await Promise.all([
-      queryJson(image, q1Id),
-      queryJson(image, q2Id),
-    ]);
-
-    const txs1 = parseJsonResponse(response1, q1Id);
-    const txs2 = parseJsonResponse(response2, q2Id);
-
-    console.log(`  [Page ${pageNum}] Results: Q1=${txs1.length} txs, Q2=${txs2.length} txs`);
-
-    if (txs1.length > 0 && transactionArraysMatch(txs1, txs2)) {
-      console.log(`  [Page ${pageNum}] ✓ CONSENSUS REACHED: ${txs1.length} transactions`);
-      console.log(`  [Page ${pageNum}] Transactions:`);
-      for (let i = 0; i < txs1.length; i++) {
-        const tx = txs1[i];
-        console.log(`    ${(i + 1).toString().padStart(2)}. ${tx.date} | ${tx.counterparty.substring(0, 30).padEnd(30)} | ${tx.amount >= 0 ? '+' : ''}${tx.amount.toFixed(2)}`);
-      }
-      return txs1;
-    }
-
-    console.log(`  [Page ${pageNum}] ✗ NO CONSENSUS`);
-    compareAndLogDifferences(txs1, txs2, pageNum);
-
-    if (attempt < MAX_ATTEMPTS) {
-      console.log(`  [Page ${pageNum}] Retrying...`);
-    }
-  }
-
-  // Fallback: use last response
-  console.log(`\n  [Page ${pageNum}] === FALLBACK (no consensus after ${MAX_ATTEMPTS} attempts) ===`);
-  const fallbackId = `P${pageNum}FALLBACK`;
-  const fallbackResponse = await queryJson(image, fallbackId);
-  const fallback = parseJsonResponse(fallbackResponse, fallbackId);
-  console.log(`  [Page ${pageNum}] ~ FALLBACK RESULT: ${fallback.length} transactions`);
-  for (let i = 0; i < fallback.length; i++) {
-    const tx = fallback[i];
-    console.log(`    ${(i + 1).toString().padStart(2)}. ${tx.date} | ${tx.counterparty.substring(0, 30).padEnd(30)} | ${tx.amount >= 0 ? '+' : ''}${tx.amount.toFixed(2)}`);
-  }
-  return fallback;
+  console.log(`  [Page ${pageNum}] Extracted ${transactions.length} transactions:`);
+  for (let i = 0; i < transactions.length; i++) {
+    const tx = transactions[i];
+    console.log(`    ${(i + 1).toString().padStart(2)}. ${tx.date} | ${tx.counterparty.substring(0, 30).padEnd(30)} | ${tx.amount >= 0 ? '+' : ''}${tx.amount.toFixed(2)}`);
+  }
+  return transactions;
 }
 
 /**
  * Extract all transactions from bank statement
  */
 async function extractTransactions(images: string[]): Promise<ITransaction[]> {
-  console.log(`  [Vision] Processing ${images.length} page(s) with ${MODEL} (JSON consensus)`);
+  console.log(`  [Vision] Processing ${images.length} page(s) with ${MODEL} (single pass)`);
 
   const allTransactions: ITransaction[] = [];
 
@@ -527,7 +479,7 @@ tap.test('summary', async () => {
   console.log(`\n======================================================`);
   console.log(`  Bank Statement Summary (${MODEL})`);
   console.log(`======================================================`);
-  console.log(`  Method: JSON per-page + consensus`);
+  console.log(`  Method: JSON per-page (single pass)`);
   console.log(`  Passed: ${passedCount}/${total}`);
   console.log(`  Failed: ${failedCount}/${total}`);
   console.log(`======================================================\n`);
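parseJsonResponse is called throughout the diff above but its body is not part of it. A plausible minimal sketch of such a parser, not the file's actual implementation, assuming the ITransaction shape used above:

```ts
// Plausible sketch of parseJsonResponse (the real implementation is not in this diff).
// Pulls the first JSON array out of a model response and coerces it to ITransaction[].
interface ITransaction { date: string; counterparty: string; amount: number; }

function parseJsonResponseSketch(response: string, queryId: string): ITransaction[] {
  const match = response.match(/\[[\s\S]*\]/); // first JSON array in the text
  if (!match) {
    console.log(`  [${queryId}] No JSON array found in response`);
    return [];
  }
  try {
    const parsed = JSON.parse(match[0]) as unknown[];
    return parsed.filter((t): t is ITransaction =>
      typeof (t as ITransaction).date === 'string' &&
      typeof (t as ITransaction).amount === 'number');
  } catch {
    console.log(`  [${queryId}] JSON parse failed`);
    return [];
  }
}
```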
@@ -51,11 +51,21 @@ If there is an image in the document and image caption is not present, add a sma
 Watermarks should be wrapped in brackets. Ex: <watermark>OFFICIAL COPY</watermark>.
 Page numbers should be wrapped in brackets. Ex: <page_number>14</page_number>.`;
 
-// JSON extraction prompt for GPT-OSS 20B
-const JSON_EXTRACTION_PROMPT = `Extract ALL transactions from this bank statement as JSON array. Each transaction: {"date": "YYYY-MM-DD", "counterparty": "NAME", "amount": -25.99}. Amount negative for debits, positive for credits. Only include actual transactions, not balances. Return ONLY JSON array, no explanation.
-
-STATEMENT:
-`;
+// JSON extraction prompt for GPT-OSS 20B (sent AFTER the statement text is provided)
+const JSON_EXTRACTION_PROMPT = `Extract ALL transactions from the bank statement. Return ONLY valid JSON array.
+
+WHERE TO FIND DATA:
+- Transactions are typically in TABLES with columns: Date, Description/Counterparty, Debit, Credit, Balance
+- Look for rows with actual money movements, NOT header rows or summary totals
+
+RULES:
+1. date: Convert to YYYY-MM-DD format
+2. counterparty: The name/description of who the money went to/from
+3. amount: NEGATIVE for debits/withdrawals, POSITIVE for credits/deposits
+4. Only include actual transactions, NOT opening/closing balances
+
+JSON array only:
+[{"date":"YYYY-MM-DD","counterparty":"NAME","amount":-25.99}]`;
 
 // Constants for smart batching
 const MAX_VISUAL_TOKENS = 28000; // ~32K context minus prompt/output headroom
@@ -246,12 +256,8 @@ async function ensureExtractionModel(): Promise<boolean> {
  */
 async function extractTransactionsFromMarkdown(markdown: string, queryId: string): Promise<ITransaction[]> {
   const startTime = Date.now();
-  const fullPrompt = JSON_EXTRACTION_PROMPT + markdown;
-
-  // Log exact prompt
-  console.log(`\n  [${queryId}] ===== PROMPT =====`);
-  console.log(fullPrompt);
-  console.log(`  [${queryId}] ===== END PROMPT (${fullPrompt.length} chars) =====\n`);
+  console.log(`  [${queryId}] Statement: ${markdown.length} chars, Prompt: ${JSON_EXTRACTION_PROMPT.length} chars`);
 
   const response = await fetch(`${OLLAMA_URL}/api/chat`, {
     method: 'POST',
@@ -261,9 +267,15 @@ async function extractTransactionsFromMarkdown(markdown: string, queryId: string
       messages: [
-        { role: 'user', content: 'Hi there, how are you?' },
-        { role: 'assistant', content: 'Good, how can I help you today?' },
-        { role: 'user', content: fullPrompt },
+        { role: 'user', content: `Here is a bank statement document:\n\n${markdown}` },
+        { role: 'assistant', content: 'I have read the bank statement document you provided. I can see all the transaction data. What would you like me to do with it?' },
+        { role: 'user', content: JSON_EXTRACTION_PROMPT },
       ],
       stream: true,
+      options: {
+        num_ctx: 32768, // Larger context for long statements + thinking
+        temperature: 0, // Deterministic for JSON extraction
+      },
     }),
     signal: AbortSignal.timeout(600000), // 10 minute timeout
   });
@@ -197,6 +197,10 @@ async function extractInvoiceFromMarkdown(markdown: string, queryId: string): Pr
       { role: 'user', content: JSON_EXTRACTION_PROMPT },
     ],
     stream: true,
+    options: {
+      num_ctx: 32768, // Larger context for long invoices + thinking
+      temperature: 0, // Deterministic for JSON extraction
+    },
   }),
   signal: AbortSignal.timeout(120000), // 2 min timeout
 });
@@ -67,9 +67,12 @@ const JSON_PROMPT = `Extract invoice data from this image. Return ONLY a JSON ob
 Return only the JSON, no explanation.`;
 
 /**
- * Query MiniCPM-V for JSON output (fast, no thinking)
+ * Query MiniCPM-V for JSON output (fast, no thinking) with streaming
  */
 async function queryJsonFast(images: string[]): Promise<string> {
   const startTime = Date.now();
+  process.stdout.write(`  [Fast] `);
 
   const response = await fetch(`${OLLAMA_URL}/api/chat`, {
     method: 'POST',
     headers: { 'Content-Type': 'application/json' },
@@ -80,8 +83,9 @@ async function queryJsonFast(images: string[]): Promise<string> {
         content: JSON_PROMPT,
         images: images,
       }],
-      stream: false,
+      stream: true,
       options: {
         num_ctx: 32768,
         num_predict: 1000,
         temperature: 0.1,
       },
@@ -92,14 +96,44 @@ async function queryJsonFast(images: string[]): Promise<string> {
     throw new Error(`Ollama API error: ${response.status}`);
   }
 
-  const data = await response.json();
-  return (data.message?.content || '').trim();
+  let content = '';
+  const reader = response.body!.getReader();
+  const decoder = new TextDecoder();
+
+  try {
+    while (true) {
+      const { done, value } = await reader.read();
+      if (done) break;
+
+      const chunk = decoder.decode(value, { stream: true });
+      for (const line of chunk.split('\n').filter(l => l.trim())) {
+        try {
+          const json = JSON.parse(line);
+          const token = json.message?.content || '';
+          if (token) {
+            process.stdout.write(token);
+            content += token;
+          }
+        } catch {
+          // Ignore parse errors for partial chunks
+        }
+      }
+    }
+  } finally {
+    const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
+    process.stdout.write(` (${elapsed}s)\n`);
+  }
+
+  return content.trim();
 }
 
 /**
- * Query MiniCPM-V for JSON output with thinking enabled (slower, more accurate)
+ * Query MiniCPM-V for JSON output with thinking enabled (slower, more accurate) with streaming
  */
 async function queryJsonWithThinking(images: string[]): Promise<string> {
   const startTime = Date.now();
+  process.stdout.write(`  [Think] `);
 
   const response = await fetch(`${OLLAMA_URL}/api/chat`, {
     method: 'POST',
     headers: { 'Content-Type': 'application/json' },
@@ -110,8 +144,9 @@ async function queryJsonWithThinking(images: string[]): Promise<string> {
         content: `Think carefully about this invoice image, then ${JSON_PROMPT}`,
         images: images,
       }],
-      stream: false,
+      stream: true,
       options: {
         num_ctx: 32768,
         num_predict: 2000,
         temperature: 0.1,
       },
@@ -122,8 +157,56 @@ async function queryJsonWithThinking(images: string[]): Promise<string> {
     throw new Error(`Ollama API error: ${response.status}`);
   }
 
-  const data = await response.json();
-  return (data.message?.content || '').trim();
+  let content = '';
+  let thinkingContent = '';
+  let thinkingStarted = false;
+  let outputStarted = false;
+  const reader = response.body!.getReader();
+  const decoder = new TextDecoder();
+
+  try {
+    while (true) {
+      const { done, value } = await reader.read();
+      if (done) break;
+
+      const chunk = decoder.decode(value, { stream: true });
+      for (const line of chunk.split('\n').filter(l => l.trim())) {
+        try {
+          const json = JSON.parse(line);
+
+          // Stream thinking tokens
+          const thinking = json.message?.thinking || '';
+          if (thinking) {
+            if (!thinkingStarted) {
+              process.stdout.write(`THINKING: `);
+              thinkingStarted = true;
+            }
+            process.stdout.write(thinking);
+            thinkingContent += thinking;
+          }
+
+          // Stream content tokens
+          const token = json.message?.content || '';
+          if (token) {
+            if (!outputStarted) {
+              if (thinkingStarted) process.stdout.write('\n  [Think] ');
+              process.stdout.write(`OUTPUT: `);
+              outputStarted = true;
+            }
+            process.stdout.write(token);
+            content += token;
+          }
+        } catch {
+          // Ignore parse errors for partial chunks
+        }
+      }
+    }
+  } finally {
+    const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
+    process.stdout.write(` (${elapsed}s)\n`);
+  }
+
+  return content.trim();
 }
 
 /**
@@ -12,6 +12,8 @@ import * as path from 'path';
 import { execSync } from 'child_process';
 import * as os from 'os';
 import { ensureNanonetsOcr, ensureMiniCpm, isContainerRunning } from './helpers/docker.js';
+import { SmartAi } from '@push.rocks/smartai';
+import { DualAgentOrchestrator } from '@push.rocks/smartagent';
 
 const NANONETS_URL = 'http://localhost:8000/v1';
 const NANONETS_MODEL = 'nanonets/Nanonets-OCR2-3B';
@@ -19,8 +21,24 @@ const NANONETS_MODEL = 'nanonets/Nanonets-OCR2-3B';
 const OLLAMA_URL = 'http://localhost:11434';
 const EXTRACTION_MODEL = 'gpt-oss:20b';
 
-// Temp directory for storing markdown between stages
-const TEMP_MD_DIR = path.join(os.tmpdir(), 'nanonets-invoices-markdown');
+// Persistent cache directory for storing markdown between runs
+const MD_CACHE_DIR = path.join(process.cwd(), '.nogit/invoices-md');
+
+// SmartAi instance for Ollama with optimized settings
+const smartAi = new SmartAi({
+  ollama: {
+    baseUrl: OLLAMA_URL,
+    model: EXTRACTION_MODEL,
+    defaultOptions: {
+      num_ctx: 32768, // Larger context for long invoices + thinking
+      temperature: 0, // Deterministic for JSON extraction
+    },
+    defaultTimeout: 600000, // 10 minute timeout for large documents
+  },
+});
+
+// DualAgentOrchestrator for structured task execution
+let orchestrator: DualAgentOrchestrator;
 
 interface IInvoice {
   invoice_number: string;
@@ -54,34 +72,30 @@ If there is an image in the document and image caption is not present, add a sma
 Watermarks should be wrapped in brackets. Ex: <watermark>OFFICIAL COPY</watermark>.
 Page numbers should be wrapped in brackets. Ex: <page_number>14</page_number>.`;
 
-// JSON extraction prompt for GPT-OSS 20B
-const JSON_EXTRACTION_PROMPT = `You are an invoice data extractor. Below is an invoice document converted to text/markdown. Extract the key invoice fields as JSON.
+// JSON extraction prompt for GPT-OSS 20B (sent AFTER the invoice text is provided)
+const JSON_EXTRACTION_PROMPT = `Extract key fields from the invoice. Return ONLY valid JSON.
 
-IMPORTANT RULES:
-1. invoice_number: The unique invoice/document number (NOT VAT ID, NOT customer ID)
-2. invoice_date: Format as YYYY-MM-DD
-3. vendor_name: The company that issued the invoice
+WHERE TO FIND DATA:
+- invoice_number, invoice_date, vendor_name: Look in the HEADER section at the TOP of PAGE 1 (near "Invoice no.", "Invoice date:", "Rechnungsnummer")
+- net_amount, vat_amount, total_amount: Look in the SUMMARY section at the BOTTOM (look for "Total", "Amount due", "Gesamtbetrag")
+
+RULES:
+1. invoice_number: Extract ONLY the value (e.g., "R0015632540"), NOT the label "Invoice no."
+2. invoice_date: Convert to YYYY-MM-DD format (e.g., "14/04/2022" → "2022-04-14")
+3. vendor_name: The company issuing the invoice
 4. currency: EUR, USD, or GBP
-5. net_amount: Amount before tax
-6. vat_amount: Tax/VAT amount
-7. total_amount: Final total (gross amount)
+5. net_amount: Total before tax
+6. vat_amount: Tax amount
+7. total_amount: Final total with tax
 
-Return ONLY this JSON format, no explanation:
-{
-  "invoice_number": "INV-2024-001",
-  "invoice_date": "2024-01-15",
-  "vendor_name": "Company Name",
-  "currency": "EUR",
-  "net_amount": 100.00,
-  "vat_amount": 19.00,
-  "total_amount": 119.00
-}
+JSON only:
+{"invoice_number":"X","invoice_date":"YYYY-MM-DD","vendor_name":"X","currency":"EUR","net_amount":0,"vat_amount":0,"total_amount":0}
 
 Double check for valid JSON syntax.
 
-INVOICE TEXT:
 `;
 
 // Constants for smart batching
 const MAX_VISUAL_TOKENS = 28000; // ~32K context minus prompt/output headroom
 const PATCH_SIZE = 14; // Qwen2.5-VL uses 14x14 patches
@@ -325,16 +339,20 @@ function extractCurrency(s: string | undefined): string {
 }
 
 /**
- * Extract JSON from response
+ * Try to extract valid JSON from a response string
  */
-function extractJsonFromResponse(response: string): Record<string, unknown> | null {
-  let cleanResponse = response.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
-  const codeBlockMatch = cleanResponse.match(/```(?:json)?\s*([\s\S]*?)```/);
-  const jsonStr = codeBlockMatch ? codeBlockMatch[1].trim() : cleanResponse;
+function tryExtractJson(response: string): Record<string, unknown> | null {
+  // Remove thinking tags
+  let clean = response.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
+
+  // Try code block
+  const codeBlockMatch = clean.match(/```(?:json)?\s*([\s\S]*?)```/);
+  const jsonStr = codeBlockMatch ? codeBlockMatch[1].trim() : clean;
 
   try {
     return JSON.parse(jsonStr);
   } catch {
     // Try to find JSON object
     const jsonMatch = jsonStr.match(/\{[\s\S]*\}/);
     if (jsonMatch) {
       try {
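A quick usage note for the renamed helper above: it strips <think> blocks, prefers a fenced code block, and falls back to the first bare JSON object. Illustrative calls (the sample strings are made up):

```ts
// Illustrative inputs for tryExtractJson, exercising both parse paths.
const fence = '`'.repeat(3); // avoids a literal triple-backtick inside this snippet
const fenced = `Here you go:\n${fence}json\n{"total_amount": 119}\n${fence}`;
const noisy = '<think>check the footer</think>Sure! {"total_amount": 119}';

console.log(tryExtractJson(fenced)); // { total_amount: 119 } via the code-block branch
console.log(tryExtractJson(noisy));  // { total_amount: 119 } via the bare-object fallback
```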
@@ -348,111 +366,92 @@ function extractJsonFromResponse(response: string): Record<string, unknown> | nu
 }
 
-/**
- * Parse JSON response into IInvoice
- */
-function parseJsonToInvoice(response: string): IInvoice | null {
-  const parsed = extractJsonFromResponse(response);
-  if (!parsed) return null;
-
-  return {
-    invoice_number: extractInvoiceNumber(String(parsed.invoice_number || '')),
-    invoice_date: extractDate(String(parsed.invoice_date || '')),
-    vendor_name: String(parsed.vendor_name || '').replace(/\*\*/g, '').replace(/`/g, '').trim(),
-    currency: extractCurrency(String(parsed.currency || '')),
-    net_amount: parseAmount(parsed.net_amount as string | number),
-    vat_amount: parseAmount(parsed.vat_amount as string | number),
-    total_amount: parseAmount(parsed.total_amount as string | number),
-  };
-}
-
 /**
- * Extract invoice from markdown using GPT-OSS 20B (streaming)
+ * Extract invoice from markdown using smartagent DualAgentOrchestrator
+ * Validates JSON and retries if invalid
  */
 async function extractInvoiceFromMarkdown(markdown: string, queryId: string): Promise<IInvoice | null> {
   const startTime = Date.now();
-  const fullPrompt = JSON_EXTRACTION_PROMPT + markdown;
+  const maxRetries = 2;
 
-  // Log exact prompt
-  console.log(`\n  [${queryId}] ===== PROMPT =====`);
-  console.log(fullPrompt);
-  console.log(`  [${queryId}] ===== END PROMPT (${fullPrompt.length} chars) =====\n`);
+  console.log(`  [${queryId}] Invoice: ${markdown.length} chars`);
 
-  const response = await fetch(`${OLLAMA_URL}/api/chat`, {
-    method: 'POST',
-    headers: { 'Content-Type': 'application/json' },
-    body: JSON.stringify({
-      model: EXTRACTION_MODEL,
-      messages: [
-        { role: 'user', content: 'Hi there, how are you?' },
-        { role: 'assistant', content: 'Good, how can I help you today?' },
-        { role: 'user', content: fullPrompt },
-      ],
-      stream: true,
-    }),
-    signal: AbortSignal.timeout(600000), // 10 minute timeout for large documents
-  });
+  // Build the extraction task with document context
+  const taskPrompt = `Extract the invoice data from this document and output ONLY the JSON:
 
-  if (!response.ok) {
-    const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
-    console.log(`  [${queryId}] ERROR: ${response.status} (${elapsed}s)`);
-    throw new Error(`Ollama API error: ${response.status}`);
-  }
+${markdown}
 
-  // Stream the response
-  let content = '';
-  let thinkingContent = '';
-  let thinkingStarted = false;
-  let outputStarted = false;
-  const reader = response.body!.getReader();
-  const decoder = new TextDecoder();
+${JSON_EXTRACTION_PROMPT}`;
 
   try {
-    while (true) {
-      const { done, value } = await reader.read();
-      if (done) break;
+    let result = await orchestrator.run(taskPrompt);
+    let elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
+    console.log(`  [${queryId}] Status: ${result.status}, Iterations: ${result.iterations} (${elapsed}s)`);
 
-      const chunk = decoder.decode(value, { stream: true });
+    // Try to parse JSON from result
+    let jsonData: Record<string, unknown> | null = null;
+    let responseText = result.result || '';
 
-      // Each line is a JSON object
-      for (const line of chunk.split('\n').filter(l => l.trim())) {
-        try {
-          const json = JSON.parse(line);
-
-          // Stream thinking tokens
-          const thinking = json.message?.thinking || '';
-          if (thinking) {
-            if (!thinkingStarted) {
-              process.stdout.write(`  [${queryId}] THINKING: `);
-              thinkingStarted = true;
-            }
-            process.stdout.write(thinking);
-            thinkingContent += thinking;
-          }
-
-          // Stream content tokens
-          const token = json.message?.content || '';
-          if (token) {
-            if (!outputStarted) {
-              if (thinkingStarted) process.stdout.write('\n');
-              process.stdout.write(`  [${queryId}] OUTPUT: `);
-              outputStarted = true;
-            }
-            process.stdout.write(token);
-            content += token;
-          }
-        } catch {
-          // Ignore parse errors for partial chunks
-        }
-      }
-    }
-  } finally {
-    if (thinkingStarted || outputStarted) process.stdout.write('\n');
-  }
+    if (result.success && responseText) {
+      jsonData = tryExtractJson(responseText);
+    }
 
-  const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
-  console.log(`  [${queryId}] Done: ${thinkingContent.length} thinking chars, ${content.length} output chars (${elapsed}s)`);
+    // Fallback: try parsing from history
+    if (!jsonData && result.history?.length > 0) {
+      const lastMessage = result.history[result.history.length - 1];
+      if (lastMessage?.content) {
+        responseText = lastMessage.content;
+        jsonData = tryExtractJson(responseText);
+      }
+    }
 
-  return parseJsonToInvoice(content);
+    // If JSON is invalid, retry with correction request
+    let retries = 0;
+    while (!jsonData && retries < maxRetries) {
+      retries++;
+      console.log(`  [${queryId}] Invalid JSON, requesting correction (retry ${retries}/${maxRetries})...`);
+
+      result = await orchestrator.continueTask(
+        `Your response was not valid JSON. Please output ONLY the JSON object with no markdown, no explanation, no thinking tags. Just the raw JSON starting with { and ending with }. Format:
+{"invoice_number":"X","invoice_date":"YYYY-MM-DD","vendor_name":"X","currency":"EUR","net_amount":0,"vat_amount":0,"total_amount":0}`
+      );
+
+      elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
+      console.log(`  [${queryId}] Retry ${retries}: ${result.status} (${elapsed}s)`);
+
+      responseText = result.result || '';
+      if (responseText) {
+        jsonData = tryExtractJson(responseText);
+      }
+
+      if (!jsonData && result.history?.length > 0) {
+        const lastMessage = result.history[result.history.length - 1];
+        if (lastMessage?.content) {
+          responseText = lastMessage.content;
+          jsonData = tryExtractJson(responseText);
+        }
+      }
+    }
+
+    if (!jsonData) {
+      console.log(`  [${queryId}] Failed to get valid JSON after ${retries} retries`);
+      return null;
+    }
+
+    console.log(`  [${queryId}] Valid JSON extracted`);
+    return {
+      invoice_number: extractInvoiceNumber(String(jsonData.invoice_number || '')),
+      invoice_date: extractDate(String(jsonData.invoice_date || '')),
+      vendor_name: String(jsonData.vendor_name || '').replace(/\*\*/g, '').replace(/`/g, '').trim(),
+      currency: extractCurrency(String(jsonData.currency || '')),
+      net_amount: parseAmount(jsonData.net_amount as string | number),
+      vat_amount: parseAmount(jsonData.vat_amount as string | number),
+      total_amount: parseAmount(jsonData.total_amount as string | number),
+    };
+  } catch (error) {
+    const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
+    console.log(`  [${queryId}] ERROR: ${error} (${elapsed}s)`);
+    throw error;
+  }
 }
 
 /**
@@ -561,23 +560,45 @@ function findTestCases(): ITestCase[] {
 const testCases = findTestCases();
 console.log(`\nFound ${testCases.length} invoice test cases\n`);
 
-// Ensure temp directory exists
-if (!fs.existsSync(TEMP_MD_DIR)) {
-  fs.mkdirSync(TEMP_MD_DIR, { recursive: true });
+// Ensure cache directory exists
+if (!fs.existsSync(MD_CACHE_DIR)) {
+  fs.mkdirSync(MD_CACHE_DIR, { recursive: true });
 }
 
 // -------- STAGE 1: OCR with Nanonets --------
 
-tap.test('Stage 1: Setup Nanonets', async () => {
+tap.test('Stage 1: Convert invoices to markdown (with caching)', async () => {
   console.log('\n========== STAGE 1: Nanonets OCR ==========\n');
-  const ok = await ensureNanonetsOcr();
-  expect(ok).toBeTrue();
-});
-
-tap.test('Stage 1: Convert all invoices to markdown', async () => {
-  console.log('\n  Converting all invoice PDFs to markdown with Nanonets-OCR-s...\n');
+  // Check which invoices need OCR conversion
+  const needsConversion: ITestCase[] = [];
+  let cachedCount = 0;
+
+  for (const tc of testCases) {
+    const mdPath = path.join(MD_CACHE_DIR, `${tc.name}.md`);
+    if (fs.existsSync(mdPath)) {
+      cachedCount++;
+      tc.markdownPath = mdPath;
+      console.log(`  [CACHED] ${tc.name} - using cached markdown`);
+    } else {
+      needsConversion.push(tc);
+    }
+  }
+
+  console.log(`\n  Summary: ${cachedCount} cached, ${needsConversion.length} need conversion\n`);
+
+  if (needsConversion.length === 0) {
+    console.log('  All invoices already cached, skipping Nanonets OCR\n');
+    return;
+  }
+
+  // Start Nanonets only if there are files to convert
+  console.log('  Starting Nanonets for OCR conversion...\n');
+  const ok = await ensureNanonetsOcr();
+  expect(ok).toBeTrue();
 
+  // Convert only the invoices that need conversion
+  for (const tc of needsConversion) {
     console.log(`\n  === ${tc.name} ===`);
 
     const images = convertPdfToImages(tc.pdfPath);
@@ -585,13 +606,13 @@ tap.test('Stage 1: Convert all invoices to markdown', async () => {
 
     const markdown = await convertDocumentToMarkdown(images, tc.name);
 
-    const mdPath = path.join(TEMP_MD_DIR, `${tc.name}.md`);
+    const mdPath = path.join(MD_CACHE_DIR, `${tc.name}.md`);
     fs.writeFileSync(mdPath, markdown);
     tc.markdownPath = mdPath;
    console.log(`  Saved: ${mdPath}`);
   }
 
-  console.log('\n  Stage 1 complete: All invoices converted to markdown\n');
+  console.log(`\n  Stage 1 complete: ${needsConversion.length} invoices converted to markdown\n`);
 });
 
 tap.test('Stage 1: Stop Nanonets', async () => {
@@ -610,6 +631,42 @@ tap.test('Stage 2: Setup Ollama + GPT-OSS 20B', async () => {
 
   const extractionOk = await ensureExtractionModel();
   expect(extractionOk).toBeTrue();
+
+  // Initialize SmartAi and DualAgentOrchestrator
+  console.log('  [SmartAgent] Starting SmartAi...');
+  await smartAi.start();
+
+  console.log('  [SmartAgent] Creating DualAgentOrchestrator...');
+  orchestrator = new DualAgentOrchestrator({
+    smartAiInstance: smartAi,
+    defaultProvider: 'ollama',
+    guardianPolicyPrompt: `
+JSON EXTRACTION POLICY:
+- APPROVE all JSON extraction tasks
+- This is a read-only operation - no file system or network access needed
+- The task is to extract structured data from document text
+`,
+    driverSystemMessage: `You are a precise JSON extraction assistant. Your only job is to extract invoice data from documents.
+
+CRITICAL RULES:
+1. Output ONLY valid JSON - no markdown, no explanations, no thinking
+2. Use the exact format requested
+3. If you cannot find a value, use empty string "" or 0 for numbers
+
+When done, wrap your JSON in <task_complete></task_complete> tags.`,
+    maxIterations: 3,
+    // Enable streaming for real-time progress visibility
+    onToken: (token, source) => {
+      if (source === 'driver') {
+        process.stdout.write(token);
+      }
+    },
+  });
+
+  // No tools needed for JSON extraction
+  console.log('  [SmartAgent] Starting orchestrator...');
+  await orchestrator.start();
+  console.log('  [SmartAgent] Ready for extraction');
 });
 
 let passedCount = 0;
@@ -624,7 +681,7 @@ for (const tc of testCases) {
 
     const startTime = Date.now();
 
-    const mdPath = path.join(TEMP_MD_DIR, `${tc.name}.md`);
+    const mdPath = path.join(MD_CACHE_DIR, `${tc.name}.md`);
     if (!fs.existsSync(mdPath)) {
       throw new Error(`Markdown not found: ${mdPath}. Run Stage 1 first.`);
     }
@@ -654,6 +711,14 @@ for (const tc of testCases) {
 }
 
 tap.test('Summary', async () => {
+  // Cleanup orchestrator and SmartAi
+  if (orchestrator) {
+    console.log('\n  [SmartAgent] Stopping orchestrator...');
+    await orchestrator.stop();
+  }
+  console.log('  [SmartAgent] Stopping SmartAi...');
+  await smartAi.stop();
+
   const totalInvoices = testCases.length;
   const accuracy = totalInvoices > 0 ? (passedCount / totalInvoices) * 100 : 0;
   const totalTimeMs = processingTimes.reduce((a, b) => a + b, 0);
@@ -663,7 +728,7 @@ tap.test('Summary', async () => {
   console.log(`  Invoice Summary (Nanonets + GPT-OSS 20B)`);
   console.log(`========================================`);
   console.log(`  Stage 1: Nanonets-OCR-s (doc -> md)`);
-  console.log(`  Stage 2: GPT-OSS 20B (md -> JSON)`);
+  console.log(`  Stage 2: GPT-OSS 20B + SmartAgent (md -> JSON)`);
   console.log(`  Passed: ${passedCount}/${totalInvoices}`);
   console.log(`  Failed: ${failedCount}/${totalInvoices}`);
   console.log(`  Accuracy: ${accuracy.toFixed(1)}%`);
@@ -671,14 +736,7 @@ tap.test('Summary', async () => {
   console.log(`  Total time: ${(totalTimeMs / 1000).toFixed(1)}s`);
   console.log(`  Avg per inv: ${avgTimeSec.toFixed(1)}s`);
   console.log(`========================================\n`);
 
-  // Cleanup temp files
-  try {
-    fs.rmSync(TEMP_MD_DIR, { recursive: true, force: true });
-    console.log(`  Cleaned up temp directory: ${TEMP_MD_DIR}\n`);
-  } catch {
-    // Ignore
-  }
+  console.log(`  Cache location: ${MD_CACHE_DIR}\n`);
 });
 
 export default tap.start();