fix(tests): improve Qwen3-VL invoice extraction test by switching to non-stream API, adding model availability/pull checks, simplifying response parsing, and tightening model options
changelog.md (+10 lines)
@@ -1,5 +1,15 @@
# Changelog
## 2026-01-18 - 1.10.1 - fix(tests)
improve Qwen3-VL invoice extraction test by switching to non-stream API, adding model availability/pull checks, simplifying response parsing, and tightening model options

- Replaced streaming reader logic with direct JSON parsing of the /api/chat response
- Added ensureQwen3Vl() to check for and pull the Qwen3-VL:8b model from Ollama
- Switched to ensureMiniCpm() to verify the Ollama service is running before model checks
- Used a /no_think prompt for direct JSON output and set temperature to 0.0 and num_predict to 512
- Removed the retry loop and streaming parsing; improved error messages to include the response body
- Updated logging and test setup messages for clarity
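The non-stream request shape and response parsing described in the bullets above can be sketched roughly as follows. This is an illustrative sketch, not the project's actual test code: the helper names `buildChatRequest`, `parseChatResponse`, and `needsPull` are invented for this example, while the `/api/chat` and `/api/tags` endpoints, the `stream` flag, and the `options.temperature`/`options.num_predict` fields follow the public Ollama HTTP API.

```typescript
// Request body for Ollama's /api/chat endpoint (non-stream variant).
interface ChatRequest {
  model: string;
  stream: boolean;
  messages: { role: string; content: string; images?: string[] }[];
  options: { temperature: number; num_predict: number };
}

// Hypothetical helper: builds the tightened non-stream request.
function buildChatRequest(imageBase64: string): ChatRequest {
  return {
    model: 'qwen3-vl:8b',
    stream: false, // one JSON body instead of NDJSON chunks
    messages: [
      {
        role: 'user',
        // /no_think asks the model to skip reasoning and emit JSON directly
        content: '/no_think Extract the invoice fields as JSON.',
        images: [imageBase64],
      },
    ],
    options: { temperature: 0.0, num_predict: 512 },
  };
}

// With stream: false, the response is a single JSON object whose
// message.content holds the model output, so no streaming reader is needed.
function parseChatResponse(body: string): string {
  const parsed = JSON.parse(body) as { message?: { content?: string } };
  if (!parsed.message?.content) {
    // Include the raw body in the error, as the improved messages do.
    throw new Error(`Unexpected /api/chat response: ${body}`);
  }
  return parsed.message.content;
}

// Shape of GET /api/tags, which lists locally installed models.
interface TagsResponse {
  models: { name: string }[];
}

// Hypothetical availability check behind an ensureQwen3Vl()-style helper:
// pull via POST /api/pull only when the model is absent from /api/tags.
function needsPull(tags: TagsResponse, model = 'qwen3-vl:8b'): boolean {
  return !tags.models.some((m) => m.name === model);
}
```

With `stream: false` the whole response can be read with a single `JSON.parse`, which is what lets the test drop the retry loop and chunk-by-chunk parsing.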

## 2026-01-18 - 1.10.0 - feat(vision)
add Qwen3-VL vision model support with Dockerfile and tests; improve invoice OCR conversion and prompts; simplify extraction flow by removing consensus voting