smartai/readme.hints.md

SmartAI Project Hints

Architecture (v1.0.0 - Vercel AI SDK rewrite)

The package is a provider registry built on the Vercel AI SDK (ai v6). The core export, getModel(), returns a LanguageModelV3 from @ai-sdk/provider; specialized capabilities live in subpath exports.

Core Entry (ts/)

  • getModel(options) → returns LanguageModelV3 for any supported provider
  • Providers: anthropic, openai, google, groq, mistral, xai, perplexity, ollama
  • Anthropic prompt caching via wrapLanguageModel middleware (enabled by default)
  • Custom Ollama provider implementing LanguageModelV3 directly (for think, num_ctx support)
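The registry shape behind getModel() can be sketched roughly as follows. This is a hedged sketch with stubbed factories and hypothetical names (getModelSketch, ModelStub); the real code wraps the @ai-sdk/* provider packages and applies the Anthropic caching middleware.

```typescript
// Minimal sketch of a getModel()-style provider registry. The stub
// factories stand in for the real @ai-sdk/* providers; all names here
// are illustrative, not the package's actual API.
interface ModelStub {
  provider: string;
  modelId: string;
}

type ModelFactory = (modelId: string) => ModelStub;

const registry: Record<string, ModelFactory> = {
  anthropic: (id) => ({ provider: 'anthropic', modelId: id }),
  openai: (id) => ({ provider: 'openai', modelId: id }),
  ollama: (id) => ({ provider: 'ollama', modelId: id }),
};

function getModelSketch(options: { provider: string; model: string }): ModelStub {
  const factory = registry[options.provider];
  if (!factory) {
    // Unknown providers fail fast rather than falling through
    throw new Error(`Unsupported provider: ${options.provider}`);
  }
  return factory(options.model);
}

console.log(getModelSketch({ provider: 'ollama', model: 'qwen3:8b' }));
```

The dispatch-table shape keeps adding a provider to a one-line change.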

Subpath Exports

  • @push.rocks/smartai/vision → analyzeImage() using generateText with image content
  • @push.rocks/smartai/audio → textToSpeech() using the OpenAI SDK directly
  • @push.rocks/smartai/image → generateImage(), editImage() using the OpenAI SDK directly
  • @push.rocks/smartai/document → analyzeDocuments() using SmartPdf + generateText
  • @push.rocks/smartai/research → research() using the @anthropic-ai/sdk web_search tool
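A hypothetical sketch of the provider-level prompt shape the vision export likely builds before generateText hands it to the model: one user message mixing a text part and a file part. Field names follow the LanguageModelV3 convention of mediaType rather than mimeType; the exact shape is an assumption, not copied from the package.

```typescript
// Hypothetical prompt shape for an analyzeImage()-style call: multi-part
// user content with a text part plus a binary file part. `mediaType`
// (not `mimeType`) is the LanguageModelV3-era field name.
const pngBytes = new Uint8Array([0x89, 0x50, 0x4e, 0x47]); // stand-in for real image data

const messages = [
  {
    role: 'user' as const,
    content: [
      { type: 'text' as const, text: 'Describe this image.' },
      { type: 'file' as const, data: pngBytes, mediaType: 'image/png' },
    ],
  },
];

console.log(messages[0].content.length); // two parts: text + file
```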

Dependencies

  • ai ^6.0.116 — Vercel AI SDK core
  • @ai-sdk/* — Provider packages (anthropic, openai, google, groq, mistral, xai, perplexity)
  • @ai-sdk/provider ^3.0.8 — LanguageModelV3 types
  • @anthropic-ai/sdk ^0.78.0 — Direct SDK for research (web search tool)
  • openai ^6.25.0 — Direct SDK for audio TTS and image generation/editing
  • @push.rocks/smartpdf ^4.1.3 — PDF to PNG conversion for document analysis

Build

  • pnpm build → tsbuild tsfolders --allowimplicitany
  • Compiles: ts/, ts_vision/, ts_audio/, ts_image/, ts_document/, ts_research/

Important Notes

  • LanguageModelV3 uses unified/raw in FinishReason (not type/rawType)
  • LanguageModelV3 system messages have content: string (not array)
  • LanguageModelV3 file parts use mediaType (not mimeType)
  • LanguageModelV3FunctionTool uses inputSchema (not parameters)
  • Ollama think param goes at request body top level, not inside options
  • Qwen models get default temperature 0.55 in the custom Ollama provider
  • qenv.getEnvVarOnDemand() returns a Promise — must be awaited in tests
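The Ollama notes above can be sketched as a request-body builder. This is a hypothetical helper (buildOllamaBody), not the package's API; it only illustrates that think sits at the top level of the /api/chat body while num_ctx goes under options, and that Qwen models get the 0.55 default temperature.

```typescript
// Hypothetical helper illustrating the Ollama notes: `think` belongs at
// the TOP LEVEL of the request body, `num_ctx` inside `options`, and
// Qwen models default to temperature 0.55.
function buildOllamaBody(
  model: string,
  messages: { role: string; content: string }[],
  opts: { think?: boolean; numCtx?: number; temperature?: number } = {},
) {
  const isQwen = model.toLowerCase().includes('qwen');
  const body: Record<string, unknown> = {
    model,
    messages,
    stream: false,
    options: {
      num_ctx: opts.numCtx ?? 4096,
      // explicit temperature wins; Qwen models otherwise get 0.55
      temperature: opts.temperature ?? (isQwen ? 0.55 : undefined),
    },
  };
  if (opts.think !== undefined) {
    body.think = opts.think; // top level, NOT inside options
  }
  return body;
}

const body = buildOllamaBody('qwen3:8b', [{ role: 'user', content: 'hi' }], { think: true });
console.log(body.think); // true
```

Putting think inside options is silently ignored by Ollama, which is why the placement matters.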

Testing

pnpm test                            # all tests
tstest test/test.smartai.ts --verbose # core tests
tstest test/test.ollama.ts --verbose  # ollama provider tests (mocked, no API needed)