LLM

is a multi-provider LLM client supporting Anthropic, OpenAI, Ollama, and Gemini.

Quick start

(let [config (LLM.ollama "http://localhost:11434")
      req (LLM.chat-request "llama3" [(Message.user "hello")] 256 0.7)]
  (match (LLM.chat &config &req)
    (Result.Success r) (println* (LLMResponse.content &r))
    (Result.Error e) (IO.errorln &e)))

anthropic

defn

(Fn [(Ref String a)] ProviderConfig)

(anthropic api-key)

creates a provider config for the Anthropic API.

chat

defn

(Fn [(Ref ProviderConfig a), (Ref LLMRequest b)] (Result LLMResponse LLMError))

(chat config req)

sends a chat request to the configured provider. Returns (Result LLMResponse LLMError). On HTTP errors (4xx/5xx), returns a structured LLMError.Api with status code, error type, and message parsed from the provider's error response. On transport failures, returns LLMError.Transport.
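A sketch of handling both error cases — not the library's confirmed API: the match arities below (Api carrying status code, error type, and message; Transport carrying a message) are inferred from the description above.

```carp
;; sketch: the LLMError.Api and LLMError.Transport arities are
;; assumptions inferred from the description above
(defn run-chat [config req]
  (match (LLM.chat config req)
    (Result.Success r) (println* (LLMResponse.content &r))
    (Result.Error e)
      (match e
        ;; structured API error: status code, error type, message (assumed fields)
        (LLMError.Api status kind msg)
          (println* "API error " status " (" &kind "): " &msg)
        ;; transport failure (assumed to carry a message)
        (LLMError.Transport msg)
          (println* "transport error: " &msg))))
```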

chat-request

defn

(Fn [(Ref String a), (Array Message), Int, Double] LLMRequest)

(chat-request model messages max-tokens temperature)

creates an LLM request without tools.

chat-request-json

defn

(Fn [(Ref String a), (Array Message), Int, Double] LLMRequest)

(chat-request-json model messages max-tokens temperature)

creates an LLM request that asks for a JSON response.

Note: Anthropic has no native JSON mode, so this falls back to a system prompt instruction (best-effort, not guaranteed). All other providers use their native JSON mode.
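For example, requesting JSON output from a local Ollama server (model name and prompt are illustrative):

```carp
(let [config (LLM.ollama "http://localhost:11434")
      req (LLM.chat-request-json "llama3"
                                 [(Message.user "List three colors as a JSON array")]
                                 256 0.0)]
  (match (LLM.chat &config &req)
    (Result.Success r) (println* (LLMResponse.content &r))
    (Result.Error e) (IO.errorln &e)))
```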

chat-request-with-schema

defn

(Fn [(Ref String a), (Array Message), Int, Double, JSON] LLMRequest)

(chat-request-with-schema model messages max-tokens temperature schema)

creates an LLM request constrained to a JSON schema. The schema is a JSON value (use the JSON constructors).
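Since the JSON constructors are not documented in this reference, this sketch takes the schema as a parameter and only shows the call shape; the model and prompt are illustrative.

```carp
;; `schema` is a JSON value built elsewhere with the JSON library's
;; constructors (not shown here)
(defn color-request [schema]
  (LLM.chat-request-with-schema "llama3"
                                [(Message.user "Name one color")]
                                128 0.0 schema))
```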

chat-request-with-tools

defn

(Fn [(Ref String a), (Array Message), Int, Double, (Array ToolDef)] LLMRequest)

(chat-request-with-tools model messages max-tokens temperature tools)

creates an LLM request with tool definitions.
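A sketch of the call shape; how ToolDef values are constructed depends on the ToolDef type's own API, which this reference does not show, and the model and prompt are illustrative.

```carp
;; `tools` is an (Array ToolDef) built with the ToolDef API (not shown)
(defn weather-request [tools]
  (LLM.chat-request-with-tools "llama3"
                               [(Message.user "What is the weather in Paris?")]
                               256 0.7 tools))
```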

chat-stream

defn

(Fn [(Ref ProviderConfig a), (Ref LLMRequest b)] (Result LlmStream LLMError))

(chat-stream config req)

sends a streaming chat request and returns an LlmStream. Poll the stream for tokens. Returns (Result LlmStream LLMError). Checks the HTTP status code before returning the stream.
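A sketch of a polling loop. `LlmStream.next-token` is a hypothetical name — this reference does not document the LlmStream API — shown here as returning a Maybe that is Nothing once the stream ends.

```carp
(let [config (LLM.ollama "http://localhost:11434")
      req (LLM.chat-request "llama3" [(Message.user "hello")] 256 0.7)]
  (match (LLM.chat-stream &config &req)
    (Result.Success stream)
      (let-do [done false]
        (while (not done)
          ;; LlmStream.next-token is hypothetical; substitute the
          ;; actual polling function exposed by LlmStream
          (match (LlmStream.next-token &stream)
            (Maybe.Just token) (print* &token)
            (Maybe.Nothing) (set! done true))))
    (Result.Error e) (IO.errorln &e)))
```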

gemini

defn

(Fn [(Ref String a)] ProviderConfig)

(gemini api-key)

creates a provider config for the Google Gemini API.

ollama

defn

(Fn [(Ref String a)] ProviderConfig)

(ollama base-url)

creates a provider config for Ollama. Takes the base URL (e.g. "http://localhost:11434").

openai

defn

(Fn [(Ref String a)] ProviderConfig)

(openai api-key)

creates a provider config for the OpenAI API.