Releases · JetBrains/koog
0.5.4
Published 3 Dec 2025
Improvements
- LLM clients: better error reporting (#1149). Potential breaking change: LLM clients now throw `LLMClientException` instead of `IllegalStateException` (KG-552); see the sketch after this list
- Add support for OpenAI GPT-5.1 and GPT-5 pro (#1121, #1113) and Anthropic Claude Opus 4.5 (#1199)
- Add Bedrock support in Ktor for configuring and initializing Bedrock LLM clients (#1141)
- Improve Bedrock moderation implementation (#1105)
- Add handler for `GooglePart.InlineData` in `GoogleLLMClient` (#1094, KG-487)
- Improvements in `ReadFileTool` (#1182, #1213)
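
For callers, the exception change means updating catch blocks. A minimal sketch, assuming an `LLMClient` named `client` plus an existing `prompt` and `model`; the `execute` call shape is illustrative and Koog imports are omitted:

```kotlin
val responses = try {
    client.execute(prompt, model)
} catch (e: LLMClientException) {
    // Before 0.5.4 this failure surfaced as IllegalStateException (KG-552);
    // old catch blocks should be updated accordingly.
    println("LLM request failed: ${e.message}")
    emptyList()
}
```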
Bug Fixes
- Fix and simplify `McpTool` to properly support the updated `Tool` serialization (#1128)
- Fix file tools to properly use the newer API to provide a textual tool result representation (#1201)
- Fix the empty-list condition check in `onMultipleToolResults` and `onMultipleAssistantMessages` (#1192)
- Fix reasoning message handling in strategy (#1166)
- Fix the timeout in `JvmShellCommandExecutor` (#1005)
0.5.3
Published 12 Nov 2025
Improvements
- Support subgraph execution events in the agent pipeline and features, including `OpenTelemetry` (#1052)
- Make `systemPrompt` and `temperature` optional and default the temperature to null in `AIAgent` factory functions (#1078); see the sketch after this list
- Improve compatibility with kotlinx-coroutines 1.8 at runtime by removing `featurePrepareDispatcher` from `AIAgentPipeline` (#1083)
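
A minimal sketch of the relaxed factory call; `simpleOpenAIExecutor` and the parameter names follow Koog's documented factory functions but are assumptions here, and Koog imports are omitted:

```kotlin
// Both systemPrompt and temperature can now be omitted; the temperature
// defaults to null, deferring to the provider's own default (#1078).
val agent = AIAgent(
    executor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY")),
    llmModel = OpenAIModels.Chat.GPT4o,
)
```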
0.5.2
Published 29 Oct 2025
New Features
- Add a `subtask` extension for non-graph agents, similar to `subgraphWithTask` (#982)
- Add a MistralAI LLM client (#622)
Improvements
- Replace string content and attachments list in messages with a unified content parts list to make the API more flexible and preserve text/attachment parts order (#1004)
- Add input and output attributes to the `NodeExecuteSpan` span in OpenTelemetry to improve observability (KG-501)
- Set the JVM target to 11 explicitly to support older JVM versions (#1015)
- Support multiple responses from the LLM in the `subgraphWithTask` API (KG-507)
0.5.1
Published 15 Oct 2025
Improvements
- Add error handling in `LocalFileMemoryProvider` (#905)
- Add GPT-5 Codex model support (#888)
- Add support for filters in `PersistenceProvider` (#936)
- Add DashScope (Qwen) LLM client support (#687)
- Exclude Ktor engine dependencies (KG-315)
- Support additional Bedrock auth options (#923)
- `requestLLMStreaming` now respects `AgentConfig.missingToolsConversionStrategy` (#944)
Bug Fixes
- Make `subgraphWithTask` work with models without `ToolChoice` support (KG-440)
- Fix KTOR-8881: Ktor/Koog configuration in `application.yaml` gives an error
- Fix the ordering issue for `Persistence` checkpoints (#964)
- Fix the tool name in the `@Tool` annotation being ignored; it is now taken into account (#930); see the sketch after this list
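
A minimal sketch of the now-honored custom tool name; the `@Tool`/`@LLMDescription` annotations follow Koog's tool-calling docs, and the `customName` parameter name is an assumption:

```kotlin
class WeatherTools : ToolSet {
    // Since #930, this custom name (not the function name "weather") is the
    // name under which the tool is exposed to the LLM.
    @Tool(customName = "get_weather")
    @LLMDescription("Returns the current weather for the given city.")
    fun weather(city: String): String = "Sunny in $city"
}
```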
Examples
- Support a multi-LLM prompt executor Spring bean by adding an `llmProvider` method to LLM clients (#842)
0.5.0
Published 2 Oct 2025
Major Features
- Full Agent-to-Agent (A2A) Protocol Support:
  - Multiplatform Kotlin A2A SDK: including a server and client with JSON-RPC HTTP support.
  - A2A Agent Feature: seamlessly integrate A2A into your Koog agents
- Non-Graph API for Strategies: Introduced a non-graph API for creating AI agent strategies as Kotlin extension functions, with most of Koog's features supported (#560)
- Agent Persistence and Checkpointing:
  - Roll Back Tool Side Effects: Add `RollbackToolRegistry` to the `Persistence` feature in order to roll back tool calls with side effects when checkpointing.
  - State-Machine Persistence / Message History Switch: Support switching between full state-machine persistence and message-history persistence (#856)
- Tool API Improvements:
  - `subgraphWithTask` Simplification: Get rid of the required `finishTool`, support tools as functions in `subgraphWithTask`, and deduce the final step automatically from the data class (#791)
  - `AIAgentService` Introduced: Make `AIAgent` explicitly state-manageable and single-run, and introduce `AIAgentService` to manage multiple uniform running agents.
- New components
Improvements
- Make Koog-based tools exportable via MCP server (KG-388)
- Add `additionalProperties` to LLM clients in order to support custom LLM configurations (#836)
- Allow adjusting context window sizes for Ollama dynamically (#883)
- Refactor the streaming API to support tool calls (#747)
- Provide the ability to collect a list of nodes and edges out of `AIAgentStrategy` and send it to the client when running an agent (KG-160)
- Add `excludedProperties` to the inline `createJsonStructure` too, update KDocs (#826)
- Refactor binary attachment handling and introduce a Base64 serializer (#838)
- In the `JsonStructuredData.defaultJson` instance, rename the class discriminator from `#type` to `kind` to align with common practice (#772, KG-384); see the sketch after this list
- Make the standard JSON generator the default when creating `JsonStructuredData` (it was the basic one before) (#772, KG-384)
- Add default audio configuration and modalities (#817)
- Add the `GptAudio` model to the OpenAI client (#818)
- Allow re-running finished agents that have the `Persistence` feature installed (#828, KG-193)
- Allow idiomatic node transformations with a `.transform { ... }` lambda function (#684)
- Add the ability to filter messages for every agent feature (KG-376)
- Add support for trace-level attributes in the Langfuse integration (#860, KG-427)
- Keep all system messages when compressing the agent's message history (#857)
- Add support for Anthropic's Sonnet 4.5 model in the Anthropic/Bedrock providers (#885)
- Refactor LLM client auto-configuration in the Spring Boot integration into modular provider-specific classes with improved validation and security (#886)
- Add LLM streaming agent events (KG-148)
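
The discriminator rename only affects how polymorphic structured output is tagged on the wire. A self-contained kotlinx.serialization sketch (plain kotlinx.serialization, not Koog API) of what the `kind` discriminator produces:

```kotlin
import kotlinx.serialization.SerialName
import kotlinx.serialization.Serializable
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

@Serializable
sealed interface Shape

@Serializable
@SerialName("circle")
data class Circle(val radius: Double) : Shape

// Mirrors the new 0.5.0 default; the old defaultJson used "#type" instead.
val json = Json { classDiscriminator = "kind" }

fun main() {
    // Prints: {"kind":"circle","radius":1.0}
    println(json.encodeToString<Shape>(Circle(1.0)))
}
```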
Bug Fixes
- Fix broken support for Anthropic models via Amazon Bedrock (#789)
- Make `AIAgentStorageKey` in agent storage actually unique by removing the `data` modifier (#825)
- Fix rerun for agents with `Persistence` (#828, KG-193)
- Update the mcp version to `0.7.2` with a fix for the Android target (#835)
- Do not include an empty system message in Anthropic requests (#887, KG-317)
- Use `maxTokens` from params in Google models (#734)
- Fix `finishReason` nullability (#771)
Deprecations
- Rename agent interceptors in `EventHandler` and related feature events (KG-376)
- Deprecate the concurrency-unsafe `AIAgent.asTool` in favor of `AIAgentService.createAgentTool` (#873)
- Rename `Persistency` to `Persistence` everywhere (#896)
- Add an `agentId` argument to all `Persistence` methods instead of the `persistencyId` class field (#904)
0.4.2
Published 15 Sep 2025
Bug Fixes
- Add missing tool calling support for Bedrock Nova models so agents can invoke functions when using Nova (KG-239).
- Add Android target support and migrate Android app to Kotlin Multiplatform to widen KMP coverage (KG-315, #728, #767).
- Add Spring Boot Java example to jump‑start integration (#739).
- Add Java Spring auto‑config fixes: correct property binding and make Koog starter work out of the box (#698).
- Fix split package issues in OpenAI LLM clients to avoid classpath/load errors (KG-305, #694).
- Ensure Anthropic tool schemas include the required "type" field in serialized request bodies to prevent validation errors during tool calling (#582).
- Fix AbstractOpenAILLMClient to correctly handle plain‑text responses in capabilities flow; add integration tests to prevent regressions (#564).
- Fix GraalVM native image build failure so projects can compile native binaries again (#774).
- Fix usages in OpenAI‑based data model to align with recent API changes (#688).
Improvements
- Make agents‑mcp support KMP targets to run across more platforms (#756).
- Add LLM client retry support to Spring Boot auto‑configuration to improve resilience on transient failures (#748).
- Add Claude Opus 4.1 model support to Anthropic client to unlock latest reasoning capabilities (#730).
- Add Gemini 2.5 Flash Lite model support to Google client to enable lower‑latency, cost‑efficient generations (#769).
- Add Java‑compatible non‑streaming Prompt Executor so Java apps can call Koog without coroutines (KG-312, #715).
- Support excluding properties in JSON Schema generation to fine‑tune structured outputs (#638).
- Update AWS SDK to latest compatible version for Bedrock integrations.
- Introduce Postgres persistence provider to store agent state and artifacts (#705).
- Update Kotlin to 2.2.10 in dependency configuration for improved performance and language features (#764).
- Refactor executeStreaming to remove suspend for simpler interop and better call sites (#720).
- Add Java‑compatible prompt executor (non‑streaming) wiring and polish across modules (KG-312, #715).
- Decouple FileSystemEntry from FileSystemProvider to simplify testing and enable alternative providers (#664).
0.4.1
Published 28 Aug 2025
Bug Fixes
- Fix iOS target publication
0.4.0
Published 27 Aug 2025
Major Features
- Integration with Observability Tools
- Ktor Integration: First-class Ktor support via the "Koog" Ktor plugin to register and run agents in Ktor applications (#422).
- iOS Target Support: Multiplatform expanded with native iOS targets, enabling agents to run on Apple platforms (#512).
- Upgraded Structured Output: Refactored structured output API to be more flexible and add built-in/native provider support for OpenAI and Google, reducing prompt boilerplate and improving validation (#443).
- GPT-5 and Custom LLM Parameters Support: GPT-5 is now available, together with custom additional LLM parameters for OpenAI-compatible clients (#631, #517)
- Resilience and Retries
Improvements
- OpenTelemetry and Observability:
  - Finish reason and unified attributes for inference/tool/message spans and events; extract event body fields to attributes for better querying (KG-218).
  - Mask sensitive data in events/attributes and introduce a “hidden-by-default” string type to keep secrets safe in logs (KG-259).
  - Include all messages in the inference span and add an index for ChoiceEvent to simplify analysis (KG-172).
  - Add tool arguments to `gen_ai.choice` and `gen_ai.assistant.message` events (#462).
  - Allow setting a custom OpenTelemetry SDK instance in Koog (KG-169).
- LLM and Providers:
  - Support Google’s “thinking” mode in the generation config to improve reasoning quality (#414).
  - Add responses API support for OpenAI (#645)
  - AWS Bedrock: support Inference Profiles for simpler, consistent configuration (#506) and accept `AWS_SESSION_TOKEN` (#456).
  - Add `maxTokens` as a prompt parameter for finer control over generation length (#579).
  - Add `contextLength` and `maxOutputTokens` to `LLModel` (#438, KG-134)
- Agent Engine
- File Tools and RAG:
  - Rework `FileSystemProvider` with API cleanups and better ergonomics; move blocking/suspendable operations to `Dispatchers.IO` for improved performance and responsiveness (#557).
  - Introduce `filterByRoot` helpers and allow custom path filters in `FilteredFileSystemProvider` for safer agent sandboxes (#494, #508).
  - Rename `PathFilter` to `TraversalFilter` and make its methods suspendable to support async checks.
  - Rename `fromAbsoluteString` to `fromAbsolutePathString` for clarity (#567).
  - Add `ReadFileTool` for reading local file contents where appropriate (#628).
- Update kotlin-mcp dependency to v0.6.0 (#523)
Bug Fixes
- Make the `parts` field nullable in Google responses to handle missing content from Gemini models (#652).
- Fix enum parsing in MCP when the type is not mentioned (#601, KG-49)
- Fix function calling for `gemini-2.5-flash` models to correctly route tool invocations (#586).
- Restore OpenAI `responseFormat` option support in requests (#643).
- Correct the `o4-mini` vs `gpt-4o-mini` model mix-up in configuration (#573).
- Ensure the event body for function calls is valid JSON for telemetry ingestion (KG-268).
- Fix duplicated tool-name resolution in `AIAgentSubgraphExt` to prevent conflicts (#493).
- Fix Azure OpenAI client settings to generate valid endpoint URLs (#478).
- Restore `llama3.2:latest` as the default for LLAMA_3_2 to match provider expectations (#522).
- Update missing `Document` capabilities for `LLModel` (#543)
- Fix an Anthropic JSON schema validation error (#457)
Removals / Breaking Changes
- Remove Google Gemini 1.5 Flash/Pro variants from the catalog (KG-216, #574).
- Drop `execute` extensions for `PromptExecutor` in favor of the unified API (#591).
- File system API cleanup: removed deprecated `FSProvider` interfaces and methods; `PathFilter` renamed to `TraversalFilter` with suspendable operations; `fromAbsoluteString` renamed to `fromAbsolutePathString`.
0.3.0
Published 15 Jul 2025
Major Features
- Agent Persistency and Checkpoints: Save and restore agent state to local disk or memory, or easily integrate with any cloud storage or database. Agents can now roll back to any prior state on demand or automatically restore from the latest checkpoint (#305)
- Vector Document Storage: Store embeddings and documents in persistent storage for retrieval-augmented generation (RAG), with in-memory and local-file implementations (#272)
- OpenTelemetry Support: Native integration with OpenTelemetry for unified tracing logs across AI agents (#369, #401, #423, #426)
- Content Moderation: Built-in support for moderation models, enabling AI agents to automatically review and filter outputs for safety and compliance (#395)
- Parallel Node Execution: Parallelize different branches of your agent graph with a MapReduce-style API to speed up agent execution or to choose the best of the parallel attempts (#220, #404)
- Spring Integration: Ready-to-use Spring Boot starter with auto-configured LLM clients and beans (#334)
- AWS Bedrock Support: Native support for the Amazon Bedrock provider covering several crucial models and services (#285, #419)
- WebAssembly Support: Full support for compiling AI agents to WebAssembly (Wasm) for browser deployment (#349)
Improvements
- Multimodal Data Support: Seamlessly integrate and reason over diverse data types such as text, images, and audio (#277)
- Arbitrary Input/Output Types: More flexibility over how agents receive data and produce responses (#326)
- Improved History Compression: Enhanced fact-retrieval history compression for better context management (#394, #261)
- ReAct Strategy: Built-in support for the ReAct (Reasoning and Acting) agent strategy, enabling step-by-step reasoning and dynamic action taking (#370)
- Retry Component: Robust retry mechanism to enhance agent resilience (#371)
- Multiple Choice LLM Requests: Generate or evaluate responses using structured multiple-choice formats (#260)
- Azure OpenAI Integration: Support for Azure OpenAI services (#352)
- Ollama Enhancements: Native image input support for agents running with Ollama-backed models (#250)
- Customizable LLM in fact search: Support providing a custom LLM for fact retrieval in the history (#289)
- Tool Execution Improvements: Better support for complex parameters in tool execution (#299, #310)
- Agent Pipeline enhancements: More handlers and context available in `AIAgentPipeline` (#263)
- Default support for a mixture of tools and messages: Simple single-run strategy variants for multiple messages and parallel tool calls (#344)
- ResponseMetaInfo Enhancement: Add an `additionalInfo` field to `ResponseMetaInfo` (#367)
- Subgraph Customization: Support custom `LLModel` and `LLMParams` in subgraphs, make `nodeUpdatePrompt` a pass-through node (#354)
- Attachments API simplification: Remove the additional `content` builder from `MessageContentBuilder`, introduce `TextContentBuilderBase` (#331)
- Nullable MCP parameters: Added support for nullable MCP tool parameters (#252)
- ToolSet API enhancement: Add the missing `tools(ToolSet)` convenience method to the `ToolRegistry` builder (#294); see the sketch after this list
- Thinking support in Ollama: Add the THINKING capability and its serialization for Ollama API 0.9 (#248)
- kotlinx.serialization version update: Update the kotlinx-serialization version to 1.8.1
- Host settings in FeatureMessageRemoteServer: Allow configuring a custom host in `FeatureMessageRemoteServer` (#256)
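
A minimal sketch of the `tools(ToolSet)` convenience method from #294; the annotation names follow Koog's tool docs, only the builder method is taken directly from the release note, and imports are omitted:

```kotlin
class CalculatorTools : ToolSet {
    @Tool
    @LLMDescription("Adds two integers.")
    fun add(a: Int, b: Int): Int = a + b
}

// Registers every @Tool-annotated function of the set in one call.
val registry = ToolRegistry {
    tools(CalculatorTools())
}
```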
Bug Fixes
- Make `CachedPromptExecutor` and `PromptCache` timestamp-insensitive to enable correct caching (#402)
- Fix `requestLLMWithoutTools` generating tool calls (#325)
- Fix Ollama function schema generation from `ToolDescriptor` (#313)
- Fix OpenAI and OpenRouter clients to produce a simple text user message when no attachments are present (#392)
- Fix input/output token counts for `OpenAILLMClient` (#370)
- Use the correct `Ollama` LLM provider for the Ollama llama4 model (#314)
- Fix an issue where structured data examples were prompted incorrectly (#325)
- Correct mistaken model IDs in `DEFAULT_ANTHROPIC_MODEL_VERSIONS_MAP` (#327)
- Remove the possibility of calling tools in a structured LLM request (#304)
- Fix the prompt update in `subgraphWithTask` (#304)
- Remove the suspend modifier from `LLMClient.executeStreaming` (#240)
- Fix `requestLLMWithoutTools` to work properly across all providers (#268)
Examples
- W&B Weave Tracing example
- Langfuse Tracing example
- Moderation example: Moderating iterative joke-generation conversation
- Parallel Nodes Execution example: Generating jokes using 3 different LLMs in parallel, and choosing the funniest one
- Snapshot and Persistency example: Taking agent snapshots and restoring agent state
0.2.1
Published 6 Jun 2025
Bug Fixes
- Support MCP enum arg types and object additionalParameters (#214)
- Allow appending handlers for the EventHandler feature (#234); see the sketch after this list
- Fix LLM clients after #195; make LLM request construction more explicit in LLM clients again (#229)
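
A minimal sketch of what appendable handlers allow, assuming the `EventHandler` feature's configuration DSL; the `onToolCall` handler name is illustrative:

```kotlin
install(EventHandler) {
    onToolCall { println("first handler: tool called") }
    // Since #234, registering a second handler appends it instead of
    // replacing the first one.
    onToolCall { println("second handler: also notified") }
}
```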
Improvements
- Migrate simple agents to the `AIAgent` constructor, deprecate `simpleSingleRunAgent` (#222)