
Calliope IDE v1.4.0: Bedrock Support and Smarter Agents


What’s New in v1.4.0

Calliope AI IDE v1.4.0 is our biggest agent reliability release yet. This update brings full AWS Bedrock support, intelligent context management that prevents agents from hitting token limits, and critical bug fixes that make long-running agents actually work. All macOS builds are signed and notarized by Apple.

Let’s break down what’s in this release.

AWS Bedrock: First-Class Support

Bedrock is how most enterprises run Claude, and until now, getting it working in Calliope required environment variable gymnastics. Not anymore.

v1.4.0 adds full Bedrock integration with a proper credential chain:

  1. Bedrock API Key (bearer token) — the simplest option, straight from the AWS console
  2. Explicit IAM keys — Access Key + Secret Key + optional Session Token
  3. AWS Profile — reads from ~/.aws/credentials
  4. Environment variables — AWS_REGION, AWS_ACCESS_KEY_ID, AWS_PROFILE, etc.
  5. Default credential chain — EC2 metadata, ECS task roles, IRSA
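The resolution order above can be sketched roughly as follows. This is an illustrative sketch, not Calliope's actual internals — the type and function names are made up for this example:

```typescript
// Illustrative settings shape; checked in priority order.
interface BedrockSettings {
  apiKey?: string;          // 1. Bedrock bearer token
  accessKeyId?: string;     // 2. Explicit IAM keys
  secretAccessKey?: string;
  sessionToken?: string;
  profile?: string;         // 3. Named AWS profile
}

type Credentials =
  | { kind: "bearer"; token: string }
  | { kind: "iam"; accessKeyId: string; secretAccessKey: string; sessionToken?: string }
  | { kind: "profile"; profile: string }
  | { kind: "default" };    // 5. Fall through to the SDK's default chain

function resolveCredentials(
  s: BedrockSettings,
  env: Record<string, string | undefined>,
): Credentials {
  if (s.apiKey) return { kind: "bearer", token: s.apiKey };
  if (s.accessKeyId && s.secretAccessKey) {
    return {
      kind: "iam",
      accessKeyId: s.accessKeyId,
      secretAccessKey: s.secretAccessKey,
      sessionToken: s.sessionToken,
    };
  }
  if (s.profile) return { kind: "profile", profile: s.profile };
  // 4. Environment variables
  if (env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY) {
    return {
      kind: "iam",
      accessKeyId: env.AWS_ACCESS_KEY_ID,
      secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
      sessionToken: env.AWS_SESSION_TOKEN,
    };
  }
  if (env.AWS_PROFILE) return { kind: "profile", profile: env.AWS_PROFILE };
  // EC2 metadata, ECS task roles, IRSA
  return { kind: "default" };
}
```

The point of an explicit chain like this is predictability: a bearer token pasted into settings always wins, and the SDK's ambient machinery is only consulted when nothing was configured explicitly.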

The new Bedrock API Key field in the settings panel means you can paste a bearer token and start running agents on Bedrock in seconds. No IAM configuration required.

We’ve also added proper model output limits for Bedrock-hosted Claude models — Haiku caps at 4,096 tokens, Sonnet at 8,192, and Opus at 16,384. These are silently enforced so your agents don’t hit API errors for exceeding provider limits.
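Conceptually, the enforcement is just a clamp on the requested output tokens. A minimal sketch, with a hypothetical lookup table keyed by model family (not Calliope's real code):

```typescript
// Hypothetical per-model output caps for Bedrock-hosted Claude models,
// matching the limits described above.
const BEDROCK_MAX_OUTPUT_TOKENS: Record<string, number> = {
  haiku: 4096,
  sonnet: 8192,
  opus: 16384,
};

// Clamp a requested max_tokens to the provider limit instead of
// letting the API reject the call with an error.
function clampMaxTokens(model: string, requested: number): number {
  const cap = BEDROCK_MAX_OUTPUT_TOKENS[model] ?? requested;
  return Math.min(requested, cap);
}
```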

Context Window Management

This was the most requested fix. When agents run long tasks — exploring a codebase, reading dozens of files, iterating on implementations — the conversation history grows. Eventually it exceeds the model’s context window, and the agent crashes with a “prompt is too long” error.

v1.4.0 adds automatic context compaction to the TurboLight backend:

  • Per-result size caps: Individual tool results (file contents, command output) are capped at 50K characters. A 200-line Python file gets through fine. A 10,000-line generated bundle gets truncated with a clear notice.
  • Progressive compaction: Before each LLM call, the backend estimates total token usage. When approaching the limit, it first truncates old tool results, then drops middle conversation history if needed — always preserving the system prompt, initial task, and recent context.
  • Model-aware limits: The system knows Claude's context window is 200K tokens, GPT-4o's is 128K, and Gemini's is 1M, and it manages headroom accordingly.
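The two mechanisms above can be sketched in a few lines. This is a simplified illustration of the idea, not the TurboLight backend itself — the names, the 4-chars-per-token estimate, and the keep-recent count are all assumptions:

```typescript
// Rough token estimate: ~4 characters per token.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

interface Message {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
}

const TOOL_RESULT_CAP = 50_000; // characters, the per-result size cap

// Cap an individual tool result, appending a clear notice when truncated.
function capToolResult(content: string): string {
  if (content.length <= TOOL_RESULT_CAP) return content;
  return (
    content.slice(0, TOOL_RESULT_CAP) +
    "\n[... truncated: result exceeded 50K characters]"
  );
}

// Before each LLM call: if the estimated total exceeds the budget,
// drop messages from the middle of the history, always preserving the
// system prompt, the initial task, and the most recent messages.
function compact(
  messages: Message[],
  budgetTokens: number,
  keepRecent = 4,
): Message[] {
  const kept = [...messages];
  const total = () => kept.reduce((n, m) => n + estimateTokens(m.content), 0);
  while (total() > budgetTokens && kept.length > 2 + keepRecent) {
    kept.splice(2, 1); // drop the oldest message after system prompt + task
  }
  return kept;
}
```

Dropping from the middle rather than the front is the key design choice: the system prompt and original task define what the agent is doing, and the most recent messages define where it is, so the oldest exploration in between is the safest thing to sacrifice.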

The result: agents can now run 50+ iterations on Haiku without crashing. Tasks that previously failed at iteration 8 now complete successfully.

File Path Resolution

A subtle but critical bug: when agents run in certain environments, they generate absolute paths like /src/backend/app.py. These paths are valid inside the agent’s working directory but not on the host. When the staging system tried to apply changes, it attempted to write to the literal path /src — hitting a read-only filesystem error.

v1.4.0 resolves this at two levels:

  • At the source: All file tools (read_file, write_file, list_directory, search_files) now resolve paths against the working directory. /src/app.py with cwd=/workspace becomes /workspace/src/app.py.
  • At the destination: The staging manager resolves non-workspace paths against the workspace root when applying changes. This handles any paths already persisted from before the fix.
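The source-level resolution amounts to treating an agent-generated absolute path as workspace-relative. A minimal sketch, assuming POSIX paths (function name is illustrative):

```typescript
import { posix as path } from "path";

// Resolve an agent-generated path against the working directory.
// An absolute path like /src/app.py is only valid inside the agent's
// own view of the filesystem, so it is re-rooted under the workspace.
function resolveAgentPath(p: string, workingDir: string): string {
  if (p.startsWith(workingDir + path.sep)) return p; // already workspace-rooted
  const relative = path.isAbsolute(p) ? p.slice(1) : p; // strip leading "/"
  return path.resolve(workingDir, relative);
}
```

For example, /src/app.py with cwd=/workspace resolves to /workspace/src/app.py instead of being written to the literal (and typically read-only) /src.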

Unlimited Iterations Fix

Setting iterations to “unlimited” in the agent menu was silently capping at 50. The root cause: JavaScript’s truthiness rules. The value 0 (meaning unlimited) is falsy, so maxIterations || 50 evaluated to 50, and maxIterations && maxIterations > 0 evaluated to false.

Fixed with nullish coalescing (??) and explicit !== undefined checks. When you set unlimited iterations, you now actually get unlimited iterations.
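The difference is easy to see in isolation (the variable name mirrors the one in the post; the functions here are just for illustration):

```typescript
// Buggy: 0 (meaning unlimited) is falsy, so "unlimited" silently became 50.
const buggyLimit = (maxIterations?: number) => maxIterations || 50;

// Fixed: ?? only falls back when the value is null or undefined,
// so an explicit 0 survives and keeps meaning "unlimited".
const fixedLimit = (maxIterations?: number) => maxIterations ?? 50;
```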

Signed and Notarized Builds

All macOS builds are signed and notarized by Apple. No more Gatekeeper warnings, no more right-click-to-open workarounds. Download, open, and start working.

Download

Calliope AI IDE v1.4.0 is available now for macOS (Apple Silicon), Windows (x64), and Linux (x64, ARM64):

Download on GitHub →

Free. No account required. Bring your own API keys.
