One API key. Every major model.
cline.bot/blog/one-ap...
Cline's ComfyUI MCP lets you build generative AI workflows from your code editor using natural language. Tonight we're cohosting a hands-on crash course in SF covering the fundamentals. Come through.
luma.com/comfyui-cra...
Hosting a hackathon in SF today. Enterprise AI agents on Azure infrastructure. Unlimited cloud resources, build something real in one afternoon.
luma.com/musa-labs-h...
We wrote up the exact process: how we set up the eval pipeline, the failure patterns we found, and the fixes that moved the needle. The method (hill climbing) works with any agent, not just Cline.
Full guide:
cline.gg/hill-climbing
A potential partner asked for our benchmark numbers. At the time, benchmarks had us behind other agents. We spent a weekend fixing that: ran Cline against Terminal Bench's 89 real-world tasks, diagnosed every failure, and shipped fixes. 47% → 57%.
GPT 5.3 Codex just landed on Cline (v3.67.1). What's new:
> 25% faster than 5.2 Codex
> #1 on SWE-Bench Pro (4 different languages)
> Nearly 2x on OSWorld (38% → 65%)
> Fewer tokens per task than any prior OpenAI model
Select the model and try it on your repo.
New in Cline 3.64.0: Claude Sonnet 4.6
@AnthropicAI's latest iteration of Sonnet just dropped, and it's free to use with the Cline provider until Feb 18 at noon PST. Update Cline wherever you code and try it out.
github.com/cline/cline...
`npm install -g cline` and start building.
Now available on Windows, Mac, and Linux.
Read more: cline.gg/cli
We built CLI 2.0 by listening.
The community told us the terminal needed to be a first-class surface for AI coding, not just an afterthought.
We studied what other tools were doing, looked at how developers actually work in their terminals, and redesigned the entire experience around that.
What's new in Cline CLI 2.0:
+ Completely redesigned terminal UI with interactive mode
+ Parallel agents with isolated state per instance, no manual instance creation
+ Improved headless mode for CI/CD pipelines
+ Added ACP support for Zed, Neovim, and Emacs
Introducing Cline CLI 2.0: An open-source AI coding agent that runs entirely in your terminal.
Parallel agents, headless CI/CD pipelines, ACP support for any editor, and a completely redesigned developer experience. MiniMax M2.5 and Kimi K2.5 are free to use for a limited time.
Read how to make the most out of MiniMax M2.5 with Cline in our blog:
cline.bot/blog/minima...
M2.5 runs at 100 tokens per second. That's 3x faster than Opus. At $0.06/M blended with caching, you can run subagents in the CLI and just leave them going.
Fast models exist. Cheap models exist. Both at SOTA performance is new.
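The "leave them going" claim checks out with quick arithmetic, assuming the quoted 100 tok/s throughput and $0.06 per million tokens blended rate (the helper function below is illustrative, not part of any Cline API):

```python
# Rough cost of one subagent generating continuously, using the figures
# quoted above: 100 tokens/sec, $0.06 per million tokens (blended).
TOKENS_PER_SEC = 100
PRICE_PER_M_TOKENS = 0.06

def subagent_cost(hours: float) -> float:
    """Estimated spend in dollars for a subagent running for `hours`."""
    tokens = TOKENS_PER_SEC * 3600 * hours
    return tokens / 1_000_000 * PRICE_PER_M_TOKENS

print(f"1 hour:  ${subagent_cost(1):.4f}")   # ~$0.02 per agent-hour
print(f"8 hours: ${subagent_cost(8):.4f}")
```

At roughly two cents per agent-hour, a handful of parallel subagents running all afternoon still costs well under a dollar.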
M2.5 benchmarks worth paying attention to:
> SWE-Bench Pro: 55.4 (Opus 4.6: 53.4)
> Multi-SWE-Bench: 51.3 (Opus 4.6: 50.3)
> SWE-Bench Verified: 80.2 (Opus 4.6: 80.8)
Beats Opus on the harder engineering tasks, at a fraction of the cost. Great performance for multi-agent work.
@MiniMax_AI M2.5 is now in Cline.
+ 80.2% SWE-Bench Verified.
+ 100 tps. $0.06/M blended cost.
+ 10B activated parameters.
And it's free in Cline for a limited time!
Read all the changes in our changelog: github.com/cline/cline...
Update Cline in VSCode, JetBrains or wherever you code and start using it today.
GLM-5 is ZAI's new flagship. 744B params (40B active), trained on 28.5T tokens, and built for complex systems engineering and long-horizon agentic tasks.
Right now your agent works through tasks one at a time. Subagents change that: your agent can now spin up parallel sub-tasks that each run independently with their own context.
Cline v3.58.0 is out and now has native subagents.
Your AI coding agent can spin up sub-tasks that run in parallel, each with their own context. Pair it with auto-approval and you've got fully autonomous multi-threaded workflows.
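Conceptually, the parallel-subagent pattern looks like this. A generic Python sketch, not Cline's internal implementation; the `run_subagent` function and task list are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> dict:
    # Each subagent gets its own isolated context: no shared state
    # with sibling subagents, so they can run concurrently.
    context = {"task": task, "history": []}
    context["history"].append(f"working on: {task}")
    return {"task": task, "result": f"done: {task}"}

# Hypothetical sub-tasks the orchestrating agent might fan out.
tasks = ["refactor utils", "write tests", "update docs"]

with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_subagent, tasks))

for r in results:
    print(r["result"])
```

The key property is the isolated per-instance context: each sub-task reasons over only its own state, which is what makes auto-approved, multi-threaded workflows safe to leave unattended.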
Plus GLM 5 support, Bedrock parallel tool calling, and a bunch more
Trust but verify.
cline.bot/blog/ai-slo...
Gemini 3 Flash Preview is now available in Cline.
Flash is the "frontier intelligence at speed" option with 1M context, 64K output, and native multimodal inputs (text, images, audio, video).
If you've got agentic workflows, everyday coding, or multimodal tasks, it's a great default to try.
Full breakdown on why Devstral 2 works so well with Cline:
cline.bot/blog/devstr...
One week of @MistralAI's Devstral 2 in Cline.
6.52% diff-edit failure rate.
On the diff-edit chart, that puts it ahead of GLM-4.6 (7.58%) and Kimi-K2 (9.29%) -- while being 123B params (5x smaller than DeepSeek V3.2).
Devstral is FREE during launch so try it out while the promotion lasts.
What would you build on top of the Cline CLI?
Join us and @vercel @togethercompute @coderabbitai @kestra_io @Oumi_PBC at the AI Agents Assemble virtual hackathon.
Think review bots, GitHub Actions, mobile apps -- surprise us!
Registrations are open now: www.wemakedevs.org/hackathons/...
Live in Cline, gpt-5.1-codex-max.
OpenAI's most advanced coding model to date, at $1.25/$10 per million tokens.
DeepSeek v3.2 on @basetenco is SOTA open-source performance at SOTA speed.
Available now, in Cline.
new stealth model: `microwave`
(access via cline provider)
> 256k context window
> built for agentic coding
> free during alpha
> from a lab you know & will be excited to hear from
we've been testing internally & have been impressed.
Cline + @jetbrains. Your favorite AI agent, native in your favorite IDE. Daniel (@NighttrekETH), Francis (@inferencetoken), and Rashad from the Cline team are at re:Invent all week. DM us to chat, and meet us on Dec 4th at the JetBrains booth for exclusive swag.
Cline 3.38.3 is live now!
New:
- Expanded Hooks functionality and UI
- Grok 4.1 & Grok Code added to XAI
- Native tool calling for Baseten & Kimi K2
- Thinking level for Gemini 3.0 Pro preview
Fixes for slash commands, Vertex, Windows terminal & thinking/reasoning across providers