Beyond the Triage Meeting: A Non-Engineer Shipping Code With Linear's Agent SDK and Claude Managed Agents

linear, agent-sdk, claude, claude-managed-agents, ai-automation, workflow-automation, product-development

A Linear board showing the full feedback-to-production loop, with agent sessions visible at every stage.

Linear's Agent SDK shipped a few weeks ago. Claude Managed Agents — Anthropic's hosted agent harness, currently in beta — shipped around the same time. We've wired the two together, and the result is something I didn't expect to see this year.

Our Solutions Lead Carolina, who hasn't written production code in her career, is now the person who ships client feedback into our codebase. Not "files a ticket so an engineer can ship it later." Ships it. Same week, sometimes same day.

The bottleneck on most product teams was never writing code. It was the chain of people between a client saying "this is broken" and a developer with enough context to fix it. Product manager translates the feedback. Engineer asks clarifying questions. Tech lead scopes it. Sprint planning argues about it. By the time someone writes the code, the original signal is three meetings deep.

The interesting thing about these two new pieces of infrastructure isn't that AI writes code. We've had that. The interesting thing is that the translation layer between non-technical and technical can now sit inside the workspace as a real app user, with a real cloud agent doing the actual work, and an audit trail end to end.

Screenshots and architecture below — grab a coffee. ☕️

Why the Translator Was Always the Bottleneck

For most of my career, the limiting reagent on any product team has been the same thing. Not engineering capacity. The gap between the person hearing the customer and the person who can actually do something about it.

In our case, that gap looks like Carolina. She runs solution delivery. She's on the call when a client says "the postcode field disappears on retry" or "we need to bulk-archive from the dashboard." She knows the business context. She knows the client's priorities. What she doesn't know is the codebase, the architecture, or which file the postcode bug is hiding in.

The traditional fix is to put another human between her and the code. A product manager or a tech lead who can read both languages. That works, badly. The translator becomes the bottleneck. Context gets compressed twice — client to PM, PM to engineer — and detail evaporates at every step.

Without an in-workspace translator, every piece of client feedback becomes a queue problem.

With one — which Linear's Agent SDK and Claude Managed Agents have only just made possible together — the translator is a workspace member, fluent both directions, and never the bottleneck.

Two New Layers, Not One: How the SDK and the Harness Fit Together

TODO IMAGE — /assets/images/beyond-triage-meeting/architecture-two-layers.png — Two-layer diagram. Top: Linear workspace with the agent as an app user. Bottom: Claude Managed Agents session running in a cloud container with bash, Git, MCP servers. An arrow showing events flowing between the two. Alt: "Front of house: the Linear app user. Back of house: the Claude Managed Agents session."

Two products, each new, doing different jobs. Easy to conflate. Worth keeping straight.

Linear's Agent SDK is the front of house. It turns an agent into a workspace member — app user installation, scoped OAuth, @-mention and assignment, agent sessions with lifecycle state (pending, working, completed, errored) visible to the whole team. Before the SDK, building this kind of integration meant duct-taping a chatbot to webhooks: issue created → bot scrapes the description → bot posts a comment → engineer ignores it because it's noise. The SDK turns the agent into a participant the rest of the team treats as real.
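
Here's roughly what the front of house looks like in code. A minimal sketch, assuming an Express server and a webhook payload shaped the way I read the agent docs; the field names (`AgentSessionEvent`, `agentSession`) are assumptions on my part, so verify them against the current docs before copying anything.

```ts
// Minimal front-of-house sketch: receive the delegation, hand off, ack fast.
// Payload field names are assumptions -- check against the current docs.
import express from "express";

const app = express();
app.use(express.json());

// Stub for the back-of-house handoff; fleshed out in the glue sketch below.
async function startTriage(args: {
  linearSessionId: string;
  issueId: string;
  description: string;
}): Promise<void> {}

app.post("/webhooks/linear", (req, res) => {
  const event = req.body;

  // React only when a new agent session opens against our app user.
  if (event.type === "AgentSessionEvent" && event.action === "created") {
    const session = event.agentSession;
    startTriage({
      linearSessionId: session.id,
      issueId: session.issue.id,
      description: session.issue.description,
    }).catch(console.error);
  }

  // Acknowledge immediately; the real work streams back asynchronously.
  res.sendStatus(200);
});

app.listen(3000);
```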

Claude Managed Agents is the back of house. Anthropic's hosted agent harness, in beta, runs the actual work in a managed cloud container. You define an Agent (model, system prompt, tools, MCP servers, skills), an Environment (the container, packages, network rules), then start a Session and stream Events back. Anthropic handles the loop, the sandbox, the prompt caching, the compaction, the file system. No agent runtime to maintain. No bespoke harness to keep alive at 2am.
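
And the back of house, as a shape sketch only. Claude Managed Agents is in beta, and I'm not going to pretend I've pinned its client API here: the `cma` handle, the method names, and the event types below are all placeholders for whatever the real SDK exposes. The structure (define an agent once, open a session per issue, stream events back) is the part that matters.

```ts
// Shape sketch only: every name here is a placeholder, not the beta API.
declare const cma: any; // stand-in for the Managed Agents client

type CMAEvent =
  | { type: "text"; text: string }
  | { type: "tool_use"; tool: string }
  | { type: "done"; output: string };

// Defined once, reused per session: model, prompt, tools, MCP servers.
const triageAgent = {
  model: "claude-sonnet-4-5", // placeholder model id
  systemPrompt: "Triage client feedback into act / clarify / close ...",
  mcpServers: ["linear", "notion"],
  environment: "triage-container", // the container definition, sketched later
};

async function runTriageSession(issueDescription: string): Promise<string> {
  const session = await cma.sessions.create({
    agent: triageAgent,
    input: issueDescription,
  });

  // Stream events as the work progresses; relay them to Linear from here.
  for await (const event of session.events() as AsyncIterable<CMAEvent>) {
    if (event.type === "done") return event.output;
  }
  throw new Error("session ended without a final output");
}
```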

Here's the mental model: Linear's SDK gives the agent a name badge. Claude Managed Agents gives it a desk, a computer, and a cloud account.

| Layer | What it does | What you get |
| --- | --- | --- |
| Linear Agent SDK | App user identity, agent sessions, lifecycle UI | Workspace presence and audit trail |
| Claude Managed Agents | Cloud container, tools, runtime, harness | Code reading, planning, building, reviewing |
| Both together | First-class teammate doing real work | A non-engineer driving the codebase |

In our system, the triage agent is one Linear app user backed by one Claude Managed Agents agent definition. When Linear delegates an issue, our backend opens a CMA session, streams events as the work progresses, and posts the agent's structured output back through the Linear app user. From Carolina's perspective, it's one teammate. Underneath, two pieces of brand-new infrastructure finally agreed to talk to each other.
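
The glue between the two is small. This fills in the `startTriage` stub from the first sketch; `postAgentResponse` is a hypothetical wrapper of ours around posting back through the app user, not a call from either SDK.

```ts
// Hypothetical glue: Linear delegation in, CMA session out, result posted
// back through the app user so the team sees a teammate, not a callback.
declare function postAgentResponse(sessionId: string, body: string): Promise<void>;
declare function runTriageSession(description: string): Promise<string>;

async function startTriage(args: {
  linearSessionId: string;
  issueId: string;
  description: string;
}): Promise<void> {
  const output = await runTriageSession(args.description);
  await postAgentResponse(args.linearSessionId, output);
}
```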

What happens if you only have the Linear SDK? You have a name badge with no one wearing it. The agent can be mentioned and assigned, but it has nowhere to actually run, no codebase to read, no session that holds state for hour-long work.

What happens if you only have Claude Managed Agents? You have a hardworking employee with no desk. The agent can run for hours, read code, write specs — but it has nowhere to publish that work where the team lives, no first-class identity in the issue tracker, no audit trail anyone but you can see.

What happens when you wire them together? The translator becomes a teammate, every single time.

How Carolina Actually Uses This

Three loops from the last fortnight, lightly anonymised. Same shape every time.

The bug from a client call

TODO IMAGE — /assets/images/beyond-triage-meeting/slack-to-linear-bug.png — Slack message with paraphrased feedback, screenshot of the auto-created Linear issue underneath. Alt: "Client feedback turned into a Linear issue without anyone opening Linear."

A client mentioned on a call that the export was silently dropping rows over a certain size. Carolina paraphrased it into Slack: "Big-account export is dropping rows. Acme hit it on the 50k row report yesterday." Eleven minutes later the triage agent had identified the relevant code path, found a related-but-different issue from a month ago, ruled out the duplicate, written a Solution Design Document scoped to the export pipeline, and parked it in Backlog with acceptance criteria.

What changed: Carolina never opened Linear. The bug was triaged, scoped, and queued before her next meeting started.

The half-feature

TODO IMAGE — /assets/images/beyond-triage-meeting/agent-clarifying-question.png — Linear issue thread showing the agent asking a clarifying question and Carolina answering inline. Alt: "The agent flagging an underspecified request rather than guessing."

A client wanted "a way to bulk-archive from the dashboard." The triage agent dug into the dashboard component, found two plausible interpretations of "bulk-archive," and dropped a comment asking Carolina which one the client meant. She answered in the thread. The agent re-ran the brief, confirmed the scope, and queued it.

What changed: the question that would have happened in sprint planning happened in the Linear thread, with the client context still hot.

The "we already do that"

TODO IMAGE — /assets/images/beyond-triage-meeting/dedup-auto-close.png — Linear issue auto-merged, with the triage agent's dedup explanation visible. Alt: "The agent closing a duplicate before it ever hit Backlog."

A client asked for a feature we shipped six weeks ago. The triage agent found the existing functionality in the codebase, pointed Carolina at the docs, and closed the issue with a note. Carolina forwarded the docs to the client. Total elapsed time: minutes.

What changed: "we already do that" used to be a thing an engineer noticed two days into a sprint. Now it's caught at intake.

What's the common thread? Every loop ends with the right outcome — built, clarified, or closed — without a human translation layer in between.

Let's Walk the Full Loop: From Slack Message to Production

The end-to-end path. Five stops. Every stop has a Linear avatar attached to it.

Stop 1 — Slack to Linear (The Capture)

TODO IMAGE — /assets/images/beyond-triage-meeting/slack-capture.png — Slack channel with a feedback message and the bot reaction creating the issue. Alt: "A message in our #client-feedback channel auto-promoted to a Linear issue."

A bot watches our client-feedback Slack channel. When Carolina drops a paraphrased piece of feedback in, the bot creates a Linear issue with the message as the description, links the Slack thread, and assigns the triage agent.

This is the only piece of plumbing that isn't built on the Agent SDK — a thin Slack listener using the Linear API directly. It could be replaced with Linear's own Slack integration, depending on how much control you want over intake.
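
For the curious, the listener is about this much code. A sketch assuming @slack/bolt and @linear/sdk; the channel ID is a placeholder, and I've left out the hand-off to the agent because the mechanism (plain assignment vs. the SDK's delegation flow) depends on how your app user is installed.

```ts
// Thin Slack listener: feedback message in, Linear issue out.
import { App } from "@slack/bolt";
import { LinearClient } from "@linear/sdk";

const slack = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});
const linear = new LinearClient({ apiKey: process.env.LINEAR_API_KEY });

const FEEDBACK_CHANNEL = "C0FEEDBACK"; // placeholder channel ID

slack.message(async ({ message, client }) => {
  // Only plain messages in the feedback channel become issues.
  if (message.channel !== FEEDBACK_CHANNEL || message.subtype) return;
  const text = (message as { text?: string }).text ?? "";

  await linear.createIssue({
    teamId: process.env.LINEAR_TEAM_ID!,
    title: text.slice(0, 80),
    description: `${text}\n\nSource: Slack thread ${message.ts}`,
    // Hand-off to the triage agent omitted: assignment vs. delegation
    // depends on your app user installation.
  });

  // React so the author knows the capture landed.
  await client.reactions.add({
    channel: message.channel,
    timestamp: message.ts,
    name: "inbox_tray",
  });
});

(async () => {
  await slack.start(3001);
})();
```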

Stop 2 — Triage Agent (The Translator)

TODO IMAGE — /assets/images/beyond-triage-meeting/triage-session.png — Linear issue with the triage agent's session active, status updates appearing in real time, the app user badge visible. Alt: "The Linear app user on top. The Claude Managed Agents session running underneath."

This is where both layers earn their keep at once.

The triage agent is registered with Linear as an app user via the Agent SDK. When Carolina's feedback hits the intake column, Linear delegates the issue and a Linear agent session opens — visible to anyone watching, with lifecycle state that updates as the work progresses.

Underneath, that delegation triggers a Claude Managed Agents session in our cloud environment. The CMA agent is defined once and reused: model, system prompt, tools, MCP servers, skills. The environment is a container with our code-reading toolchain pre-installed and network access scoped to Linear, GitHub, and Notion.

Tools the triage agent has on its desk (the environment that wires them in is sketched after the list):

  • Linear MCP → search issues, read related work, post comments, write structured fields
  • Git in the container → clone our repos, grep, read files, follow imports
  • Notion MCP → read our architecture pages and decision records
  • Bash, file ops, web fetch → from the CMA harness, no extra plumbing
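
The environment definition, again as a placeholder sketch rather than the beta's real config format. The point is the scoping: the toolchain baked into the container, and egress limited to exactly the three systems the agent needs.

```ts
// Placeholder environment config: field names are guesses at the beta's
// shape, but the scoping is the real policy we run.
const triageEnvironment = {
  name: "triage-container",
  packages: ["git", "ripgrep"], // the code-reading toolchain, pre-installed
  network: {
    allow: ["api.linear.app", "github.com", "api.notion.com"], // nothing else
  },
  mcpServers: {
    linear: { url: "https://mcp.linear.app/mcp" }, // placeholder endpoints
    notion: { url: "https://mcp.notion.com/mcp" },
  },
};
```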

When the session opens, the agent runs the dedup search, the codebase walk, and the architecture read in parallel. Then it makes one of three decisions: act, clarify, or close. The decision and the reasoning stream back as CMA events, get posted to the Linear thread by the app user, and the Linear agent session resolves to completed.
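
The decision itself is structured, not prose. The shape is ours (neither SDK mandates it), but pinning it as a type is what lets the Linear comment, the Backlog card, and the downstream agents all rely on the same fields:

```ts
// Our triage contract: one of three decisions, each self-explanatory
// when posted into the Linear thread.
type TriageDecision =
  | { kind: "act"; sdd: string; acceptanceCriteria: string[] }
  | { kind: "clarify"; question: string; interpretations: string[] }
  | { kind: "close"; reason: "duplicate" | "already-shipped"; evidence: string };
```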

Carolina sees one teammate doing one job. Two pieces of brand-new infrastructure are doing the lifting underneath.

Stop 3 — The SDD (The Brief)

TODO IMAGE — /assets/images/beyond-triage-meeting/sdd-in-issue.png — A generated SDD as it appears in the Linear issue body. Alt: "A Solution Design Document the agent wrote, with code paths and Notion pages cited inline."

If the call is act, the agent writes a Solution Design Document directly into the issue. The sections we standardise on (pinned as a type in the sketch after the list):

  • Context → the user story in the client's voice
  • Scope → the slice of architecture this touches, with links to Notion
  • Implementation notes → code paths the implementer should look at, with file references
  • Acceptance criteria → what done looks like for the user
  • Definition of done → what done looks like for engineering (tests, observability, docs)
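
As a type, so the builder agent downstream parses structure instead of prose. Again, our convention, not a standard:

```ts
// The SDD sections from the list above, pinned as a shape.
interface SolutionDesignDocument {
  context: string;              // the user story, in the client's voice
  scope: { summary: string; notionLinks: string[] };
  implementationNotes: { note: string; files: string[] }[];
  acceptanceCriteria: string[]; // done, from the user's side
  definitionOfDone: string[];   // done, from engineering's side
}
```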

The card moves to Backlog. Carolina reviews. She decides priority and signs off scope, because she's the one talking to the client.

Stop 4 — Build/Review Loop (The Two-Agent Dance)

TODO IMAGE — /assets/images/beyond-triage-meeting/build-review-loop.png — PR view with the builder agent's commits and the reviewer agent's comments interleaved. Alt: "Builder and reviewer agents iterating on the same PR, with full audit trail."

When Carolina drags the card to In Progress, two more agents wake up:

  • The builder → reads the SDD, plans the change, opens a branch, writes code, opens a PR
  • The reviewer → reads the diff against the SDD and the ACs, comments on the PR, bounces work that doesn't measure up

They iterate until the reviewer signs off. CI failures fold into the same loop — the builder fixes its own broken tests rather than escalating. When the reviewer is done, the PR sits ready for human approval, with a full audit report pinned to the top: what was built, what was checked, where the AC evidence lives.
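
The loop, as control flow. The helpers (`builder`, `reviewer`, `escalateToHuman`) are hypothetical stand-ins for our agent wrappers, and `SolutionDesignDocument` is the type from the SDD sketch above, but the shape of the loop is exactly what runs:

```ts
// Build-review dance: bounce until approved, escalate if it won't converge.
declare const builder: {
  openPullRequest(sdd: SolutionDesignDocument): Promise<{ url: string }>;
  address(pr: { url: string }, comments: string[]): Promise<void>;
};
declare const reviewer: {
  review(
    pr: { url: string },
    sdd: SolutionDesignDocument,
  ): Promise<{ approved: boolean; comments: string[] }>;
};
declare function escalateToHuman(issueId: string, reason: string): Promise<void>;

const MAX_ROUNDS = 4; // past this, the SDD itself is usually the problem

async function buildReviewLoop(sdd: SolutionDesignDocument, issueId: string) {
  const pr = await builder.openPullRequest(sdd); // plan, branch, code, PR
  for (let round = 0; round < MAX_ROUNDS; round++) {
    const review = await reviewer.review(pr, sdd); // diff vs. SDD and ACs
    if (review.approved) return pr; // ready for the human gate
    await builder.address(pr, review.comments); // includes fixing its own CI
  }
  await escalateToHuman(issueId, "builder and reviewer did not converge");
}
```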

Stop 5 — Human Approval and Ship (The Gate)

TODO IMAGE — /assets/images/beyond-triage-meeting/human-approval.png — PR review screen with the agent's audit report at the top and the human approve button below. Alt: "The human approver inherits a PR with the build–review history attached."

A human reviews and approves. CD takes it the rest of the way to production. The Linear card lands in Released, with the deploy event linked back to the original Slack thread.

What Carolina Actually Sees, End to End

TODO IMAGE — /assets/images/beyond-triage-meeting/slack-timeline.png — Slack channel showing the full notification timeline for one feature, intake to deploy. Alt: "One feature, full timeline. Slack carries the running commentary the whole way."

For Carolina, the experience is a Slack channel that talks back:

  • 09:14 — "Issue created from your message: ENG-1247 'Bulk archive from dashboard'."
  • 09:22 — "Triage complete. Question for you in the thread."
  • 09:31 — "Re-triaged with your answer. Queued in Backlog with SDD attached."
  • 14:08 — "Implementation started on ENG-1247."
  • 15:42 — "PR ready for review. Acceptance criteria evidence attached."
  • 16:11 — "Deployed to production. Smoke tests green."

Why this matters:

  • For non-technical users → the codebase becomes accessible through the channel they already work in.
  • For engineers → triage stops being the thing that eats Monday morning.
  • For the team → client feedback closes faster, with a paper trail at every step.

Where Human-in-the-Loop Still Lives

I want to be honest about the edges.

The HITL surface is small but real. The agents stop and ask rather than thrash (the gates are sketched as config after this list) when:

  • Dedup is ambiguous (two plausible matches, neither obvious)
  • The reviewer keeps rejecting the builder's work after multiple iterations — usually a sign the SDD itself was wrong
  • CI failures look structural rather than incidental
  • A code path the agent wants to touch is in an area we've flagged for senior review
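
Sketched as config, with our current numbers. Tune to taste; the only non-negotiable is that each gate resolves to "stop and ask" rather than "keep trying".

```ts
// Our HITL gates, as of this fortnight. Values are ours, not defaults;
// the path globs are placeholders.
const escalationRules = {
  dedup: { askWhenCandidates: 2 },           // two plausible matches, neither obvious
  review: { maxBouncesBeforeEscalating: 4 }, // repeated rejection = suspect SDD
  ci: { escalateOn: ["structural"] },        // infra-shaped failures, not flaky tests
  seniorReviewPaths: ["services/billing/**", "infra/**"], // flagged areas
};
```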

These are the moments I'm called in. My job is the system itself: reading those traces, figuring out where the loop went sideways, and tuning the prompts, gates, or architecture context so the next ten don't fail the same way.

What's Next: Automated QA

The current human approval step inherits a PR that's been built and reviewed by agents but not actually exercised against a running stack. That's the last gap.

The next agent in the queue is a QA agent that runs the acceptance criteria as live user journeys against staging — Playwright or similar, generated from the SDD's AC section. The human approver then inherits a tested PR rather than a hopeful one.
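
Concretely, the QA agent would emit something like this per acceptance criterion: a Playwright journey against staging. The staging URL, selectors, and assertion are placeholders; the shape comes straight from the bulk-archive AC earlier.

```ts
// One generated journey per AC; placeholders throughout.
import { test, expect } from "@playwright/test";

test("AC: bulk archive removes the selected rows from the dashboard", async ({ page }) => {
  await page.goto("https://staging.example.com/dashboard");
  await page.getByRole("checkbox", { name: "Select all" }).check();
  await page.getByRole("button", { name: "Archive selected" }).click();
  await expect(page.getByText("0 items")).toBeVisible(); // placeholder assertion
});
```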

That's the version of the system I'm tuning toward.

What Just Changed

Old shape: Client → solutions lead → PM/tech lead → engineer → PR → review → ship. Five translations, each lossy. The bottleneck was the translation chain, not the engineering.

New shape: Client → solutions lead → triage agent → builder/reviewer agents → human approval → ship. One human translation step (Carolina paraphrasing the client). The rest happens in the workspace, in the open, with audit trails.

Linear's Agent SDK is the bit that makes this feel like teamwork rather than automation. Claude Managed Agents is the bit that makes the work itself good. One gives the agent a name badge and a chair at the table. The other gives it a cloud container, the tools, and the harness to actually do the work. Neither would land without the other, and neither existed in this shape until weeks ago.

Conclusion

If you've got a non-technical person who's closer to the customer than your engineers are — and most teams do — the question worth asking isn't "can AI write code?" It's "where is the translation layer, and who's bottlenecked on it?"

The translation layer used to need a human. It doesn't any more. And once you put it inside the workspace where everyone can see and correct it, the whole posture of the team changes.

Key Takeaways:

  • From queue to flow: non-technical roles file tickets and wait → non-technical roles ship features and watch them land.
  • Two layers, not one: Linear's Agent SDK is the workspace identity (name badge); Claude Managed Agents is the cloud harness (desk and computer). You need both.
  • The translator goes inside the workspace: visible, correctable, auditable — not an offstage bot.

That's the shift. And it's been possible for about a fortnight.