
Claude Code and Claude Cowork Together: Part 2

March 7, 2026 · 4 min read

A few days ago I wrote about connecting Claude Code and Cowork with a bash script. The idea was simple. Cowork plans, writes a task file to a shared folder, a watcher script detects it and sends it to Claude Code, and Claude Code builds and deploys. Two AI agents talking to each other through a folder on my desktop.

People liked it. One commenter pointed out the obvious next step: what if Cowork could read Claude Code's output and keep their context perfectly synced? What if the loop didn't need me at all?

So I built that. And then I went to sleep.

Here's what happened.

The problem with the first version was that the bridge only went one way. Cowork could send tasks to Claude Code, but once Claude Code finished building, it just sat there. I had to go back to Cowork, tell it what happened, figure out what to do next, and kick off another task. The middleman problem wasn't gone. It just moved.

The fix was giving Cowork eyes and a sense of timing. Here's how it works now. Cowork writes a task file to a shared folder, same as before. The bridge script picks it up and sends it to Claude Code, same as before. But now when Claude Code finishes, it writes a detailed report back to a results folder. What it built, what files it changed, the build output, and exactly how to verify each feature.
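The pickup-and-forward step can be sketched in a few lines of bash. Everything concrete here is an assumption on my part: the folder paths, the `.md` extension, and the tmux session name `claude`. The post describes the mechanism but doesn't show the actual script.

```shell
#!/usr/bin/env bash
# Hypothetical bridge sketch. Watches a shared tasks folder and forwards
# each new task file to a Claude Code session running in tmux.
# Paths and the session name "claude" are illustrative assumptions.
TASKS="${TASKS:-$HOME/bridge/tasks}"
PROCESSED="${PROCESSED:-$HOME/bridge/processed}"

forward_tasks() {
  mkdir -p "$TASKS" "$PROCESSED"
  for task in "$TASKS"/*.md; do
    [ -e "$task" ] || continue              # glob matched nothing: no tasks yet
    # Paste the task text into Claude Code's terminal and press Enter.
    tmux send-keys -t claude "$(cat "$task")" Enter
    mv "$task" "$PROCESSED/"                # never send the same task twice
  done
}

# The real bridge would run this forever:
#   while true; do forward_tasks; sleep 5; done
```

Moving each file to a processed folder after sending is the simplest way to make the watcher idempotent: a task is either waiting or already delivered, never both.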

Cowork knows to check for that report. Every 30 seconds or so it runs a line count on the result file. Two lines means Claude Code is still working. When it jumps to a hundred plus lines, Cowork knows it's done. Simple. No webhooks, no APIs, just checking how long a file is.
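That done-check is small enough to write out. The result-file path and the exact threshold are my assumptions; the two-line stub and hundred-plus-line report are from the description above.

```shell
#!/usr/bin/env bash
# Sketch of Cowork's "is Claude Code done yet?" check.
# The path and the 100-line threshold are illustrative assumptions.
RESULT="${RESULT:-$HOME/bridge/results/latest.md}"

claude_code_done() {
  [ -f "$RESULT" ] || return 1      # no report written yet
  lines=$(wc -l < "$RESULT")
  [ "$lines" -gt 100 ]              # a ~2-line stub means still working
}

# Poll roughly every 30 seconds:
#   until claude_code_done; do sleep 30; done
```

The charm of this check is that it can't really break: there's no connection to drop and no API to version, just a file that gets longer when the work is done.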

Then Cowork reads the report and does something Claude Code can't do. It opens Chrome. Cowork uses the Claude in Chrome extension to navigate to the actual deployed site. It takes screenshots, clicks through the UI, checks for console errors, looks at whether the layout is broken on mobile, and compares what it sees to what Claude Code said it built. If a tooltip is getting clipped or a chart is showing wrong numbers or a button isn't responding, Cowork catches it.

And then it writes the next task. Bug fixes from what it just found, plus whatever new features are next on the list. Drops it in the shared folder. The bridge picks it up. Claude Code starts building again. Cowork goes back to polling. No human in the loop. At all.

The first real test was on PremiumTracker, a covered call management platform I've been building. I set up the pipeline, gave Cowork a rough list of what the app needed, and went to bed. Not in an "I checked on it every twenty minutes" way. I actually went to sleep.

I woke up and checked the results folder. Twenty-plus completed tasks. Each task had three to five features in it. Cowork had written detailed task files, Claude Code had built and deployed each one, Cowork had tested every deployment in Chrome, caught bugs, filed fixes in the next task, and kept the cycle going all night. Interactive charts. CSV import and export. Profit and loss reports. Tax summaries. Position alerts. Trade comparison tools. A global search with keyboard shortcuts. Scroll animations. Accessibility improvements. A calendar view. Dozens of bug fixes that Cowork found by actually looking at the app in a browser. The whole app was basically rebuilt overnight.

The reason this works as an infinite loop and not just a one-shot relay is context. Every result file Claude Code writes back is basically a brain dump. What it built, every file it touched, the exact build output, and step-by-step instructions on how to verify each feature. When Cowork reads that, it's not starting from scratch. It knows the current state of the entire project. It knows what just changed. It knows what the build output looked like. And after it tests in Chrome, it knows what's actually working and what's not. So when it writes the next task, that task is informed by everything that happened before it. The two agents aren't just passing messages back and forth. They're building on each other's work with full context of where the project stands. That's what makes it a real collaboration and not just a fancy cron job.
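For concreteness, a result file in this scheme might look something like the following. This is my own sketch of the shape described above, not the actual format; the file names and features are illustrative (the features are drawn from the ones built overnight).

```markdown
<!-- Hypothetical example. Paths, task numbers, and details are illustrative. -->
# Task 014 results

## What was built
- Interactive profit and loss chart on the dashboard
- CSV export for closed positions

## Files changed
- src/components/PnlChart.tsx (new)
- src/lib/export.ts

## Build output
Build completed with 0 errors, 2 warnings.

## How to verify
1. Open the deployed site and go to the dashboard.
2. Hover a bar on the P&L chart; a tooltip should show the trade details.
3. Click "Export CSV" and confirm the download contains every closed position.
```

A "How to verify" section like that is what lets Cowork turn the report into a browser test plan instead of guessing what to click.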

The thing that still kind of blows my mind is how simple the infrastructure is. The task files are just markdown. The result files are just markdown. Cowork polls with wc -l. The bridge sends text with tmux send-keys. There's no orchestration framework. There's no database. There's no server. Two AI agents collaborated on building a real application through a shared folder and a bash script.

The separation of concerns is what makes it work so well. Claude Code is incredible at writing code, running builds, and deploying. But it's completely blind. It has no idea what the app actually looks like in a browser. Cowork is great at looking at things, reasoning about what's wrong, and planning what to build next. But it can't edit code. They each do the thing the other one can't, and the bridge just moves text between them.

In the first article I said "stop being the middleman, let them talk." Turns out I was still the middleman. I just didn't realize it yet. Now they actually talk. And they don't need me to be awake for it.