What I love about this is that it reframes AI from a tooling problem to a management problem.
Most of the conversations I’m in right now are still focused on prompts, models, and use cases. But what you’re describing shows up much earlier than that.
If “done” isn’t clearly defined, if ownership is unclear, and if too many things are running at once, AI just accelerates the breakdown that was already there.
In transformation work, I see this all the time. Organizations move straight into execution without doing the leadership work of alignment and direction. AI doesn’t fix that. It exposes it.
The interesting shift here is that leaders aren’t just adopting AI. They are being forced to rethink how work is structured, delegated, and verified across the entire system.
That’s where the real opportunity is.
Really appreciate you reading and sharing this, Sherry! That's the core insight for me too. Kacper's framework builds on decades of hard-won lessons in engineering and product management. We don't need to reinvent the wheel, we need to adapt it.
That's exactly it! I'm glad you're seeing similar patterns out there.
Great post and key insight from it: AI doesn’t scale without systems. And that makes this more than just an engineering challenge... it’s a management one.
My sense is that “management” itself will need to be redefined, and I’ll be writing more on that soon. What’s working is keeping things simple and modular. Breaking work into smaller chunks and assigning isolated agents to each task is crucial, not just for traceability, but for the quality of the outcome itself.
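The "smaller chunks, isolated agents" idea above can be sketched in code. This is a minimal, hypothetical sketch: `run_isolated_agent` stands in for a real agent call (e.g. an LLM API), which is assumed here, and the point is the structure — one task per agent invocation, with a trace of who did what.

```python
# Hypothetical sketch: split a large job into small, independent tasks
# and give each one its own agent invocation, keeping a trace.
def run_isolated_agent(task: str) -> str:
    # Stand-in for a real agent call (an assumption, not a real API).
    return f"result for: {task}"

def run_job(tasks: list[str]) -> list[dict]:
    trace = []
    for i, task in enumerate(tasks):
        # Each task gets a fresh, isolated agent context: no shared state,
        # so every output (and every failure) maps back to exactly one task.
        output = run_isolated_agent(task)
        trace.append({"task_id": i, "task": task, "output": output})
    return trace

trace = run_job(["summarize spec", "draft tests", "write migration"])
print(len(trace))  # 3
```

Because each entry in the trace ties one task to one agent run, both traceability and output quality benefit: you review one small result at a time instead of one entangled batch.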
Absolutely agree that management itself will be changing. I'm curious to read more of your thoughts on that!
Thank you for taking the time to read and comment, Just J 🤗
The delegation ladder for AI is such a smart way to build trust.
Glad you liked it!
AI only speeds up a bad process. It's a great tool, but it must still be managed well.
Yes! AI is a multiplier: it multiplies good and bad processes alike.
Good systems make average tools work better.
Love this. You can vibe code if you want to move fast.
But you can’t vibe manage AI agents and expect good work.
That’s the same problem you’d have with a human, which is something I wrote about too.
🔗 https://millennialmasters.net/p/ai-tools-management
Ooh, I think your article should be read together with ours. Great take on the topic!
This is so useful and to the point. Thanks for sharing!
oh this is great! Saving that task brief to test out on my next feature build. The "when to escalate" part is such a good section to add. I feel like Claude is getting a lot better at this, not sure if it's because I've also gotten better at prompting, but it still feels like a good safeguard to add in and consider for each build.
This is great!!
"Five Rules That Improve Any AI Agent Workflow in 2026"
No task without a definition of done.
If you can’t describe what “finished” looks like before the agent starts, the task isn’t ready.
One task at a time.
Don’t let the agent juggle multiple things at once. Focused work beats scattered work, even when the worker is an AI.
Keep deliverables small.
Give the agent one small piece to finish, not a massive batch. The bigger the output, the less carefully you’ll check it.
Always verify before accepting.
Use checklists, spot checks, or human review, especially for high-stakes work. Verification isn’t something you add after. It’s built into your definition of done.
Set clear escalation triggers.
Before the task runs, decide: at what point should the agent stop and ask you instead of continuing on its own? Write it in the brief.
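The five rules above can be captured as a task brief data structure. This is a minimal sketch, not a prescribed format: the field names and the example content are my own assumptions, chosen to make each rule concrete — a single small goal, "done" defined up front, verification built into that definition, and escalation triggers written before the task runs.

```python
from dataclasses import dataclass

@dataclass
class TaskBrief:
    """One small task for one agent, with 'done' defined before it starts."""
    goal: str                        # a single, small deliverable (rules 2 & 3)
    definition_of_done: list[str]    # observable checks for "finished" (rule 1)
    verification: list[str]          # checklist run before accepting (rule 4)
    escalation_triggers: list[str]   # when the agent must stop and ask (rule 5)

    def is_ready(self) -> bool:
        # A task isn't ready to hand off until "done" and the
        # escalation rules are written down in the brief.
        return bool(self.definition_of_done) and bool(self.escalation_triggers)

brief = TaskBrief(
    goal="Add input validation to the signup form",
    definition_of_done=["All fields validated client-side", "Unit tests pass"],
    verification=["Run test suite", "Spot-check error messages by hand"],
    escalation_triggers=["Validation rules conflict with the existing API contract"],
)
print(brief.is_ready())  # True
```

The `is_ready` check enforces the first and last rules mechanically: a brief with an empty definition of done or no escalation triggers fails the check, so the task never reaches an agent half-specified.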