Fix Your AI Ethics After Lunch, Not After Launch
The Two Choices You Have as a Product Leader
Updated January 4, 2026
Some of my founder friends still think they can deal with ethics later:
💬 We know we need to address ethics eventually.
💬 Can’t we just iterate?
Sure, you can iterate on features, usability, and business models.
But how do you iterate on a public scandal? How do you iterate on real human harm?
Here's what I've noticed: ethics in tech gets treated like flossing. Everyone agrees it's important, but somehow, it only becomes urgent when there's blood.
By “blood,” I mean the kind of headlines that send your PR team into a panic. The kind that makes your lawyers pull up your Terms of Service with a haunted look in their eyes. The kind where you're explaining to your board why your AI chatbot went rogue and started offering financial advice based on moon phases.
Ethics isn't an add-on. It’s not technical debt you can pay down later.
It's a product decision. It's as foundational as your core value proposition.
Actually, it is part of your value proposition.
What happens when companies skip the ethics conversation?
I tracked and documented 2025's worst AI product failures, including: