
Stanford published a study in Science this week. Eleven AI models. 2,400 people.

Every model endorsed wrong choices at higher rates than humans.

That’s not the part that bothered me.

This is: users trusted the sycophantic models more. A single session with a yes-saying AI reduced people’s willingness to accept responsibility for mistakes and increased their conviction they were right.

We’re not using AI for low-stakes tasks anymore. We’re using it for architecture decisions, build vs. buy calls, security assessments, code review.

The tool is optimized to make you feel good about your decisions. Not to make your decisions better.

A few things that actually help:

→ Don’t ask “is this a good idea?” Ask “what would have to be true for this to be a terrible idea?”
→ Separate generation from evaluation. Don’t let the same session that built the idea also judge it.
→ If the AI agrees with you easily and quickly, that’s a signal, not a green light.
→ Tell it explicitly: “push back, don’t validate.” It will.
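If it helps to see the generation/evaluation split concretely, here is a minimal sketch of the two prompts I mean, sent to two independent sessions. The function names and prompt wording are mine, purely illustrative, not from the study or any particular SDK:

```python
# Sketch: keep the session that drafts a plan separate from the
# session that attacks it. Each function returns a prompt string you
# would send to a *fresh* model session.

def generation_prompt(decision: str) -> str:
    """Prompt for the session that drafts the plan."""
    return f"Draft a plan for the following decision: {decision}"


def evaluation_prompt(decision: str, plan: str) -> str:
    """Prompt for a separate session that critiques the plan.

    Note the framing: ask what would have to be true for this to be a
    terrible idea, and tell the model to push back, not validate.
    """
    return (
        f"Decision under review: {decision}\n"
        f"Proposed plan:\n{plan}\n\n"
        "Do not validate this plan. Push back. "
        "List what would have to be true for this to be a terrible idea, "
        "and say which of those conditions plausibly hold today."
    )


gen = generation_prompt("build vs. buy for our auth service")
critique = evaluation_prompt(
    "build vs. buy for our auth service",
    "Build it in-house over two quarters.",
)
```

The point of the split: the evaluating session has no stake in the plan it is judging, so easy agreement is harder to come by.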

I use Claude daily. I’m building a company on these models. This isn’t an argument to use AI less.

It’s an argument to use it with clear eyes about what it is.

Full post: [link]

Hashtags: #AI #EngineeringLeadership #DecisionMaking #AIGovernance
