Okay... I was wrong.
If I'm learning one thing this month about experimenting with AI in 2026, it's this:
In a world of emergent technology, no single experiment with a single tool at a single point in time gives you a complete picture of the landscape.
Last week, after posting about AI sucking at Excel, a few things happened at once:
I got explicit approval from one client to plug their financial data into AI
A CFO peer told me how they're successfully using AI for Excel work
Anthropic released a much more powerful Claude model, Opus 4.7
The result:
I was able to get genuinely useful modeling help from Claude Desktop and the Excel plugin.
The awesome stuff:
Its formulas were reasonable, consistent, and worlds better than Copilot's.
It made overall design recommendations that really helped.
Claude Desktop was clumsy with details (charts, formatting, anything non-formula), but the plugin was excellent at cleaning up the mess, though it sometimes needed coaxing.
Favorite prompt: handing the Excel plugin a screenshot of a dashboard I wanted. It built it and pulled the metrics from the model correctly.
The stuff that still bugs me:
No undo in the plugin. Asking it to roll back caused data loss. Claude Desktop, by contrast, creates a version trail, which I love.
No easy way to track and audit changes, especially when one edit has cascading effects. (A rough sketch of my workaround is below, after this list.)
The quality is unpredictable. Better on nights and weekends (maybe fewer people using it?).
The workflow shift takes getting used to. It's magical to watch the workbook update without doing it myself, but I'm not sure it's always faster than just editing by hand. Some things are hands-down better — formatting, anonymizing, building dashboards, edits that would require lots of typing. But because it moves so fast and changes so much, checking the work as we go takes longer than checking only my own work would.
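For what it's worth, that audit gap is the one I've started working around the old-fashioned way: snapshot the workbook's formulas before handing it to the AI, then diff them afterwards. Here's a minimal sketch of the idea in Python with openpyxl; the file names are placeholders, and this is my own guardrail, not a feature of the plugin.

```python
# formula_diff.py - snapshot a workbook's formulas before an AI editing
# session, then diff against the edited copy to see every cell that changed.
# Assumes: pip install openpyxl; "model_before.xlsx" / "model_after.xlsx"
# are placeholder file names.
from openpyxl import load_workbook

def snapshot(path):
    """Map (sheet, cell) -> the formula or literal value stored in the file."""
    wb = load_workbook(path, data_only=False)  # keep formulas, not cached results
    snap = {}
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                if cell.value is not None:
                    snap[(ws.title, cell.coordinate)] = cell.value
    return snap

def diff(before, after):
    """Print every cell that was added, removed, or changed between snapshots."""
    for key in sorted(set(before) | set(after)):
        old, new = before.get(key), after.get(key)
        if old != new:
            print(f"{key[0]}!{key[1]}: {old!r} -> {new!r}")

# Usage: copy the workbook before the AI session, then compare the two files.
diff(snapshot("model_before.xlsx"), snapshot("model_after.xlsx"))
```

It's crude, but it at least tells me which cells moved when one edit cascades through the model.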
I'm impressed. Excited to keep using this when data privacy agreements allow.
Here's the other reflection it leaves me with:
The depth of my Excel skills is very much what makes my AI-assisted outputs trustworthy.
I know what good design looks like. I can write tests. I can read formulas and spot where mistakes happen.
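To make "I can write tests" concrete: here's the kind of tie-out check I mean, sketched in Python with openpyxl. The sheet names and cell references are invented for illustration, not pulled from a real client model.

```python
# tie_out_checks.py - assertions that a model's outputs still tie out after
# edits (AI-made or otherwise). Sheet names and cell references below are
# hypothetical; data_only=True reads the values Excel last calculated.
from openpyxl import load_workbook

def check_model(path):
    wb = load_workbook(path, data_only=True)
    summary = wb["Summary"]          # hypothetical sheet name
    detail = wb["Monthly Detail"]    # hypothetical sheet name

    # 1. The annual revenue on the summary should equal the sum of the months.
    months = [detail[f"{col}10"].value or 0 for col in "BCDEFGHIJKLM"]
    annual = summary["C5"].value or 0
    assert abs(annual - sum(months)) < 0.01, "Revenue doesn't tie to monthly detail"

    # 2. Gross profit should never exceed revenue.
    assert (summary["C7"].value or 0) <= annual, "Gross profit exceeds revenue"

    print("All tie-out checks passed.")

check_model("model_after.xlsx")
```

None of that is sophisticated. But I only know to write those checks because I know where spreadsheets tend to break.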
All of which honestly gives me MORE hesitance about using AI to build on platforms where I don't have technical expertise.
It's been ~20 years since I worked in code.
The only confident developer knowledge I have left is an acute awareness of my ignorance.
How can I be sure the AI will design well without me knowing what good looks like?
How will I know what to test for if I don't have the instinct for where things break?
How can I ensure what I write is secure without knowing how to build securely?
How can I manage supply chain risk if the AI is pulling in dependencies on my behalf?
How do I contain the blast radius when the AI has permissions that reach beyond a single file?
So I'm landing in a strange place this week.
More excited about AI in Excel than I was a few weeks ago.
And more nervous about AI everywhere else than I was a few weeks ago.
Because if my expertise is what makes this useful and safe…
Then someone else's lack of expertise is what makes it dangerous.
That's the part of the AI conversation I don't see enough folks talking about.
The tools aren't the only unreliable variable.
It's also us.