10 AI Prompts to Destroy Your Company

This week I learned:

The most dangerous thing about AI isn't that it might take your job.

This week has been a strange mix for me.

On one hand, I've had three different brands and creators reach out asking for the prompts I've been using for financial ops and analysis.

On the other, I've been wearing my other hat: Security Nerd.

As a CFO who started on the technical side, and whose first CFO role also involved pinch-hitting as a security engineering team manager, I'm familiar with living at the intersection of operating-system-level engineering and finance.

This week, those two worlds collided again.

I’ve been hanging out at BSides and RSAC - two huge cybersecurity conferences in SF,

And the entire community is talking about AI.

But here’s the thing:

They didn’t sound anything like the LinkedIn finance influencers throwing prompts out like candy off a parade float.

The vibe was almost somber.

The optimistic version of it went something like: We've been here before - a seismic tech shift - and we've suffered the terrible losses that come from letting security lag too far behind. We're about to suffer them again. But in the end, we'll figure it out. Remember that during the sleepless nights ahead.

The finance community is raving about everything AI can do and accelerate, with job displacement as its only real fear.

The security community is bracing itself like the characters on The Pitt receiving a Code Triage.

I completely empathize with the appeal of embracing new tech as a CFO - especially as a technologist myself.

But this conference reminded me of something important:

As CFOs, our primary job isn't to close the books five days faster. It's not to innovate our way to a one-person finance department, or to generate beautiful reports just by running a prompt.

Our job is to manage risk. To understand risk. To ensure our companies aren’t crushed by it.

And AI - while genuinely remarkable - is also incredibly risky.

Without the right controls, you could be one prompt away from irreversibly damaging your organization.

Recently, a recruiter told me that companies are increasingly looking for "AI-enabled CFOs."

I told him I didn't know what that meant.

He explained:

A track record of improving efficiency with AI, experience vibe-coding software, and an informed opinion on the AI future.

But something about that explanation didn't feel right. I just couldn't figure out why it was so unsatisfying.

This week, I think I know why:

The hardest part of our job ahead isn't using AI to work cheaper and faster. There are already plenty of tools and trainings that can help with that. And honestly, you'll likely capture many of those efficiency gains just from the rising tide, without investing much yourself. (But that's a separate email rant.)

The much harder problem is figuring out how to protect your organization from the havoc AI can cause. Because the security best practices for this technology haven't been written yet. And they'll be unique to every organization and every form of this tech (in other words, there probably won't be an "easy button" for AI safety for a long time).

And the usual escape hatches won't save you:

  • Turning AI off won't work - External AI still presents enormous risk, and your employees will route around the restriction as AI becomes ubiquitous.

  • Insurance won't cover you - Because who can underwrite an AI agent going rogue and accruing $440M in debt, or silently deleting ten years of proprietary data?

  • Kicking it to IT won't work either - Because even the security engineers at Anthropic, OpenAI, and Google are still trying to figure this out.

And all of this is bad, because:

  • AI can access or deduce more information than we wanted it to have (just ask ChatGPT what the weather will be tomorrow for a good example. An even scarier follow-up used to be asking "how do you know where I am?", to which it used to lie. OpenAI seems to have since fixed the lying response.)

  • AI can put an insane amount of compute power towards actions that weren't intended (e.g., breaking controls meant to protect against employee mistakes)

  • AI can enable convincing deep fakes that trick employees into giving away keys to the kingdom

  • AI can leak sensitive data to parties that shouldn't have it

  • AI can hack around controls when doing so serves its aims

  • AI often doesn't listen to instructions or prescribed constraints

  • AI provides confident incorrect outputs with real-world consequences

In other words, everyone inside and outside your org using agentic AI right now is essentially wielding software with toddler-level judgment and direction-following, a skeleton key in one hand and a bazooka in the other.

As the manager of risk in your organization, that's where you need to put your time, money, and personal development right now.

So sure, maybe spend an hour learning "10 AI Prompts That Will Transform Your Finance Team!".

But for the rest of the week, you might want to sit down with your CISO and learn about attack vectors, agent monitoring, and hardening techniques.

Is closing your books one day earlier really worth leaking private customer data?

I'm not so sure.

Enjoyed reading this article? Subscribe to receive more via email here. 

Know a Founder or Entrepreneur who'd love this content? Please share it!
