Signal Intent

[sign-on] April 30, 2025

Good morning from South Texas, where it is going to get hot and also rain somehow. These are the days when I have to decide whether to take my coffee hot or over ice. These are also the days to get outside before it becomes dangerously hot here and all the coffee has to be taken over ice.

Today is Day 3 of 4 of the RSA Conference – one of the largest annual conferences in the cybersecurity world. While there’s lots of buzz coming out of San Francisco as is typically the case this time of year, I’ve noticed the emergent singularity of injecting generative AI into everything is still going strong.

However, the apotheosis of this trend seems to be a state in which human beings are managing the autonomy of machines rather than security incidents proper. This week alone there have been announcements of:

  • an AI-powered SOC platform
  • more agentic AI capabilities from each of the big market players
  • supervised data-model training security wrappers
  • AI-powered supply chain vulnerability remediation
  • non-human identity security solutions (LLM-powered, of course)

And even AI-powered posture management meant specifically to ensure the security of the AI agents themselves, which seems … circular.

If machines are given autonomy by identity entities that are, for all intents and purposes, machines themselves—to make security decisions within networks, based on guardrails enforced by machines trained on data shaped by other machines, which feed all alerts and events into a platform managed by machines—then it stands to reason humans would be left to design the trust models between humans and machines, validate that the models are working, and determine the ethics behind the decisions those machines make within those trust models.

I suppose there will be lots of money to be made in standardizing those models as much as possible and building products that eventually let machines do that validation as well.

But here’s the thing: we seem to be sprinting toward the very outcome we all hoped for — a world in which machines handle all the mundanity, while humans are free to debate the morality of it all (or just go do other things). The problem inherent in this is that we are sprinting very fast, and we may get there too soon. Anyone who has ever asked an LLM for a factual answer to a question will understand why even daisy-chaining these models together, with agents validating one another’s accuracy in their homogeneous autonomy, is kind of a bad idea currently. That doesn’t mean it still will be in 3 to 5 years.

Consider that it’s one thing if machines make a mistake when I ask them to book me a restaurant reservation, but it’s an entirely different thing when they accidentally shut down mission-critical systems and believe they were right to do so. Imagine troubleshooting that scenario. Sheesh.

Anywho, it’s very inspiring to see the innovation, and I’m keen to track it over the next year because things are getting weird and, as Richard Feynman put it:

If you thought that science was certain – well, that is just an error on your part.