Good morning from South Texas where this morning was a brisk 57 degrees and it’s going to get into the triple-digits later this week. The weatherman said on Wednesday our city will be one of the hottest places on Earth that day. Luckily I am headed to the coast a day or so into the fever-pitch so that my bones can rest for a bit, and I will narrowly escape this oppression because it will be a beautiful 83.
Weekend was lovely. On cup of coffee #3 at the moment. Inbox has been tamed. Escalations have been addressed. Now to fit five days’ worth of work into four.
Been on the road for work the past two days which means I’m both a very sleepy boy and also pretty behind on some major projects. This morning I was able to get my inbox under control and now I need to consume more coffee and gather my notes from the road meetings.
Currently halfway through The Three-Body Problem by Cixin Liu.
Will be taking some much needed time off next week. But for now it’s headphones and deep work.
Over the past week I’ve been very intentional about staying away from social media and staying close to my family, my interests, and the authors, artists, and musicians that I enjoy. To nobody’s surprise, I’m much happier than I otherwise would have been. Nothing major, mind you – just a little lighter than before. It’s worth it.
Inbox is clear. Music is playing. New cup of coffee has been poured. Time for some deep work.
The skies are ominous this morning. Coffee doesn’t taste very good but is certainly effective. Mind is made of mush from lack of sleep. This week is going to require a lot from me.
All of the music I’ve made over the past 2 years got lost after I had to reimage my MacBook because of data corruption and an overabundance of applications that ground its performance to a halt. The latter actually begat the former. Feels bad. But I suppose that Yo-Yo Ma’s perspective holds true here:
Sound is ephemeral, fleeting, but some sort of a physical manifestation can help you hold on to it longer in time.
I was fortunate to have most of those releases find their way to cassettes and vinyl. They are not lost media.
It is gorgeous out this morning now that the storms have passed. Going for a walk with the wife and baby. Enjoy one another. Drink water.
Good morning from South Texas where it is going to get hot and also rain somehow. These are the days where I have to decide whether to take my coffee hot or over ice. These are also the days to get outside before it becomes dangerously hot here and all your coffee has to be taken over ice.
Today is Day 3 of 4 of the RSA Conference – one of the largest annual conferences in the cybersecurity world. While there’s lots of buzz coming out of San Francisco as is typically the case this time of year, I’ve noticed the emergent singularity of injecting generative AI into everything is still going strong.
However, the apotheosis of this trend seems to be one in which human beings are managing the autonomy of machines instead of security incidents proper. This week alone there’s been the announcement of:
an AI-powered SOC platform
more agentic AI capabilities from each of the big market players
supervised data-model training security wrappers
AI-powered supply chain vulnerability remediation
non-human identity security solutions (LLM-powered of course)
And even AI-powered posture management meant specifically to ensure the security of the AI agents themselves which seems … circular.
If machines are given autonomy by identity entities that are, for all intents and purposes, machines themselves—to make security decisions within networks, based on guardrails enforced by machines trained on data shaped by other machines, which feed all alerts and events into a platform managed by machines—then it stands to reason humans would be left to design the trust models between humans and machines, validate that the models are working, and determine the ethics behind the decisions those machines make within those trust models.
I suppose there will be lots of money to be made in standardizing those models as much as possible and building products that eventually let machines do that validation as well.
But here’s the thing: we seem to be sprinting toward the very outcome we all hoped for — a world in which machines handle all the mundanity, while humans are free to debate the morality of it all (or just go do other things). The problem inherent in this is that we are sprinting very fast, and we may get there too soon. Anyone who has ever asked an LLM for a factual answer to a question will understand why even daisy-chaining these models together with agents to validate one another’s accuracy in their homogeneous autonomy is kind of a bad idea currently. That doesn’t mean it still will be in 3 to 5 years.
Consider that it’s one thing if machines make a mistake when I ask them to book me a restaurant reservation, but it’s an entirely different thing when they accidentally shut down mission-critical systems and believe they were right in doing so. Imagine troubleshooting that scenario. Sheesh.
Anywho, it’s very inspiring to see the innovation and I’m keen to track it over the next year because things are getting weird and, as Richard Feynman put it:
If you thought that science was certain – well, that is just an error on your part.