Signal Intent

Category: continuous wave identification

  • A Post-Regression World

    Whether by active strategy or passive habit, commodification is being woven into every level of the modern structure. And it might be the best thing that happens to industry.

    That feels counterintuitive, but as we sprint toward the boardroom’s utopia of faster time-to-production and lower human capital costs, the cybersecurity landscape is heading toward a distinction without a difference. Every vendor, every tool, and every output is converging on the same center.

    We’re already seeing it. Every knowledge worker recognizes the pattern in AI-generated output. We’ve even coined a term for it: AI slop. A bit of a glib phrase, to be sure, but it carries the weight of something we all sense and whose impact we haven’t yet triangulated.

    Regression to the Mean

    Generative AI is inherently prone to regression toward the mean. This is to be expected. Models trained on aggregated human data, constrained by capacity limits, compress the distribution of predictions—a phenomenon known as mode collapse. Researchers at Carnegie Mellon University confirmed this after running nearly 1,100 prompts through a benchmark exercise called NoveltyBench. And while they admit their methodology was prone to outliers, they concluded all frontier models, especially larger ones, underperform humans when it comes to distributional diversity.

    There’s a proposed academic name for this: Galton’s Law of Mediocrity. Sir Francis Galton found through heredity experiments that extreme characteristics in progenitor peas tended to regress toward the population average in their offspring. The phenomenon was christened anew in an October 2025 paper by researchers at 55mv Research Lab, Monash University, and Western Sydney University.

    Their study examined LLM creativity in a domain where creativity is non-negotiable–advertising. Using a two-phase evaluation framework, they found that models would first shed creative elements and drive toward succinct articulation of product points, then expand with lexically dense responses that appeared novel but lacked substance. In most cases, the models lost sight of the original messaging, and vivid ideas were distilled into generic facts. The researchers concluded that LLMs appear to inhibit human creativity rather than expand it.

    Both studies found that even with divergent prompting, this regression is a fundamental design characteristic. Training practices, large corpora, and decoding that favors higher-likelihood tokens all point to how we build models as the culprit. That may not be problematic in isolation, but the practices it enables might be.
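
    To make the decoding point concrete, here is a minimal, hypothetical sketch of that mechanism. The logits and temperatures are toy values of my own, not drawn from any of the cited research; the point is only that pushing a sampler toward its highest-likelihood tokens measurably narrows what it produces.

    ```python
    # Toy illustration (hypothetical logits, not from any cited study): how
    # decoding that favors higher-likelihood tokens compresses output diversity.
    import numpy as np

    rng = np.random.default_rng(0)
    logits = np.array([2.0, 1.5, 1.0, 0.5, 0.0])  # scores for five candidate tokens

    def sampled_entropy(logits, temperature, n=10_000):
        """Sample n tokens after temperature scaling; return entropy of what came out."""
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        draws = rng.choice(len(logits), size=n, p=probs)
        freq = np.bincount(draws, minlength=len(logits)) / n
        nonzero = freq[freq > 0]
        return -np.sum(nonzero * np.log2(nonzero))

    for t in (1.0, 0.7, 0.3):
        print(f"temperature {t}: {sampled_entropy(logits, t):.2f} bits of diversity")
    # As temperature drops toward greedy decoding, probability mass piles onto the
    # most likely tokens and the sampled distribution narrows -- the same mechanism
    # behind the regression described above.
    ```

    The exact numbers don’t matter. What matters is the monotonic drop in diversity as decoding leans harder on the likeliest tokens, which is the same compression the researchers describe at model scale.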

    The Race to the Bottom

    Where regression manifests most visibly is in code. I’m not arguing that any two codebases are identical—I’m arguing that the playing field has leveled. AI is simultaneously raising the floor and lowering the ceiling, enabling the average user to ship software in hours instead of months.

    Tools like Malus.sh now allow users to create “clean room” versions of proprietary software via AI, free from copyright infringement. Virtually any application can be open-sourced overnight. And the question this raises isn’t so much about startup viability as it is about what differentiation means when anyone can reproduce your feature set over their morning coffee.

    Personally, I believe the answer is operational rigor. A sole proprietor with a Claude Max subscription can ship fast and match features, but will struggle to meet industry-standard SLAs and compliance. And if they turn to AI to solve those problems, they’ll just arrive at the same solutions as everyone else.

    This extends beyond code. MITRE ATT&CK vendor evaluation participation is dropping. Forrester notes differences in the EDR space are becoming increasingly marginal. Deepak Gupta traces the same pattern in benchmarks: traditional evaluations drive vendors toward a homogenization that mirrors Galton’s Law as applied to LLM outputs. Initial threat response, as IBM and Palo Alto Networks both report, now happens at line speed, so humans no longer need to operate at the response layer.

    The security industry has gone through this before. When antivirus signature databases became commoditized inputs, most AV companies were absorbed by larger players with EDR solutions or pivoted into multi-product plays. But something else happened in that transition. The professionals who could read the context around an alert and make a call about business risk became more valuable, not less. The ones who simply ran the tools became replaceable. AI is accelerating that pattern across the enterprise.

    The Judgment Gap

    Further research on this phenomenon converges on a paradox I touched on in my last article—the paradox of skill. As AI gets embedded deeper into business processes, the effects on human dependency hinge on whether automation substitutes for low-expertise or high-expertise tasks.

    An MIT working paper by David Autor and Neil Thompson suggests that when AI lowers expertise requirements, wages fall but more workers enter those roles. When AI raises the expertise requirement, wages rise but the qualified candidate pool shrinks. So what becomes the premium for human talent in an AI-augmented economy?

    It appears to be judgment. Prasad Setty, former Head of Google People Analytics and Stanford researcher, proposed at the Valence AI & The Workforce Summit that organizations are creating a judgment gap AI simply cannot fill. His theory is that the routine work AI automates is precisely the work where humans built pattern recognition, confidence, and professional instinct. When those jobs are offloaded to machines, the developmental pipeline for decision-making collapses. Therefore, the value that humans bring to the table shifts from intellectual capacity to judgment quality.

    The National Bureau of Economic Research describes this as the skill premium—where automating a high-value bottleneck skill enhances the productivity of workers with more common skills, making those workers more valuable.

    Now, here’s the paradox: the NBER also concluded AI augmentation discourages human learning, depleting general knowledge stock over time. AI dependency erodes cognitive confidence and creates further dependency. The more you need human judgment, the harder it becomes to develop it, so existing deep expertise becomes scarcer by the day.

    A Different Kind of Work

    If outputs are regressing to the mean and AI augmentation is hollowing out the developmental pipeline for judgment, what kind of work are we actually doing?

    In cybersecurity, that answer is shifting. The job most certainly becomes less about triaging incidents, determining response, and executing remediation, and more about ensuring organizational compliance through automation, architecting that automation, and aligning both to business continuity. In other words, management of systems, management of outputs, and management of the gap between what machines produce and what the business needs.

    This is where judgment stops being abstract. When automation triages a thousand alerts and resolves nine hundred of them, the remaining hundred require a human who understands the business well enough to determine which ones represent actual risk to the organization. Less a technical skill, and more of a contextual judgment call that no model can make, because the model doesn’t carry the organizational history, the regulatory obligations, or the risk appetite that inform the decision.

    Any CIO will tell you their budget isn’t dominated by tools—it’s labor: MSPs, MDR providers, consultants, retainers. As tools regress to the mean in terms of capabilities, any reduction in the cost to deliver outcomes through AI forces a reprice in the services layer. When vendors can’t compete on features, they compete on outcomes. And outcomes demand the judgment that will grow harder to develop.

    The Price of Judgment

    So, we see humans move up the stack. Now, what does that demand of them?

    Risk tolerance can’t be calculated in tokens. Trust models can’t be enforced by machines that don’t understand why they matter. We buy tools based not only on whether they work, but on whether we trust the people who built them. We maintain vendor relationships based on how they show up on a bad day. So, as tools and outputs homogenize, these judgments become the only variable that isn’t regressing.

    Autor and Thompson predict this bifurcation in their research. The proverbial middle compresses from both directions until it disappears. Those who understand how outputs are generated, what they mean, and how they should influence decisions—and who can articulate that to a boardroom or to a machine—will command premium positions. Those who offload their thinking will fill the rest.

    This is the net positive. Not because commodification is comfortable, but because it forces clarity. When the playing field levels for tools, code, and outputs, the only differentiator left is the one that can’t be reproduced: judgment from context, experience, and the willingness to be wrong. The market hasn’t priced that correctly in a long time, but it’s about to have to.

    The irony here isn’t that machines are replacing us. It’s that the work they can’t do is the work we are being encouraged to stop practicing. That gap gets smaller every day. And even more expensive to maintain.

  • The Deskilling Paradox

    The industry is trying to sell us a half-true AI productivity story. These tools are undoubtedly making us faster at completing tasks, but that comes at a cost far beyond the time required for humans to verify machine outputs.

    Across five hundred companies and eight months, Professor Suproteem Sarkar at the University of Chicago Booth School of Business, in partnership with Cursor, tracked how developers actually used AI. As models improved, developers used them more. Perhaps even more revealing: they began offloading higher-complexity work to AI, allowing them to take on more ambitious projects previously beyond their immediate reach.

    None of this is particularly shocking in isolation. But a cognitive study from the same period tells a different story about the humans doing the producing–and the industry hasn’t reckoned with the tension between the two quite yet.

    The Cognitive Problem

    Sarah Baldeo, a researcher in AI and neuroscience at Middlesex University, publishing in Technology, Mind, and Behavior, found a correlation: the more people relied on large language models, the less they trusted their own reasoning. Participants who depended heavily on the tools were more likely to report that the models were thinking for them–not with them. As Baldeo puts it: “It really doesn’t have to do with the tool itself.”

    So the tool isn’t the variable. The interaction pattern is. Users who questioned, rejected, and edited AI output maintained their reasoning confidence, whereas those who offloaded their thinking entirely lost it. The problem isn’t AI–it’s the way people are choosing to use it, and the way most products are designed to encourage that use.

    Grace Liu and colleagues from Carnegie Mellon, MIT, Oxford, and UCLA took this a step further. They ran three randomized controlled trials across 1,222 participants with a simple premise: give people AI assistance during a learning phase, then remove it without warning and measure what happens.

    After removal, AI-assisted participants solved 57% of the problems they were given, compared to 73% for the control group that never had access. That’s a 16 percentage point gap that emerged after roughly ten to fifteen minutes of use. But here’s the part that stuck out to me: AI-assisted participants didn’t just perform worse when the tool disappeared–they stopped trying altogether. Skip rates nearly doubled. Participants who used AI for direct answers saw their scores drop up to ten points below their own pretest baseline. The control group improved by 1%.

    One line from the paper captures this dynamic precisely: “AI systems are fundamentally short-term collaborators: extraordinarily helpful in the moment, but indifferent to what that help does to the person receiving it over time.”

    So, here we see productivity going in one direction while comprehension is going in the other. And the gap between them is growing. It’s worth noting that these findings come from controlled settings, not enterprise-level production environments. However, if ten to fifteen minutes of AI assistance in a lab can produce a 16 point performance drop, the implications for developers using these tools eight hours a day deserve more scrutiny than they’re getting. In most contexts, that divergence is a management problem worth monitoring. As a cybersecurity practitioner, I see it as something closer to a structural crisis.

    The Hinge

    Security researcher Mohan Pedhapati, CTO of Hacktron, demonstrated the economics of AI-assisted offense. Pedhapati leveraged Claude Opus 4.6 to generate a full working exploit for CVE-2026-5873, an out-of-bounds vulnerability in Chrome 138’s V8 JavaScript engine. It only cost him $2,283 in API fees and 2.3 billion tokens over twenty hours.

    Pedhapati didn’t write any exploit code. Instead, he spent those hours recognizing when the model hit dead ends, cutting off unproductive lines of attack, and judging which paths seemed promising and which to abandon. Opus did the heavy lifting; Pedhapati supplied the strategic reasoning.

    The independent judgment about what to pursue and discard–that ability to leverage intuition and form hypotheses from it–is precisely the cognitive faculty the Baldeo and Liu studies show eroding under heavy AI use. The one input that AI can’t yet generate on its own is degrading in the population most responsible for defense.

    Pedhapati’s $2,283 investment would have netted roughly $15,000 in combined bug bounties–a 6.5x return before accounting for his time. But models improve and API costs decline, and Pedhapati’s time spent was largely a learning curve that his next attempt won’t require. So, theoretically, the next exploit will be cheaper. In Pedhapati’s words: “Eventually, any script kiddie with enough patience and an API key will be able to pop shells on unpatched software.” Script kiddies already exist. AI lowers their barrier to entry (which feels ironic to say).

    The economics favor the attacker. And the one thing separating an experienced exploit developer from a kid with a credit card is the same capability that cognitive research shows eroding.

    The Arithmetic

    Oxford philosopher Toby Ord, known for his work on existential risk, recently broke down AI agent costs by the hour and, surprise, the productivity narrative doesn’t survive the math.

    On the surface, the numbers tell a compelling story. Some AI agents operate at roughly $0.40 per hour, whereas a human software engineer costs around $120. The efficiency argument writes itself. But Ord’s deeper finding is that costs compound as task duration and complexity increase. For example, GPT-5 costs $13 per hour for forty-five-minute tasks but $120 per hour for two-hour tasks, in line with human labor. O3 reaches $350 per hour, or nearly three times the cost of a human engineer. Ord concludes that what looks like progress is “increasingly lavish expenditure on compute,” not sustainable capability gains.
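
    As a quick back-of-the-envelope restatement of those figures (the dollar values are the ones quoted above; the comparison logic and formatting are mine):

    ```python
    # Back-of-the-envelope restatement of Ord's figures quoted above.
    # Dollar values come from the text; the comparison is only illustrative.
    HUMAN_RATE = 120.0  # approximate hourly cost of a human software engineer

    agent_rates = {
        "some agents, short tasks": 0.40,
        "GPT-5, 45-minute tasks": 13.0,
        "GPT-5, 2-hour tasks": 120.0,
        "o3, long tasks": 350.0,
    }

    for scenario, rate in agent_rates.items():
        ratio = rate / HUMAN_RATE
        print(f"{scenario}: ${rate:.2f}/hr -> {ratio:.2f}x the human rate")
    # The efficiency story holds only at the short, cheap end of the curve; as
    # task duration and complexity grow, the agent's effective hourly cost
    # converges on, then exceeds, the human's.
    ```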

    You can’t secure an organization in forty-five-minute bursts. Defense lives in the long-duration, high-complexity tail of Ord’s analysis where AI costs approach or exceed human costs. Offense, on the other hand, is composable from the cheap end of the curve–one brief, targeted interaction is all an attacker needs.

    The PRT-Scan campaign shows what cheap offense looks like at scale. A single threat actor, using six GitHub accounts, initiated over five hundred malicious pull requests over the course of a few weeks. Although the success rate was below ten percent, the campaign still compromised two npm packages (@codfish/eslint-config and @codfish/actions) and enumerated API keys and tokens for platforms like AWS, Cloudflare, and Netlify.

    But the breach isn’t the point–it is the evolution of the attack. In phase one, researchers noticed raw bash scripts targeting small repositories. By phase three, AI-generated wrappers were dynamically identifying each target’s language, framework, and CI configurations. Each iteration of the attack was more idiomatically convincing than the last, and the entire campaign cost virtually nothing.

    This is the arithmetic that breaks the productivity story. Comprehensive defense at scale costs disproportionately more than iterative offense at volume, and the gap is widening as AI makes low-skill, high-volume attack campaigns trivial to execute.

    The Loop

    In practice, most AI-driven productivity gains expand the attack surface. Each new line of code, each additional dependency, every expanded feature demands understanding, testing, and defense. Otherwise, the rational response to an attack surface you can no longer comprehend is to delegate. And that’s what some organizations are doing. But that delegation erodes the human capacity to understand what’s being defended. And as understanding erodes, the systems we build become less carefully examined, not because the code is worse in an obvious way, but because our capacity to foresee or foreclose vulnerability quietly disappears. Each cycle widens the gap between what we produce and what we comprehend.

    This might read as a slippery slope. It isn’t. Each step in the cycle has independent evidence behind it, and the components are individually documented. The question is whether anyone is modeling this system-level dynamic, and I haven’t found evidence that they are.

    That’s a hard thing to sit with. The productivity gains are real. Developers really are doing more ambitious work. The capability expansion is real, and the economic value is measurable. The answer is neither to reject AI nor to celebrate productivity without measuring what it erodes.

    I don’t have a confident answer to where this all leads. Anyone who does is either selling something, hasn’t spent enough time with the data, or is seeing something I’m not. The evidence says we’re optimizing for the wrong thing. We shouldn’t be measuring output–we should be measuring understanding. The question I keep arriving at isn’t whether this trajectory leads somewhere dangerous. It’s what happens when we lose the first principles knowledge required to maintain and defend the systems we’re building alongside AI–and whether anyone will notice before it’s gone.

  • After the Chain: Trust as Constellation

    In past articles I’ve largely argued that, with the emergence of quantum computing and its intersection with generative AI’s sprint towards AGI, digital trust is failing at the layer where accountability is supposed to live.

    Both pieces arrived at the same conclusion: a chain is the wrong model and we don’t have a replacement.

    That’s not entirely true. We don’t have a replacement fully articulated. But we have the pieces of one and they are scattered across fields that don’t talk to one another currently. What follows is an attempt to assemble those pieces into something coherent. Not because I think I’ve solved the problem, but because I think the solution is starting to take shape, and if we wait to name it, then I think we will keep investing in the wrong solutions.

    The Reframe

    The proposition is this: trust, in a world of autonomous systems and probabilistic computing, can no longer be linear. Accountability can’t go through a series of links in a chain and depend on each link to vouch for the other. It has to emerge from patterns we learn to recognize as opposed to signatures we verify. Almost like a constellation.

    A chain locates trust in a system. A signature is verified, a certificate is checked, a commit gets audited, and if each check holds, then the chain holds. The trust exists in the artifact, independent of the observer. We inherited this model from history: notarization, custody-of-evidence, institutional verification. Cryptography just gave us a stronger math for each of these links.

    A constellation, on the other hand, locates trust in the relationship between the system and the observer. Cassiopeia is real in the sense that it is a useful navigational tool, but it is not native to the stars themselves. It’s a structure the observer brings and is only clear when you know what you’re looking for. Constellations do not depend on a single star to be verified. Even if one star is dimmer than you expect, or you misidentify one, the constellation still holds because it was always a pattern, not a proof.

    I think this is the move we have to make. We have to stop asking whether a single artifact is trustworthy and start asking whether a system’s behavior forms a pattern consistent with what it claims to be.

    The Philosophical Vernacular

    This is a significant paradigm shift I don’t want to gloss over.

    Under a chain model, trust is a property of the system. Under a constellation model, trust is a capacity in the observer. It is something we cultivate, practice, and sometimes are bad at. For example, two people looking at the same night sky will see different constellations based on what they’ve been trained to see. The same will be true of digital systems. Trust will become perceptual, unevenly distributed, and require genuine investment to develop.

    This sounds like a regression, and in some ways it is. We are letting go of the fantasy that trust can be certified. And that’s okay because that fantasy was always fragile. Every form of digital governance has always depended on the institutions we chose to believe in and that was never as rooted in mathematics as we pretended. The chain model was only going to work as long as the systems it governed were small enough that verification was possible in principle. We are past that point.

    We are also gaining something in return. A constellation model is honest about what trust has always been: a social, contextual achievement that depends on observers who know how to look. It treats trust literacy as something we teach as opposed to something we assume. It accepts that trust does not have binary answers and is never going to.

    Compatibilism, which I talked about in my determinism piece, sits comfortably here. We can finally stop arguing about whether a system is deterministic or autonomous and start asking whether its behavior, observed across many different vantage points, forms a coherent pattern we can recognize and reason about. Whether that coherence is derived with math or something else doesn’t matter as much anymore, because the pattern is the pattern.

    The Technical Vernacular

    None of this is useful if it can’t be built.

    Trust, in a constellation model, is a statistical property of a behavioral distribution. It is inferred from many different observations, and becomes actionable when that distribution shows coherence across time, context, and the observer. The words here are important.

    The statistical property isn’t a flag on an artifact, it is a characterization of how the artifact behaves.

    In the behavioral distribution, we are not observing a single action but the shape of many actions, asking whether that shape is consistent with what the system claims to be, its purpose, and its constraints.

    And the many different observations are overlapping, partially redundant vantage points whose agreement is itself the evidence. One observer can be fooled, but a hundred who agree on a pattern can’t be–not without all one hundred being compromised. This is a much harder attack surface than forging a digital signature.

    Coherence is the mathematical analog of what we used to call integrity. Simply put, a system whose behavior forms a recognizable pattern for a week and suddenly diverges is not trustworthy.

    Notice the framing doesn’t require interpretation. We don’t need to understand why a system behaves the way it does, we just need to characterize its behavior well enough to recognize when it strays from its path. Kind of like an immune system, or how humans have always trusted one another. We trust people whose behavior patterns we recognize, and lose that trust when that behavior changes.
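
    Since this is an architecture I’m sketching rather than specifying, here is one hedged illustration of the idea in code. The event categories, observer structure, and threshold below are all hypothetical; the point is only that drift from a behavioral baseline, cross-checked across independent observers, is a computable quantity.

    ```python
    # Hypothetical sketch of "coherence": each observer summarizes a system's
    # behavior as a frequency distribution over event types, and trust reflects
    # (a) how far recent behavior drifts from baseline and (b) how many
    # independent observers still agree. All names and thresholds are made up.
    import numpy as np

    def js_divergence(p, q):
        """Jensen-Shannon divergence between two event-frequency distributions."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        p, q = p / p.sum(), q / q.sum()
        m = 0.5 * (p + q)
        def kl(a, b):
            mask = a > 0
            return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    def coherence_score(baseline, recent_by_observer, drift_threshold=0.1):
        """Fraction of observers whose recent view still matches the baseline."""
        agree = [js_divergence(baseline, recent) < drift_threshold
                 for recent in recent_by_observer]
        return sum(agree) / len(agree)

    # Toy data: counts of event types (say, API call categories) per observer.
    baseline = [50, 30, 15, 5]
    observers = [
        [48, 31, 16, 5],   # consistent with baseline
        [52, 29, 14, 5],   # consistent with baseline
        [10, 10, 10, 70],  # sudden divergence
    ]
    print(f"coherence: {coherence_score(baseline, observers):.2f}")  # ~0.67
    ```

    In a real deployment the “events” would be whatever telemetry already exists, which is exactly the reframing I get to below.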

    Actual Implementation

    We don’t have the infrastructure to build this constellation yet. It would require fingerprinting at scale, distributed witness networks, models trained specifically on coherence, and a principled way of combining observations to produce high-confidence signals.

    I don’t even think we have the legal and regulatory vocabulary for this yet.

    More importantly, the compute power required for this would be enormous. But that’s in line with the confluence of quantum computing and the race to AGI.

    Thankfully, we have the foundations to build this right now. We have telemetry; we just need to reframe it for behavioral characterization. We can start doing that today.

    We are seeing decentralization in identity work, trust registries, attestation networks, and beyond. But this work is a slow burn, and it still shows too much deference to the chain it is meant to replace.

    Foundationally, we also already have the pattern literacy developing in the processes and tools that we are building to protect against autonomous threats. We need to study it, formalize it, and teach it as vigorously as we can.

    The thing I’m most skeptical about is the change in culture required for this new mode of thinking and operating. The chain model is ingrained deeply in our DNA, so moving to a model where accountability emerges from patterns instead of from authority is going to draw resistance from every organization whose business depends on being the authority. And this is not something we can solve with technology.

    The Pattern We Haven’t Named

    I want to end where I started, by admitting that I don’t fully know what I’m describing. The constellation is a metaphor, and metaphors are scaffolding, not buildings. The actual architecture will look much different from what any of us can imagine in advance.

    But I think the shape is right. Trust has always been a pattern we recognized more than a property we verified and the chain model was an attempt to formalize that recognition. It’s worked well for a long time, but our systems are quickly outgrowing it and no amount of better math or audits will close the gap.

    What comes next is harder. We will be asked to give up the promise of certainty and replace it with the discipline of perception. We will actually have to treat trust as a relationship instead of a credential.

    This is the right direction though, and I think the honest version of this moment is to say it out loud, even if it’s been said before.

    The stars don’t form constellations on their own, after all. We have to learn to see them.

  • The Brothers Karamazov by Fyodor Dostoevsky

    There’s not much I can say about this book that hasn’t already been said a hundred times before. I finished The Brothers Karamazov after a month of picking it up and putting it down repeatedly for a number of reasons. But I was determined to finish it because it’s the perfect time of year for heavy classic Russian literature.

    I ask myself “What is hell?” And I answer thus: “The suffering of being no longer able to love.”

    I loved every moment of this book. Dostoevsky is a master of his craft. From the depths he gives each character and situation, to the huge questions he beckons you to wrestle with, to the allegory of nearly every plot point, everything is deliberate, sprawling, and it reads so naturally.

    It’s clear that the actual story is just a conduit for most of the philosophy that Dostoevsky explores: morally broken characters wrapped in a heinous event, each one coming to terms with their responsibility in that event, and the depth of despair each character falls into largely determined by the position each is meant to occupy in the broader discussion.

    I say ‘discussion’ in the singular, but this book tackles so many themes: greed, lust, murder, and justice, to name a few. Still, the central question is whether humanity can be good without faith. The ‘Grand Inquisitor’ is often called the book’s most significant moment, and I agree. The takeaway is that without a basis in faith, we lose the ability to calibrate our morality. If there is no God, then ‘everything is permitted’ because the very definition of ‘good’ disappears.

    However, I am more inclined to cite the discussion between Ivan Fyodorovich and Lucifer later in the novel as my personal standout. Lucifer argues that Ivan’s nihilism is self-defeating because, without evil, there can be no definition of good. Without giving ‘the good’ a tangible representation, we find ourselves at odds with our innate consciousness—which I define here as our epistemic intuition that good exists and must be defined through our pursuit of it.

    “…What good is faith by force? Besides, proofs are no help to faith, especially material proofs. Thomas believed not because he saw the risen Christ but because he wanted to believe even before that.”

    The need to reconcile what we know to be good and its existence, Dostoevsky argues, relies on a spiritual underpinning that we aren’t really required to reconcile. We have a natural desire to be good, we try to be good, we aren’t always good, but in the pursuit of the good we create the beauty necessary for an otherwise ugly world.

  • The Remains of the Day by Kazuo Ishiguro

    The year is drawing to a close very quickly and I am doing lots of reading, as one is wont to do when it gets cold outside and the nights get longer. There’s no surprise then that this time of year brings with it an implicit attraction to some of the Russian classics, chief among them The Brothers Karamazov, which I’ve been taking my time getting through.

    However, when I finish one or two books from each part of the novel I put it down and dive into something else to give my brain time to catch up. Upon doing so this time I decided to pick up The Remains of the Day by Kazuo Ishiguro, which I’ve just finished.

    I tried reading Never Let Me Go in the past but it just could not hold my attention. However, The Remains of the Day was a profound and beautiful story. One I burned through over the course of three evenings.

    What kept me coming back was the sorrow I felt for the main character the whole time. As I read, I could see he had devoted so much of his life to his idea of order that he missed out on so much. Under the guise of dignity, he closes himself off to the beauty of the human experience.

    I suppose that is what happens to a lot of us. We will always have memories, but how we choose to spend our time determines how those memories return to us.

    Heartbreaking, wonderfully crafted, and engrossing. This was a 10 out of 10 read for me.

  • Chaos as a Crucible

    The singularity is what preceded the big bang. Just a hot, dense point in a dark vacuum that got so hot and so entropic that it exploded and the universe existed. A bunch of rocks and dust that evolved from the entropy into rocks that could sustain ecologies. That’s the current theory at least.

    From disorder came progress. From progress came disorder. And so on.

    It’s almost as if this cycle is the definition of in perpetuum. We exist in the cycle and, for a brief moment, we inherit some of its benefits and then we exit the chaos, hopefully having left enough of a mark on the mess to facilitate some progress.

    There’s so much economic uncertainty right now that is directly attributed to AI fatigue; particularly concern about the “AI bubble”. Many are absolutely convinced that there will be a burst. Others believe that the paradigm shift at the foundation of this Technological Revolution will result in long-term and lasting gains. Whether one or the other is true, there will certainly be progress that comes from this chaos.

    I think about this often: what if the human capital cost isn’t offset by the efficiency gains? Or, for that matter, what if it is? Wouldn’t there need to be policy that ensures basic sustenance for the current working class? Wouldn’t that policy need to extend to all who have been displaced? After all, if no one is working, what entitles one person to a universal basic income versus another, besides time spent out of the workforce?

    An AI burst would be catastrophic for the economy. A sustained period of prosperity would be catastrophic for the hoi polloi. One outcome impacts capital, and one doesn’t. And that is where I think the line of demarcation lies.

    The Railway Mania of the 1840s and the Dot-com Bubble of the early 2000s are historical examples of new technologies (railways and the internet) driving economic bubbles through excessive capital dumping and speculation. When both bubbles burst, they financially ruined not just wealthy investors, but also displaced professional and middle-class workers. The resulting policy responses, like the suspension of the Bank Charter Act in 1847 and the Sarbanes-Oxley Act of 2002 (SOX), were crucial for stabilizing global financial systems and allowing the displaced workforce to eventually rebuild within the actual utility of the surviving technology.

    The Industrial Revolution attracted massive capital investment and created long, sustained national prosperity. However, this success severely displaced skilled artisans and subjected the working class, including children, to brutal factory conditions and exploitation. Eventually the government had to do something to limit hours and enforce safer conditions, thereby regulating the technology and protecting workers.

    That’s where we are. An uncomfortable place to be, but not an unprecedented place to be. Don’t let the feeds convince you otherwise. The real question, or perhaps concern, here is: will the progress be worth the entropy? If we go through the pain of an economic crisis and rebuilding, will the end justify the means?

    The answer to that question, as I’ve learned over the years in my career of helping build the technology that has led us to this moment, is: it depends.

  • Station Identification

    Currently listening:


    Trump admin will partially fund November SNAP benefits

    Words can’t express how vile I think weaponizing food is. Both sides are guilty of this at the expense of “protecting their platform” or whatever weird justification they are using.

    Currently reading: The Botanist by MW Craven

    Locked-room mysteries are not something I’ve ever explored before. Although I believe this is the 8th book in this series, I’m having a blast with near-zero insight into character backstories. Quite an easy read so far but the real spotlight here belongs to the dialogue. The conversations between characters in this book feel natural, not too full of shitty banter, and drive the story forward in a meaningful way. Really enjoying it so far.

    credit: Brickmason (cover art for: 𝐧 𝐢 g h t c 𝐨 m m u t 𝐞)

  • Human Potential and the Age of AI – Part I

    The Paradox of Progress

    Harvard psychologist Steven Pinker argues that, despite the grim picture often painted by the news, humanity has made astonishing progress throughout history. He points to significant gains in life expectancy, literacy, and the fight against extreme poverty. Pinker credits these achievements to the spread of Enlightenment ideals like logic, science, and morality.

    While this theory seems encouraging, a strong counterargument suggests that progress always comes at a cost. For example, globalization and economic policies have created an extreme income gap, and the resulting inequality can fuel populism, a force that has historically been detrimental to society.

    This pattern is a social cycle, and the phase we’re in now is not the first, nor will it be the last. I believe there is an implicit limit to the amount of progress we can make toward a utopian future. Our cyclical nature means that genuine, equitable progress cannot be measured linearly. It is a constant process of moving the proverbial goalposts. This, I believe, is the very nature of the human condition.


    The Net / Net of Technological Progress

    Advances in technology are often sold as net positives for humanity. This is sometimes true, but not always. The existence of delivery services, for example, comes at the cost of gig-economy workers struggling to afford basic healthcare. However, I believe that technologies like generative / agentic AI could be a net positive for humanity, provided their development is not guided by transhumanist or “effective accelerationism” philosophies.

    To be clear, I do not believe that AI in its current form is a net positive yet. While I could discuss the environmental and infrastructural ramifications of this technology, that is not the primary focus of this series. Instead, I am concerned with the nature of our relentless pursuit of technological progress and whether we truly need it to realize our full potential.

    The problem with defining a “potential” for humanity is that it sets an ultimate, achievable goal, which contradicts our inherently cyclical nature. Therefore, when I use the term potential, I am not referring to a finite end-state. Instead, I am referring to our ability to adapt and respond to our ever-changing environment in service of guaranteeing the foundational needs of Maslow’s Hierarchy of Needs for as many people as possible.


    The Role of Technology in Human Progress

    In summary:

    1. Human progress is cyclical and always has a cost.
    2. Technology is not always a net-positive, but it has the potential to be.
    3. The goal should be to meet the foundational needs of as many people as possible, not to pursue a utopia.

    Therefore, the thoughtful use of technology should help accelerate the process of providing basic foundational needs for everyone. The key word here is should. Thoughtfully deployed technology should be a net positive for humanity. But is it necessary for this pursuit?

    Many would say yes, but I argue that while it is not strictly necessary in every context, it is essential at scale. For instance: small-scale communities can meet their needs without modern technology, but providing for a global population of billions requires technological systems for efficient food production, water purification, and resource distribution.

    The other key word is accelerate. Technology drives the pace of change. Without it, positive change would be slow and generational. But while technology accelerates progress, it almost always creates new problems, which in turn require further technological solutions. For example, renewable energy technologies are essential for addressing the environmental costs of industrialization. Technological advancement, in essence, requires more technological advancement.

    So, do humans require technology to realize our full potential? Yes. The question now becomes: at what point do we recognize our full potential with the help of machines?

  • Leakage

    When I decided to go back to college in my early 30s I remember being committed to taking in as much of the information from my classes as I could. After all, I was paying for my education out of pocket this time around. High school was a blur to me and I promptly, almost purposefully, forgot everything I learned as soon as I threw my mortarboard in the air. But this time around I was determined to learn.

    Which is why I have this vivid memory of my Physics professor smacking a table on a Zoom call and telling the class why our perception of that small act of violence was in direct contradiction to what quantum physics would have us believe. He then proceeded to tell us that he doesn’t understand quantum physics, that no one does, and that if anyone ever tells us they understand it, we should tell them they are full of shit.

    That was it. That was about as much of an introduction as I got to the subject, and it wasn’t until I read Carlo Rovelli’s Helgoland that I came to understand, at a high level, what quantum physics actually was: a predictive theory of probabilities and the study of the principles that govern those probabilities. The contradictions my professor tried to point out by smacking a table were the assumptions of classical physics we hold, most of which are deterministic.

    Fast-forward to yesterday when I read this article and it broke my brain.

    Quantum tunneling, wherein particles pass through energy barriers without the energy classically required to do so because of their associated wave function, essentially means that some particles of an object can potentially appear on the other side of a barrier regardless of their total energy at that point in time. No interaction between the particle and the barrier is even required.
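
    For the curious, the standard textbook approximation (my addition here, not something from the article) shows why this leakage is possible at all: the probability of getting through a barrier falls off exponentially with the barrier’s width and the energy deficit, but it never quite reaches zero.

    $$ T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar} $$

    Here E is the particle’s energy, V_0 > E is the barrier height, L is its width, and m is the particle’s mass.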

    Apparently the hypothesis is that, in the early days of the universe, tunneling through high-energy barriers could have produced quantum fluctuations large enough to trigger gravitational collapse, resulting in primordial black holes.

    I am sitting here imagining particles entering primordial black holes, being flung out the other end, and somehow ending up, through various processes on their way here, creating amino acids that ended up on Earth and thus aiding in creating life here. We’re talking about leakage all the way down, my friends. Particle leakage through energy barriers, particle leakage across various corners of the universe, and those particles falling (or otherwise leaking through the atmosphere) onto Earth and into the water.

    Those of you who know me know I hate a double-truth, and the double-truth right now is that I am both fascinated and terrified by this concept. I’m not sure I’ll ever watch Ant-Man the same way again.

  • Ionospherics

    Credit: Hilariusmart®

    There was a geomagnetic storm today that knocked out all of the 10m+ bands of the Earth’s radio frequencies. While trying to investigate this via the waterfall plots on SDR servers earlier, I stumbled upon a ‘net’ taking place in North Texas wherein a group of people were triangulating a 40-mile-wide storm cell headed east.

    The kicker was that these operators were transmitting on the 2m band, given that the ionosphere was out of commission, whereas they would otherwise be on the HF bands (most of them held their general class HAM licenses).

    To top it off, the National Weather Service would interject every once in a while to get field reports from these operators on the aforementioned storm.

    So in essence, the sun attacked the Earth and rendered a layer of its atmosphere inefficient at reflecting specific vibrations, so these people recalibrated their vibration modulation tools to operate beneath that layer, keeping other people, including a Federal government agency, up to date on how the Earth’s atmosphere was impacting their region’s weather. With 125-year-old technology, no less.

    I find this absolutely fascinating.