Signal Intent

Author: Travis

  • [sign on] May 1 2026

    Today started with thunder and a notification from the National Weather Service about flash flooding in my area. So, we’re off to a great start this Friday morning.

    It’s currently thundering and raining as I write this, and to say that I am disappointed would be a lie. I’ve always been a fan of grey, rainy days, so this hits the spot for me. Currently sitting with a hot cup of coffee and some very nostalgic synthwave, and I just couldn’t be happier.

    Last month was an absolute whirlwind in terms of how many changes to the technological landscape occurred. It’s getting harder and harder to keep up with every new thing that happens. Something I’ll very likely write a long-form piece about in the future, but that is currently keeping me exhausted and, frankly, distracted.

    Disruption is a constant in my field. It keeps things interesting. But it also requires an immense amount of brain power. You’ve got to carve out the time to consume as much information as possible, process that information, and decide what to do with it. The problem is that by the time you’ve consumed the information and are moving into the processing stage, there’s already some other ground-shaking announcement or discovery that you’ve got to go understand.

    Don’t get me wrong: this is actually what I love about what I do here and over on Substack. Taking in large amounts of information, determining what it all means, and then formulating thoughts about those things to share with others. I would guess most writers feel like this is what they enjoy most about writing.

    But I am getting so tired of writing about technology in general. Aside from the fact that the gulf between what I can process and what I have to consume gets wider and wider, there’s just not a whole lot I can say that hasn’t already been said in the practitioner space. Everybody’s an AI expert now. Everybody understands what’s possible with it, and how to use it efficiently. And if I am being honest, the more I read about it, the more I see it in action, the less I want anything to do with it.

    So while I am certainly never going to stop formulating opinions and waxing poetic about what new leaps in technological advancement mean for humanity, I’m going to publish those pieces less frequently than I have been. Instead, I’m going to split the time between my non-fiction writing and my fiction writing.

    I’m more interested in telling stories anyway, not guessing what comes next.

    That’s me for now. It’s Friday, which means I’ll step away for a few days to decompress. Enjoy yourselves while you can before the heat arrives.

    Currently listening:

    Currently reading: Underworld by Don DeLillo

  • [sign on] April 29 2026

    Yesterday, I was able to get 1400 words out the door, so today will be a day of cleanup. I’ve published more in the past two months than I think I have at any point in my life. And I must say: it’s fulfilling.

    If there’s one takeaway I have from the velocity at which I’m moving, it’s that I don’t want to get pigeonholed into writing long-form technical essays. The beauty of this whole thing is that you can do whatever you want, write about whatever you want to write about.

    So, while I wait for notes on my larger project, I’m taking the time away from the keyboard to read good work. Literary fiction mostly, because that’s where the interesting things happen. If you’ve got anything you recommend as a detox from narrative non-fiction and science and technology, please send it my way.

    Nothing particularly interesting on the feeds this morning, so I’ll leave you with this:

    Currently reading: Underworld by Don DeLillo

  • A Post-Regression World

    Whether by active strategy or passive habit, commodification is being woven into every level of the modern structure. And it might be the best thing to happen to the industry.

    That feels counterintuitive, but as we sprint toward the boardroom’s utopia of faster time-to-production and lower human capital costs, the cybersecurity landscape is heading toward a distinction without a difference. Every vendor, every tool, and every output is converging on the same center.

    We’re already seeing it. Every knowledge worker recognizes the pattern in AI-generated output. We’ve even coined a term for it: AI slop. A bit of a glib phrase, to be sure, but it carries the weight of something we all sense and whose impact we haven’t yet triangulated.

    Regression to the Mean

    Generative AI is inherently prone to regression toward the mean. This is to be expected. Models trained on aggregated human data, constrained by capacity limits, compress the distribution of predictions—a phenomenon known as mode collapse. Researchers at Carnegie Mellon University confirmed this after running nearly 1,100 prompts through a benchmark exercise called NoveltyBench. And while they admit their methodology was prone to outliers, they concluded all frontier models, especially larger ones, underperform humans when it comes to distributional diversity.
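
    As a toy illustration of what “distributional diversity” measures: sample a model several times on the same prompt and ask how much the outputs actually vary. The benchmark’s real metrics are far more sophisticated; the helper functions and toy data below are my own invention, not NoveltyBench’s.

```python
from collections import Counter

def distinct_ratio(outputs: list[str]) -> float:
    """Fraction of unique outputs among all samples. A value of 1.0 means
    every sample differed; values near 1/len(outputs) suggest collapse
    toward a single mode."""
    if not outputs:
        raise ValueError("need at least one sample")
    return len(set(outputs)) / len(outputs)

def distinct_ngrams(outputs: list[str], n: int = 2) -> float:
    """Distinct-n: unique n-grams over total n-grams, pooled across all
    samples. A coarse lexical-diversity proxy common in generation work."""
    grams = Counter()
    total = 0
    for text in outputs:
        tokens = text.split()
        for i in range(len(tokens) - n + 1):
            grams[tuple(tokens[i:i + n])] += 1
            total += 1
    return len(grams) / total if total else 0.0

# A collapsed model repeats itself; a diverse one doesn't.
collapsed = ["the quick brown fox"] * 5
diverse = ["the quick brown fox", "a slow red dog", "one hungry grey cat",
           "two tired black birds", "ten happy green frogs"]
assert distinct_ratio(collapsed) == 0.2
assert distinct_ratio(diverse) == 1.0
```

    On measures like these, mode collapse shows up as scores sliding toward the bottom of the range even when the prompt explicitly asks for variety.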

    There’s a proposed academic name for this: Galton’s Law of Mediocrity. Sir Francis Galton found through experiments in heredity that extreme characteristics in progenitor peas tended to regress toward the population average in their offspring. The concept was re-christened in an October 2025 paper by researchers at 55mv Research Lab, Monash University, and Western Sydney University.

    Their study examined LLM creativity in a domain where creativity is non-negotiable–advertising. Using a two-phase evaluation framework, they found that models would first shed creative elements and drive toward succinct articulation of product points, then expand with lexically dense responses that appeared novel but lacked substance. In most cases, the models lost sight of the original messaging, and vivid ideas were distilled into generic facts. The researchers concluded that LLMs appear to inhibit human creativity rather than expand it.

    Both studies found that even with divergent prompting, this regression is a fundamental design characteristic. Training practices, large corpora, and decoding that favors higher-likelihood tokens all point to how we build models as the culprit. That may not be problematic in isolation, but the practices it enables might be.

    The Race to the Bottom

    Where regression manifests most visibly is in code. I’m not arguing that any two codebases are identical—I’m arguing that the playing field has leveled. AI is simultaneously raising the floor and lowering the ceiling, enabling the average user to ship software in hours instead of months.

    Tools like Malus.sh now allow users to create “clean room” versions of proprietary software via AI, free from copyright infringement. Virtually any application can be open-sourced overnight. And the question this raises isn’t so much about startup viability as it is about what differentiation means when anyone can reproduce your feature set over their morning coffee.

    Personally, I believe the answer is operational rigor. A sole proprietor with a Claude Max subscription can ship fast and match features, but will struggle to meet industry-standard SLAs and compliance. And if they turn to AI to solve those problems, they’ll just arrive at the same solutions as everyone else.

    This extends beyond code. MITRE ATT&CK vendor evaluation participation is dropping. Forrester notes differences in the EDR space are becoming increasingly marginal. Deepak Gupta traces the same pattern in benchmarks: traditional evaluations drive vendors toward a homogenization that mirrors Galton’s Law as applied to LLM outputs. Initial threat response, as IBM and Palo Alto Networks both report, now happens at line speed, so humans no longer need to operate at the response layer.

    The security industry has gone through this before. When antivirus signature databases became commoditized inputs, most AV companies were absorbed by larger players with EDR solutions or pivoted into multi-product plays. But something else happened in that transition. The professionals who could read the context around an alert and make a call about business risk became more valuable, not less. The ones who simply ran the tools became replaceable. AI is accelerating that pattern across the enterprise.

    The Judgment Gap

    Further research on this phenomenon converges on a paradox I touched on in my last article—the paradox of skill. As AI gets embedded deeper into business processes, the effects on human dependency hinge on whether automation substitutes for low-expertise or high-expertise tasks.

    A working MIT paper by David Autor and Neil Thompson suggests that when AI lowers expertise requirements, wages fall but more workers enter those roles. When AI raises the expertise requirement, wages rise but the qualified candidate pool shrinks. So what becomes the premium for human talent in an AI-augmented economy?

    It appears to be judgment. Prasad Setty, former Head of Google People Analytics and Stanford researcher, proposed at the Valence AI & The Workforce Summit that organizations are creating a judgment gap AI simply cannot fill. His theory is that the routine work AI automates is precisely the work where humans built pattern recognition, confidence, and professional instinct. When those jobs are offloaded to machines, the developmental pipeline for decision-making collapses. Therefore, the value that humans bring to the table shifts from intellectual capacity to judgment quality.

    The National Bureau of Economic Research describes this as the skill premium—where automating a high-value bottleneck skill enhances the productivity of workers with more common skills, making those workers more valuable.

    Now, here’s the paradox: the NBER also concluded AI augmentation discourages human learning, depleting general knowledge stock over time. AI dependency erodes cognitive confidence and creates further dependency. The more you need human judgment, the harder it becomes to develop it, so existing deep expertise becomes scarcer by the day.

    A Different Kind of Work

    If outputs are regressing to the mean and AI augmentation is hollowing out the developmental pipeline for judgment, what kind of work are we actually doing?

    In cybersecurity, that answer is shifting. The job most certainly becomes less about triaging incidents, determining response, and executing remediation, and more about ensuring organizational compliance through automation, architecting that automation, and aligning both to business continuity. In other words, management of systems, management of outputs, and management of the gap between what machines produce and what the business needs.

    This is where judgment stops being abstract. When automation triages a thousand alerts and resolves nine hundred of them, the remaining hundred require a human who understands the business well enough to determine which ones represent actual risk to the organization. Less a technical skill, and more of a contextual judgment call that no model can make, because the model doesn’t carry the organizational history, the regulatory obligations, or the risk appetite that inform the decision.

    Any CIO will tell you their budget isn’t dominated by tools—it’s labor: MSPs, MDR providers, consultants, retainers. As tools regress to the mean in terms of capabilities, any reduction in the cost to deliver outcomes through AI forces a reprice in the services layer. When vendors can’t compete on features, they compete on outcomes. And outcomes demand the judgment that will grow harder to develop.

    The Price of Judgment

    So, we see humans move up the stack. Now, what does that demand of them?

    Risk tolerance can’t be calculated in tokens. Trust models can’t be enforced by machines that don’t understand why they matter. We buy tools based not only on whether they work, but on whether we trust the people who built them. We maintain vendor relationships based on how they show up on a bad day. So, as tools and outputs homogenize, these judgments become the only variable that isn’t regressing.

    Autor and Thompson predict this bifurcation in their research. The proverbial middle compresses from both directions until it disappears. Those who understand how outputs are generated, what they mean, and how they should influence decisions—and who can articulate that to a boardroom or to a machine—will command premium positions. Those who offload their thinking will fill the rest.

    This is the net positive. Not because commodification is comfortable, but because it forces clarity. When the playing field levels for tools, code, and outputs, the only differentiator left is the one that can’t be reproduced: judgment from context, experience, and the willingness to be wrong. The market hasn’t priced that correctly in a long time, but it’s about to have to.

    The irony here isn’t that machines are replacing us. It’s that the work they can’t do is the work we are being encouraged to stop practicing. That gap gets smaller every day. And even more expensive to maintain.

  • [sign on] April 27 2026

    The weather is balmy here, and the mosquitoes have decided that it’s time to make being outside miserable. This will go on for the next four months. Precisely during the time my grass requires watering the most. I am not amused.

    On a slightly related note, the city about 150 miles southeast of me will run out of water within the next year. Which is unfortunate, because I actually love visiting every summer. But I’m slightly more concerned that this doesn’t seem to be getting much attention at all.

    Lots happening this week. Hoping to get a new article out the door, and to continue reading through some research I am doing for a project. So mostly reading, jotting down notes, and writing. More on that front later.

    Finally got through the introduction to DeLillo’s Underworld. This book is enormous, and I’m still not entirely sure what it is about. I’ve been intentionally staying away from recaps or reviews for it, since that’s what I did with White Noise and ended up loving it.

    So far, though I’m not sure what it’s about, I can see what DeLillo is setting up. Not plot-wise, but thematically. If my senses are correct, then I am excited for this thing to unfold. It will likely be the last winter book I read for the year. We’ll see.

    That’s me for today. Here’s some music for your commute:

  • [sign on] April 24 2026

    Hello, I am alive.

    It feels interesting to be posting back here. Invigorating, yet slightly disappointing. Disappointing mostly because I’ve been leaning into writing quite a bit this month, and I wish I’d done that earlier.

    My writing has been mostly technical in nature, for practical application in my career field. But I have started exploring some additional questions tangential to some of those applications, and I’m feeling inspired—hence invigorated.

    You should also expect more here, as I have given myself some mandates to publish more. I’ve also found time to read again, something that eluded me a bit mid-February.

    As I get older, I find myself really committing to the things I care about. Reading, thinking, and writing are three of those things, and I am going to allow myself the grace to make time for them. It’s not like we get any less busy at this point in our lives.

    Currently reading: Underworld by Don DeLillo

    Currently watching: The San Antonio Spurs vs The Portland Trailblazers

    Do something slowly and intentionally this weekend.

  • The Deskilling Paradox

    The industry is trying to sell us a half-true AI productivity story. These tools are undoubtedly making us faster at completing tasks, but that comes at a cost far beyond the time required for humans to verify machine outputs.

    Across five hundred companies and eight months, Professor Suproteem Sarkar at the University of Chicago Booth School of Business, in partnership with Cursor, tracked how developers actually used AI. As models improved, developers used them more. Perhaps even more revealing: they began offloading higher-complexity work to AI, allowing them to take on more ambitious projects previously beyond their immediate reach.

    None of this is particularly shocking in isolation. But a cognitive study from the same period tells a different story about the humans doing the producing–and the industry hasn’t reckoned with the tension between the two quite yet.

    The Cognitive Problem

    Sarah Baldeo, a researcher in AI and neuroscience at Middlesex University, publishing in Technology, Mind, and Behavior, found a correlation: the more people relied on large language models, the less they trusted their own reasoning. Participants who depended heavily on them were more likely to report that the tools were thinking for them–not with them. As Baldeo puts it: “It really doesn’t have to do with the tool itself.”

    So the tool isn’t the variable. The interaction pattern is. Users who questioned, rejected, and edited AI output maintained their reasoning confidence, whereas those who offloaded their thinking entirely lost it. The problem isn’t AI–it’s the way people are choosing to use it, and the way most products are designed to encourage that use.

    Grace Liu and colleagues from Carnegie Mellon, MIT, Oxford, and UCLA took this a step further. They ran three randomized controlled trials across 1,222 participants with a simple premise: give people AI assistance during a learning phase, then remove it without warning and measure what happens.

    After removal, AI-assisted participants solved 57% of the problems they were given, compared to 73% for the control group that never had access. That’s a 16 percentage point gap that emerged after roughly ten to fifteen minutes of use. But here’s the part that stuck out to me: AI-assisted participants didn’t just perform worse when the tool disappeared–they stopped trying altogether. Skip rates nearly doubled. Participants who used AI for direct answers saw their scores drop up to ten points below their own pretest baseline. The control group improved by 1%.

    One line from the paper captures this dynamic precisely: “AI systems are fundamentally short-term collaborators: extraordinarily helpful in the moment, but indifferent to what that help does to the person receiving it over time.”

    So, here we see productivity going in one direction while comprehension goes in the other. And the gap between them is growing. It’s worth noting that these findings come from controlled settings, not enterprise-level production environments. However, if ten to fifteen minutes of AI assistance in a lab can produce a 16-point performance drop, the implications for developers using these tools eight hours a day deserve more scrutiny than they’re getting. In most contexts, that divergence is a management problem worth monitoring. As a cybersecurity practitioner, I see it as something closer to a structural crisis.

    The Hinge

    Security researcher Mohan Pedhapati, CTO of Hacktron, demonstrated the economics of AI-assisted offense. Pedhapati leveraged Claude Opus 4.6 to generate a full working exploit for CVE-2026-5873, an out-of-bounds vulnerability in Chrome 138’s V8 JavaScript engine. It only cost him $2,283 in API fees and 2.3 billion tokens over twenty hours.

    Pedhapati didn’t write any exploit code. Instead, he spent those hours redirecting the model when it hit dead ends, cutting off unproductive lines of attack, and judging which paths seemed promising enough to pursue. Opus did the heavy lifting; Pedhapati supplied the strategic reasoning.

    The independent judgment about what to pursue and discard–that ability to leverage intuition and form hypotheses from it–is precisely the cognitive faculty the Baldeo and Liu studies show eroding under heavy AI use. The one input that AI can’t yet generate on its own is degrading in the population most responsible for defense.

    Pedhapati’s $2,283 investment would have netted roughly $15,000 in combined bug bounties–a 6.5x return before accounting for his time. But models improve and API costs decline, and Pedhapati’s time spent was largely a learning curve that his next attempt won’t require. So, theoretically, the next exploit will be cheaper. In Pedhapati’s words: “Eventually, any script kiddie with enough patience and an API key will be able to pop shells on unpatched software.” Script kiddies already exist. AI lowers their barrier to entry (which feels ironic to say).

    The economics favor the attacker. And the one thing separating an experienced exploit developer from a kid with a credit card is the same capability that the cognitive research shows eroding.

    The Arithmetic

    Oxford philosopher Toby Ord, known for his work on existential risk, recently broke down AI agent costs by the hour and, surprise, the productivity narrative doesn’t survive the math.

    On the surface, the numbers tell a compelling story. Some AI agents operate at roughly $0.40 per hour, whereas a human software engineer costs around $120. The efficiency argument writes itself. But Ord’s deeper finding is that costs compound as task duration and complexity increase. For example, GPT-5 costs $13 per hour for forty-five-minute tasks but $120 per hour for two-hour tasks, in line with human labor. O3 reaches $350 per hour, or nearly three times the cost of a human engineer. Ord concludes that what looks like progress is “increasingly lavish expenditure on compute,” not sustainable capability gains.
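
    Ord’s figures imply a sharply superlinear relationship between task length and cost. As a back-of-the-envelope check, if we assume per-task cost follows a power law in duration (my assumption for illustration, not a claim Ord makes), the two GPT-5 data points quoted above pin down the exponent:

```python
import math

# Two data points quoted from Ord's analysis (GPT-5): effective hourly
# rates at two task durations. Total task cost is rate * duration.
d1, rate1 = 0.75, 13.0    # 45-minute tasks: $13/hour
d2, rate2 = 2.0, 120.0    # 2-hour tasks: $120/hour
c1, c2 = rate1 * d1, rate2 * d2   # per-task totals: $9.75 and $240

# If per-task cost scales like duration**k, the two points imply:
k = math.log(c2 / c1) / math.log(d2 / d1)
print(f"implied cost exponent k ~ {k:.2f}")   # well above 1: superlinear

# Crude extrapolation to an 8-hour task under the same scaling:
c8 = c1 * (8 / d1) ** k
print(f"extrapolated 8-hour task: ${c8:,.0f} (${c8 / 8:,.0f}/hour)")
```

    Two points can’t establish a power law, so treat the extrapolation as directional only. But the direction is the point: hourly cost climbs with horizon, which is why long-duration work sits on the expensive end of the curve.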

    You can’t secure an organization in forty-five-minute bursts. Defense lives in the long-duration, high-complexity tail of Ord’s analysis where AI costs approach or exceed human costs. Offense, on the other hand, is composable from the cheap-end of the curve–one brief, targeted interaction is all an attacker needs.

    The PRT-Scan campaign shows what cheap offense looks like at scale. A single threat actor, using six GitHub accounts, initiated over five hundred malicious pull requests over the course of a few weeks. Although the success rate was below ten percent, the campaign still compromised two npm packages (@codfish/eslint-config and @codfish/actions) and enumerated API keys and tokens for platforms like AWS, Cloudflare, and Netlify.

    But the breach isn’t the point–it is the evolution of the attack. In phase one, researchers noticed raw bash scripts targeting small repositories. By phase three, AI-generated wrappers were dynamically identifying each target’s language, framework, and CI configurations. Each iteration of the attack was more idiomatically convincing than the last, and the entire campaign cost virtually nothing.

    This is the arithmetic that breaks the productivity story. Comprehensive defense at scale costs disproportionately more than iterative offense at volume, and the gap is widening as AI makes low-skill, high-volume attack campaigns trivial to execute.

    The Loop

    In practice, most AI-driven productivity gains expand the attack surface. Each new line of code, each additional dependency, every expanded feature demands understanding, testing, and defense. Otherwise, the rational response to an attack surface you can no longer comprehend is to delegate. And that’s what some organizations are doing. But that delegation erodes the human capacity to understand what’s being defended. And as understanding erodes, the systems we build become less carefully examined, not because the code is worse in an obvious way, but because our capacity to foresee or foreclose vulnerability quietly disappears. Each cycle widens the gap between what we produce and what we comprehend.

    This might read as a slippery slope. It isn’t. Each step in the cycle has independent evidence behind it, and the components are individually documented. The question is whether anyone is modeling this system-level dynamic, and I haven’t found evidence that they are.

    That’s a hard thing to sit with. The productivity gains are real. Developers really are doing more ambitious work. The capability expansion is real, and the economic value is measurable. The answer is neither to reject AI nor to celebrate productivity without measuring what it erodes.

    I don’t have a confident answer to where this all leads. Anyone who does is either selling something, hasn’t spent enough time with the data, or is seeing something I’m not. The evidence says we’re optimizing for the wrong thing. We shouldn’t be measuring output–we should be measuring understanding. The question I keep arriving at isn’t whether this trajectory leads somewhere dangerous. It’s what happens when we lose the first principles knowledge required to maintain and defend the systems we’re building alongside AI–and whether anyone will notice before it’s gone.

  • After the Chain: Trust as Constellation

    In past articles I’ve largely argued that, with the emergence of quantum computing and its intersection with generative AI’s sprint toward AGI, digital trust is failing at the layer where accountability is supposed to live.

    Both pieces arrived at the same conclusion: a chain is the wrong model, and we don’t have a replacement.

    That’s not entirely true. We don’t have a replacement fully articulated. But we have the pieces of one, and they are scattered across fields that don’t currently talk to one another. What follows is an attempt to assemble those pieces into something coherent. Not because I think I’ve solved the problem, but because I think the solution is starting to take shape, and if we wait to name it, I think we will keep investing in the wrong solutions.

    The Reframe

    The proposition is this: trust, in a world of autonomous systems and probabilistic computing, can no longer be linear. Accountability can’t go through a series of links in a chain and depend on each link to vouch for the other. It has to emerge from patterns we learn to recognize as opposed to signatures we verify. Almost like a constellation.

    A chain locates trust in a system. A signature is verified, a certificate is checked, a commit gets audited, and if each check holds, then the chain holds. The trust exists in the artifact, independent of the observer. We inherited this model from history: notarization, custody-of-evidence, institutional verification. Cryptography just gave us a stronger math for each of these links.

    A constellation, on the other hand, locates trust in the relationship between the system and the observer. Cassiopeia is real in the sense that it is a useful navigational tool, but it is not native to the stars themselves. It’s a structure the observer brings, and it is only clear when you know what you’re looking for. Constellations do not depend on a single star to be verified. Even if one star is dimmer than you expect, or you misidentify one, the constellation still holds, because it was always a pattern, not a proof.

    I think this is the move we have to make. We have to stop asking whether a single artifact is trustworthy and start asking whether a system’s behavior forms a pattern consistent with what it claims to be.

    The Philosophical Vernacular

    This is a significant paradigm shift I don’t want to gloss over.

    Under a chain model, trust is a property of the system. Under a constellation model, trust is a capacity in the observer. It is something we cultivate, practice, and sometimes are bad at. For example, two people looking at the same night sky will see different constellations based on what they’ve been trained to see. The same will be true of digital systems. Trust will become perceptual, unevenly distributed, and require genuine investment to develop.

    This sounds like a regression, and in some ways it is. We are letting go of the fantasy that trust can be certified. And that’s okay, because that fantasy was always fragile. Every form of digital governance has depended on the institutions we chose to believe in, and that was never as rooted in mathematics as we pretended. The chain model was only going to work as long as the systems it governed were small enough that verification was possible in principle. We are past that point.

    We are also gaining something in return. A constellation model is honest about what trust has always been: a social, contextual achievement that depends on observers who know how to look. It treats trust literacy as something we teach as opposed to something we assume. It accepts that trust does not have binary answers and is never going to.

    Compatibilism, which I talked about in my determinism piece, sits comfortably here. We can finally stop arguing about whether a system is deterministic or autonomous and start asking whether its behavior, observed across many different vantage points, forms a coherent pattern we can recognize and reason about. Whether that coherence is derived with math or something else doesn’t matter as much anymore, because the pattern is the pattern.

    The Technical Vernacular

    None of this is useful if it can’t be built.

    Trust, in a constellation model, is a statistical property of a behavioral distribution. It is inferred from many different observations, and becomes actionable when that distribution shows coherence across time, context, and the observer. The words here are important.

    The statistical property isn’t a flag on an artifact; it is a characterization of how the artifact behaves.

    In the behavioral distribution, we are not observing a single action but the shape of many actions, asking whether that shape is consistent with what the system claims to be, its purpose, and its constraints.

    And the many different observations are overlapping, partially redundant vantage points whose agreement is itself the evidence. One observer can be fooled, but a hundred who agree on a pattern can’t be without all one hundred being compromised. That is a much harder attack surface than a forged digital signature.

    Coherence is the mathematical analog of what we used to call integrity. Simply put, a system whose behavior forms a recognizable pattern for a week and suddenly diverges is not trustworthy.

    Notice that the framing doesn’t require interpretation. We don’t need to understand why a system behaves the way it does; we just need to characterize its behavior well enough to recognize when it strays from its path. Kind of like an immune system, or how humans have always trusted one another. We trust people whose behavior patterns we recognize, and lose that trust when that behavior changes.
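
    A minimal sketch of what coherence of a behavioral distribution could mean in practice: learn a baseline from repeated observations of a system, then flag any new observation that falls outside the learned pattern. Every name, feature, and threshold below is a hypothetical illustration, not a proposed standard.

```python
import statistics

def baseline(windows: list[list[float]]) -> tuple[list[float], list[float]]:
    """Per-feature mean and stdev across baseline observation windows.
    Each window is a feature vector, e.g. [requests/min, error %, logins]."""
    means = [statistics.mean(col) for col in zip(*windows)]
    devs = [statistics.stdev(col) for col in zip(*windows)]
    return means, devs

def coherent(obs: list[float], means, devs, z_max: float = 3.0) -> bool:
    """A new observation is coherent if every feature sits within z_max
    standard deviations of the baseline: the pattern still holds."""
    return all(
        abs(x - m) <= z_max * (d or 1e-9)
        for x, m, d in zip(obs, means, devs)
    )

# A week of baseline behavior for one service: [requests/min, error %, logins]
history = [[100, 1.0, 3], [104, 0.8, 2], [98, 1.2, 4], [101, 0.9, 3], [99, 1.1, 2]]
m, s = baseline(history)
assert coherent([102, 1.0, 3], m, s)        # same shape: trusted
assert not coherent([310, 9.5, 40], m, s)   # sudden divergence: flag it
```

    Each witness in a distributed network could run something like this independently over its own vantage point; trust would then come from a quorum of witnesses agreeing that the pattern holds, not from any single check.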

    Actual Implementation

    We don’t have the infrastructure to build this constellation yet. It would require fingerprinting at scale, distributed witness networks, models trained specifically on coherence, and a principled way of combining observations to produce high-confidence signals.

    I don’t even think we have the legal and regulatory vocabulary for this yet.

    More importantly, the compute power this would require is enormous. But that’s in line with the confluence of quantum computing and the race to AGI.

    Thankfully, we have the foundations to build this right now. We have telemetry; we just need to reframe it for behavioral characterization. We can start doing that today.

    We are seeing decentralization in identity work, trust registries, attestation networks, and beyond. But this work is a slow burn, and it shows too much deference to the very chain these efforts are meant to replace.

    Foundationally, we also already have the pattern literacy developing in the processes and tools that we are building to protect against autonomous threats. We need to study it, formalize it, and teach it as vigorously as we can.

    The thing I’m most skeptical about is the cultural change required for this new mode of thinking and operating. The chain model is deeply ingrained in our DNA, so moving to a model where accountability emerges from patterns instead of from authority is going to provoke resistance from every organization whose business depends on it being the authority. And this is not something we can solve with technology.

    The Pattern We Haven’t Named

    I want to end where I started, by admitting that I don’t fully know what I’m describing. The constellation is a metaphor, and metaphors are scaffolding, not buildings. The actual architecture will look much different from what any of us can imagine in advance.

    But I think the shape is right. Trust has always been a pattern we recognized more than a property we verified, and the chain model was an attempt to formalize that recognition. It’s worked well for a long time, but our systems are quickly outgrowing it, and no amount of better math or audits will close the gap.

    What comes next is harder. We will be asked to give up the promise of certainty and replace it with the discipline of perception. We will actually have to treat trust as a relationship instead of a credential.

    This is the right direction though, and I think the honest version of this moment is to say it out loud, even if it’s been said before.

    The stars don’t form constellations on their own, after all. We have to learn to see them.

  • [sign on] January 8 2026

    The turn of a new year always brings a tinge of hope with it. Historically I have found a kind of new motivation whenever this hope manifests itself. This was by no means my quietest New Year’s Eve: I had a short celebration with some extended family, had some tea while I read a book, and kissed my wife when the ball dropped. It was perfect.

    But I didn’t feel the need to try and reinvent any certain aspect of my life. To be completely honest, this is comforting to me. I’ve never had a new year where I didn’t have a resolution and I’m taking this as a sign of personal growth. Being content with who you are and where you are in life is something many of us put an enormous amount of time, effort and money into realizing. I am not immune. Now I’ve come to this point and I am happy to be here; although a part of me knows this feeling is fleeting.

    In 2025 I read seventeen books. While I’m not setting a “number-of-books-read” goal this year, I am setting a goal of intentionality. The trend perpetuated by apps like Goodreads or Fable of “reading goals” has, at times, driven me to simply get pages behind me as opposed to getting ideas burned into my memory. To that end, I’ve begun annotating books more, a practice I started late last year during my read-through of Karamazov. I’m finding that tabbing the passages, ideas, and lines that stand out to me as I read, and jotting my initial thoughts in the marginalia, forces me to come back to them when I sit with a book once it’s completed.

    However, the real magic happens when I sit with a book after I’ve completed it. I’m purposely trying not to read other reviews or user thoughts online so that I can pull my own themes out of each book, wrestle with them on paper, and come out the other side with a coherent grasp of each piece. This has been the most impactful change to my reading I’ve made since November, and it is something I intend to continue. Some of that might end up here, some won’t; you’re welcome to follow along.

    Currently reading: My Struggle (Book 1) – Karl Ove Knausgaard

    Currently listening:

    Happy New Year!

  • The Brothers Karamazov by Fyodor Dostoevsky

    There’s not much I can say about this book that hasn’t already been said a hundred times before. I finished The Brothers Karamazov after a month of picking it up and putting it down repeatedly for a number of reasons. But I was determined to finish it because it’s the perfect time of year for heavy classic Russian literature.

    I ask myself “What is hell?” And I answer thus: “The suffering of being no longer able to love.”

    I loved every moment of this book. Dostoevsky is a master of his craft. From the depth he gives each character and situation, to the huge questions he beckons you to wrestle with, to the allegory of nearly every plot point, everything is deliberate and sprawling, and it reads so naturally.

    It’s clear that the story itself is just a conduit for most of the philosophy Dostoevsky explores: morally broken characters wrapped up in a heinous event, each coming to terms with their responsibility for it, and the depth of despair each one falls to largely determined by the position they are meant to occupy in the broader discussion.

    I say ‘discussion’ in the singular, but this book tackles so many themes: greed, lust, murder, and justice, to name a few. Still, the central question is whether humanity can be good without faith. The ‘Grand Inquisitor’ is often called the book’s most significant moment, and I agree. The takeaway is that without a basis in faith, we lose the ability to calibrate our morality. If there is no God, then ‘everything is permitted’ because the very definition of ‘good’ disappears.

    However, I am more inclined to cite the discussion between Ivan Fyodorovich and Lucifer later in the novel as my personal standout. Lucifer argues that Ivan’s nihilism is self-defeating because, without evil, there can be no definition of good. Without giving ‘the good’ a tangible representation, we find ourselves at odds with our innate consciousness—which I define here as our epistemic intuition that good exists and must be defined through our pursuit of it.

    “…What good is faith by force? Besides, proofs are no help to faith, especially material proofs. Thomas believed not because he saw the risen Christ but because he wanted to believe even before that.”

    Reconciling what we know to be good with the fact of its existence, Dostoevsky argues, relies on a spiritual underpinning, yet we aren’t really required to complete that reconciliation. We have a natural desire to be good, we try to be good, we aren’t always good, but in the pursuit of the good we create the beauty necessary for an otherwise ugly world.