September 2025

An Analytics Renaissance is Coming

AI Won’t Fix Your Analytics. Building a Decision Engine Will.

At the heart of the current shift in technology is a simple but powerful idea: what we can measure, AI can master. When you measure right and aim true, AI wins. Clarity is the unlock.

We see this in games like Go and chess, where models outperform humans because the objectives are clear, the feedback is instant, and the rules are defined. Business, of course, is not so tidy. It’s noisy, shifting, and deeply human. But even here, an era is emerging where AI won’t just assist with analysis; it will actively participate in the decision-making loop. The companies that rewire their analytics for this new reality will outlearn, outbuild, and ultimately outcompete the rest.

The Hidden Bottleneck

For years, companies have spent millions on data lakes and dashboards, and now they’re assuming this gives them a head start in the AI race. The reality is that most of that data was never designed for AI. It was built either to run applications or for humans to slice and dice in reports. This is a key reason why so many AI initiatives stall out: the foundation isn’t ready for them.

This has left most analytics teams stuck in a reactive cycle. A stakeholder asks a question, an analyst pulls data, and the answer goes into a deck. This loop is slow, brittle, and designed for human conversation, not machine action.

But that’s starting to change. A quiet revolution is underway, driven by AI-powered workflows. A new kind of analytics stack is emerging, one that turns static metric trees into living systems. In this world, dashboards aren’t endpoints; they are starting points. AI agents don’t just surface insights; they diagnose issues, generate hypotheses, and propose interventions. And over time, they will do more than propose. They will act.

The Unchanging Foundation: Why Core Analytics Still Matters

Even as AI becomes more capable, the fundamentals of good analytics are timeless. The basic purpose of data, to measure the business accurately, is still table stakes. The real shift is moving from data built for human reporting to data built to power automated action. The latter demands a whole new set of qualities. It all starts with the four basics of any reliable data system:

  1. Measure what matters, not just what’s easy. Don’t just count clicks. Measure what truly reflects progress toward your goals.
  2. Get the numbers right. Your data must be clean, your definitions unambiguous, and your team aligned on what each number means.
  3. Understand how things connect. Know how your metrics influence one another. If one goes up, what else should move with it, and why?
  4. Know which levers actually matter. Distinguish the actions that cause change from the surrounding noise. You can’t improve what you don’t understand.

These steps are the bedrock. But data built only for reporting can get away with being slow and reliant on a human to provide the final layer of context. Automated systems are far more demanding. To power intelligent agents, your data needs its business context encoded so a machine can understand it. It needs to be easily navigable so an agent can connect dots across domains. And often, it needs to be in real time, because an insight about an abandoned cart is useless five minutes later.

This is where building out your intellectual infrastructure becomes critical. This includes things like comprehensive metric trees, a clear set of guardrail metrics (the things you can’t break), and a registry of your core user segments. Think of it as creating a structured map of your business logic. This helps your team:

  • Align on what success looks like.
  • See how actions tie back to outcomes.
  • Spot problems with greater precision.
  • Give AI tools a clear, rich, and structured target to optimize.

In the past, analysts had to trace these connections by hand. Now, you can build a system where AI agents can move inside that logic because you’ve encoded the rules of the game.
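To make “encoding the rules of the game” concrete, here’s a minimal sketch of what a metric tree might look like in code. The metric names, owners, and relationships are hypothetical; the point is that definitions, drivers, and guardrails live in a structure a machine can walk, not in a slide deck.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """One node in the metric tree: a definition a machine can navigate."""
    name: str
    definition: str       # unambiguous business definition, agreed on by the team
    owner: str            # who gets paged when it moves
    drivers: list[str] = field(default_factory=list)  # child metrics that feed this one
    guardrail: bool = False  # "can't break" metrics are flagged explicitly

# Hypothetical tree: weekly retention is driven by onboarding and engagement.
METRIC_TREE = {
    "weekly_retention": Metric(
        "weekly_retention",
        "Share of users active in week N who return in week N+1",
        "growth-team",
        drivers=["onboarding_completion", "sessions_per_user"],
    ),
    "onboarding_completion": Metric(
        "onboarding_completion",
        "Share of signups completing the onboarding quiz",
        "activation-team",
    ),
    "sessions_per_user": Metric(
        "sessions_per_user",
        "Mean sessions per weekly active user",
        "engagement-team",
        guardrail=True,
    ),
}

def upstream_drivers(metric: str) -> list[str]:
    """Walk the tree so an agent can ask: if this metric moved, what feeds it?"""
    found = []
    for child in METRIC_TREE[metric].drivers:
        found.append(child)
        found.extend(upstream_drivers(child))
    return found

print(upstream_drivers("weekly_retention"))
# ['onboarding_completion', 'sessions_per_user']
```

Once the tree exists in this form, an agent tracing a retention drop doesn’t need a human to tell it which upstream metrics to check; the structure itself answers the question.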

Rewiring the Loop: An Example

Let’s make this real. Imagine you’re on the data team at a fast-growing consumer tech company. You have dashboards and talented analysts, but insights take too long, and teams are asking you to “tell us what to do” and “help us be more strategic,” not just “what happened.” A tale as old as time.

Now, imagine retention for a key Gen Z cohort suddenly starts dropping.

In most companies, a slow, manual process kicks off. Someone eventually notices the drop in a dashboard. Analysts scramble to segment the data. The team debates causes. An experiment might get designed weeks later. The whole cycle is inconsistent and relies on heroic effort.

But that cycle doesn’t have to be your ceiling. The alternative isn’t magic; it’s deliberately designing your systems for AI consumption from the start. The individual tools, like anomaly detection and root cause analysis, aren’t new. What’s new is our ability to integrate them into a cohesive, high-speed workflow. The core idea is simple: the more of the process you can instrument, and the more context an AI can consume, the more of the execution it can own.

Here’s how that same scenario could play out, step-by-step:

1. A metric drops.

  • The Conventional Workflow: A human notices a dip in a weekly dashboard, days late.
  • Designing for AI Consumption: The metric tree is instrumented with automated anomaly detection. The goal is to pipe structured alerts into operational channels (like Slack) with clear ownership, creating a real-time signal a machine can intercept.
  • The Near Future: Agents will monitor these signals continuously, surfacing urgent issues with pre-built context on business impact.
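As an illustration of step 1, here’s a minimal sketch of what a machine-interceptable signal could look like: a rolling z-score check that emits a structured alert payload rather than a pixel on a dashboard. The metric name, threshold, and owner are assumptions for the example.

```python
import json
import statistics

def detect_anomaly(history: list[float], latest: float, z_threshold: float = 3.0):
    """Flag the latest observation if it sits far outside the recent trend."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (latest - mean) / stdev if stdev else 0.0
    if abs(z) < z_threshold:
        return None
    # Structured payload: an agent (or a Slack webhook) can parse this directly,
    # instead of a human noticing a dip in a chart days later.
    return {
        "metric": "weekly_retention",   # hypothetical metric name
        "value": latest,
        "z_score": round(z, 2),
        "owner": "growth-team",         # clear ownership, per the point above
        "severity": "high" if abs(z) > 5 else "medium",
    }

retention_history = [0.42, 0.41, 0.43, 0.42, 0.42, 0.41, 0.43]
alert = detect_anomaly(retention_history, latest=0.35)
if alert:
    print(json.dumps(alert, indent=2))  # in production, POST this to an ops channel
```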

2. The issue is traced to a group: Gen Z iOS users from TikTok.

  • The Conventional Workflow: An analyst runs manual segmentation with inconsistent definitions.
  • Designing for AI Consumption: Knowledge is codified in a segmentation registry. This makes the business context of “who your users are” explicitly available for automated systems, translating human knowledge into a machine-readable format.
  • The Near Future: Agents will reference this registry to instantly isolate affected groups and route the issue.
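For step 2, a segment registry can be as simple as named, machine-evaluable predicates. The segment definitions below are invented for illustration (including the Gen Z birth-year cutoff), but the shape is the point: “who your users are” becomes something code can resolve the same way every time.

```python
# A minimal sketch of a segment registry: each segment pairs a stable name
# with an explicit, machine-evaluable predicate. Names and fields are
# illustrative assumptions, not a canonical schema.

SEGMENT_REGISTRY = {
    "gen_z_ios_tiktok": {
        "description": "Gen Z users on iOS acquired via TikTok",
        "predicate": lambda u: (
            u["birth_year"] >= 1997
            and u["platform"] == "ios"
            and u["acquisition_channel"] == "tiktok"
        ),
    },
    "power_users": {
        "description": "Users with 5+ sessions in the trailing week",
        "predicate": lambda u: u["weekly_sessions"] >= 5,
    },
}

def members(segment: str, users: list[dict]) -> list[dict]:
    """Resolve a named segment against a user list with one shared definition."""
    return [u for u in users if SEGMENT_REGISTRY[segment]["predicate"](u)]

users = [
    {"id": 1, "birth_year": 2002, "platform": "ios",
     "acquisition_channel": "tiktok", "weekly_sessions": 2},
    {"id": 2, "birth_year": 1988, "platform": "android",
     "acquisition_channel": "search", "weekly_sessions": 7},
]
print([u["id"] for u in members("gen_z_ios_tiktok", users)])  # [1]
```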

3. The root cause is found: a drop in quiz completion during onboarding.

  • The Conventional Workflow: Funnels are inconsistent or built ad-hoc.
  • Designing for AI Consumption: The team maintains clean, canonical funnel instrumentation. This makes critical user journeys permanently legible to machines, providing a stable map for an AI to analyze.
  • The Near Future: Agents will analyze drop-offs, correlate them to recent changes (like code deploys), and suggest likely causes.
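For step 3, canonical funnel instrumentation might look like the sketch below: the onboarding steps are declared once, in order, and every drop-off analysis reads from the same map. Step names and counts are hypothetical.

```python
# A sketch of canonical funnel instrumentation: the onboarding steps are
# defined once, in order, so any analysis (human or agent) reads the same map.

ONBOARDING_FUNNEL = ["signup", "profile_created", "quiz_started", "quiz_completed"]

def dropoff_report(step_counts: dict[str, int]) -> list[tuple[str, float]]:
    """Conversion rate between each adjacent pair of canonical steps."""
    report = []
    for prev, curr in zip(ONBOARDING_FUNNEL, ONBOARDING_FUNNEL[1:]):
        rate = step_counts[curr] / step_counts[prev]
        report.append((f"{prev} -> {curr}", round(rate, 3)))
    return report

counts = {"signup": 10_000, "profile_created": 8_200,
          "quiz_started": 6_900, "quiz_completed": 3_100}
for step, rate in dropoff_report(counts):
    print(step, rate)
# quiz_started -> quiz_completed stands out at ~0.449: the drop described above.
```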

4. Experiment ideas are proposed.

  • The Conventional Workflow: Teams brainstorm from scratch, relying on scattered documents and memory.
  • Designing for AI Consumption: Experiment memory is centralized in a structured, navigable knowledge base. This turns past learnings from conversations into a persistent asset an AI can query.
  • The Near Future: Agents will search this repository to suggest interventions with the highest probability of success.
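For step 4, experiment memory only has to be structured enough to query. A minimal sketch, with invented records:

```python
# Centralized experiment memory: each past test is a structured record an
# agent can filter, instead of a learning buried in a slide deck.

EXPERIMENTS = [
    {"name": "shorter_quiz_v1", "surface": "onboarding",
     "hypothesis": "Fewer quiz questions lift completion",
     "result": "win", "lift": 0.06},
    {"name": "progress_bar", "surface": "onboarding",
     "hypothesis": "Visible progress reduces abandonment",
     "result": "flat", "lift": 0.00},
    {"name": "push_reminder", "surface": "retention",
     "hypothesis": "Day-3 push nudges dormant users back",
     "result": "win", "lift": 0.03},
]

def prior_learnings(surface: str, min_lift: float = 0.0) -> list[dict]:
    """Query past experiments on a surface, best results first."""
    hits = [e for e in EXPERIMENTS
            if e["surface"] == surface and e["lift"] >= min_lift]
    return sorted(hits, key=lambda e: e["lift"], reverse=True)

for e in prior_learnings("onboarding"):
    print(e["name"], e["result"], e["lift"])
```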

5. An experiment is designed and scoped.

  • The Conventional Workflow: A slow, manual back-and-forth of writing docs and tickets.
  • Designing for AI Consumption: Reusable templates embed best practices, creating a structured input that is easier for a system to parse and eventually generate.
  • The Near Future: Agents will draft design docs and tickets automatically, pulling in all relevant context.

6. The experiment is monitored.

  • The Conventional Workflow: Manual check-ins lead to tests that run too long or are misinterpreted.
  • Designing for AI Consumption: Automated monitoring platforms with pre-set thresholds turn monitoring into a deterministic, observable system.
  • The Near Future: Agents will monitor results in real time and summarize outcomes the moment a test reaches significance.
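For step 6, “deterministic monitoring” can be as simple as a standard significance test run against a threshold fixed before launch, rather than ad-hoc eyeballing. The sketch below uses a two-proportion z-test with hypothetical numbers:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

Z_THRESHOLD = 1.96  # pre-set before launch: two-sided 95% confidence

z = two_proportion_z(conv_a=310, n_a=1000, conv_b=368, n_b=1000)
if abs(z) >= Z_THRESHOLD:
    print(f"z={z:.2f}: significant -- summarize the outcome and stop the test")
else:
    print(f"z={z:.2f}: keep running")
```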

7. It works. Now what?

  • The Conventional Workflow: Learnings are lost, and the value of the work evaporates.
  • Designing for AI Consumption: A closed-loop system is in place. A winning experiment automatically updates metrics and roadmaps, putting the outcomes back into the operational flow.
  • The Near Future: Agents will detect the lift, link it to strategic goals, and recommend follow-up experiments automatically.

The Mindset Shift: From Dashboards to Decision Engines

The thread connecting these steps is a fundamental shift in mindset. Many of the best companies today have already moved beyond static dashboards. They have strong, insight-driven teams that operate in a tight, effective loop of analysis, decision, and action. But even this highly effective “human-in-the-loop” model has a ceiling; it relies on heroic effort and scales only as fast as you can hire and train brilliant people.

The next evolution is moving from a human-driven loop to a system-driven one. You are no longer just building assets to support your team’s decisions; you are architecting an engine that makes many of those decisions alongside them. By doing the hard foundational work of defining metrics, segments, and logic, you create a system where AI agents can operate effectively.

This is how you solve the context problem. AI isn’t the hard part. Encoding your business context so a machine can use it is. And if your company is still early in its analytics maturity, that’s not a weakness. It’s a strategic advantage. You can skip the debt of legacy systems and build a foundation for action from the start.

A New Division of Labor: Human Judgment, AI Acceleration

So where does this leave us? The goal isn’t to replace humans, but to eliminate the work that prevents them from being strategic. The work naturally splits into two clear camps.

AI excels at:

  • Monitoring metrics and routing issues
  • Diagnosing anomalies and suggesting causes
  • Generating hypotheses and drafting experiments
  • Writing and monitoring project tickets
  • Synthesizing test results and recommending next actions

Humans still lead on:

  • Defining the right objectives and strategic constraints
  • Interpreting ambiguous or conflicting signals
  • Designing novel strategies from first principles
  • Making complex tradeoffs between competing metrics
  • Building trust, alignment, and influence across the organization

The best analysts of the future will be those who combine taste, judgment, and systems thinking, and then use AI as a force multiplier to accelerate every turn of the loop.

From the Sidelines to the Frontier

For the past few years, I was in venture capital. From the outside, it’s a front-row seat to innovation. You meet brilliant founders and spend a lot of time thinking about and talking about the future. But you don’t build it yourself. Venture, for me, was too much a business of filtering ideas and persuading others to believe in far-fetched stories.

Insight doesn’t come from a thousand slide decks; it comes from a thousand decisions. The feedback loops in venture are long and noisy. You often don’t know if you were right for years. Somehow, 75% of the people you speak with think they’re going to be top-quartile performers.

Operating is different. It’s rigorous, fast, and relentlessly real. The results of your choices show up in hours, not quarters. You build systems, ship products, drive behavior, and watch what moves. And when things don’t move, you fix them. The true frontier is inside companies, in the chaos of real decisions and the daily grind of turning noise into action. That’s where the real learning happens. That’s where this new era is taking shape. And that’s where I want to be.

The Race for Clarity

The promise of AI in analytics is real, but its power won’t be unlocked by a new tool. It will be unlocked by the deliberate, foundational work of evolving your data from a tool for reporting into an engine for action.

The competitive landscape is being redrawn. For the last decade, the winners were the companies best at manually iterating through the insight-to-action loop. But the next winners won’t just be the ones with the sharpest analysts; they will be the ones that build systems to amplify that talent at scale. The race is no longer just for insight, but for the fastest, most intelligent decision engine. For data leaders, this is the moment to flip the script: to stop being a cost center blamed for failed projects and start being the strategic growth engine that makes AI a reality. The future belongs to those who build it with discipline, today.

Don’t Outsource Your Thinking

It feels good to be writing in public again.

My time spent in the world of capital allocation, whether in venture or at a hedge fund, taught me that writing is often more about institutional messaging than personal inquiry. The layers of compliance and strategic consideration are necessary, of course, but I found they can temper the spirit of open exploration. The liberty to now write for myself, to simply think in public, is a freedom I’m excited to reclaim.

So, I’m back. And I thought I’d start by sharing a bit about the evolution of my process: how I think, how I write, and how I’m learning to partner with Artificial Intelligence.

Why I Write: The Search for Clarity

Since 2011, I’ve kept a running notebook (physical and digital) that has become a sprawling archive of my brain. It’s filled with notes, reflections, and half-baked essays on everything from statistical models and product strategy to being a better father and husband.

This started as a simple habit, but over time, I’ve found it’s become essential for my own mental well-being. Writing is the most reliable tool I have for untangling complexity. When I’m stuck on a problem, when a decision feels fuzzy, or when I just feel a general sense of unease about something, I write. For me, wrestling with an unformed idea is like navigating a thick fog. There’s a disquieting sense of knowing something is there without being able to see it, and the act of writing is what finally parts the mist, revealing the path ahead. It reveals the flaws in my own logic and the gaps in my thinking. I find that I can’t bluff myself on a blank page.

For years, this practice was mostly private, a tool for my own clarity. Now, I’m trying to add that final step of publishing, and I’m finding it adds a new, welcome layer of rigor to my process.

My New Sparring Partner

My writing workflow has changed a lot in the last few years. While I use AI every day, I’ve found my process looks a bit different from how some people seem to use these tools. For me, asking it to “write a blog post about X” would feel like outsourcing the part of the work I value most.

Instead, I’ve settled into treating it like an endlessly patient and brutally honest sparring partner. It’s become the bridge that helps me get from the initial chaos of an idea to a coherent, pressure-tested structure. My job is still to provide clear, nuanced direction; its job is to organize, challenge, and reflect my thinking back to me at the speed of light.

To share what this looks like in practice, let me walk you through a recent example. A while back, I was building the business case for a major investment in a new data analytics platform. My head was a mess of disconnected thoughts: total cost of ownership, complex integration challenges, potential ROI, vendor comparisons, competing stakeholder needs, optimistic vs. realistic implementation timelines, and the significant risks of doing nothing, all weighed against the firm’s deep aversion to additional costs.

Step 1: The Brain Dump and Thematic Sort

I dumped all of it into a prompt, hundreds of words of pure stream-of-consciousness. My request was simple: “I’m building a business case for a new analytics platform. Here are my raw thoughts. Please organize them into a logical structure for a strategy document.”

Instantly, the AI took my jumbled list and returned a clean structure with five sections: The Problem, The Proposed Solution, Financial Impact (ROI & TCO), Implementation Plan & Risks, and Expected Outcomes. It was a solid, standard starting point that immediately gave a skeleton to my chaotic thoughts.

Step 2: The Adversarial Pressure Test

This, for me, is where the process gets really interesting. I then gave the AI (often a different LLM) a new persona: “Now, act as a skeptical CFO who is unconvinced of the ROI and deeply concerned about the budget and headcount required. Rip this business case apart. Tell me where the financial assumptions are weak, where the plan is naive, and what crucial questions I’m failing to answer.”

The response was informative and humbling. It pointed out that my ROI calculations were based on overly optimistic adoption rates and that I hadn’t adequately budgeted for the “hidden costs” of training. It flagged that my timeline didn’t account for likely delays from the engineering team. It was the kind of direct, ego-free feedback that is incredibly difficult to get from a human colleague on a busy afternoon, and it was exactly what I needed to see the weaknesses in my own case. It forced me to confront the base rate of failures, even though my contextual perspective suggested higher probabilities of success. It wasn’t clear who was right, but it sharpened my understanding of the considerations.

Step 3: The Iterative Loop

I spent the next hour in a back-and-forth dialogue with the model, refining my points while it challenged the revisions. By the end of this process, I didn’t have a finished document, but I had something much more valuable: a robust, coherent, and battle-tested argument that I felt much more confident in. Every once in a while, when the step-by-step incremental approach isn’t yielding what I want, I’ll paste a chunk of the conversation into a fresh chat window with specific feedback on how I want to “holistically re-imagine” the write-up, and I’ll often get a complete re-orientation that’s a marked improvement.

Only then did I open a blank page and begin to write the prose myself, with a lot less help from an LLM.

Protecting the Muscle

This partnership is working for me so far, but I’m still figuring out where to draw the line. For me, relying on AI to do the final writing would feel like taking an escalator instead of the stairs. It’s faster, but I worry that if I did it all the time, the mental muscles I need for the climb would begin to fade.

The act of choosing the right word, structuring a sentence, and weaving a narrative is where I find the deepest thinking about persuasion and effectiveness happens. That work relies on a set of skills I’m trying to cultivate: taste, intuition, and strategic empathy.

More importantly, it forces me to draw on context that AI simply does not have. So much of what informs my perspective comes from data that isn’t in any dataset: the hesitation in a project lead’s voice, the unspoken dynamic between departments in a strategy meeting, or a pattern of resistance I might recognize from a dozen similar initiatives. I feel my writing needs to be infused with that full, messy, human picture to be valuable.

For me, figuring out what matters and how to say it remains the core work. My working theory is that in a world increasingly saturated with machine-generated content, this human act of judgment will become more important, not less.

What’s Next

Going forward, I’m excited to use this space to explore the questions I’m wrestling with. I’ll be writing about the evolution of data science and product development, the messy reality of decision-making in business, and the economic frameworks I’m using to try to make sense of a rapidly changing world.

This is just the process I’ve landed on for now, and I’m sure it will continue to evolve. I’m genuinely curious to hear how others are navigating this. If you have a workflow of your own, or see things differently, I’d love to hear about it.

Thanks for reading.
