bryanking.design

Stock and Flow, AI Edition

This piece is a work in progress and will be developed some more before publication.

When I signed up for Claude Max six weeks ago, I cancelled all my other LLM subscriptions and went all in on Claude. At the time, my mix was probably 70% ChatGPT, 20% Claude, and 10% everything else.

But Claude Code, with higher limits, was worth the consolidation. I occasionally miss ChatGPT's memory, although I think I'm better off without it (a story for another time). Anthropic is really on the trail of the maximally useful LLM for all business purposes, whereas ChatGPT seems content prioritizing consumer use cases.

And now that it's focused solely on Claude, my LLM usage feels like it has grown 5x. I live in Claude on the desktop during the workday. It helps me validate designs, understand documentation, learn about new topics, and brainstorm ideas.

These are all things I've been using LLMs for over the last couple of years; I've just habituated their use, especially in my day-to-day work.

But I'm finding myself in a predicament: even though I'm having all of these rich conversations, there's nothing to record the impact they've had on my thinking.

Sure, you can scroll back through the chat history to re-read an exchange. But have you ever tried to find something in tens of thousands of words of exchanges with the LLM?

Scroll down the sidebar; maybe the thread is recent enough to still be there. But the more you use Claude, the faster the sidebar fills up. With regular usage, threads get pushed off the sidebar in a matter of days.

So into the full chat index we go, where you're greeted by the same list of items as your sidebar! Gotta scroll to the bottom of the index to reveal the rest.

Oh, great, now I've got five variations on jobs to be done threads, side by side. Which one was the good one? Who knows! Into the mines to figure out which one it was.

And once you find the right thread, you've got to find the right lump of coal. Where did we leave it? God forbid I'm looking for a riff, where I had Claude revise and re-revise and re-revise itself as I try to home in on a satisfactory answer.


All of this requires you to remember having the conversation in the first place! More and more, I'm finding myself stumbling upon a great thread from a few weeks ago that I'd already forgotten about.

Much ink is spent debating whether or not LLMs are impacting human memory and skill. Some say they aren't, and others might say they only will if you let them.

What I know is that it's very easy to let it happen. LLMs tend to produce satisfying answers to your inquiries. They transfer a sort of small-k knowing that feels good, but should not be confused with big-K Knowing, which (in general) is only possessable via some act of synthesis.

It's why teachers since time immemorial have told us that we need to write our own notes. Because otherwise, you cannot internalize the knowledge, the Knowing. You leave that knowing deep inside a chat thread, off somewhere in us-east-2.


Flow is the feed. It’s the posts and the tweets. It’s the stream of daily and sub-daily updates that reminds people you exist.

Stock is the durable stuff. It’s the content you produce that’s as interesting in two months (or two years) as it is today. It’s what people discover via search. It’s what spreads slowly but surely, building fans over time.

Stock and Flow, by Robin Sloan

As they are today, LLM apps are Flow. Message after message after message, all floating down the stream. Gone forever.

The future of software requires us to figure out how to use LLMs to create Stock.

In the handful of years before LLMs took off, the "second brain" concept was all the rage. Note-taking apps, as glorified text editors, were getting tens of millions of dollars in funding. Everyone took courses on building personal knowledge management systems (PKMs). That hype has really died down, and I think it's in large part because knowledge management has been outsourced to LLMs. Which means that the average user has no knowledge to manage. They are mere observers of the Flow of information up and down and across their screens.

Those notes in PKM systems were Stock! Many note-takers literally co-opted the term evergreen to refer to their durable notes, the ones that would be useful for years to come. Many people even share them with the world, because they're so helpful. If you still use search engines, you can stumble upon these published PKMs today! And you could probably bet your house that most of these evergreen notes are in the training data for your favorite LLM.

Is anyone taking these notes anymore? Or are we all just content with small-k knowing, and this information being trapped inside LLMs?


Some UX ideas for Claude

  • Introduce bookmarks within Claude conversations. Allow users to save specific messages that were important. Bookmarks should also be viewable as a table of contents, so you can jump between the highlights of a thread.
  • Better export tools. Oftentimes, I find myself wanting to reference a thread as context, either inside Claude or while coding, and there's not a good way to do this.
  • Document mode. Instead of a chat feed, I'd love for Claude to switch into document mode, with a comment-like UI that would feel like leaving comments in a Google Doc. It'd enable you to drill into an LLM answer without overwriting it, and even get quick clarifications, similar to how you'd ask a colleague "what's that?" to get filled in. You don't need to rewrite the whole answer for something like this, but with the big consumer LLM apps today, this is the best we've got. LLM coding apps, with their chat sidebar, are better at this, because they're working on a particular file with you. But often, when you need a quick answer, you've got to take another turn in the main thread. I think the answer here is that you take an answer into document mode, and from there, you could split off separate chat threads using the main thread as context. These sub-chats maintain internal context with themselves, but they don't impact the main document or any sibling sub-chat threads. I don't think this is an end-all UI, but it seems to solve a significant number of use cases where users want a durable document they can point to, without having to manage it themselves.
  • Splittable threads and documents. I often find myself changing gears within a chat, where it'd probably be helpful to split off the chat into a second pane with all the context of the original thread. Helpful if I'm talking about a project at a high level to start, but then I get into architectural details. But I also want to talk about interface details. At a certain point, these things don't mix, and I have to try to diverge and converge in the same thread, which is a bad idea because it uses lots of context and becomes hard to navigate. Officially splitting the interface thread off from the main thread would be a huge help. (A rough sketch of what this thread model might look like follows this list.)
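
To make those last two ideas a bit more concrete, here's a rough sketch of the kind of thread model I'm imagining. This is purely hypothetical and just for illustration; the types and function names (Thread, splitThread, exportBookmarks) are mine, not anything from Anthropic's actual product or API.

```typescript
// A hypothetical model of "document mode" with splittable sub-threads.
// None of these types or names come from Claude's real product or API.

interface Message {
  role: "user" | "assistant";
  content: string;
}

interface Thread {
  id: string;
  title: string;
  messages: Message[];
  bookmarks: number[]; // indices into messages, surfaced as a table of contents
  children: Thread[];  // sub-threads split off from this one
}

// Splitting copies the parent's context up to a chosen message, so the new
// thread starts fully informed, but anything said in it afterwards never
// leaks back into the parent or into sibling sub-threads.
function splitThread(parent: Thread, atMessage: number, title: string): Thread {
  const child: Thread = {
    id: `${parent.id}/${parent.children.length + 1}`,
    title,
    messages: parent.messages.slice(0, atMessage + 1),
    bookmarks: [],
    children: [],
  };
  parent.children.push(child);
  return child;
}

// A minimal export: bookmarked messages become a durable Markdown document
// you can drop into a PKM, a repo, or another LLM as context.
function exportBookmarks(thread: Thread): string {
  return thread.bookmarks
    .map((i) => `## ${thread.title}, bookmark ${i + 1}\n\n${thread.messages[i].content}`)
    .join("\n\n");
}
```

The point isn't this specific shape; it's that a split inherits context at the moment of the split and then stays isolated, which is what would let the main thread remain a clean, durable document.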

Notes from conversation with Claude

Key themes to explore further:

The one-way flow problem

  • Unlike the original web where flow feeds stock and stock feeds flow, LLM conversations are trapped in a one-way system
  • You can't contribute back to training data, share insights with other users, or even reliably share with your future self
  • It's like having brilliant conversations in a soundproof room - collective intelligence gets no benefit

The irony of LLMs

  • LLMs are built on humanity's greatest stock (books, papers, documentation) yet produce only flow
  • They're like a library that only lets you have conversations in the lobby but never check out books
  • We've built the most powerful thinking tools in history, but designed them to produce no lasting artifacts

Time decay and regret

  • LLM conversations have a weird half-life: most valuable right when you have them, nearly worthless a week later, then suddenly valuable again months later
  • That painful moment months later: "I really worked through this problem with Claude..." but all you have is a vague memory
  • In the moment, conversations feel so complete and satisfying that we don't write them down

The creator's dilemma

  • In Sloan's world, creators controlled their own stock/flow balance
  • With LLMs, we've outsourced our stock creation to a system that only produces flow
  • We're becoming consumers of our own thinking

Platform lock-in and the Obsidian contrast

  • Your intellectual history is trapped in Claude's servers
  • Obsidian's "your thoughts are yours" ethos vs LLM platforms where thoughts are ephemeral, platform-locked, unsearchable
  • If you switch LLMs, you lose everything

Social dimension lost

  • Sloan talks about flow "reminding people you exist"
  • LLM conversations are private by default - no social proof of thinking in public
  • No one knows about your brilliant Claude thread

Potential ending:

"We've built the most powerful thinking tools in history, but designed them to produce no lasting artifacts. We're all having our best thoughts in private, temporary conversations that benefit no one, not even ourselves."


(And yes, "stock" as in soup stock comes from the same root - something substantial you build up over time that becomes the foundation for other things!)

The Tolkien connection:

This relates to Tolkien's "Cauldron of Story" from "On Fairy-Stories" - where all the old bones of stories get thrown in and simmered together until new stories emerge. The metaphor connects across all meanings of "stock": soup (built from bones and scraps, simmered over time), financial (accumulated capital), inventory (durable goods), and content (ideas that develop richness through time and reuse). Tolkien understood that stories need both the ancient bones (stock) and fresh seasonings (flow). But with LLMs, we're just seasoning water - no bones, no depth, no foundation for future nourishment.