Experiment | MindFlow

Conversation Mind Graph

An experiment in turning AI chat into a navigable thought graph.

*Requires a Claude API key to run.

Overview

AI chat is powerful, but structurally limited.

As conversations get longer, important ideas disappear into scroll. Follow-ups lose context. Revisiting a specific thought becomes retrieval work. The interface stays linear even when the thinking is not.

This experiment explores a simple question:

What if an AI conversation also generated a live visual map of itself?

Could that make long interactions easier to navigate, revisit, and branch from?

Conversation Mind Graph is my first prototype exploring that idea.


Observation

Most AI chat interfaces are built like transcripts.

That works well for short exchanges. But once a conversation becomes exploratory, layered, or strategic, the interaction starts breaking down in familiar ways:

  • important ideas get buried under later turns

  • follow-up questions drift away from their original context

  • branching into sub-topics becomes mentally expensive

  • returning to an earlier point means scrolling and searching

  • the structure of the conversation remains invisible

The problem is not the intelligence of the model. It is the structure of the interface.

In practice, many good AI conversations are not linear. They branch, loop back, split into sub-questions, and revisit earlier ideas. But the UI still treats them as one long vertical stream.


Hypothesis

If chat were paired with a visual concept graph, users could continue from ideas instead of only from the latest message.

That could make AI conversations:

  • easier to scan

  • easier to revisit

  • easier to branch from

  • easier to hold in working memory

  • more useful for deeper thinking

This prototype is an attempt to test that hypothesis in the simplest way possible.


The Interaction Model

The prototype uses a dual-pane layout:

  • Left: a live conversation graph

  • Right: the chat transcript

Each user question becomes a node.
Each AI answer is broken into crisp phrase-level nodes representing its main points.

This creates a visual layer over the conversation without replacing the chat itself.
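To make that concrete, here is a minimal sketch of what each node might carry. The field names are illustrative assumptions, not the prototype's actual schema:

```javascript
// Sketch of a graph node. Field names are illustrative assumptions,
// not the prototype's actual data model.
function makeNode(id, role, label, messageId, parentId = null) {
  return {
    id,        // unique node id
    role,      // "user" or "ai" — drives the distinct visual styles
    label,     // short compressed phrase shown in the graph
    messageId, // transcript message this node points back to
    parentId,  // node this one branches from (null for roots)
  };
}

// A user question becomes a single node...
const q = makeNode("n1", "user", "Why do long chats break down?", "m1");
// ...while one AI answer can yield several phrase-level nodes.
const a1 = makeNode("n2", "ai", "ideas get buried", "m2", "n1");
const a2 = makeNode("n3", "ai", "context drifts", "m2", "n1");
```

The `messageId` back-reference is what keeps graph and transcript in sync: clicking a node only needs a lookup and a scroll.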


Core behaviors

Questions and answers become nodes

User prompts become single nodes.
AI responses can become multiple nodes if they contain multiple ideas.

Nodes stay short

The graph does not show full messages. It shows compressed phrases — the essence of the turn.

Graph and transcript stay in sync

Clicking a node highlights the related message in chat and jumps directly to the relevant part of the conversation.

Follow-ups can branch from a selected idea

If a node is selected, the next question is asked in the context of that node.

Unselected questions still connect to relevant prior ideas

If no node is selected, the system tries to attach the new turn to the most relevant earlier node.
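Taken together, the two branching rules above reduce to one decision: an explicitly selected node wins; otherwise the system guesses. A sketch using a crude word-overlap score as the relevance heuristic — an assumption on my part, not the prototype's actual linking logic:

```javascript
// Crude lexical relevance: number of shared lowercase words.
function overlap(a, b) {
  const words = s => new Set(s.toLowerCase().split(/\W+/).filter(Boolean));
  const wa = words(a), wb = words(b);
  let shared = 0;
  for (const w of wa) if (wb.has(w)) shared++;
  return shared;
}

// Hypothetical sketch: pick the parent node for a new user turn.
function pickParent(newPrompt, nodes, selectedId) {
  if (selectedId) return selectedId;      // explicit branch point wins
  let best = null, bestScore = 0;
  for (const node of nodes) {             // else attach to most relevant idea
    const score = overlap(newPrompt, node.label);
    if (score > bestScore) { best = node.id; bestScore = score; }
  }
  return best;                            // null if nothing overlaps
}
```

A real implementation would likely use embeddings rather than word overlap, but the decision structure stays the same.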


This turns the interface from a passive transcript into a more navigable thinking surface.


Why I Built This

I’ve been increasingly interested in where AI interfaces still feel structurally underdesigned.

A lot of attention goes into model capability, speed, and polish. But the interaction layer is still immature. Most interfaces are optimized for response generation, not for helping people think through complexity over time.

What interested me here was not “how do I make chat prettier?”

It was:
How do I make an AI conversation easier to think with?


The more I used AI for layered exploration, the more one thing became obvious:

The conversation often becomes more valuable as it grows, but also harder to navigate as it grows.

That tension felt worth probing.


Thought Process

The real issue is not chat itself – it is linearity

Chat is great for speed and natural interaction.

But linearity creates friction once the discussion becomes complex. A transcript is good at preserving sequence, but bad at surfacing structure.

If a conversation contains five useful ideas, the interface should not make those ideas equally hard to retrieve.


A “mind map” is attractive, but too loose as a product metaphor

My first instinct was to think of this as a mind map next to chat.

That was directionally right, but product-wise a bit soft.

A mind map suggests freeform ideation. What this actually needs is stronger structure. The graph has to stay useful, anchored, and tied to source text. Otherwise it becomes visual decoration.

So the idea evolved from “mind map chat” toward a conversation graph with controlled branching.

That shift matters.


The graph should summarize, not transcribe

If full answers are dumped into nodes, the graph becomes unreadable immediately.

So one of the core decisions was to keep nodes phrase-based and compressed. The graph should act as structural memory, not duplicate the transcript.


Selection has to mean something

A lot of prototypes let people click visual elements without changing the actual system behavior.

That would be weak here.

If a user clicks a node, that action should create real conversational context. The next prompt should continue from that selected idea, not just from the bottom of the transcript.

This makes node selection functional, not cosmetic.
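One way to make selection carry real conversational context is to walk the selected node's ancestor chain and prepend those compressed phrases to the next prompt. A hypothetical sketch, assuming each node stores a `parentId`:

```javascript
// Hypothetical: walk from the selected node up to the root and return
// the chain of labels, oldest first, to use as context for the next prompt.
function contextChain(nodes, selectedId) {
  const byId = new Map(nodes.map(n => [n.id, n]));
  const chain = [];
  for (let cur = byId.get(selectedId); cur; cur = byId.get(cur.parentId)) {
    chain.unshift(cur.label); // prepend, so roots come first
  }
  return chain;
}
```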


The transcript still matters

I did not want to replace chat with a graph.

The graph helps with structure and navigation, but the transcript still carries nuance, detail, and explanation. So the right model felt like graph + transcript, not one instead of the other.


The hardest tradeoff shows up immediately: usefulness vs clutter

The biggest risk with this kind of interface is obvious.

If every answer creates too many nodes, the map becomes noise. If the graph is too sparse, it stops being useful.

So the quality of the experience depends heavily on summarization, clustering, and restraint.

That tension is central to the whole experiment.


What This First Release Does

This first release is a working HTML prototype, created using Claude.

It currently supports:

  • a dual-pane graph + chat layout

  • user and AI nodes with distinct visual styles

  • phrase-level decomposition of AI answers

  • click-to-jump synchronization between graph and transcript

  • node-based follow-up context

  • simple auto-linking for unselected prompts

  • a lightweight interactive demo flow

  • highlighting the chat message related to the selected node

  • highlighter-style marking of the chat passages relevant to the node's context

It is intentionally simple.

This is not a finished product and not a production system. It is a functional probe designed to test whether this interaction model has real value.


What Worked

A few things felt immediately promising.

The graph made long responses easier to scan

Breaking AI answers into phrase nodes created a more digestible visual summary of the conversation.

Branching from an earlier idea felt natural

The strongest part of the interaction was the ability to return to a specific concept and continue from there without reconstructing context manually.

Graph + transcript felt stronger than either alone

The graph helped with structure. The transcript preserved depth. Together they created a more usable mental model of the conversation.

The interface exposed the shape of the discussion

Even in a simple prototype, it became easier to see what the conversation was actually about – not just what was said most recently.


What Broke or Still Feels Fragile

This first release also made the weaknesses obvious.

Graph clutter is a real risk

If node creation is too aggressive, readability collapses fast.

Phrase extraction is still heuristic

Right now the summarization logic is simple. It works well enough for a prototype, but not for a robust product.
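For reference, a heuristic of the kind described here can be as small as a sentence split plus truncation. An assumed sketch, not the prototype's actual logic:

```javascript
// Assumed sketch of phrase-level decomposition: split an answer into
// sentences, cap the node count, and compress each sentence to a short label.
function extractPhrases(answer, maxNodes = 4, maxWords = 6) {
  return answer
    .split(/(?<=[.!?])\s+/)                    // naive sentence split
    .map(s => s.trim())
    .filter(s => s.length > 0)
    .slice(0, maxNodes)                        // restraint: cap node count
    .map(s => s.split(/\s+/).slice(0, maxWords).join(" "));
}
```

The obvious failure modes — lists, code, rhetorical sentences — are exactly why this stays heuristic rather than robust.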

Auto-linking is useful, but not always right

When no node is selected, the system has to infer relevance. That is helpful when it works and distracting when it doesn’t.

Not every conversation deserves a graph

Some chats are quick and linear. A graph layer only helps when the conversation is actually complex enough to benefit from structure.

The graph needs stronger control mechanisms

Collapsing, clustering, filtering, and semantic grouping would matter a lot as conversations grow.


What This Suggests

This experiment reinforces a belief I keep coming back to:

AI interfaces still have a lot of unexplored surface area at the interaction level.

We have spent a lot of time making models more capable. We have spent less time rethinking the containers in which that capability is experienced.

If AI is going to support deeper thinking, strategy, learning, and exploration, then the interface may need to evolve beyond a simple transcript.

Not every conversation needs that.
But some clearly do.

This prototype is an early attempt to test one possible direction.


What Comes Next

This is the first release in an ongoing exploration of AI thinking tools.

Future iterations may explore:

  • better semantic linking between turns

  • cleaner clustering and branch control

  • exact text-span mapping for node highlights

  • different graph modes for research, strategy, or learning

  • memory across sessions

  • additional tools built around the same core idea

I expect the article and the prototype to evolve together as new experiments are added.


Release Log

Release 01 · Conversation Mind Graph

The first working prototype exploring how AI chat might become more navigable, branchable, and visually structured.

Future releases will extend the interaction model and add new tools to the same experimental system.


Closing Note

This is not a finished product. It is a probe.

A way to test whether AI conversations can become more explorable, revisitable, and spatial – not just longer.

If you’re working on similar questions around AI interfaces, thinking tools, or branchable interaction models, I’d love to connect.