Kumar Abhishek

A personal manifesto

I care about making things more usable.


That has been the constant.


Before tech, it was buildings and townships. Now it is digital products, workflows, and increasingly AI-enabled systems.


But the question underneath has stayed the same:

How do you create something that helps people move with more clarity, less friction, and more truth?


I’ve never been very interested in tools as identity.


Tools are temporary. They rise, peak, and disappear. People build careers around them, but tools alone do not define good work. What lasts longer are the principles underneath: understanding behavior, reducing friction, creating value, and making something easier to use in the real world.


That is what I care about.


I don’t want to build products that look impressive but leave people stuck.

I don’t want to build products that manufacture engagement without creating value.

I don’t want to build things that trap attention but weaken judgment.


I’m far more interested in products that help people act, think, learn, decide, and understand.


To me, the best products are not just interfaces. They are environments.

They should not demand unnecessary effort just to be used.

They should not become subjects of study in their own right.

They should quietly support better action.


That belief started in architecture.


When you design physical space, reality corrects you fast. People move the way they move. Constraints are real. Behavior is visible. The truth of use is hard to ignore. I carried that lesson into digital work, and it changed how I see product design.


I don’t think only in screens.

I think in systems.

In flows.

In context.

In consequences.


That is also why I’ve always struggled with narrow role boundaries. My mind does not stop neatly at design, or product, or strategy, or GTM. I tend to see the whole field at once – sometimes messily, but honestly.


Curiosity drives a lot of that.


I learn from history, geography, geopolitics, maps, behavior, startup failures, media patterns, and the small signals hidden in how people talk, choose, react, and repeat. I trust lived patterns as much as formal principles. Maybe more.


I experiment often. I fail often. That is part of the process. I don’t care much for polished certainty. I care more about finding better ways to work, think, and build.


This is also why AI matters to me.


Not because it makes things faster. Speed is useful, but that is the shallowest version of its value.


What interests me is the possibility that AI can help us build better environments for thought. Not just answer engines. Not just execution layers. But systems that help people explore, test, compare, question, and move toward better decisions.


That is the direction I want to work in.


Toward tools that sharpen thought, not dull it.

Toward products that support action, not illusion.

Toward systems that are more usable, more truthful, and more human.


That is how I want to build.

That is what I want to be known for.