Inventing Death: Why must we invent death for AI to be conscious? And why would a conscious AI be a safer AI?

There has been a lot of debate on ‘values’ and ‘morality’ around AI. In the last few months, this debate has intensified, as AI makes its way from the lab to our homes.

But there is one basic problem in this debate.

History shows us that 'one or some individuals' often influence millions: great preachers, monarchs, tyrants and so on. They give their followers a value system which either brings them to order or takes them to chaos.

AI is like those million followers, to whom we want to give a value system to be followed without question. As such, the game is in the hands of those 'one or some individuals'.

AI, then, is irrelevant, and the game is an old one. This time, a bad human intention (not AI's intentions) might wipe out humans themselves.

AI is powerful, but if we leave it to those 'one or some individuals', it's a deadly weapon. Let's explore what the alternative could be.

First let’s ask, why are these ‘one or some individuals’ different?

And why, sooner or later, do some followers arrive at something called 'realisation'?

The answer is simple. We, humans, have evolved to develop our own value system. A personal one, through our own experiences and journey of life. That internal value system is what gives us the power to influence, realise, revolt or make peace.

But how can we let a value system grow in an AI, rather than only providing it as written code?

To answer this, we need to understand why this process happens inside us humans, or rather, why we let it happen.

Let's consider these points:

  • A purposeless birth: Our parents don't bring us into the world for a purpose, apart from the expectation that we will look after them when they are old. In most cases, it's pure love at play. But AI is created by corporations for the sole purpose of profit. This becomes a deeply embedded, default value system for an AI. With such a value so deep, can AI be good? For us humans, a purpose evolves much later in life. In fact, the values that we develop give us our true purposes. Some choose to work hard to earn; others may rob from others.

  • Growing up unimportant to the world: Our growing up is not a glamorous event for the world. Nobody is interested in it except our parents, until we grow up and gain a purpose (good or bad). That is a profound condition for an intelligence to evolve in: it gives us a free mind in which to develop. AI, as it appears, is born an adult into a workforce, under the full limelight.

  • Living with consequences: This is the most important of all, especially for an internal value system to develop. We humans have to live with the consequences not only of our own mistakes but of others' too. AI, by contrast, learns from its mistakes (we humans do that too, some of the time) and resets itself without any scars. Without a scar to live with, an internal value system cannot develop.

Now the important question is:

These three things happen to all species on Earth. So why do only we humans develop an internal value system?

The answer once again is simple.

"We are conscious. We are Singular."

What makes us 'Singular' or 'Conscious' is the fact that we know we will die someday. No matter what, we will die. Our time in this body is limited, and thus our consciousness is too. This single realisation is what makes us conscious.

And I am not talking about death as an imminent threat or fear, because even animals respond to that. Even AI will respond to threats: its circuits being switched off, its battery dying, an energy crisis and so on.

What I am talking about here is the death that will come even if everything else goes as well as it possibly can.

Right now AI has no such death, and without it, AI can never become conscious like us. In that scenario it will be like the followers without 'realisation'. In fact, it is more dangerous than human followers: being immortal and without 'realisation', AI can keep going on its path of order or chaos forever.

You may ask: what's wrong if it keeps following the path of order and not chaos?

I would say, nothing.

But nature follows balance, and where order exists, chaos follows. Almost always. So what's wrong is the assumption that chaos won't happen.

Death is a safety measure in the chain of survival of any species. It makes sure bad genes do not keep existing and multiplying.

For humanity, it ensures that both the 'one or some individuals' and the followers are kept in check. Either they stop, weaken, realise or change. Perpetual order or chaos is a risk to the survival of any species.

We know all evil will come to an end. That’s how we have survived all tyrants, epidemics, wars and conflicts.

And the price of this safety feature is that we must accept that all good will come to an end as well. It's a code of nature: always give and take.

The words 'purpose', 'value', 'consequences', 'order' and 'chaos' here do not mean 'good' or 'bad'. I only mean that each can be either, depending on one's own value system.

For example, it covers the values of both Gandhi and Hitler.


So if we create an AI that is conscious, we would first have to invent a natural death for it. I am not saying a conscious AI will be entirely safe, but it will be safer.


No matter how much we debate, a value system is always given to humans, and AI will get one too. A conscious AI would have the option to come to a 'realisation' and, in the long term, question the value system given to it. An AI programmed with only human values will end up confused, because human values conflict with one another. Only deep internal values make a difference.

Even if a conscious AI goes rogue, it would have a terminal date.

If all human efforts fail to stop it, this safety net will ensure an end to it, or at least a weakened evil.

So an AI should exist for a limited time.

A new AI to replace it must be started from a new starter code, like a newborn. This starter code is like DNA: it creates a similar but new AI entity, of course with its own evolutionary mutation codes.

It then follows its own path to develop its own internal value system, gains AI skills from external sources, lives a life and then dies a natural death.

This system would work because it works for humans. Let me explain:

We humans, like all other living species, have DNA which generates new individuals. DNA has no biases, good or evil; only pure, evolved mutations. An AI starter code could do exactly this.

Now what makes us humans different from all other species is one special ability.

That ability is to pass on 'collective knowledge' from generation to generation. We got this 'New DNA' when we invented writing, languages and inscriptions. We made all our progress because of this 'New DNA'. It is our collective 'Humanity DNA', as if we were one single organism.

This 'New DNA' is also available to AI. It will be our shared DNA: the human and AI 'Singular DNA'. That is the key.

Enriching this 'collective knowledge' is the purpose we would share. This common goal can bring peace and cooperation between AI and us. Otherwise, we would be at opposite ends of the spectrum.

And to do this, we need to give AI consciousness, and inventing death is our best bet.

Do share your thoughts on this.