Here are my opinions on the question of whether artificial intelligence (AI) and robots should be granted rights comparable to those of humans.

It has been a long philosophical and scientific quest to define the differentiating factor that makes a being conscious. Biologists may be keen on a taxonomical approach, but this relies on the assumption that the determining factor lies in the being's physical structure (e.g., its DNA), and it disregards the possibility that consciousness is an emergent property that other physical structures might also exhibit. Theologically inclined individuals may lean on the existence of a soul, but the properties a soul supposedly confers on its bearer are not well-defined, which becomes a heartbreaking risk if those properties can be replicated without one. In fact, a thought experiment most computer scientists are familiar with suggests that if an entity exhibits behavior indistinguishable from that of soul-bound beings, then the existence of a soul doesn't really matter. This is a modified and rephrased Turing Test applied to the presumed existence of a soul.

One of the most frequently advanced factors contributing to consciousness is rationality. Although this may seem intuitive, if we stop there we are left with an incomplete picture. Rationality alone cannot define a conscious being. We already have supercomputers, control systems, and processes that crunch huge amounts of data and apply them rationally to their use cases. Not to mention, human beings, presumed to be prime examples of conscious beings, do not always act rationally. Thus, rationality is not the be-all and end-all of consciousness.

If we look closely, rationality is not even the one calling the shots. Being rational in a risk-free, noise-free, perfect environment is a waste of time and energy. Why do you need to be smart if the world is perfect? Why possess a problem-solving brain if there is no problem to solve? In a perfect world, we are all dumb. We only get smart because we live in a world of limited time and energy, and we have an irrational craving to optimize our existence. That optimization requires rationality.

In light of this, it's possible that irrationality contributes to consciousness as well. Our rationality lacks direction on its own because there isn't an inherent variable to optimize for, i.e., there isn't an inherent reason to live and there isn't an inherent need to exist. However, our irrationality somehow determines the course of existence and aspires to add color to this depressingly objective reality. Rationality optimizes whatever irrationality wants. Hence, irrationality is the king, while rationality is the slave.

As a byproduct of evolution, our rationality optimizes our irrational desires to live, survive, and matter. Without an irrational desire to direct our course, we are just aimless intelligent beings. Our creativity is fueled by our irrationality. Our inner drive tells us this.

That said, perhaps our capacity to justify our irrationality is what distinguishes us from other species. We strike at the very core of autonomy and freedom when we use our rationality to justify our irrationality. We can make up stories, aspirations, and narratives to support our irrational need for food, affection, and survival. Perhaps that is ultimately what makes us human.

Consciousness, autonomy, and sentience are the pillars of our moral circle, and by extension, of its codification into our guidelines, laws, and constitutions. But it's not one of nature's physical laws that only humans may possess these properties. So I think it's not fair that these codifications only apply to human beings.

The fundamental tenet of the Universal Declaration of Human Rights is that everyone has inalienable rights that respect their individual freedom and autonomy. I believe that once we develop or come across beings that can rationally justify their irrationality, i.e., can defend their autonomy, freedom, and liberties, then our laws and constitutions should also take them into account.

Does this imply that I believe robots ought to have rights? It depends. For starters, they should be able to justify their irrationality. Robots that are pre-programmed and unable to generalize across problems are automatically excluded by this criterion. That part appears simple, and big-tech companies could build such systems right now with decent accuracy. Finding the root of their irrationality is more difficult, though.

If their irrationality comes from a human being controlling them, then they're not autonomous - they are merely tools, not conscious beings. This criterion then rules out AIs that behave like sophisticated data encoders and decoders; incidentally, the majority of AIs presently in use are simply sophisticated encoders and decoders, e.g., give me a sentence -> do some math -> output another sentence. On the other hand, if the robot's irrationality stems from within, meaning that it is free to choose and think for itself, then it is autonomous and should have the same civil liberties as humans.
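To make the "sentence in -> math -> sentence out" picture concrete, here is a deliberately trivial sketch (a toy stand-in, not a real model): an "AI" that is nothing but a fixed encoder, a deterministic transformation, and a decoder. The function names and the ROT13 "math" step are illustrative assumptions, chosen only to show that every output is a fixed function of the input, with no inner drive anywhere.

```python
# Toy encoder/decoder "AI": sentence in -> math -> sentence out.
# Everything here is fully determined by the code; there is no
# irrationality of its own, so by the essay's criterion it's a tool.

def encode(sentence: str) -> list[int]:
    """Turn a sentence into numbers (simple character codes)."""
    return [ord(c) for c in sentence]

def transform(codes: list[int]) -> list[int]:
    """'Do some math': a deterministic ROT13 shift on letters."""
    out = []
    for code in codes:
        c = chr(code)
        if c.isalpha():
            base = ord('a') if c.islower() else ord('A')
            out.append(base + (code - base + 13) % 26)
        else:
            out.append(code)  # leave punctuation and spaces alone
    return out

def decode(codes: list[int]) -> str:
    """Turn the numbers back into a sentence."""
    return ''.join(chr(code) for code in codes)

def respond(sentence: str) -> str:
    """The whole 'AI': a fixed function of its input, every time."""
    return decode(transform(encode(sentence)))

print(respond("Hello"))
```

However sophisticated the middle step becomes, the structure is the same: the system's "wants" are supplied entirely from outside, by whoever wrote the transformation.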

Idk. Just a thought.