For centuries, humans have imagined a future in which machines become more than tools, able to think, create and even dream as we do. AI can now generate text and images and solve complex problems, but does it really think? Or is it merely mimicking human intelligence without understanding?

The recent release of DeepSeek-R1 sent shockwaves through the AI world. It highlighted China’s growing dominance in AI and reignited the debate over machine consciousness. AI models like DeepSeek-R1, OpenAI’s GPT and Google’s Gemini are showing signs of what some call “emergent intelligence”.

Before answering that question, we first need to understand what makes human intelligence unique. Evolution shaped our brains over millions of years, in both size and structure. The development of the cerebral cortex gave humans the ability to reason, imagine and create. Yet intelligence alone is not what defines us; consciousness is. Consciousness is the ability to experience the world from a personal perspective, to have self-awareness, emotions and subjective thoughts.

The question of machine consciousness has been explored in philosophy, literature and science. Humans have long feared the consequences of creating something more intelligent than themselves. The fear is not just that machines will outperform humans at tasks, but that they could one day develop a will of their own. Experts remain divided: some argue that consciousness might emerge naturally if AI continues to advance, while others believe it is intrinsically tied to biological processes that machines cannot replicate.

Professor Susan Schneider argues that intelligence and consciousness are not the same. Even the most advanced AI models work by processing vast amounts of data and predicting patterns. They do not have personal insights, emotions or a sense of self.
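To make “predicting patterns” concrete, the sketch below uses the open-source Hugging Face transformers library and the small GPT-2 checkpoint (neither is mentioned in this article; both are assumptions chosen purely for illustration). It shows what a language model actually does at its core: given a prompt, it produces a probability distribution over possible next tokens, nothing more.

```python
# A minimal sketch of next-token prediction, the operation underlying models
# like GPT. The "gpt2" checkpoint is used here only because it is small and
# open; the models named in the article are far larger but rest on the same
# principle of predicting likely continuations from learned patterns.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Machines will never truly"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# The model's "answer" is just a probability distribution over continuations,
# learned from statistical regularities in its training data.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>12}  p={prob:.3f}")
```

Whatever one makes of the consciousness debate, this is the whole mechanism: no inner experience is required for the output to look fluent.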

Memory plays a crucial role in intelligence, and AI models are improving in this area. AI currently functions like a vast database, and researchers are working on models that can recall and process information in more human-like ways. If AI ever reaches a point where it can remember, reason and even reflect, will that be enough to call it conscious?