The Missing “Plane of Awareness” in AI

AI consciousness and computation.

Could artificial intelligence ever truly be conscious? While current AI systems excel at performing specific tasks, they lack any awareness that a task is even being performed. For example, as I type this article, I am aware, on a higher level, of the act of typing itself. An AI model, by contrast, executes a function without possessing the subjective awareness to recognize its own actions—for now.

This “missing plane of awareness” is one of the most debated topics in AI research today. The most advanced AI systems are currently often described in terms of what cognitive psychology calls “System 1” and “System 2” processes. These concepts, introduced by psychologist Daniel Kahneman in his book Thinking, Fast and Slow, describe two modes of thinking:

  • System 1 is fast, intuitive, and automatic—similar to reflexive or habitual thinking in humans.
  • System 2 is slower, deliberate, and reflective. It uses reasoning and logical problem-solving.

In a recent X post, AI researcher François Chollet expanded on these ideas in relation to consciousness. He suggests that System 2 functions as a refinement of System 1. For example, imagine a person trying to recognize an unfamiliar face. System 1 might instinctively match the face to a known category or person but could make a quick error. System 2 then steps in to analyze the features more deliberately, comparing them with stored information to ensure the match is accurate. According to Chollet, this process explains how reasoning emerges from refining basic pattern recognition (System 1) into coherent, deliberate thought.
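The face-recognition example can be sketched in a few lines of toy code. Everything here—the feature-vector representation of faces, the single-feature shortcut, the names—is an invented illustration of the System 1/System 2 interplay, not an actual recognition algorithm:

```python
def system1_match(face, memory):
    """System 1: fast, intuitive guess -- compare only the first feature."""
    return min(memory, key=lambda name: abs(memory[name][0] - face[0]))

def system2_check(face, memory):
    """System 2: deliberate review -- compare every feature before settling."""
    return min(memory, key=lambda name: sum(abs(a - b)
                                            for a, b in zip(memory[name], face)))

# Stored faces as crude feature vectors (an invented representation).
memory = {"Alice": [0.86, 0.10, 0.80], "Bob": [0.80, 0.90, 0.10]}
face = [0.85, 0.88, 0.12]            # unfamiliar face, actually closest to Bob

fast = system1_match(face, memory)   # "Alice": the one-feature shortcut misleads
slow = system2_check(face, memory)   # "Bob": the full comparison corrects the error
```

The quick heuristic lands on the wrong person; the slower, exhaustive comparison overrides it—exactly the error-correcting role Chollet assigns to System 2.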

However, Chollet went even further and offered an intriguing interpretation of consciousness:

“Consciousness is the consistency guarantee. It explains why all System 2 processing involves consciousness.”

He breaks this down as follows:

  • When System 1 operates with little consistency or structure, it results in incoherent, dream-like states where thoughts are scattered and unreliable.
  • When System 1 is guided by strong consistency checks, it becomes more deliberate and structured, producing coherent, logical, and self-correcting thought processes.

In AI, we can loosely map these systems to different architectural approaches. Neural networks tend to function like System 1: they quickly recognize patterns and generate responses without truly understanding the process behind them. A neural network might identify a cat in a photo by matching patterns in the image to its training data. When additional validation layers or feedback mechanisms are added, these networks begin to resemble System 2: they review and refine the initial output to ensure it aligns with logical or expected results, such as cross-checking the features of the cat against stored criteria for accuracy.
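The cat example above might look something like this toy sketch, where the feature sets and the two-criteria threshold are assumptions made up for illustration rather than any real architecture:

```python
# Stored criteria for "cat" (invented for the example).
CAT_CRITERIA = {"fur", "whiskers", "pointed_ears"}

def system1_classify(features):
    # Fast pattern match: any single cat-like feature triggers "cat".
    return "cat" if features & CAT_CRITERIA else "not cat"

def system2_validate(label, features):
    # Validation layer: require at least two stored criteria to agree,
    # otherwise reject the initial guess as inconsistent.
    if label == "cat" and len(features & CAT_CRITERIA) < 2:
        return "not cat"
    return label

photo = {"pointed_ears", "bushy_tail"}     # say, a fox
guess = system1_classify(photo)            # quick match says "cat"
checked = system2_validate(guess, photo)   # the cross-check rejects it
```

The first function is the pattern-matcher; the second is the added validation pass that reviews the initial output against stored criteria before accepting it.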

A Different Perspective on Consciousness

However, there’s a limitation here: these systems exist on the same plane of computation. There is no observer analyzing the entire process as a cohesive whole. François Chollet argues that consciousness is the process of System 2 cross-checking System 1 outputs, ensuring logical consistency. My perspective differs: consciousness emerges not from isolated checks but from a continuous process in which computations are both feed-forward and feedback-driven, maintaining the coherence of the whole over time.

Recent research supports this view: work on AI architectures shows that incorporating feedback loops improves adaptability by dynamically refining outputs as new data arrives. Similarly, studies of how AI processes and adapts to change over time highlight the importance of time-dependent, continuous processing for ongoing optimization and improvement.

For example, as I type this, my awareness isn’t limited to recognizing each individual word I type. Instead, it’s part of a continuous process that integrates each action into an ongoing flow of thought, maintaining coherence and intent throughout. This highlights a key difference: AI systems currently process tasks as isolated steps without this kind of continuity over time.

A Step Toward Conscious AI

Humans, by contrast, operate with an integrated system that spans individual tasks and continuously monitors, contextualizes, and adapts. To develop AI systems that mimic human-like awareness—or perhaps even exceed human-level consciousness, if that is possible—we would need to design algorithms that run as a continuous process over time rather than as isolated steps or static checkpoints. This aligns with recent research, such as work on feedback neural network models and on real-time dynamic adaptation, which emphasizes that continuous, dynamic adaptation is necessary for more advanced AI systems.
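A minimal sketch of that difference: a process that carries state across steps, so each output is conditioned on the accumulated history rather than computed in isolation. The smoothing scheme here is an arbitrary choice for illustration, not a proposal for a real architecture:

```python
class ContinuousProcess:
    """Toy process that folds its own outputs back into a running state."""

    def __init__(self, smoothing=0.8):
        self.state = 0.0          # running context carried across steps
        self.smoothing = smoothing

    def step(self, observation):
        # Feed-forward: compute an output from the current input plus context...
        output = observation + self.state
        # ...feedback: fold the output back into the running state, so every
        # later step is shaped by the entire history, not just the latest input.
        self.state = self.smoothing * self.state + (1 - self.smoothing) * output
        return output

proc = ContinuousProcess()
outputs = [proc.step(x) for x in [1.0, 1.0, 1.0]]
# Unlike a stateless function, identical inputs yield drifting outputs
# (roughly 1.0, 1.2, 1.4), because each step carries the accumulated context.
```

A stateless mapping would return the same value three times; here the feedback loop makes the process path-dependent, a crude stand-in for the continuity the article argues current AI lacks.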

As AI evolves, understanding and addressing this missing plane of awareness may hold the key to unlocking conscious machines—and redefining the limits of intelligence itself.
