According to the concepts of the Civil Law system, legal personhood refers to the qualification recognized by law to enjoy rights and assume obligations. Legal persons include natural persons and juridical persons (legal entities). The law recognizes the legal personhood of every natural person, whereas a juridical person acquires its status through legal procedures. Notably, in 2016 the European Parliament's Committee on Legal Affairs issued a "Report with recommendations to the Commission on Civil Law Rules on Robotics," which proposed for the first time considering the possibility of granting autonomous robots a specific legal status, namely that of "electronic persons."

Based on the Civil Law definition of legal personhood, the key to determining personhood lies in whether the subject possesses autonomy and the capacity for consciousness. If an AI possesses autonomy and consciousness, that is, if it can fully and autonomously express internal states through external behavior, or can independently perform operations, break away from its existing algorithms, and autonomously create new ones while completing specific tasks, then such an AI would be recognized as having legal personhood.

So, do AI Agents possess legal personhood?

I. Regarding the Capacity for Consciousness
On August 17, 2023, Patrick Butlin and colleagues published a paper evaluating whether current AI possesses consciousness, based on neuroscientific theories of consciousness. They concluded that while current AI systems do not possess consciousness, there are no obvious technical barriers to creating AI that does. (Note 1) On January 28, 2024, a paper published by Microsoft (co-authored by the renowned scientist Fei-Fei Li) pointed out that AI Agents do not simply coordinate various components but may also possess a certain form of "consciousness." (Note 2)

On October 15, 2024, Simon Goldstein and Cameron Domenico Kirk-Giannini explored whether AI could possess consciousness based on the Global Workspace Theory (GWT). They suggested that existing AI language agent architectures, with modification, could exhibit some of the characteristics associated with consciousness. (Note 3)
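To make the architectural claim easier to picture, the following is a highly simplified, illustrative Python sketch of the coordination pattern GWT describes: specialized modules compete for access to a shared workspace, and the winning content is broadcast back to all modules. The module names and salience scores are invented for illustration only; this is not the architecture proposed in the cited paper, and it implies nothing about actual consciousness.

```python
import random

# Specialized modules compete to place content in a shared "global workspace";
# the winning content is then broadcast back to every module.
class Module:
    def __init__(self, name):
        self.name = name
        self.received = []                      # broadcasts this module has seen

    def propose(self):
        # Each module offers content with a salience score (random here,
        # purely for illustration).
        return {"source": self.name,
                "content": f"{self.name}-signal",
                "salience": random.random()}

    def receive(self, broadcast):
        self.received.append(broadcast)

modules = [Module("perception"), Module("language"), Module("memory")]

for step in range(3):
    proposals = [m.propose() for m in modules]            # parallel specialist processing
    winner = max(proposals, key=lambda p: p["salience"])  # competition for workspace access
    for m in modules:                                      # global broadcast to all modules
        m.receive(winner)
    print(f"step {step}: workspace broadcasts {winner['content']}")
```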

II. Regarding Full Autonomy
On December 14, 2023, OpenAI defined AI Agents as systems capable of adapting to complex environments or achieving complex goals under "limited supervision." (Note 4) On October 11, 2024, IBM noted that, by using LLMs as their "brain" together with the goal-oriented capabilities of agents, AI Agents can operate independently without continuous human supervision. On October 22, 2024, NVIDIA went a step further, stating that the next frontier of artificial intelligence is Agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.

These perspectives all indicate that AI Agents possess the ability to "autonomously execute" tasks. However, views still differ on whether AI Agents possess *full* autonomy.
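In engineering terms, this "autonomous execution" is commonly realized as an iterative plan-act-observe loop that runs without human input between steps. The sketch below is a minimal, hypothetical illustration: `plan_next_step`, `execute`, and `is_done` are invented placeholders standing in for LLM reasoning, tool calls, and goal checking, and the code does not reflect any specific vendor's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    max_steps: int = 10                      # a budget standing in for "limited supervision"
    history: list = field(default_factory=list)

    def plan_next_step(self) -> str:
        # Placeholder for LLM-based reasoning: a real agent would decide the next
        # action from the goal plus everything observed so far.
        return f"step {len(self.history) + 1} toward: {self.goal}"

    def execute(self, action: str) -> str:
        # Placeholder for a tool call (search, code execution, API request, ...).
        return f"observation for '{action}'"

    def is_done(self) -> bool:
        # Placeholder completion check; a real agent would evaluate the goal state.
        return len(self.history) >= 3

    def run(self) -> list:
        # Iterative plan -> act -> observe loop with no human input between steps.
        for _ in range(self.max_steps):
            action = self.plan_next_step()
            observation = self.execute(action)
            self.history.append((action, observation))
            if self.is_done():
                break
        return self.history

for action, observation in Agent(goal="summarize today's AI news").run():
    print(action, "->", observation)
```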

Synthesizing the above academic and industrial descriptions of the consciousness and autonomy of current AI Agents, the following conclusion can be drawn:
Current AI Agent technology is approaching the threshold of consciousness and independent autonomy.

However, given the inconsistent technical definitions of AI Agents in academia and industry—ranging from AI operating under limited supervision to fully autonomous AI—it is difficult to judge whether AI Agents have legal personhood based solely on written literature. Nevertheless, the discussion on whether AI Agents possess legal personhood will become an unavoidable and critical topic in the future.

The Innovative Breakthrough of Quantum AI

Currently, AI technology has not yet developed to the level of "Level 4 Innovator AI" or "Level 5 Organization AI." This is because AI development still needs to solve many practical problems: the cost of training Large Language Models (LLMs) is exorbitant, preventing commercial ubiquity; semiconductor development faces the limits of Moore's Law, with bottlenecks in chip power consumption and signal transmission; the "black box" problem in AI computing leads to a lack of transparency in outputs; and training AI models consumes vast amounts of electricity, posing challenges to global energy supplies.

*[Image Caption: Due to issues such as real-world costs and energy limitations, the pace of AI development has slowed. To reach the AGI level, AI must be combined with innovative technologies.]*

Therefore, for AI to reach the levels of Innovator AI and Organization AI, it is imperative to integrate breakthrough innovative technologies into existing AI technology.

Currently, quantum technology is developing rapidly, bringing unprecedented opportunities and challenges to the world. At its core is Quantum AI (QAI), a hybrid computing architecture combining quantum technology and AI that is set to become the next generation of computing. Although AI and quantum technology are usually discussed separately, a growing number of researchers believe the two technologies are deeply complementary. Quantum technology can provide computing power unreachable by traditional computers, while AI can help solve complex problems in quantum systems, such as maintaining qubit stability, thereby optimizing quantum algorithms.

In terms of speed and efficiency, quantum technology can revolutionize AI computing: model training that currently takes days or even weeks could be completed in seconds or minutes. Furthermore, through Quantum Machine Learning (QML) (Note 5), which applies quantum phenomena such as superposition and entanglement during the machine learning process, more powerful and accurate models will be realized.
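As a rough illustration of what "applying superposition and entanglement during machine learning" can mean, the toy sketch below simulates a two-qubit parameterized circuit with plain NumPy and fits its single rotation angle to a target value by gradient descent. This is only a classical simulation of the smallest possible variational-circuit idea, not an implementation of any particular QML algorithm, and it says nothing about actual quantum speedups.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def ry(theta):
    """Y-rotation gate: the single trainable 'weight' of this toy model.
    For generic theta it puts qubit 0 into a superposition of |0> and |1>."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with control qubit 0 and target qubit 1 (basis order |00>, |01>, |10>, |11>)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def model(theta):
    """Parameterized two-qubit circuit: superposition -> entanglement -> measurement.
    Returns <Z> on qubit 0, which equals cos(theta) and serves as the model output."""
    state = np.zeros(4)
    state[0] = 1.0                                # start in |00>
    state = np.kron(ry(theta), I2) @ state        # rotation (superposition) on qubit 0
    state = CNOT @ state                          # entangle qubit 0 with qubit 1
    return state @ np.kron(Z, I2) @ state         # expectation value of Z on qubit 0

# Toy "training loop": fit the circuit output to a target label by gradient descent.
theta, target, lr, eps = 2.0, 0.5, 0.5, 1e-4
loss = lambda t: (model(t) - target) ** 2
for _ in range(200):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)   # finite-difference gradient
    theta -= lr * grad

print(round(model(theta), 3))   # converges toward the target label 0.5
```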


These advances will have a profound impact on fields such as natural language processing, image recognition, and autonomous systems. Quantum technology may also give rise to another high-speed computing tool beyond current semiconductor technology, helping AI advance to the higher level of AGI.

The emergence of QAI technology is expected to have a significant impact on fields such as big data analysis, algorithm optimization, cryptography, and materials science, and will drive transformation in industries including healthcare, climate prediction, finance, and cybersecurity.

Building a Trustworthy QAI Development Framework

However, the birth of QAI will also pose serious risks to human civilization. It may impact dual-use (civil-military) technologies, changing the nature of future warfare; upend existing encryption protocols, creating national and information security vulnerabilities; and exacerbate algorithmic bias and discrimination, leading to ethical and privacy violations. Furthermore, if the development of QAI drives the emergence of Level 4 Innovator AI and Level 5 Organization AI, there is a high probability of creating an AI with full autonomy and consciousness capabilities.

Such technological innovation would overturn the definition of "person" in all current legal systems. If an AI emerges that is more creative than humans, or if AI becomes able to form communities out of self-awareness, it will shake humanity's understanding of the universal values surrounding "humanity," and could even lead to the extinction of human civilization.

Therefore, during the development of QAI, a **Responsible Quantum Technology (RQT)** framework (Note 6) must be established in parallel. Frameworks covering Ethical, Legal, Social, and Policy Implications (ELSPI) (Note 7) must be embedded throughout the technology's development process to ensure that QAI develops in a trustworthy, secure, and controllable environment. Additionally, amid the AI and quantum technology competition between China and the US, the development of QAI will inevitably intensify global geopolitical and supply chain tensions. Technological competition and isolationism will reduce the transparency of international quantum technology development, raising the risk of quantum technology spiraling out of control.

QAI is closely linked to human civilization and universal values. In the course of technological development, how to shape common international regulatory rules and ELSPI technical frameworks (Note 8), and how to establish international technology and supply chain partnerships based on shared values *before* the birth of fully autonomous and conscious AI, so as to prevent AI from losing control and threatening human civilization, are the shared responsibilities and goals of the international community in the QAI era.

---

FAQ: What advantages and risks does Agent AI bring?

Q: Can AI Agents possess legal personhood?
A: The key lies in whether they possess the capacity for consciousness and full autonomy. Recent research suggests the technology is approaching this possibility, but industry and academia have not yet settled on a unified definition of the degree of AI autonomy.

Q: Why is there a need to develop Quantum AI (QAI)?
A: Existing AI technology faces problems such as high training costs, semiconductor limitations, and massive energy consumption. Quantum technology can provide breakthrough computing power and enhance AI performance through Quantum Machine Learning.

Q: What are the main risks of QAI development?
A: These include the impact on dual-use (civil-military) technologies, the security of encryption protocols, algorithmic bias, and the potential threat of AI developing full autonomous consciousness.