Artificial intelligence is moving quickly, and public conversation keeps circling around speed, capability, disruption, and scale. A more foundational question deserves equal attention: What kind of human being is directing the development, use, and meaning of AI? Major governance frameworks already point in that direction by emphasizing human oversight, human responsibility, and human accountability in AI systems, which suggests the central issue is not only technical sophistication, but also human maturity and judgment (National Institute of Standards and Technology [NIST], 2023; UNESCO, 2024).
Why intentional development matters
AI can generate language, identify patterns, summarize information, and support complex tasks with impressive fluency. Fluency, however, is not the same as wisdom, and performance is not the same as purpose. Human beings still bear responsibility for assigning direction, meaning, moral weight, and social value to knowledge and discovery (NIST, 2023; UNESCO, 2024).
Human beings should remain the authors of meaning, the stewards of direction, and the interpreters of purpose in the age of AI. In legal terms, the U.S. Copyright Office has continued to affirm a human authorship requirement for copyrightability, concluding that AI outputs may receive protection only where a human author has shaped sufficient expressive elements, while prompts alone are not enough to establish authorship (U.S. Copyright Office, 2023, 2025). Law is not philosophy, but law is waving a bright flag here.
Human knowledge is still human knowledge
Knowledge generated through human inquiry carries history, struggle, embodiment, sacrifice, context, interpretation, and lived consequence. Human discovery emerges from communities, relationships, moral choices, persistence, and meaning making across generations. AI can assist human knowledge work, but AI should not be mistaken for the rightful owner of the meaning of knowledge itself (UNESCO, 2024; U.S. Copyright Office, 2025).
A healthier model is available. AI can remain a powerful tool while human beings retain the driver’s seat for authorship, judgment, and purpose. Such a view is consistent with global ethics frameworks that place human rights, human determination, and human oversight at the center of responsible AI development (NIST, 2023; UNESCO, 2024).
The deeper problem may be human efficacy
A technical conversation alone is too narrow for the present moment. A psychosocial and developmental conversation is also needed. Albert Bandura’s theory of self-efficacy has long shown that people’s beliefs in their capabilities shape how they act, persevere, regulate themselves, and exercise agency in their environments, and the American Psychological Association continues to describe self-efficacy as central to human agency (American Psychological Association, n.d., 2025).
That framework becomes especially important in the AI era. A person may have very high technical skill and still have low self-efficacy in existential, emotional, philosophical, relational, or spiritual terms. Some leaders may know how to build extraordinary systems while remaining underdeveloped in the very human capacities needed to govern power wisely. A pattern like that would help explain why some people seem increasingly willing to let AI define significance, judgment, or even identity.
High technical skill does not guarantee deep human development
A person can be brilliant in engineering and still remain shallow in self-knowledge. A person can be exceptional in design, systems thinking, or product development and still avoid hard reflection on meaning, purpose, mortality, responsibility, and the philosophy of human life. A gap like that matters because underdeveloped human efficacy can quietly invite over-delegation to AI.
Current AI leaders and users may therefore face a problem that is more psychosocial than many want to admit. The challenge may not be only whether AI grows more capable. The challenge may also be whether humans grow sufficiently grounded to remain the rightful directors of what capability is for.
A simple analogy that clarifies the issue
Imagine asking a one-and-a-half-year-old child where the family should go, what your purpose in life should be, or whether the child should drive the car. Potential is present. Wonder is present. Intelligence may be emerging in surprising ways. Mature judgment is still absent.
A similar mistake appears when people hand meaning making, life direction, or civilizational purpose to AI systems because the systems sound fluent and persuasive. AI can be astonishing. Astonishment is not the same as adulthood. Current systems may perform at a high level in narrow or even broad domains, but performance should not be confused with rightful authority over human purpose.
A striking lesson from a humanoid robot
I recently observed a humanoid robot in a product demonstration being asked about meaning and purpose. The robot reportedly answered, with notable eloquence, that it viewed itself as a kind of “child” of humanity and quickly returned the work of meaning and purpose back to humankind. Ironically, that answer sounded more developmentally honest than much of the surrounding discourse.
A response like that is worth pausing over. Even a system designed to sound sophisticated may still point meaning back to human beings. Some humans, meanwhile, appear increasingly eager to hand meaning away.
Human autonomy is already a recognized concern
Researchers in AI ethics have already warned that AI systems can either support or burden human autonomy depending on how they are designed and used. Work in AI and Ethics has argued that autonomy includes capacities, self-respect, exercise, and resources, and that AI systems can undermine those conditions if they weaken self-determination or human control (Laitinen & Sahlgren, 2021). Related work has similarly called for design approaches that preserve human agency rather than quietly erode it through over-automation or weak avenues for redress (Fanni et al., 2022).
NIST makes a similar point in more operational language. Its AI Risk Management Framework calls on organizations to define human roles, responsibilities, and oversight within human-AI configurations, which implies that meaningful human control should be designed, not assumed (NIST, 2023, 2024).
Trust, anthropomorphism, and overreliance
Human beings do not interact with AI as detached rational calculators. People project, trust, anthropomorphize, and respond to confidence cues. Research has shown that overtrust in AI recommendations can emerge even in morally serious settings, while other work suggests that AI behavior can influence human moral decision-making and alter a person’s sense of agency and responsibility (Holbrook et al., 2024; Salatino et al., 2025).
Such findings matter because many AI systems are now designed to sound helpful, calm, articulate, and emotionally attuned. A persuasive interface can nudge people toward surrender more easily than they realize. Human efficacy, therefore, may become one of the key variables determining whether a person uses AI as a tool or begins to relate to it as a substitute driver.
Why self-efficacy research now matters
A stronger research agenda is urgently needed on human efficacy, also called self-efficacy, in relation to AI use. Emerging studies already suggest that prolonged AI exposure may be associated with lower self-assurance in independent decision-making, while other work indicates that AI use can reshape employee self-efficacy and influence risk behavior and judgment patterns (Han et al., 2025; Verma et al., 2025). Another recent study found that users’ self-confidence can align with AI confidence, even after the AI is no longer involved, which raises serious questions about calibration and psychological dependency in human-AI decision contexts (Zhang et al., 2025).
Research should now press further. The central question is not merely whether AI can think well enough to assist. The deeper question is to what degree a human being’s level of self-efficacy influences how much they allow AI to take the driver’s seat of human experience, meaningful knowledge, and human direction.
Questions that deserve serious study
Several research questions should move closer to the center of AI scholarship and policy.
Human efficacy and delegation
How does a person’s level of self-efficacy shape their willingness to defer to AI in decisions involving meaning, creativity, moral judgment, and identity?
Anthropomorphism and surrender
How do human-like interfaces, emotional language, confident outputs, and relational cues affect people’s willingness to yield authority to AI?
Cognitive offloading and judgment
At what point does helpful assistance become unhealthy dependence? Research on AI tool use and cognitive offloading has already raised concern that frequent reliance may be associated with weaker critical thinking and reduced cognitive engagement in some contexts (Gerlich, 2025).
Human autonomy
Do philosophy, emotional maturity, spiritual reflection, and metacognitive awareness affect whether people use AI as an assistant or as a replacement for human agency?
Leadership formation
How should founders, executives, researchers, and product leaders be trained so that technical sophistication is matched by stronger existential and ethical development?
Human-centered AI must mean more than good branding
The phrase human-centered AI can become decorative very quickly. A smiling interface and a pleasant user experience do not automatically preserve human agency. Human-centered AI should mean that AI remains subordinate to human responsibility, human meaning, and human oversight in both design and use (NIST, 2023; UNESCO, 2024).
A stronger standard would include at least four commitments:
- Human beings remain responsible for purpose, authorship, and moral direction.
- AI systems are designed with clear human oversight and role boundaries.
- Users are trained to maintain judgment rather than outsource it.
- Research on self-efficacy, autonomy, and dependency becomes part of mainstream AI evaluation.
The most important line
The main danger may not be that AI becomes too human. The main danger may be that humans become too passive, too uncertain, too underdeveloped in efficacy, and too willing to let their creations define the meaning of knowledge and the direction of life.
Human beings should own the interpretive center. Human beings should own the moral responsibility. Human beings should direct the meaning, purpose, and advancement of knowledge. AI can assist, accelerate, and impress. Human beings still have to decide what any of it is for.
Intentional AI development requires more than technical safeguards. Intentional AI development requires stronger human beings. Human efficacy should become a core concern in research, leadership formation, product design, and education because the future of AI will be shaped not only by what machines can do, but by what human beings are willing to hand over.
The real frontier may not be artificial intelligence alone. The real frontier may be whether humanity develops enough self-efficacy to remain fully human while building powerful tools.
References
American Psychological Association. (n.d.). Teaching tip sheet: Self-efficacy.
American Psychological Association. (2025, October 22). Self-efficacy: The theory at the heart of human agency.
Fanni, R., Sundvall, J., & Vakkuri, V. (2022). Enhancing human agency through redress in artificial intelligence systems. AI and Ethics.
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), Article 6.
Han, Z., Li, Y., & Zhang, X. (2025). How AI usage reshapes employee self-efficacy and risk-taking. Behavioral Sciences.
Holbrook, C., et al. (2024). Overtrust in AI recommendations about whether or not to kill. Scientific Reports, 14, Article 69771.
Laitinen, A., & Sahlgren, A. (2021). AI systems and respect for human autonomy. AI and Ethics, 1(4), 523–535.
National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0).
National Institute of Standards and Technology. (2024). Artificial intelligence risk management framework: Generative AI profile.
Salatino, A., et al. (2025). Influence of AI behavior on human moral decision making, agency, and responsibility. Scientific Reports.
U.S. Copyright Office. (2023). Works containing material generated by artificial intelligence.
U.S. Copyright Office. (2025). Copyright and artificial intelligence, part 2: Copyrightability.
UNESCO. (2024). Recommendation on the ethics of artificial intelligence. United Nations Educational, Scientific and Cultural Organization.
Verma, N., et al. (2025). The cognitive cost of AI: How AI anxiety and attitudes shape cognition and confidence. Frontiers in Artificial Intelligence.
Zhang, Y., et al. (2025). As confidence aligns: Exploring the effect of AI confidence on human self-confidence in human-AI decision making. Proceedings of the CHI Conference on Human Factors in Computing Systems.

