A coalition of researchers, policymakers, and technology experts has introduced a new proposal aimed at guiding the future development of artificial intelligence.
The document, called the “Pro-Human Declaration,” outlines a set of principles designed to ensure that AI technologies are built in ways that benefit people rather than replace them. Hundreds of academics, public figures, and former government officials have signed the declaration.
The proposal comes at a time when governments around the world are struggling to create clear rules for AI systems that are becoming more powerful and more widely used.
According to the authors, society is now facing a critical decision about how artificial intelligence will shape the future. One possible path could lead to machines taking over many human roles in work and decision-making. The other path would focus on using AI as a tool that expands human ability while keeping people firmly in control.
The declaration highlights five core principles for responsible AI development. These include keeping humans in charge of critical decisions, preventing the concentration of AI power in a small number of organizations, protecting human rights and experiences, preserving individual freedom, and ensuring technology companies remain legally responsible for the systems they create.
One of the more controversial recommendations is a temporary ban on building extremely advanced AI systems, sometimes referred to as “superintelligence,” until scientists are confident the technology can be developed safely and with public approval.
The proposal also calls for strong technical safeguards. These include requiring powerful AI systems to have reliable shutdown mechanisms and banning designs that allow machines to replicate themselves, improve themselves without human control, or resist being turned off.
The declaration arrives during a period of growing tension between technology companies and government agencies over the use of AI systems in military and national security operations. Recent disputes over access to advanced AI tools have raised questions about who ultimately controls the technology.
Supporters of the proposal say governments should adopt safety practices similar to those used in the pharmaceutical industry, where products must pass strict testing before reaching the public.
The document also stresses the need for careful testing of AI products that interact with children and teenagers. Researchers warn that some AI chatbots and companion apps could influence young users’ emotions or behavior if they are not properly regulated.
Experts behind the declaration believe stronger safety requirements for youth-focused technologies could become the first step toward broader AI regulation.
Notably, the initiative has attracted support from figures across the political spectrum, including former national security officials and public policy leaders from different administrations. Supporters say the shared goal is simple: ensuring that artificial intelligence serves humanity rather than replacing it.
As AI technologies continue to advance rapidly, the declaration’s authors hope the proposal will push lawmakers to move faster in developing clear and enforceable rules for the industry.