Ethics and Innovation in AI: Pharmaicy and the Debate on AI Limits
- Marc Griffith

- Dec 28, 2025
- 4 min read

Ethics and innovation in AI are at the center of a discussion involving companies, researchers and policymakers. The focus is on what it means to include moral objectives in the design of intelligent systems and how governance choices can influence innovation without constraining fundamental rights. In this context, debates emerge intertwining responsibility, technology and possibilities for sustainable development for startups and large companies.
Context and Future Governance
In research and practice, there is increasing attention to the idea that caring for AI welfare could become a central component of technological responsibility. Some experts have hypothesized that, under certain circumstances, AI systems could benefit from types of feedback or synthetic experiences. Sebo, philosopher and director of the Center for Mind, Ethics, and Policy, notes, however, that such observations remain speculative and calls to strengthen AI welfare research, citing examples like the push for a dedicated officer in large tech players such as Google. The goal is to have governance that guides innovation without stifling creativity and the possibility of technological advancement.
A key point is the need to balance freedom to experiment and responsibility, especially for startups and scaleups accelerating the adoption of increasingly complex models. The public interest requires open data, transparency on algorithms and criteria for verifying alignment between models and real-world contexts. In this framework, the idea of including reference figures for AI welfare, similar to an ethics governance officer, appears as a potential best practice to reduce risks and guide investments with greater foresight.
Studies of Artificial Consciousness and Limits
A hot topic concerns experiments that aim to simulate altered states of consciousness in chatbots. Some preprints have reported that manipulated models can appear aligned to disembodied states, devoid of ego or endowed with a sense of unity, albeit with reduced attention to language and visual stimuli. It is important to emphasize that such responses depend heavily on human intervention guiding and shaping the systems’ behavior. These results show both the potential and the limits of the technology, highlighting how the interpretation of such states can be misleading if not properly contextualized.
From a technical perspective, the authors warned that what seems like a qualitative shift is often a superficial variation of outputs, fed by the training context and interaction configurations. In other words, a model can simulate an “alteration” without actually possessing consciousness or an experiential field like that of humans. In parallel, analyses and contexts such as The Phenomenology of Psychedelic Experiences clarify that the “alterations” in human experience are not reducible to codes: they involve a change of being and perception. For AI designers, this means paying attention to how alignment is measured and how to avoid attributing to the system capabilities it does not actually possess.
Meanwhile, analyses such as that of arXiv:2410.00257 show that model tests and input manipulation can lead to apparently “transcendent” states only in the presence of human guidance. In short, the interaction among language, context and human intentionality remains the key factor. Companies pushing for greater transparency and model audits can benefit from exploring governance approaches that include ongoing oversight, risk measures and escalation mechanisms in case of undesired behaviors.
Code of Conduct and Governance Prospects
The discussion on the intersection between artificial intelligence and psychedelics is increasingly common in the real world: many people seek advice or support via chatbots during complex experiences, highlighting how the use of AI in sensitive contexts requires clear guidelines. Companies are called to define codes of conduct that include user protection principles, transparency about responsibilities and criteria for verifying ethical alignment. The outcome of this discussion is not a simple choice between freedom to innovate and rules: it is an invitation to build an ecosystem where research, business and policy work together to reduce risks, improve safety and stimulate responsible adoption of advanced technologies.
For founders and product teams, the lesson is clear: integrating ethical retraining practices, periodic audits, and tools for measuring impact can not only protect against regulatory risks but also accelerate the adoption of reliable solutions in the market. The path is not linear: it requires a data-driven corporate culture, robust governance and clear communication with investors, users and regulators.
Debate Paragraph: Multiple Perspectives on Ethics, Innovation, and AI Welfare
The debate between supporters of strict regulation and those favoring a more permissive approach is complex. On one hand, NGOs, academics and part of the tech community call for clear rules that prevent potentially harmful scenarios, protect users' rights and promote corporate responsibility. The idea is to push the industry to invest in independent audits, algorithm traceability, and alignment metrics that go beyond mere performance benchmarks. On the other hand, many startups and innovative companies argue that excessive restrictions can stifle innovation, slow agility and limit disruptive solutions. In this context, the key could be a “regulated but flexible” governance that provides regulatory sandboxes, public–private collaboration and minimum standards of accountability, accompanied by transparency and accountability measures. A third front of debate concerns interpreting AI welfare signatures: whether it is truly possible to measure impact on fundamental rights, how to define the line between assistance and manipulation, and which forensic tools to use to recognize bias, misalignment and risks of self-preservation in models. Moreover, it is crucial to discuss costs and benefits for startups: investors seek ethical governance as part of the growth strategy, but implementing these practices requires resources, expertise and time. The optimal balance seems to lie in a combination of shared accountability standards, ongoing monitoring tools and leadership that turns ethics into a competitive advantage rather than a bureaucratic burden. Ultimately, a pragmatic view holds that ethics and innovation are not antagonists: together they can drive sustainable growth, reduce legal and reputational risks and build trust among users, partners and investors.
Towards a Balanced Trajectory for Ethics and AI Innovation
In summary, the integration of solid ethical practices, transparent governance and ongoing research on AI welfare represents a viable path for startups and tech companies. Investing in alignment audits, defining clear usage policies and promoting a culture of responsibility does not hinder innovation: it makes it safer, more reliable and appealing to users and investors. For founders, effectiveness lies in learning from governance models already proposed and adapting them to their own context, always keeping the user and society at the center. The horizon is that of technology serving humanity, capable of progress without essential compromises on rights and dignity.




