Avocado: Closed AI Models and the Race to Superintelligence
- Marc Griffith

- Dec 14, 2025
- 3 min read

The rapid transformation of AI is drawing attention to closed AI models, in which access, training, and use are controlled by the owning company. Against this backdrop, Meta is advancing with Avocado, an internal model expected by spring 2026. It is a proprietary system, not intended for external developers, designed to ensure safety, compliance, and centralized control over innovation. The move comes at a time when open releases have shown limitations in security and distribution, and when the company is seeking to preserve its competitiveness in the global landscape.
Avocado marks a significant turning point: it is positioned as a closed model, available only within the Meta ecosystem. The decision was also driven by the perceived failure of Llama 4 in early 2025, which prompted a rethink of the value of a controlled system. According to CNBC, the push toward closed AI models stems from concern that open technologies could be incorporated into projects outside the original ecosystem, undermining the company's ability to enforce safety and data standards.
Closed AI models: a new development paradigm
With Avocado, Meta aims to build a centralized AI ecosystem in which the model is neither available for public download nor freely modifiable. This lets the company update and manage its models in a controlled manner, maintaining high safety standards and reducing the risk of misuse. Keeping the model proprietary also helps protect intellectual property and allows tighter governance over data and the interfaces through which the model is used.
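To make the contrast concrete, here is a minimal sketch of what the closed-model pattern looks like from a developer's side. Everything in it is hypothetical: Meta has announced no Avocado API, so the endpoint, key, and payload shape below are illustrative stand-ins, not a real interface. The structural point is what matters: there are no weights to download, and every call passes through an access-controlled endpoint the owner can update, monitor, or revoke.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical values: Meta has published no Avocado API.
# With a closed model there are no public weights; all access is mediated
# by a hosted endpoint whose keys the owner can rotate or revoke at will.
API_URL = "https://api.example-provider.com/v1/generate"
API_KEY = "replace-with-issued-key"

def generate(prompt: str, max_tokens: int = 256) -> str:
    """Send a prompt to the (hypothetical) hosted closed model and return its text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    response.raise_for_status()  # surfaces auth or quota limits imposed by the owner
    return response.json()["text"]
```

Compare this with the open-weights pattern, where the model file itself is downloaded and run locally, beyond the owner's reach once released.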
Why Meta is betting on Avocado
The choice of closed AI models serves goals of security, IP control, and regulatory foresight. The Avocado effort involves TBD Lab, a subgroup of Meta Superintelligence Labs led by Alexandr Wang, an advocate of stricter model governance and centralized development. Meta has also recently signaled that it will shift resources from legacy projects such as the Metaverse toward advanced AI, concentrating on tools of strategic value to its corporate ecosystem.
This trend runs against the traditional open AI narrative: open models have spurred rapid innovation, but they have also raised security, IP, and compliance challenges. The impression is that large companies are moving toward hybrid or fully closed models, entrusting AI management to internal governance structures capable of balancing innovation, accountability, and stakeholder interests.
Implications for startups and investors: the horizon of closed AI models
For startups and investors, the rise of closed AI models calls for a reassessment of development and collaboration strategies. Companies may lean toward solutions in which proprietary data and training pipelines remain within a controlled ecosystem, while partnership opportunities open up with large players that manage AI resources centrally. Adopting closed models could promote stricter safety standards and tighter control over sensitive data flows, with implications for regulatory compliance and privacy management, especially in regulated markets.
Ethical perspectives and debate
The discussion around closed AI models is not without controversy. On one side stands the case for safety, stability, and data protection; on the other, criticism that closed ecosystems limit innovation and shut out developers and startups. Proponents argue that internal governance can prevent abuse, reduce the risk of harmful content spreading, and deliver more reliable results. Critics counter that closed AI models could slow the broader innovation ecosystem, create dependence on a few players, and hinder research transparency. Either way, regulation and governance will be decisive in shaping the future of AI.
In practice, companies like Meta are trying to balance innovation with the need for control, safety, and compliance. At the same time, the startup and venture capital landscape will need to innovate around hybrid models: APIs for integration, data management tools, and platforms that offer governance without sacrificing development speed and experimentation (see the sketch below). In this context, closing a model like Avocado is not necessarily an obstacle to innovation; it could instead reshape the dynamics for those investing in AI, encouraging partnerships with large players able to provide advanced infrastructure and security standards.
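As a sketch of what such a governance layer might look like in code, the snippet below wraps a model call with prompt redaction and a hashed audit log. It is purely illustrative: the redaction rule, the logger name, and the stubbed model call are assumptions for the example, not any vendor's actual tooling.

```python
import hashlib
import logging
import re

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-governance")  # hypothetical logger name

# Deliberately naive: a real deployment would use a proper PII classifier.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text: str) -> str:
    """Mask obvious PII (here, just email addresses) before a prompt leaves the org."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

def call_model(prompt: str) -> str:
    """Stub standing in for the hosted closed-model call sketched earlier."""
    return f"<model response to {len(prompt)} chars>"

def governed_generate(prompt: str) -> str:
    safe_prompt = redact(prompt)
    # Audit trail: record a hash of the outbound prompt, never its raw content.
    digest = hashlib.sha256(safe_prompt.encode("utf-8")).hexdigest()
    audit_log.info("outbound prompt sha256=%s", digest)
    return call_model(safe_prompt)

print(governed_generate("Summarize the contract signed by jane.doe@example.com"))
```

The design choice here is the one the hybrid-model argument turns on: governance lives in a thin wrapper the startup controls, so experimentation speed is preserved while the closed provider never sees raw sensitive data.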
Concrete cases and market signals
Terms like Avocado and closed AI models are becoming useful indicators of where the AI sector is heading. Reporting on Meta, its internal teams, and its strategic choices points to a trend toward governance, security, and the protection of intellectual property. Even though Llama 4 served as a reference point in open-source debates, the current direction seems to reward centralized management of AI resources, with tangible implications for investment, technology alliances, and market-focused project planning.
Conclusion: between control and innovation
The trajectory of closed AI models as indicated by Avocado reflects a persistent tension between openness and control. For startups and innovators, it means asking which AI models offer the best balance of speed, safety, and competitive value. The next phase of AI will likely be defined by more sophisticated governance and targeted cooperation between large companies and new innovative players, where quality and responsible use will be at the heart of decisions. Ultimately, closed AI models could redefine how projects and investments are planned in the coming years.