OpenAI has not lived up to the “open” in its name by open-sourcing any of its new models, but the company has spent this week explaining more about how it approaches AI, what problems the technology enhances or enables (like disinformation and deepfakes), and its future plans.
Today, it unveiled the Model Spec, a framework document designed to shape the behavior of AI models used within the OpenAI Application Programming Interface (API) and ChatGPT, and it is seeking public feedback through a web form on its website, open until May 22.
As OpenAI co-founder and CEO Sam Altman posted about it on X: “We’ll listen to it, debate it, and adapt it over time, but I think it will be very helpful to be clear when something is a bug vs. a decision.”
Why is OpenAI releasing a model spec?
OpenAI says issuing this working document is part of its broader mission to ensure that AI technologies work in ways that are beneficial and safe for all users.
Of course, this is easier said than done, and doing so leads into the realm of long unresolved philosophical debates about technology, intelligent systems, computing, tools, and society in general.
As OpenAI writes in its blog post announcing the Model Spec:
“Even if a model is intended to be broadly beneficial and helpful to users, these intentions may conflict in practice. For example, a security company may want to generate phishing emails as synthetic data to train and develop classifiers that will protect its users, but the same capability is harmful if used by scammers.”
By sharing this first draft, OpenAI wants the public to engage in a deeper conversation about the ethical and practical issues involved in AI development. Users can submit their comments through the Model Spec feedback form on OpenAI’s website for the next two weeks.
After that, OpenAI says that within the next year it will share “updates about changes to the Model Spec, our response to feedback, and how our research in shaping model behavior is progressing.”
Although OpenAI announced the Model Spec in its blog post today, it doesn’t explain exactly how the document affects the behavior of its AI models (whether, for example, some of the rules written in the Model Spec are included in the “system prompt” or “pre-prompt” used to align a model before it is released to the public), but it is safe to assume it has a significant influence.
In some ways, the Model Spec seems to me to be similar to rival Anthropic’s “constitutional AI” approach to AI development, a major differentiator initially but one that hasn’t been heavily emphasized by the latter company in some time.
A framework for AI behavior
The Model Spec is structured around three main components: objectives, rules, and default behaviors. These elements serve as the backbone guiding the AI model’s interactions with human users, ensuring that those interactions are not only effective but also adhere to ethical standards.
- Objectives: The document sets out broad, general principles intended to help developers and end users alike. These include helping users achieve their goals effectively, considering the potential impact on a range of stakeholders, and upholding OpenAI’s commitment to reflect well on the community.
- Rules: To navigate the complex landscape of AI interactions, the Model Spec establishes clear rules. These mandate compliance with applicable laws, respect for intellectual property, protection of privacy, and a prohibition on generating not-safe-for-work (NSFW) content.
- Default behaviors: The guidelines emphasize the importance of assuming good intentions, asking clarifying questions when needed, and being as helpful as possible without overstepping. These defaults are designed to balance the differing needs and use cases of different users.
Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School of Business, has likened the Model Spec’s influence to the fictional “Three Laws of Robotics” coined by sci-fi author Isaac Asimov back in 1942.
Others took issue with how OpenAI’s Model Spec currently shapes the behavior of ChatGPT and its other AI models. As tech writer Andrew Kern pointed out on X, an OpenAI example included in the Model Spec shows a hypothetical “AI assistant” that doesn’t challenge a user’s false claim that the Earth is flat.
Continuous engagement and development
OpenAI recognizes that the model spec is an evolving document. It is not only a reflection of the organization’s current practices, but also a dynamic framework that will adapt based on ongoing research and community feedback.
This consultative approach aims to bring together diverse perspectives, particularly from global stakeholders such as policy makers, trusted institutions and domain experts.
The feedback received will contribute to improving the model spec and shaping the development of future AI models.
OpenAI intends to keep the public updated with changes and insights gained from this feedback loop, reinforcing its commitment to the development of responsible AI.
Where to go from here?
By clearly defining guidelines for how its AI models should behave, and by seeking continuous input from the global community, OpenAI aims to foster an environment where AI can thrive as a positive force in society, even as the company faces lawsuits and criticism from artists for training its models on their work without consent.
Credit: venturebeat.com