Meta Platforms Inc, which owns Facebook, is opening up access to a broad language model for artificial intelligence research, the social media company said on Tuesday.
Meta said its model was the first 175-billion-parameter language model to be made available to the wider AI research community.
“Large language models” are natural language processing systems that are trained on massive volumes of text, and are able to answer reading comprehension questions or generate new text.
In a blog post, Meta said the release of its “Open Pretrained Transformer (OPT-175B)” model would improve researchers’ abilities to understand how large language models work.
Meta said restrictions on access to such models had “hampered progress in efforts to improve their robustness and mitigate known issues such as bias and toxicity.”
Artificial intelligence technology, which is a key area of research and development for several major online platforms, can perpetuate human societal biases around issues like race and gender. Some researchers worry about the harms that can spread through large language models.
Meta said it “hopes to increase the diversity of voices defining the ethical considerations of these technologies.”
The tech giant said that, to prevent abuse and “maintain integrity,” it is releasing the model under a non-commercial license to focus on research use cases.
Meta said access to the model would be granted to academic researchers and those affiliated with government, civil society and academic organizations, as well as industry research labs. The release will include the pre-trained models and the code to train and use them.