Meta Platforms announced on Tuesday that it will give researchers access to components of a new “human-like” artificial intelligence model that it claims can analyse and complete unfinished images more accurately than existing models.
The I-JEPA model, according to the company, uses background knowledge about the world to fill in missing pieces of images rather than looking only at nearby pixels as other generative AI models do.
This approach incorporates the type of human-like reasoning advocated by Meta’s top AI scientist, Yann LeCun, and helps the technology avoid common AI-generated image errors, such as hands with extra fingers, according to the company.
Meta, the parent company of Facebook and Instagram, is a frequent publisher of open-sourced AI research through its in-house research lab.
According to CEO Mark Zuckerberg, sharing models developed by Meta’s researchers can help the company by spurring innovation, identifying safety gaps, and lowering costs.
“For us, it’s way better if the industry standardises on the basic tools that we’re using, and therefore we can benefit from the improvements that others make,” he told investors in April.
The company’s executives have dismissed industry warnings about the technology’s potential dangers, declining to sign a statement last month backed by top executives from OpenAI, DeepMind, Microsoft, and Google that equated its risks with pandemics and wars.
LeCun, one of the “godfathers of AI,” has railed against “AI doomerism” and argued in favour of incorporating safety checks into AI systems.
Meta is also incorporating generative AI features into its consumer products, such as ad tools that can generate image backgrounds and an Instagram product that can modify user photos based on text prompts.