OpenAI Empowers Board to Halt AI Model Release Despite Leadership Approval

OpenAI, the artificial intelligence (AI) company, has outlined a safety plan that grants its board the authority to delay the release of an AI model even if the company's leadership deems it safe, underscoring its effort to strengthen safeguards around advanced AI technology. The guidelines, released on December 18, detail how OpenAI plans to address extreme risks posed by its most powerful AI systems. They come after a period of internal turmoil at the company and highlight the balance of power between the board and the C-suite.

OpenAI, backed by Microsoft, will deploy its latest technology only if it is deemed safe in specific risk areas such as cybersecurity and nuclear threats. The company is also establishing an advisory group to review safety reports and send them to its executives and board. Executives will make the decisions, but the board retains the authority to overturn them.

Since the launch of ChatGPT a year ago, concerns about the potential dangers of AI, such as the spread of disinformation and the manipulation of humans, have been prominent. OpenAI's newly formed "preparedness" team will continuously assess its AI systems across risk categories including cybersecurity and chemical, biological, and nuclear threats, with the aim of addressing any hazards the technology poses.

Aleksander Madry, who leads the preparedness group, said the team will send monthly reports to a new internal safety advisory group. That group will analyze the reports and make recommendations to CEO Sam Altman and the board. Altman and his leadership team can act on those recommendations, but the board can reverse their decisions.

OpenAI hopes other companies will use its guidelines to evaluate potential risks from their own AI models. The guidelines formalize processes the company has previously followed when assessing AI technology it has already released. Madry and his team worked out the details over the past couple of months, gathering feedback from others within OpenAI.

The push for safety measures follows calls earlier this year from industry leaders and experts for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. A Reuters/Ipsos poll in May found that more than two-thirds of Americans are concerned about the possible negative effects of AI, and 61% believe it could threaten civilization.
