AI’s humanity takeover? Not on this professor’s watch

OpenAI hires Carnegie Mellon varsity’s Zico Kolter to lead panel with power to halt release of unsafe systems

Photo for representational purpose only. iStock

If you believe artificial intelligence poses grave risks to humanity, then a professor at Carnegie Mellon University has one of the most important roles in the tech industry right now.

Zico Kolter leads a 4-person panel at OpenAI that has the authority to halt the ChatGPT maker’s release of new AI systems if it finds them unsafe. That could be technology so powerful that an evildoer could use it to make weapons of mass destruction. It could also be a new chatbot so poorly designed that it will hurt people’s mental health.

“Very much we’re not just talking about existential concerns here,” Kolter said in an interview. “We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems.”

OpenAI tapped the computer scientist to be chair of its Safety and Security Committee more than a year ago, but the position took on heightened significance last week when California and Delaware regulators made Kolter’s oversight a key part of their agreements to allow OpenAI to form a new business structure to more easily raise capital and make a profit.

Safety has been central to OpenAI’s mission since it was founded as a nonprofit research laboratory a decade ago with a goal of building better-than-human AI that benefits humanity. But after its release of ChatGPT sparked a global AI commercial boom, the company has been accused of rushing products to market before they were fully safe in order to stay at the front of the race.

Internal divisions that led to the temporary ouster of CEO Sam Altman in 2023 brought to a wider audience concerns that the company had strayed from its mission. The San Francisco-based organisation faced pushback — including a lawsuit from co-founder Elon Musk — when it began steps to convert itself into a more traditional for-profit company to continue advancing its technology.

Kolter will sit on the nonprofit’s board but not on the for-profit board. He will, however, have “full observation rights” to attend all for-profit board meetings and access to the information the for-profit board receives about AI safety decisions.
