AI’s humanity takeover? Not on this professor’s watch
If you believe artificial intelligence poses grave risks to humanity, then a professor at Carnegie Mellon University has one of the most important roles in the tech industry right now.
Zico Kolter leads a four-person panel at OpenAI that has the authority to halt the ChatGPT maker's release of new AI systems if it finds them unsafe. That could be technology so powerful that an evildoer could use it to make weapons of mass destruction. It could also be a new chatbot so poorly designed that it will hurt people's mental health.

"Very much we're not just talking about existential concerns here," Kolter said in an interview. "We're talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems."

OpenAI tapped the computer scientist to be chair of its Safety and Security Committee more than a year ago, but the position took on heightened significance last week when California and Delaware regulators made Kolter's oversight a key part of their agreements allowing OpenAI to form a new business structure to more easily raise capital and make a profit.
Safety has been central to OpenAI's mission since it was founded as a nonprofit research laboratory a decade ago with a goal of building better-than-human AI that benefits humanity. But after its release of ChatGPT sparked a global AI commercial boom, the company has been accused of rushing products to market before they were fully safe in order to stay at the front of the race.
Internal divisions that led to the temporary ouster of CEO Sam Altman in 2023 brought to a wider audience concerns that the company had strayed from its mission. The San Francisco-based organisation faced pushback, including a lawsuit from co-founder Elon Musk, when it began taking steps to convert itself into a more traditional for-profit company to continue advancing its technology.
Kolter will be a member of the nonprofit's board but not the for-profit board. However, he will have "full observation rights" to attend all for-profit board meetings and to access the information that board receives about AI safety decisions.