When Emmett Shear, the former chief executive of the livestreaming site Twitch, was named the interim chief executive of OpenAI on Sunday night, it might have seemed a curious choice.

After graduating from college in 2005, he spent nearly his entire career at Twitch, the Amazon-owned platform popular among video gamers, as it grew from a fledgling site called Justin.tv to a behemoth with more than 30 million daily viewers, before leaving earlier this year.

Mr. Shear, 40, an avid video game player, was viewed as a competent leader who steered Twitch through several transitions. But he faced criticism, including over his handling of claims in 2020 that Twitch’s workplace culture was hostile toward women, and over the site’s slowness in responding to harmful content. Some employees and livestreamers also complained that his focus on maneuvering Twitch toward profitability through cutting costs was eroding the quality of the platform.

He also knows Sam Altman, who was forced out of OpenAI by its board of directors on Friday. The two were in the same group at Y Combinator, the start-up fund that invested in both of their early companies.

But in interviews and on social media, Mr. Shear has articulated a view about the risks of artificial intelligence that could appeal to the board members of OpenAI, who pushed out Mr. Altman at least in part over their fears that he was not paying enough attention to the potential threat posed by the company’s technology.

Appearing on a technology podcast in June, Mr. Shear voiced concerns about what could happen if and when A.I. reaches artificial general intelligence, or A.G.I., a term for human-level intelligence. He worried that at such a point, an A.I. system could become so powerful that it could continue to improve itself without the need for human input, and would have the capacity to destroy humanity.

Mr. Shear could not be immediately reached for comment on Monday. In a post on X, the platform formerly known as Twitter, early Monday morning, he wrote that he would spend the first month of his tenure investigating how Mr. Altman had been pushed out and reforming the company’s management team.

“Depending on the results of everything we learn from these, I will drive changes in the organization — up to and including pushing strongly for significant governance changes if necessary,” he said.

On the podcast, Mr. Shear brought up a thought experiment, often discussed in A.I. circles, that focuses on paper clips: In short, the idea is that even giving an all-powerful A.I. as mundane a goal as making as many paper clips as possible would lead it to determine that eradicating humans would be the most efficient way to accomplish that goal.

“Step one is, ‘take over the planet,’ right? Then I just have control over everything. Step two is ‘I solve my goal,’” he said.

If A.I. gets to that point, Mr. Shear said, the potential catastrophe would be like a “universe-destroying bomb.”

“It’s not just human-level extinction; extincting humans is bad enough,” he said. “It’s like, potential destruction of all value in the light cone. Not just for us, but for any alien species caught in the wake of the explosion.”

Mr. Shear said he was not as worried as some A.I. theorists about this type of world-ending event: partly because he did not think the current A.I. technology was close to such a breakthrough, and partly because he thought it might be possible to ensure A.I. systems’ goals were aligned with those of humans. But he still embraced industry safeguards.

“I’m in favor of creating some kind of fire alarm, like maybe, ‘Not A.I.s bigger than x,’” he said. “I think there’s good options for international collaboration and treaties about some sort of A.I. test ban treaty.”

In social media posts on X, Mr. Shear has reinforced those points, referring to himself as a “doomer” and suggesting that companies should tap the brakes on their technological advancements.

“I’m in favor of a slowdown,” he replied to another user in September. “We can’t learn how to build a safe AI without experimenting, and we can’t experiment without progress, but we probably shouldn’t be barreling ahead at max speed either.”
