Today, an excellent piece of investigative journalism from The New Yorker.
Anyone remember those brief few days in 2023 when Sam Altman was fired by the board of OpenAI? Of course you do. But now, more than two years later, we finally get an inside look at what the hell that was all about. Months of investigation have revealed in extreme detail just what was going on behind closed doors. If you're at all interested in these kinds of companies, then this is well worth your time.
In the most compact form possible, this amounts to: "Sam Altman isn't a paedophile, but he's definitely a lying scumbag". As far as character assessments go, this is devastating. With regard to OpenAI the company, the picture is considerably more nuanced, especially in terms of the actual researchers involved (Altman himself is only a businessman). To their great credit, the authors interview everyone involved, including Altman on multiple occasions. This is by no means a hit piece, but what emerges is abundantly clear. Virtually all those involved say that Altman is an inveterate liar, and while Altman may rightly dispute the interpretation of some recollected conversations, the idea that everyone but him is misremembering beggars belief.
In PDF form this runs to 39 pages. No doubt other media outlets will produce more distilled versions, but in the meantime, what follows below is my own 2,200-word summary in pure quotation form. To keep the narrative coherent, some quotes are rearranged from their original sequence, but no paraphrasing is used. Highlights are my own.
But before that, two comments from my own perspective. First, the main concerns of the other employees and board members revolved around AI safety, especially whether AI would be aligned with human values and suchlike. I personally believe that these concerns are – provisionally – not sensible. I've said many times that while we can legitimately call LLMs intelligent in a carefully defined way, I don't believe they show the merest glimmer of consciousness. Safety in the sense of "killer robots will be the death of us all" is, I believe, complete hogwash; LLMs have absolutely no will of their own, and if putting them in certain situations is a bad idea, that is entirely the result of human error: we can simply choose not to do that, and nothing stops us from pulling the plug. I said almost from the very start that the whole "this is too dangerous" thing was a genius bit of marketing spin, not a genuine concern on Altman's part.
The proviso I make here is that safety and alignment may still matter in the lesser sense of AI not behaving as expected. This certainly does pose problems, and it clearly is important to have AI behave in a basically predictable way, as a useful tool rather than an independent entity. But given the lack of any consciousness, I don't worry at all about a rogue AI deliberately trying to murder everyone of its own volition, which is what many of the top researchers are apparently genuinely concerned about.
What matters here in terms of Altman's character is, then, not who's right about safety, but what he said he'd do about it. He lied about this over and over again, and while absolutely everyone tells lies from time to time, the degree to which Altman does this, routinely, is inexcusable. He's also a bullshit artist. It's fine to change your mind, but you should acknowledge that you've done so and explain how and why. Altman simply doesn't bother.
This leads me to my second point: people are being awfully naïve about this. The subreddits exploding with people saying they were ditching OpenAI for Anthropic never made any sense; the idea that the US government's bunch of deranged fascist fuckwits wouldn't be using LLMs was beyond belief, never mind that Anthropic were already deeply in bed with the US military (the article goes into Anthropic's misbehaviour as well, though to a lesser extent). Trying to lionise Anthropic for pushing back on a couple of points is, in my view, a classic example of naivety curving back on itself in an ouroboros of cynicism. And again, pretending that a CEO will never lie is an absurdity, but nor should "CEOs lie" be the take-home message of this piece. Rather it should be the more realistic and far more important point that Altman lies so much you can never trust him about anything. That's what should concern everyone, not the non-story that "CEO is a scumbag". That's par for the course.
That's more than enough from me. On, then, to the summary.
In a tense call after Altman’s firing, the board pressed him to acknowledge a pattern of deception. “This is just so fucked up,” he said repeatedly, according to people on the call. “I can’t change my personality.” Altman says that he doesn’t recall the exchange. “It’s possible I meant something like ‘I do try to be a unifying force,’ ” he told us, adding that this trait had enabled him to lead an immensely successful company. He attributed the criticism to a tendency, especially early in his career, “to be too much of a conflict avoider.” But a board member offered a different interpretation of his statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’ ” Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn’t be trusted?
Most of Altman’s employees at Loopt liked him, but some said that they were struck by his tendency to exaggerate, even about trivial things. One recalled Altman bragging widely that he was a champion Ping-Pong player—“like, Missouri high-school Ping-Pong champ”—and then proving to be one of the worst players in the office. (Altman says that he was probably joking.) As Mark Jacobstein, an older Loopt employee who was asked by investors to act as Altman’s “babysitter,” later told Keach Hagey, for “The Optimist,” a biography of Altman, “There’s a blurring between ‘I think I can maybe accomplish this thing’ and ‘I have already accomplished this thing’ that in its most toxic form leads to Theranos,” Elizabeth Holmes’s fraudulent startup.
If everything went right, the OpenAI founders believed, artificial intelligence could usher in a post-scarcity utopia, automating grunt work, curing cancer, and liberating people to enjoy lives of leisure and abundance. But if the technology went rogue, or fell into the wrong hands, the devastation could be total. China could use it to build a novel bioweapon or a fleet of advanced drones; an A.I. model could outmaneuver its overseers, replicating itself on secret servers so that it couldn’t be turned off; in extreme cases, it might seize control of the energy grid, the stock market, or the nuclear arsenal.
Not everyone believed this, to say the least, but Altman repeatedly affirmed that he did. He wrote on his blog in 2015 that superhuman machine intelligence “does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal . . . wipes us out.” OpenAI’s founders vowed not to privilege speed over safety, and the organization’s articles of incorporation made benefitting humanity a legally binding duty. If A.I. was going to be the most powerful technology in history, it followed that any individual with sole control over it stood to become uniquely powerful—a scenario that the founders referred to as an “AGI dictatorship.”
By September, 2017, though, Musk had grown impatient. During discussions about whether to reconstitute OpenAI as a for-profit company, he demanded majority control. Altman’s replies varied depending on the context. His main consistent demand seems to have been that if OpenAI were reorganized under the control of a C.E.O. that job should go to him. Sutskever seemed uncomfortable with this idea. He sent Musk and Altman a long, plaintive e-mail on behalf of himself and Brockman, with the subject line “Honest Thoughts.” He wrote, “The goal of OpenAI is to make the future good and to avoid an AGI dictatorship.” He continued, addressing Musk, “So it is a bad idea to create a structure where you could become a dictator.” He relayed similar concerns to Altman: “We don’t understand why the CEO title is so important to you. Your stated reasons have changed, and it’s hard to really understand what’s driving it.”
By 2018, Amodei had started questioning the founders’ motives more openly. “Everything was a rotating set of schemes to raise money,” he later wrote in his notes. In early 2018, Amodei has said, he started drafting a charter for the company and, in weeks of conversations with Altman and Brockman, advocated for its most radical clause: if a “value-aligned, safety-conscious project” came close to building an A.G.I. before OpenAI did, the company would “stop competing with and start assisting this project.” According to the “merge and assist” clause, as it was called, if, say, Google’s researchers figured out how to build a safe A.G.I. first, then OpenAI could wind itself down and donate its resources to Google. By any normal corporate logic, this was an insane thing to promise. But OpenAI was not supposed to be a normal company.
That premise was tested in the spring of 2019, when OpenAI was negotiating a billion-dollar investment from Microsoft. Although Amodei, who was leading the company’s safety team, had helped to pitch the deal to Bill Gates, many people on the team were anxious about it, fearing that Microsoft would insert provisions that overrode OpenAI’s ethical commitments. Amodei presented Altman with a ranked list of safety demands, placing the preservation of the merge-and-assist clause at the very top. Altman agreed to that demand, but in June, as the deal was closing, Amodei discovered that a provision granting Microsoft the power to block OpenAI from any mergers had been added. “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.)
In the course of several meetings in the spring of 2023, Altman seemed to waver. He stopped talking about endowing a prize. Instead, he advocated for establishing an in-house “superalignment team.” An official announcement, referring to the company’s reserves of computing power, pledged that the team would get “20% of the compute we’ve secured to date”—a resource potentially worth more than a billion dollars. The effort was necessary, according to the announcement, because, if alignment remained unsolved, A.G.I. might “lead to the disempowerment of humanity or even human extinction.”
The twenty-per-cent commitment evaporated, however. Four people who worked on or closely with the team said that the actual resources were between one and two per cent of the company’s compute. Furthermore, a researcher on the team said, “most of the superalignment compute was actually on the oldest cluster with the worst chips.” The researchers believed that superior hardware was being reserved for profit-generating activities. (OpenAI disputes this.) Leike complained to Murati, then the company’s chief technology officer, but she told him to stop pressing the point—the commitment had never been realistic... the superalignment team was dissolved the following year, without completing its mission.
By then, internal messages show, executives and board members had come to believe that Altman’s omissions and deceptions might have ramifications for the safety of OpenAI’s products. In a meeting in December, 2022, Altman assured board members that a variety of features in a forthcoming model, GPT-4, had been approved by a safety panel. Toner, the board member and A.I.-policy expert, requested documentation. She learned that the most controversial features—one that allowed users to “fine-tune” the model for specific tasks, and another that deployed it as a personal assistant—had not been approved.
Last June, on his personal blog, Altman wrote, referring to artificial superintelligence, “We are past the event horizon; the takeoff has started.” This was, according to the charter, arguably the moment when OpenAI might stop competing with other companies and start working with them. But in that post, called “The Gentle Singularity,” he adopted a new tone, replacing existential terror with ebullient optimism. “We’ll all get better stuff,” he wrote. “We will build ever-more-wonderful things for each other.” He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram.
Some people defended Altman’s business acumen and dismissed his rivals, especially Sutskever and Amodei, as failed aspirants to his throne. Others portrayed them as gullible, absent-minded scientists, or as hysterical “doomers,” gripped by the delusion that the software they were building would somehow come alive and kill them. Yoon, the former board member, argued that Altman was “not this Machiavellian villain” but merely, to the point of “fecklessness,” able to convince himself of the shifting realities of his sales pitches. “He’s too caught up in his own self-belief,” she said. “So he does things that, if you live in the real world, make no sense. But he doesn’t live in the real world.”
Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”
Six people close to the inquiry alleged that it seemed designed to limit transparency. Some of them said that the investigators initially did not contact important figures at the company. Others were uncomfortable sharing concerns about Altman because they felt there was not a sufficient effort to insure anonymity. “Everything pointed to the fact that they wanted to find the outcome, which is to acquit him,” the employee said.
Given OpenAI’s 501(c)(3) status and the high-profile nature of the firing, many executives there expected to see extensive findings. In March, 2024, however, OpenAI announced that it would clear Altman but released no report. The company provided, on its website, some eight hundred words acknowledging a “breakdown in trust.” People involved in the investigation said that no report was released because none was written. Instead, the findings were limited to oral briefings.
Many former and current OpenAI employees told us that they were shocked by the lack of disclosure. Altman said he believed that all the board members who joined in the aftermath of his reinstatement received the oral briefings. “That’s an absolute, outright lie,” a person with direct knowledge of the situation said. Some board members told us that ongoing questions about the integrity of the report could prompt, as one put it, “a need for another investigation.”
In a meeting with U.S. intelligence officials in the summer of 2017, he [Altman] claimed that China had launched an “A.G.I. Manhattan Project,” and that OpenAI needed billions of dollars of government funding to keep pace. When pressed for evidence, Altman said, “I’ve heard things.” It was the first of several meetings in which he made the claim. After one of them, he told an intelligence official that he would follow up with evidence. He never did.
“My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”
Altman is not a technical savant—according to many in his orbit, he lacks extensive expertise in coding or machine learning. Multiple engineers recalled him misusing or confusing basic technical terms. He built OpenAI, in large part, by harnessing other people’s money and technical talent. This doesn’t make him unique. It makes him a businessman. More remarkable is his ability to convince skittish engineers, investors, and a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities. When such people have tried to hinder his next move, he has often found the words to neutralize them, at least temporarily; usually, by the time they lose patience with him, he’s got what he needs. “He sets up structures that, on paper, constrain him in the future,” Wainwright, the former OpenAI researcher, said. “But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.”
Even people close to Altman find it difficult to know where his “hope for humanity” ends and his ambition begins. His greatest strength has always been his ability to convince disparate groups that what he wants and what they need are one and the same. He made use of a unique historical juncture, when the public was wary of tech-industry hype and most of the researchers capable of building A.G.I. were terrified of bringing it into existence. Altman responded with a move that no other pitchman had perfected: he used apocalyptic rhetoric to explain how A.G.I. could destroy us all—and why, therefore, he should be the one to build it. Maybe this was a premeditated masterstroke. Maybe he was fumbling for an advantage. Either way, it worked.