Matt Levine - OpenAI: Nonprofit governance


Because of the seriousness of the entire current controversy regarding the evolution of AI, I've reproduced here an unedited commentary from Matt Levine, of Bloomberg Opinion. His examination of the OpenAI situation is fascinating in many respects, especially regarding the power and influence of money and profit vs. the restrictions required for serious safety.

OpenAI: Nonprofit governance

One way to look at the OpenAI situation is that OpenAI is a nonprofit organization, and it is not that uncommon for nonprofits to have tension between their mission and their staff.

This is arguably a silly way to look at the situation, because, for a few years ending last Friday, nobody really thought of OpenAI as a nonprofit. OpenAI was an $86 billion tech startup that was building artificial intelligence tools that were expected to result in huge profits for its investors (Microsoft Corp., venture capital firms) and employees (many of whom owned stock). But technically that OpenAI — OpenAI Global LLC, the $86 billion startup with employee and VC and strategic shareholders — was a subsidiary controlled by the nonprofit, OpenAI Inc., and the nonprofit asserted itself dramatically last Friday when its board of directors fired its chief executive officer, Sam Altman, and threw everything into chaos.

But for a moment ignore all of that and just think about OpenAI Inc., the 501(c)(3) public charity, with a mission of “building safe and beneficial artificial general intelligence for the benefit of humanity.” Like any nonprofit, it has a mission that is described in its governing documents, and a board of directors who supervise the nonprofit to make sure it is pursuing that mission, and a staff that it hires to achieve the mission. The staff answers to the board, and the board answers to … no one? Their own consciences? There are no shareholders; the board’s main duties are to the mission.

Often, as a general matter, a nonprofit’s staff will be more committed to the mission than the board is. This just makes sense: The staff generally works full-time at the nonprofit, doing its mission all day; the directors are normally fancy outsiders with other jobs who just show up for occasional board meetings. Of course the staff cares more than the board does.

But it isn’t always quite that simple. Because the staff works full-time at the nonprofit, they will care much more about the practical conditions of the job than the board will. The board is disinterested and comfortable and can care entirely about the abstract mission of the nonprofit; the staff members have to pay rent and student loans. And so sometimes there will be a conflict between the mission of the nonprofit and the conditions of the job, and the staff will prefer better working conditions while the board will prefer the mission.[1]

So a charity to feed the homeless might have to decide whether to spend a marginal dollar of donations on food for the homeless or higher salaries for the staff. It is not obvious that the staff will prefer higher salaries while the board will prefer feeding more clients, but it is possible; really it is a pretty standard story of agency costs, and the board’s role is to manage those costs. Or last year Ryan Grim wrote about conflicts within progressive advocacy groups after the killing of George Floyd: “In the eyes of group leaders ... staff were ignoring the mission and focusing only on themselves, using a moment of public awakening to smuggle through standard grievances cloaked in the language of social justice,” while the staff “believed [that] managers exploited the moral commitment staff felt toward their mission, allowing workplace abuses to go unchecked.”

OpenAI is a very strange nonprofit! Its stated mission is “building safe and beneficial artificial general intelligence for the benefit of humanity,” but in the unavoidably sci-fi world of artificial intelligence developers, that mission has a bit of a flavor of “building artificial intelligence very very carefully and being ready to shut it down at any point if it looks likely to go rogue and kill all of humanity.” The mission is “build AI, but not too much of it, or too quickly, or too commercially.” As of last week, it had a board with six members, three of whom (including Altman) worked at OpenAI and three of whom did not.

And it is easy to see how the board’s view of the mission could conflict with the staff’s views of their jobs. Like, you are a cutting-edge AI researcher, you come into work every day excited to do cutting-edge AI research, you succeed in doing cutting-edge stuff, and the board shows up and is like “hey this edge is too cutting, we worry it’s going to kill us all, slow it down there tiger.” It’s condescending! It stops you from doing the thing that you are committed to do! They’re Luddites! But the thing that you are committed to do (build cutting-edge AI stuff) is not quite the thing that OpenAI is committed to do (build safe AI stuff). And the outside directors — who don’t go to work at OpenAI all day — might care more about its official mission than the staff does.

From the board’s perspective, a nonprofit with the mission of “be first to build artificial general intelligence, but only if we can do it safely” will have a staffing problem. To achieve that mission it will have to hire staff who are talented and driven enough to be the first to build AGI, but those staff will probably be more enthusiastic about AI, generally, than the mission calls for. Or you can hire staff who are super-nervous about AGI, but they probably won’t be the first ones to build it. So you hire the good AI developers, but you keep a watchful eye on them.

From the staff’s perspective, the board is a bunch of outsiders whose main features are (1) they are worried about AI safety and (2) they don’t work at OpenAI. (Well, three of them do, but three — a majority of those who voted to oust Altman — don’t.) They have no idea! They are meddling in stuff — AI research but also intra-company dynamics — that they don’t really understand, driven by an abstract sense of mission. Which kind of is the job of a nonprofit board, but which will reasonably annoy the staff.

Also, of course, the material conditions of the OpenAI staff are pretty unusual for a nonprofit: They can get paid millions of dollars a year and they own equity in the for-profit subsidiary, equity that they were about to be able to sell at an $86 billion valuation. When the board is like “no, the mission requires us to zero your equity and cut off our own future funding,” I mean, maybe that is very noble and mission-driven of the board. But, just economically, it is rough on the staff.

Yesterday virtually all of OpenAI’s staff signed an open letter to the board, demanding that the board resign and bring back Altman. The letter claims that the board “informed the leadership team that allowing the company to be destroyed ‘would be consistent with the mission.’” Yes! I mean, the board might be wrong about the facts, but in principle it is absolutely possible that destroying OpenAI’s business would be consistent with its mission. If you have built an unsafe AI, you delete the code and burn down the building. The mission is conditional — build AGI if it is safe — and if the condition is not satisfied then you go ahead and destroy all of the work. That is the board’s job. It’s the board’s job because it can’t be the staff’s job, because the staff is there to do the work, and will be too conflicted to destroy it. The board is there to supervise the mission.

I don’t mean to say that the board is right! The board really are outside kibbitzers! Between OpenAI’s staff, who know what they’re talking about but also kinda like building AI, and OpenAI’s board, who lean more to being AI-skeptical outsiders, I guess I’d bet on the staff being right.[2] (Also if the board’s job is to prevent the development of rogue AI, burning down OpenAI is unlikely to accomplish that, just because there are competitors who will gleefully hire the staff.) I am just saying that this is a standard and real problem in nonprofit governance, and what’s weird about OpenAI is that it’s an $86 billion startup with nonprofit governance.

I guess the other thing to say is that, generally speaking, a staff is often more essential to a nonprofit than a board is? (Except that at a lot of nonprofits — not OpenAI! — the directors tend to also be big donors and fundraisers.) Like, the staff does the work; the board just goes to occasional meetings. If the staff all quit then the nonprofit is in trouble; if the directors all quit they’re pretty replaceable. As of last night here’s the state of things, from Bloomberg’s Shirin Ghaffary:

OpenAI said it’s in “intense discussions” to unify the company after another tumultuous day that saw most employees threaten to quit if Sam Altman doesn’t return as chief executive officer.

Vice President of Global Affairs Anna Makanju delivered the message in an internal memo reviewed by Bloomberg News, aiming to rally staff who’ve grown anxious after days of disarray following Altman’s ouster and the board’s surprise appointment of former Twitch chief Emmett Shear as his interim replacement.

There’s strong momentum outside OpenAI to get Altman reinstated too. OpenAI’s other investors, led by Thrive Capital, are actively trying to orchestrate his return, people with knowledge of the effort told Bloomberg News Monday. Microsoft CEO Satya Nadella told Emily Chang in a Bloomberg Television interview that even he wouldn’t oppose Altman’s reinstatement. ...

“We are continuing to go over mutually acceptable options and are scheduled to speak again tomorrow morning when everyone’s had a little more sleep,” Makanju wrote. “These intense discussions can drag out, and I know it can feel impossible to be patient.”

Comments

  • There is news on this thriller - it may make for great PR in hindsight. Sam is back, there is a new Board, almost everyone is happy. Here is my take:
    Twitter LINK
    Initial "new" Board is all external/independent. Don't know how long this will remain.
    Inherent conflict in the "old" Board was 3-3 split between nonprofit OpenAI & for-profit OpenAI. It was complicated by the fact that some were founders of nonprofit OpenAI.
    Need new Bylaws.
  • The nonprofit arm could have a hard time with the IRS if the profit side controls it, say for example with a majority of the board from the profit side. If you've looked at a 990 IRS info return form lately, there are a LOT of questions about the existence and degree of control of a nonprofit by another organization.

    I imagine they've got attorneys working on that angle, but it's at least a little perilous. From what I'm reading, as a dyed-in-the-wool nonprofiteer, I kinda doubt that enterprise deserves nonprofit status, especially now that they appear to have dumped the public benefit orientation and gone all profit-hungry.

    The Levine text about nonprofit staff vs. board isn't quite on point. The board is legally responsible for keeping the organization on the up-and-up. Ignoring or downplaying that responsibility is an invitation to corrupt, illegal behavior.
  • Well, it is NOT uncommon for nonprofit entities to have for-profit units. Examples include universities/colleges (athletics, bookstores, arenas/pavilions, auxiliaries), TIAA, etc.

    The most famous and controversial may have been Howard Hughes' nonprofit (now the very respectable HHMI; it became so after the death of Howard Hughes in 1976), which owned the for-profit defense contractor Hughes Aircraft (still around as a REIT, HHH) as a tax dodge that was so obvious, but the IRS didn't/couldn't do anything about it. There were Congressional investigations/hearings but nothing came of those.

    Law/IRS only requires budgetary/financial separation and tax payments on the for-profit activities. That's all.

    As I noted in my X/Twitter post above, problems/conflicts arose at OpenAI due to overlapping Board representation on the nonprofit OpenAI by key players on the for-profit side.

    In another X/Twitter post, I joked that the nonprofit OpenAI should make the involuntary spinoff to Microsoft official, get MSFT stock in exchange, and be done with this. But the solution found wasn't that, so let's see how long it works.
  • YBB: Well, it is NOT uncommon for nonprofit entities to have for-profit units.

    Of course it's not. I didn't say it's uncommon or illegal as such.

    YBB: Law/IRS only requires budgetary/financial separation and tax payments on the for-profit activities. That's all.

    The extent and method of control is also part of the legal equation, and it's complex. Have a look at the 990 and the 1023 application for status to see what the IRS is doing. It's not always as simple as a university foundation and a bookstore.

    P.S. Yogi, my first post was not arguing with you; it was responding to the article OJ posted.
  • My take on the fundamental question of "safe" AI is simply this: It makes absolutely no difference how much we (the US) monitor, control, or limit the development of AI to keep it within "safe" limits.

    No difference. None.

    Regardless of what we may or may not think or do, there are governments who will actively promote AI as another weapon to use against their enemies. To name just a few:

    China
    Russia
    Iran
    North Korea
    Israel
  • Following is an excerpt from a report from the Guardian:

    OpenAI ‘was working on advanced model so powerful it alarmed staff’

    Reports say new model Q* fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking

    OpenAI was reportedly working on an advanced system before Sam Altman’s sacking that was so powerful it caused safety concerns among staff at the company.

    The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported.

    The model, called Q* – and pronounced as “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site the Information, which added that the pace of development behind the system had alarmed some safety researchers. The ability to solve maths problems would be viewed as a significant development in AI.

    The reports followed days of turmoil at San Francisco-based OpenAI, whose board sacked Altman last Friday but then reinstated him on Tuesday night after nearly all the company’s 750 staff threatened to resign if he was not brought back. Altman also had the support of OpenAI’s biggest investor, Microsoft.

    Many experts are concerned that companies such as OpenAI are moving too fast towards developing artificial general intelligence (AGI), the term for a system that can perform a wide variety of tasks at human or above human levels of intelligence – and which could, in theory, evade human control.

    Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, said the existence of a maths-solving large language model (LLM) would be a breakthrough. He said: “The intrinsic ability of LLMs to do maths is a major step forward, allowing AIs to offer a whole new swathe of analytical capabilities.”
  • Society needs to let the science run where it goes and divert its energy into coming up with ways for all humanity to share the rewards and to establish an equitable way of patrolling the direction of the applications.