June 11, 2024 • roman

“The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company’s public good mission was primary, was coming first — over profits, investor interests, and other things,” former OpenAI board member Helen Toner said on “The TED AI Show” podcast, according to a CNBC story. “But for years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.”

Toner said Altman gave the board “inaccurate information about the small number of formal safety processes that the company did have in place” on multiple occasions. “For any individual case, Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal, or misinterpreted, or whatever. But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us, and that’s just a completely unworkable place to be in as a board — especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO to raise more money.”

Let’s put this into context. Since the first company hired its first CIO, IT executives and managers have struggled to trust vendors. It’s in their nature. So a lack of trust regarding technology is nothing new. But AI, and specifically genAI in all of its forms, is being given capabilities and data access orders of magnitude more extensive than any software before it.

