Today's Shorts: Who Wins the AI Battle? Big Tech Investors or Humanity?
Will AI destroy humanity by starting a robot war? Or will AI become our speediest employee, smartest ghost writer, new best friend and favorite distraction? Or all of the above?
AI: Proceed with Caution or ‘Move Fast and Break Things’?
While you were snagging early Black Friday deals last weekend, the world of generative artificial intelligence (AI) went through a major upheaval, the ripples of which are being felt days later.
On Friday, November 17, 2023, OpenAI CEO and ChatGPT co-creator Sam Altman was fired by the OpenAI Board of Directors because he “was not consistently candid in his communications with the board.”
The unexpected ouster of a rising star plunged OpenAI and the entire industry into a dystopian world of uncertainty ... for a few days anyway.
How could their entrepreneurial wet dreams of making billions from stealing our images, text, personal data and creativity collapse overnight because of an oddball corporate structure and too many nonprofit do-gooders on OpenAI’s Board of Directors? High-stakes, behind-closed-doors deal-making among corporate giants ensued over the weekend. After a few days of drama within the AI community and an open revolt by 700+ OpenAI employees, Altman was hired on Monday, November 20, 2023, by Microsoft, which just happens to be OpenAI’s biggest financial backer and owner of 49 percent of OpenAI’s shares.
On that Monday, according to the New York Times article The Winners and Losers of OpenAI’s Wild Weekend, Microsoft — one of the “winners” in the article — decided to set up an internal AI research section and a “mini-OpenAI” headed by Altman and staffed by many former OpenAI employees. The plan was for them to develop new AI tools under Microsoft’s label, rather than the start-up’s label. With this move, Microsoft would effectively own 100 percent of OpenAI because it hired the former CEO and planned to employ hundreds of former staff. This move would have protected Microsoft’s AI investment and its market dominance. Microsoft couldn’t buy OpenAI outright because of antitrust laws. Hiring everyone would get around that and protect its multi-billion-dollar investment in AI development and commercialization. Just a few days ago, it looked as if the OpenAI Board’s altruistic move to slow down AI commercialization had led to monopolization by Microsoft. But that was so Monday.
Disinformation. Plagiarism. Identity theft. Surveillance. Job loss. Economic upheaval. Wrongful death.
Consequences be damned! There’s $billions$ to be made!
Late Tuesday, Altman was reinstated as OpenAI CEO — making him the company’s fourth CEO in five days. This level of instability is not giving me confidence that the Young Geniuses at OpenAI are the “chosen ones” who can be “trusted” to develop generative artificial intelligence safely for the world.1
When will the American public cast off its blind faith in Big Tech and the Young Genius archetype? “Move fast and break things” isn’t working for the rest of us. Look at fallen crypto king Sam Bankman-Fried, disgraced Theranos founder Elizabeth Holmes, right-wing political player and co-founder of PayPal Peter Thiel, and tight-fisted Mark Zuckerberg and old money Elon Musk, who both buy up competitors and deflect blame for spreading misinformation on their platforms. Decades ago, Bill Gates of Microsoft and Steve Jobs of Apple were the Young Geniuses. Their market dominance feuds certainly caused the rest of us decades of hassles and planned obsolescence.
Three OpenAI board members who opposed Altman stepped down, including Chief Scientist Ilya Sutskever, who “was said to be growing alarmed that the company’s technology could pose a significant risk, and that Mr. Altman was not paying close enough attention to the potential harms,” according to John Koblin, Kevin Granville and Jason Karaian writing in the New York Times. The Times also reported that Altman required board changes before he would return to the company. (To learn more about the former chief scientist, check out The Guardian’s documentary, released this month: Ilya: The AI Scientist Shaping the World.)
In What’s the Real Frankenstein Monster of AI? Robert Reich details OpenAI’s complicated corporate structure. OpenAI had “a nonprofit board stacked with ethicists and specialists in the potential downsides of AI” — instead of a board packed with Wall Street types, according to Reich. To prevent investors from taking over, Reich writes, OpenAI would “limit how much profit could flow to the investors (through a so-called ‘capped profit’ structure) and [they] wouldn’t put investors on the board.” According to Reich, for Altman and the 700+ OpenAI employees who lobbied for his return as CEO, developing AI tools at a start-up that will eventually be sold is more lucrative than moving to a Microsoft research lab now.
That structure and philosophy of OpenAI seem to be shifting after AI’s “wild weekend.” Two new members join Adam D’Angelo (chief executive of Quora and the only remaining board member) on the OpenAI board: Bret Taylor (former Salesforce co-CEO and former Twitter chairman who arm-wrestled Musk2 over the sale of Twitter) and long-time heavy-hitter Larry Summers (former US Secretary of the Treasury, Wall Street hedge fund manager and a whole lot more, including involvement in the repeal of the Glass-Steagall Act of 1933, arguably the worst decision of the Clinton Administration). Taylor has been named chairman of the OpenAI Board. With the new board, OpenAI will be moving full speed ahead.
“The people who seem to have won out in this case are the accelerationists,” said Sarah Kreps, a Cornell professor of government and the director of the Tech Policy Institute in the university’s school of public policy…
“What we’ll see is full steam ahead on AI research going forward. Then the question becomes, is it going to be totally unsafe, or will it have trials and errors? OpenAI may follow the Facebook model of moving quickly and realizing that the product is not always compatible with societal good,” she said.
What’s accelerating the AI arms race among OpenAI, Google, Microsoft and other tech giants, Kreps said, is vast amounts of capital and the burning desire to be first. If one company doesn’t make a certain discovery, another will – and fast. That leads to less caution ...
— ‘Huge Egos Are in Play’: Behind the Firing and Rehiring of OpenAI’s Sam Altman, The Guardian, November 23, 2023.
AI Isn’t ‘Intelligent’
AI itself is just a massive deepfake. It’s not intelligence, but it is artificial.
ChatGPT is a highly sophisticated computer program that generates text and illustrations in different styles, based on the prompts a user enters and the patterns it gleaned from massive amounts of Internet content. There is no thought involved.
Everything ChatGPT knows it learned from us. Every social media post, every click, every pause in scrolling, every word, every typo, every photo, every video, every online survey, every avatar, every cute Facebook game, every online sports bet, every entry into your period-tracking app, every porn report to your partner on the Covenant Eyes app — we are creating data points for software developers continuously, often without our knowledge and without an option to opt out of their continuous surveillance of our lives.
The gold rush around so-called “generative artificial intelligence” (AI) tools like ChatGPT and Stable Diffusion has been characterized by breathless predictions that these technologies will be the harbingers of death for the traditional search engine or the end of drudgery for paralegals because they seem to “understand” as well as humans.
In reality, these systems do not understand anything. Rather, they turn technology meant for classification inside out: instead of indicating whether an image contains a face or accurately transcribing speech, generative AI tools use these models to generate media. They may create text which appears to human eyes like the result of thinking, reasoning, or understanding, but it is in fact anything but.
The latest generation of these systems mimic textual form and artistic styles well enough that they have beguiled all sorts of investors, founders and CEOs.
— Alex Hanna and Emily M. Bender, “AI” Hurts Consumers and Workers — and Isn’t Intelligent, writing in Tech Policy Press, August 4, 2023.
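Hanna and Bender’s point — that these systems string words together by pattern rather than by thinking — can be made concrete with a toy sketch. The snippet below is a crude bigram model, NOT ChatGPT’s actual architecture (which uses a vastly larger neural network), and the “training text” is made up for illustration. It simply records which word follows which, then samples from those records. No understanding happens at any step — just lookup and chance.

```python
import random

# Toy illustration only (not how ChatGPT works internally): a bigram
# "model" records which word follows which in its training text, then
# strings words together by sampling those records at random.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = {}  # word -> list of words observed to follow it
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)          # fixed seed so the run is repeatable
word = "the"
output = [word]
for _ in range(6):
    # pick a follower seen in training; fall back to any word if none
    word = random.choice(follows.get(word, corpus))
    output.append(word)

print(" ".join(output))
```

The output looks loosely sentence-like because every adjacent word pair appeared in the training text — the same reason large models produce fluent prose — yet the program never represents meaning at all.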
AI: It’s All about Money & Influence
As of November 2023, ChatGPT has 100 million weekly active users. It grew from 152.7 million visits in the first month (November 2022) to 1.5 billion visits in October 2023.
Given that AI content may be only 80 percent accurate — because the robot is just stringing text together and sometimes makes sh*t up — that’s a lot of potential misinformation being created and disseminated every day.
ChatGPT Statistics: Detailed Insights on Users (2023) by Rohit Shewale includes some astonishing user and reach statistics for a product that has been on the market for only a year. Given the data, “proceeding with caution” seems so quaint … and so December 2022.
Of course, users and views exploded. ChatGPT is a powerful, fast, intriguing and free app with many capabilities — from writing a term paper to reducing labor costs by replacing workers — all this while being a clever mechanism to take, without permission or payment, an unfathomable amount of human-generated content and use it for profit.
Giving us cool stuff for free to keep us distracted, connected and constantly transmitting data to cloud databases has worked wonders to track and control the masses while making new billionaires.
ChatGPT and other online platforms are “free” because they are collecting, storing, analyzing, using and selling our personal data. When the app is free, the consumer is the product. DemandSage reports that OpenAI spends $700,000 per day to run ChatGPT. At first glance, that seems like a large daily investment for a basically free product, but when you think about ChatGPT’s phenomenal reach in one year and the potential profit over time, $700,000 per day is just the cost of doing business.
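A quick back-of-envelope calculation shows why $700,000 a day is a bargain. The daily cost and the 100 million weekly active users come from the figures cited above; the per-user math is my own illustration, not a reported statistic.

```python
# Figures from the article: DemandSage's reported daily run cost and
# OpenAI's ~100M weekly active users (November 2023).
daily_cost = 700_000
weekly_users = 100_000_000

annual_cost = daily_cost * 365
cost_per_user_per_week = daily_cost * 7 / weekly_users

print(f"~${annual_cost / 1e6:.1f}M per year")            # ~$255.5M
print(f"~${cost_per_user_per_week:.3f} per user per week")  # ~$0.049
```

Roughly a quarter-billion dollars a year sounds enormous — until you see it works out to about a nickel per weekly user, a trivial price for continuous access to that many people and their data.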
Bread and circuses worked for the Romans, and it works for Big Tech.
AI also gets LOADS of media coverage — not all good press, but any media is better than no media. Many of the New York Times articles on Microsoft and the other AI investor corporations focus on the billions of dollars that they put into ChatGPT and competitive products and the billions more that they are expecting to make.
Very little is said about privacy, surveillance, job loss, wage theft, discrimination, disinformation, robot wars and the other downsides of a powerful technology that was launched before the bugs were worked out and before regulatory safeguards were in place.
AI creators, including Altman, have warned that “AI is an existential threat to humanity.” AI academics say that rather than focus on end-times predictions, we should focus on how “AI causes real harm” in people’s everyday lives and “hurts consumers and workers” now. This technology shouldn’t be in the marketplace in its current form, particularly when there are no regulations and an unacceptable level of inaccuracy.
Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
— Alex Hanna and Emily M. Bender, AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype, writing in Scientific American, August 12, 2023
AI Divide: ‘Oh shit!’ vs ‘Gee whiz!’
The OpenAI drama revealed a rift in the AI community between people like some original board members, who wanted to slow down the development of AI (hopefully to make it safe and accurate), and people like Altman and Microsoft CEO Satya Nadella, who want to go full speed ahead with AI commercialization to keep their competitive edge, make billions and keep investors happy.
“The AI that we’re looking at now is immature. There are no standards, no professional body, no certifications. Everybody figures out how to do it, figures out their own internal norms,” said Rayid Ghani, a professor of machine learning and public policy at Carnegie Mellon University. “The AI that gets built relies on a handful of people who built it, and the impact of these handfuls of people is disproportionate.”
— ‘Huge Egos Are in Play’: Behind the Firing and Rehiring of OpenAI’s Sam Altman, The Guardian, November 23, 2023
In the general public, there is a similar divide between the cautious who see the risks in programs like ChatGPT and those who throw caution and their personal data to the wind and gleefully try every shiny new thing on the Internet.
I'm in the camp that believes ChatGPT was put on the market too quickly — given the inaccuracy of content and the role AI could play in mass disinformation and growing distrust and hatred.
Social media is already full of bullying and hate speech toward targeted groups, including women, people of color and LGBTQ folks. Fake robot-generated images and text can feed online outrage … and clicks!
A few of my friends on Facebook have shared dramatic images labeled as being from the Israel-Hamas war. They are presented as photographs or lifelike art created by someone in the midst of the conflict, but, in my opinion, these images aren’t real. AI-generated images have a dramatic video game/movie-poster vibe. They’re illustrations generated from key words. There is no thought, emotion or creativity behind them. These images are media designed to outrage and persuade.
Shifting from war to “beauty”, I’ve also seen friends sharing hyper-real gorgeous photos of unidentified women or starlets from the past (like Marilyn Monroe). These AI images are eye-catching photos of traditionally beautiful women, but computer-generated “beauty,” created with key words, perpetuates unrealistic beauty standards.
Also, we don’t need deepfake hackers — or corporate AI — stealing our personal likenesses or voices and using them without our knowledge and without payment. The Guardian documentary My Blonde GF: the Experience of Being Deepfaked for Pornography details the horrors encountered by a woman whose face was put on another woman’s body and used in multiple pornographic videos. Luckily, a male friend who saw it overcame his embarrassment and told her, or she would never have known her face had been stolen and used for profit by pornographers. Are the hyper-real photographs of nameless women shared on social media by my friends actually photos of real people who don’t know their images are being circulated by total strangers? I bet some of them are.
There's enough inaccurate, harmful and unnecessarily inflammatory human-generated content on the Internet. We don't need computer-generated misinformation and click-bait to stir up more hate and outrage.
We need AI regulations and safeguards for the public … now.
It’s time to …
Slow down AI commercialization to make the products accurate, safe and less personally intrusive.
Create and enforce AI regulations to guard against plagiarism, identity theft, copyright violation, unfair use and other similar legal issues.
Set up systems for independent review and evaluation of artificial intelligence products before they are put on the market.
Protect privacy and prevent predators, human traffickers, Internet criminals, foreign countries, organized crime, corporations or anyone else from taking, using, storing, analyzing and/or selling our personal data without permission and/or payment.
Create a payment system to reimburse people for the use of their images and information and create a privacy opt-out.
Develop an AI labeling system and regulations for content created by or assisted by artificial intelligence.
Analyze federal, state and local tax breaks for major corporations. How much AI and Big Tech development has been and continues to be funded by taxpayers?
Study the environmental impact of AI and Big Tech’s usage of water and electricity.
Tax Big Tech for their water, electricity and infrastructure use — instead of featherbedding them with governmental tax breaks.
World governments need to regulate AI and Big Tech before someone decides this house of cards is “too big to fail.” And … before someone starts a robot war!
Related Links
‘Huge Egos Are in Play’: Behind the Firing and Rehiring of OpenAI’s Sam Altman, The Guardian, November 23, 2023
What’s the Real Frankenstein Monster of AI?, Robert Reich on Substack, November 2023
Sam Altman Is Reinstated as OpenAI’s Chief Executive, New York Times, November 2023
Sam Altman to Return as CEO of OpenAI, The Guardian, November 2023
Microsoft Hires Sam Altman Hours After OpenAI Rejects His Return, New York Times, November 2023
The Winners and Losers of OpenAI’s Wild Weekend, New York Times, November 2023
What Happened in the World of Artificial Intelligence?, New York Times, November 2023
Ilya: The AI Scientist Shaping the World, Documentary by The Guardian, November 2023
ChatGPT Statistics: Detailed Insights on Users (2023), DemandSage, November 2023
Is ChatGPT free and unlimited? In short — yes, PC Guide, November 2023
From Land Mines to Drones, Tech Has Driven Fears About Autonomous Arms, New York Times, November 2023
As AI-Controlled Killer Drones Become Reality, Nations Debate Limits, New York Times, November 2023
Facial Recognition Is Now Rampant. The Implications for Our Freedom Are Chilling, The Guardian, November 2023
I Tried Mike Johnson’s Favorite Anti-Porn App. It Didn’t Go Well, The Guardian, November 2023
My Blonde GF: the Experience of Being Deepfaked for Pornography, Documentary by The Guardian, October 2023
AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype, Scientific American, August 2023
“AI” Hurts Consumers and Workers — and Isn’t Intelligent, Tech Policy Press, August 2023
Will Artificial Intelligence Replace Writers & Editors? Pamela Powers on Substack, June 2023
Papers and Podcasts by Dr. Emily Bender, Professor, Director: Professional MS in Computational Linguistics
Thinking about AI Regulation, Robert Robb on Substack, May 2023
A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn, New York Times, May 2023
1. OpenAI creators believed that they were the right people for the job of developing and rolling out AI, according to New York Times articles.
2. Elon Musk was a co-founder of OpenAI, but he is no longer involved.