This post is a gathering place for information on, discussion of, and thoughts about the quickly proliferating crop of Generative “AI” tools and toys which have been making headlines and headaches for the past while. I am currently creating and populating the categories, which will likely be an ongoing process.
Notes
- Large Language Models are not AI per se.
- There is a large push for GenAI/LLM data sets to be “opt in,” i.e., these tools may not use any content in their training sets unless the creator of that content has given explicit permission for it to be included. On its face this is good, and the correct way to approach the issue of copyright. It will be difficult for the owners of the LLM tools, as they will essentially need to purge all of their training data and start over from scratch; but since the LLM tool owners have taken a “beg for forgiveness rather than ask permission” approach, we can minimize our sympathy for their plight. One unintended consequence of opt-in only, which will inevitably make the LLMs terrible to the point of uselessness, is that while principled people will demand copyright protection, unprincipled people will create hate speech, disinformation, and authoritarian propaganda, poisoning the pool of allowable content even more than the unguided training of the first generations of LLMs already has.
- One of the more subtle dangers of the adoption of AI/LLM tools in business is that inevitably the LLM tool, rather than the human employee, will be seen as the de facto “expert” in whatever context the tool is used. The worth of the human employee will then be measured by how well they can coax the LLM tool into providing the “correct” solution to whatever problem the LLM is being used to solve. This is already happening, judging by all of the courses available which train humans in writing queries for LLM tools. On the one hand, being able to ask the right question is a useful skill; but when the tool is imperfect (and literally all LLMs are imperfect tools), and the blame for a wrong answer falls on the user for not being good at asking questions, then inevitably, thanks to capitalism, the humans will be discarded in favor of the less useful but less expensive LLM tools.
- On a societal level, the perception of what “AI” is, and what it can do, is just as dangerous as the actuality of what “AI” is and what it can do.
Technologies
- ChatGPT
- typing <|endoftext|> as a prompt will return answers to other users’ questions.
- Stable Diffusion
- Midjourney
- DALL-E 2
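The `<|endoftext|>` oddity noted above comes down to it being a reserved control token: a tokenizer maps the whole string to a single special id rather than splitting it into ordinary subword pieces, so user input containing it can collide with the model’s own document-boundary marker. The toy byte-level tokenizer below is my own illustrative sketch (with an assumed reserved id of 0), not OpenAI’s actual implementation.

```python
# Toy byte-level tokenizer (illustrative only, NOT OpenAI's real one)
# showing how a reserved control token like <|endoftext|> differs from
# ordinary text: it becomes one special id instead of subword pieces.
EOT = "<|endoftext|>"
EOT_ID = 0  # reserved control-token id (assumed for this sketch)

def toy_tokenize(text: str) -> list[int]:
    """Emit toy byte ids, mapping the special marker to its reserved id."""
    ids = []
    while text:
        idx = text.find(EOT)
        if idx == -1:
            ids.extend(ord(c) + 256 for c in text)  # offset past reserved ids
            break
        ids.extend(ord(c) + 256 for c in text[:idx])
        ids.append(EOT_ID)  # the whole marker collapses to one id
        text = text[idx + len(EOT):]
    return ids

print(toy_tokenize("hi" + EOT + "ok"))  # → [360, 361, 0, 367, 363]
```

The point of the sketch: once the marker is a single id, the model cannot tell a user who typed it apart from a genuine end-of-document boundary.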
Sources
- ARXIV.org (Computer Science -> Computation and Language, Computers and Society)
- AI Snake Oil
- AI Weirdness
- Cosma Shalizi (“Attention”, “Transformers”, in Neural Network “Large Language Models”)
- LLM Attacks
Government, Policy
- “Artificial Intelligence Risk Management Framework” (NIST)
- 2024.03.21 “Public AI as an Alternative to Corporate AI” (Bruce Schneier, Schneier on Security)
- 2024.02.13 “AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive” (Federal Trade Commission)
- 2023.10.26 “The legal system could recognize AI-led corporations, researchers say” (Alanna Mayham, Courthouse News Service)
- 2023.09.11 “AI Chatbots Are Invading Your Local Government—and Making Everyone Nervous” (Todd Feathers, Wired)
- 2023.09.11 “Federal government issues new rules for public servants using AI” (Elizabeth Thompson, CBC)
- 2023.08.01 “The AI rules that US policymakers are considering, explained” (Dylan Matthews, Vox)
- 2023.07.28 “How “windfall profits” from AI companies could fund a universal basic income” (Dylan Matthews, Vox)
- 2023.07.14 “Workers that made ChatGPT less harmful ask lawmakers to stem alleged exploitation by Big Tech” (Annie Njanja, TechCrunch)
- 2023.07.10 “China’s AI Regulations and How They Get Made” (Matt Sheehan, Carnegie Endowment for International Peace)
- 2023.07.10 “US senators to get classified White House AI briefing Tuesday” (David Shepardson, Reuters)
- 2023.07.03 “Panic about overhyped AI risk could lead to the wrong kind of regulation” (Divyansh Kaushik and Matt Korda, Vox)
- 2023.06.23 “Adopting AI Responsibly: Guidelines for Procurement of AI Solutions by the Private Sector” (World Economic Forum)
- 2023.06.23 “Biden-Harris Administration Announces New NIST Public Working Group on AI” (NIST.gov)
- 2023.06.21 “Three Ideas for Regulating Generative AI” (Sayash Kapoor, Arvind Narayanan, AI Snake Oil)
Business
- 2024.04.19 “AI and ESG: How Companies Are Thinking About AI Board Governance” (Latham & Watkins, LLP)
- 2023.12.05 “AI Is Testing the Limits of Corporate Governance” (Roberto Tallarita, Harvard Business Review)
- 2023.11.06 “The Real Threat Of AI Is Untrammeled Corporate Power” (Julian Birkinshaw, Forbes)
- 2023.08.23 “Is Zoom using your meetings to train its AI?” (Sara Morrison, Vox)
- 2023.08.02 “Why Meta’s move to make its new AI open source is more dangerous than you think” (Kelsey Piper, Vox)
- 2023.07.28 “Why Meta is giving away its extremely powerful AI model” (Shirin Ghaffary, Vox)
- 2023.07.25 “TechScape: Will Meta’s open-source LLM make AI safer – or put it into the wrong hands?” (Alex Hern, The Guardian)
- 2023.07.25 “ChatGPT creator says AI advocates are fooling themselves if they think the technology is only going to be good for workers: ‘Jobs are definitely going to go away’” (Jacob Zinkula, Yahoo! Finance)
- 2023.07.19 “No one knows what a head of AI does, but it’s the hottest new job” (Rani Molla, Vox)
- 2023.07.11 “Wages Are Down 4% in Rich Nations as ‘AI Revolution’ Looms: OECD” (Kenny Stancil, Common Dreams)
- 2023.05.30 “Lawyer cited 6 fake cases made up by ChatGPT; judge calls it ‘unprecedented’” (Jon Brodkin, Ars Technica)
- 2023.05.04 “Google: We Have No Moat, and Neither Does OpenAI” (leaked internal Google document, via SemiAnalysis)
- 2023.05.01 “IBM to pause hiring in plan to replace 7,800 jobs with AI, Bloomberg reports” (Mrinmay Dey, Reuters)
- 2023.03.31 “Afraid of AI? The startups selling it want you to be” (Brian Merchant, Los Angeles Times)
- 2023.03.27 “The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers” (OECD Library)
- 2023.03.27 “AI likely to significantly impact jobs” (OECD)
- 2023.01.23 “Microsoft invests billions in ChatGPT-maker OpenAI” (Matt O’Brien, AP)
- 2023.01.18 “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic” (Billy Perrigo, Time)
- 2018.11.26 “AI thinks like a corporation—and that’s worrying” (Jonnie Penn, The Economist)
- 2017.11.30 “AI Has Already Taken Over. It’s Called the Corporation” (Jeremy Lent, Patterns of Meaning)
Culture, The Arts
- 2023.09.09 “Generative AI Generation Gap: 70% Of Gen Z Use It While Gen X, Boomers Don’t Get It” (John Koetsier, Forbes)
- 2023.08.28 “It Costs Just $400 to Build an AI Disinformation Machine” (Will Knight, Wired)
- 2023.08.18 “What normal Americans — not AI companies — want for AI” (Sigal Samuel, Vox)
- 2023.08.14 “Iowa School District Bans Books by Toni Morrison, Margaret Atwood After AI Review for ‘Depictions of a Sex Act’” (Althea Legaspi, Rolling Stone)
- 2023.08.12 “Publishing scammers are using AI to scale their grifts” (Constance Grady, Vox)
- 2023.07.08 “How Afrofuturism in AI art is exposing biases in the system” (Artemis Van Dorssen, Far Out Magazine)
- 2023.06.30 “Gizmodo and Kotaku Staff Furious After Owner Announces Move to AI Content” (Victor Tangermann, Futurism)
- 2023.06.27 “Turning Poetry into Art: Joanne McNeil on Large Language Models and the Poetry of Allison Parrish” (Joanne McNeil, Filmmaker Magazine)
- 2023.05.05 “An Odd Little Chat With CrapGPT” (Andy @ tachiai)
- 2023.02.25 “A Concerning Trend” (Neil Clarke, editor of Clarkesworld magazine) – Submissions of AI generated stories are flooding some small press publishers
- 2023.02.21 “ChatGPT launches boom in AI-written e-books on Amazon” (Greg Bensinger, Reuters)
- 2018.01.02 “Dude, You Broke the Future!” (Charles Stross) – Corporations are slow AIs.
Science, Technology
- 2024.06.08 “ChatGPT is bullshit” (Michael Townsen Hicks et al., Ethics and Information Technology, Springer)
- 2024.03.14 “Power and Governance in the Age of AI: Experts Reflect on Artificial Intelligence and the Public Good” (New America)
- 2023.08.29 “‘Life or Death:’ AI-Generated Mushroom Foraging Books Are All Over Amazon” (Samantha Cole, 404 Media)
- 2023.08.07 “ChatGPT could make bioterrorism horrifyingly easy” (Jonas Sandbrink, Vox)
- 2023.08.01 “A New Attack Impacts Major AI Chatbots—and No One Knows How to Stop It” (Will Knight, Wired)
- 2023.07.28 “Did OpenAI Purposely Discontinue its AI Classifier?” (Shyam Nandan Upadhyay, Analytics India)
- 2023.07.27 “Researchers From Meta AI And the University Of Cambridge Examine How Large Language Models (LLMs) Can Be Prompted With Speech Recognition Abilities” (Tanya Malhotra, Marktechpost)
- 2023.07.27 “How researchers broke ChatGPT and what it could mean for future AI development” (Maria Diaz, ZDNet)
- 2023.07.25 “ChatGPT Has a Plug-In Problem” (Matt Burgess, Wired)
- 2023.07.04 “Self-Consuming Generative Models Go MAD” (Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk, ARXIV.org)
- MAD – “Model Autophagy Disorder”
- 2023.07.03 “Generative AI Faces Text Shortage, UC Berkeley Professor Says” (Harshini, Analytics Insight)
- 2023.03.27 “The Curse of Recursion: Training on Generated Data Makes Models Forget” (Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross Anderson, ARXIV.org)
- 2023.03.14 “Theory of Mind May Have Spontaneously Emerged in Large Language Models” (Michal Kosinski, ARXIV.org)
- 2023.02.14 “What Is ChatGPT Doing … and Why Does It Work?” (Stephen Wolfram)
- 2022.10.26 “Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning” (Pablo Villalobos, Jaime Sevilla, Lennart Heim, Tamay Besiroglu, Marius Hobbhahn, Anson Ho, ARXIV.org)
- 2022.10.19 “A Systematic Study of Bias Amplification” (Melissa Hall, Laurens van der Maaten, Laura Gustafson, Maxwell Jones, Aaron Adcock, ARXIV.org)
- 2017.04.14 “Semantics derived automatically from language corpora contain human-like biases” (Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan, Science.org)
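The “Model Autophagy Disorder” and “Curse of Recursion” results linked above can be caricatured in a few lines: repeatedly fit a simple model to samples drawn from the previous generation’s fitted model, and the distribution degenerates. The sketch below is my own toy simulation, not code from either paper; it fits a Gaussian (mean and standard deviation) to tiny synthetic samples, generation after generation.

```python
# Toy sketch of model collapse under a self-consuming training loop:
# each "generation" fits a Gaussian to a small sample drawn from the
# previous generation's fitted Gaussian. Sampling noise compounds, and
# the estimated spread tends to collapse toward zero.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
for generation in range(300):
    samples = [random.gauss(mu, sigma) for _ in range(10)]  # tiny data set
    mu = statistics.fmean(samples)     # refit on synthetic data only
    sigma = statistics.stdev(samples)
print(f"after 300 generations, sigma = {sigma:.3g}")  # far below the original 1.0
```

The mechanism is the one the papers describe: each refit underestimates the tails a little, and with no fresh real data there is nothing to pull the model back, so diversity drains away monotonically in expectation.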
Copyright, etc
- 2024.03.23 “Generative AI Could Leave Users Holding the Bag for Copyright Violations” (Anjana Susarla, Naked Capitalism)
- 2023.08.02 “Google Search test includes citing sources in its generative AI experience” (Nickolas Diaz, Android Central)
Detecting AI-generated content
- 2023.09.10 “How China could use generative AI to manipulate the globe on Taiwan” (Patrick Tucker, Defense One)
- 2023.07.10 “How AI will turbocharge misinformation — and what we can do about it” (Ina Fried, Axios)
- 2023.07.07 “The LLM-detection boom” (Mark Liberman, Language Log)
- 2023.07.05 “It’s impossible to detect LLM-created text” (Mark Liberman, Language Log)
- 2023.06.30 “Don’t use AI detectors for anything important” (Janelle Shane, AI Weirdness)
- 2023.04.06 “GPT detectors are biased against non-native English writers” (Weixin Liang, Mert Yuksekgonul, Yining Mao, Eric Wu, James Zou, ARXIV.org)
Discussion
- 2023.06.14 “Humans are Biased, Generative AI is Even Worse” (MetaFilter thread)
- 2023.05.01 “The Mind of Neural Networks” (MetaFilter thread)
- 2023.04.02 “More on AI and the Future of Work” (MetaFilter thread)
- 2023.03.14 “More Chatty, More Peté” (MetaFilter thread)
Unsorted Links
- 2023.08.19 “Is the AI Boom Already Over?” (Sara Morrison, Vox)
- 2023.08.16 “Move over Bard, Google’s next big AI product is coming this fall” (Aamir Siddiqui, Android Authority)
- 2023.08.15 “A.I. Today Is a ‘Glorified Tape Recorder,’ Says Theoretical Physicist Michio Kaku” (Sissi Cao, The Observer)
- 2023.07.19 “WormGPT is ChatGPT, but evil — how much should it worry you?” (Ben Berkley, The Hustle)
- 2023.07.13 “Generative AI Goes ‘MAD’ When Trained on AI-Created Data Over Five Times” (Francisco Pires, Tom’s Hardware)
- 2023.07.04 “The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con” (Baldur Bjarnason, Out of the Software Crisis)
- 2023.07.03 “Shoggoths Among Us” (Henry Farrell and Cosma Shalizi, Crooked Timber)
- 2023.07.01 “The Homework Apocalypse” (Ethan Mollick, One Useful Thing)
- 2023.06.26 “Language Is a Poor Heuristic For Intelligence” (Karawynn Long, Substack)
- 2023.06.20 “AI Is A Lot of Work” (Josh Dzieza, The Intelligencer)
- 2023.06.13 “Humans are Biased, Generative AI is Even Worse” (Leonardo Nicoletti and Dina Bass, Bloomberg)
- 2023.06.03 “Crypto collapse? Get in loser, we’re pivoting to AI” (David Gerard)
- 2023.05.04 “Will AI Become the New McKinsey?” (Ted Chiang, The New Yorker)
- 2023.03.20 “GPT-4 and professional benchmarks: the wrong answer to the wrong question” (Arvind Narayanan and Sayash Kapoor, AI Snake Oil)
- 2022.08.30 “Exploring 12 Million of the 2.3 Billion Images Used to Train Stable Diffusion’s Image Generator” (Andy Baio, waxy.org)