
Ecce Signum

Immanentize the Empathy

Tag: artificial intelligence

IWSG, October 2023: An Eye On AI

2023-10-04 John Winkelman

[Photo: Yellow Garden Spider]

Wow, was September busy. Very little reading, very little writing. I am in the second week of a stay-at-home vacation now, and my brain is slowly un-kinking, and for the first time in months I feel like I might actually be able to write again.

I have several projects in the works right now. The most immediate is NaNoWriMo, which is a mere 27 days(!) away. This year I am going to attempt fifty flash fictions. To that end I have created a simple prompt generator, which you can access here. Simply click the button at the bottom of the page to generate a prompt (or perhaps more accurately a “seed”) made up of a subject, a setting, and a genre. With this tool in hand, I feel confident that I can reach the goal of 50,000 words by the end of November.
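The core of such a generator is tiny. Here is a minimal sketch in TypeScript of how a subject/setting/genre generator might work; the word lists below are hypothetical stand-ins, not the ones the actual tool uses.

```typescript
// Minimal sketch of a subject/setting/genre prompt generator.
// The word lists are hypothetical stand-ins for illustration only.
const subjects = ["a lighthouse keeper", "a retired thief", "twin botanists"];
const settings = ["an orbital greenhouse", "a drowned city", "a county fair"];
const genres = ["ghost story", "space opera", "noir"];

// Pick a random element from a list.
function pick<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

function generatePrompt(): string {
  return `Write a ${pick(genres)} about ${pick(subjects)} in ${pick(settings)}.`;
}

console.log(generatePrompt());
```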

So it is an excellent coincidence that the October IWSG topic is something very much in my wheelhouse. The Insecure Writer’s Support Group question for October 2023 is: The topic of AI writing has been heavily debated across the world. According to various sources, generative AI will assist writers, not replace them. What are your thoughts?

Short answer: It depends on the context, the writer, and what is being written. And it also depends on what is meant by both “assist” and “replace.”

I have been a programmer since 1999 and have been researching ChatGPT and similar technologies (hereafter “LLMs”) for a little over a year at this point (Notebook here). Here is a bulleted list of some of my thoughts.

  • LLMs write passable prose. The more technically specific the prose, the closer their output approximates that of a competent human writer. LLMs will likely be a big boost for technical writers, assuming the data sets on which the LLMs have been trained include well-written technical documents.
  • LLMs are trained on staggeringly huge amounts of data. ChatGPT uses everything that the owners could scrape from the entire (English language, primarily) internet as of two years ago. This includes innumerable works of fiction. What LLMs produce is a distillate of the available ingredients, based on the recipe, which is the prompt entered by a user. Therefore, ultimately, the quality of the output will vary according to the quality of the prompt, with respect to the entirety of the data set from which the LLM pulls its response.
  • LLMs have been called “sparkling autocomplete” and “stochastic parrots,” both of which are accurate if incomplete assessments. Their responses to queries are neither random nor completely predetermined. What LLMs return is the most statistically likely collection of words based on a request. LLMs have no concept of a “right” or “wrong” answer; it’s all probabilities based on word order in their training sets. (The toy sketch after this list makes the point concrete.)
  • Therefore, technical writers and writers of non-creative nonfiction will likely be most affected by the advent of LLMs, simply because these types of writing most closely adhere to formal grammars and constrained syntax. In other words, the closer the desired output is to something that could be used as logical input (e.g. programming), the more likely it is that the output will be useful to human users.
  • But since LLMs have no concept of “correct,” there will always need to be subject-matter experts who can verify their output and ensure the accuracy of the responses. So technical writers may find their job descriptions changed to “technical editors,” or something similar.
  • When it comes to creative output (fiction, poetry, etc.), the work produced by LLMs ranges from “terrible” to “competent.” Just as these tools have no intrinsic understanding of right and wrong answers, they also have no concept of “good” and “bad” writing. And given that the overwhelming majority of creative content in their training sets is “mediocre” to “competent,” the distillate of that work will be of a similar quality.
  • But while the output of LLMs may never be better than “good,” in many cases, “good” may be good enough. As much as writing is a skill, so is reading, and what people like to read is completely subjective. The same story told by five different authors may have zero crossover in their readers. We like what we like.
  • Publishers of creative writing are already finding themselves inundated by LLM-produced works, and while the editors are generally competent enough to spot the difference, this wastes resources which could be put to better use publishing good content from real humans.
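To make the “statistically likely collection of words” point from the list above concrete, here is a toy sketch of a bigram text generator in TypeScript. It is an enormous simplification (real LLMs are neural networks trained on billions of words), but the principle is the same: each next word is chosen purely by how often it followed the previous word in the training text, with no notion of a right answer.

```typescript
// Toy bigram model: each next word is sampled according to how often
// it followed the current word in the training text. There is no
// concept of a "correct" continuation, only observed frequencies.
const training = "the deer saw the wolf and the deer ran and the wolf ran";

// Map each word to the list of words observed to follow it.
const follows = new Map<string, string[]>();
const words = training.split(" ");
for (let i = 0; i < words.length - 1; i++) {
  const list = follows.get(words[i]) ?? [];
  list.push(words[i + 1]);
  follows.set(words[i], list);
}

function continueFrom(word: string, maxWords: number): string {
  const out = [word];
  for (let i = 0; i < maxWords; i++) {
    const candidates = follows.get(out[out.length - 1]);
    if (!candidates) break; // no observed continuation
    // Sampling uniformly from this list weights each word by frequency.
    out.push(candidates[Math.floor(Math.random() * candidates.length)]);
  }
  return out.join(" ");
}

console.log(continueFrom("the", 8));
```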

So I don’t think creative writers will be “replaced” by LLMs, but we do have additional competition for attention. Writers and readers alike will need to continually improve their craft if they want to stay ahead of the machine-generated slush pile.

 

[Badge: Insecure Writer’s Support Group]

The Insecure Writer’s Support Group is a community dedicated to encouraging and supporting insecure writers in all phases of their careers.

Posted in Literary Matters | Tagged artificial intelligence, ChatGPT, IWSG | 7 Comments

AI and Art: What Goes In Is What Comes Out, At Most

2023-03-03 John Winkelman

Back in January I participated in two AI art-themed panels at ConFusion 2023. I discussed these panels briefly in my ConFusion 2023 follow-up post, but I wanted to add some thoughts here, specifically around ChatGPT and the use of computer-generated content in the context of writing.

When it comes to ChatGPT creating content, whether that content be fiction or nonfiction, it does what all of these tools do: remix previously existing content. I make no claims about whether the thing created by an algorithm is “art” or “creative” or even “new,” but whatever it creates does not transcend its input.

ChatGPT and similar tools are trained by scanning and (hopefully) contextualizing all of the text on the internet. While ChatGPT has (or had) safeguards in place to counter the large amount of hate speech endemic to the modern internet, its input stream still includes content spanning centuries, even millennia, of human writing. A great deal of that content is regressive or even revanchist by today’s sensibilities.

And since these machine learning tools cannot imagine the new, they will continue to remix the old. Even as new, human-created works become available, this new data is minuscule compared to the vast troves of work on which these tools have already been trained. And a sizeable portion of the new input to these tools will be previous output from the same tools, resulting in a sort of solipsism which quickly becomes untethered from any human creativity, making a large portion of the output useless except as a point of curiosity.

Additionally, here are a few points of reference:

  1. That which is called “AI” in these contexts is not artificial intelligence as it is generally understood. It is variously neural networks, the output of machine learning tools, pattern-matching algorithms, or (usually) some combination of the three. In all cases the output is the result of running these tools against input which was generated, overwhelmingly but not exclusively, by humans.
  2. The landscape of AI-generated art, which includes text, music, and visual arts, is rapidly evolving.
  3. Opinions on the use of AI in the arts, as well as the effects of AI generators upon the profession and livelihood of artists, are wide and varied, and continue to evolve and gain nuance.

Some more links on this general topic:

  • Jason Sanford’s Genre Grapevine post on this subject on his Patreon, written around the time of the ConFusion panels
  • “AI = BS” at Naked Capitalism
  • The 2023 State of the World conversation at The Well
  • ChatGPT Is a Blurry JPEG of the Web – Ted Chiang
Posted in Current Events | Tagged art, artificial intelligence, ChatGPT, machine learning, writing | Leave a comment

Links and Notes for the Week of January 13, 2019

2019-01-21 John Winkelman
  • China Miéville on his book October.
  • Via Bruce Sterling at the 2019 State of the World discussion over at The Well, Procedural Rhetoric.
  • From the end of 2017, Charles Stross’s talk at CCC; in particular, Corporations as slow A.I.s (transcript here).
  • This is pretty cool: The World’s Writing Systems is working on encoding all of the world’s writing systems in Unicode.
Posted in Links and Notes | Tagged artificial intelligence, Bruce Sterling, China Miéville, writing | Leave a comment

Creating a Sensory Input-Based Monster AI, Part I

2006-05-30 John Winkelman

As a thought experiment I am putting together a generic artificial intelligence which I can use for bad guys/NPCs in a variety of different games. There are myriad paths I could follow in creating AI, so for right now I am going to concentrate on two inter-related tasks: awareness and morale. In other words, when does X become aware of another entity, and what does X do in response to that awareness?

For the purposes of this essay there will be two entities: a deer and a wolf. I will discuss the reactions of the deer.

The first step is to create a triggering event; in this case, proximity. Using whatever senses are available to it, at some point as the wolf approaches, the deer becomes aware of it. This could be something like a twig snapping, or movement in shadows, or wolf-smell on the wind. In any case, the first level of this system is Awareness.

Once the deer is aware that Something is out there, the next step is to determine what that thing is. It could be another deer, or a fawn, or a human, or the wolf. Without making that determination the deer cannot react appropriately. It might run in terror from the fawn, or stand still while the wolf attacks. So the second level of the system is Recognition.

Once the approaching entity is recognized, the deer can take the appropriate action; in this case, run in terror from the wolf. Or, if the deer is protecting a fawn, move to attack or distract the wolf while the fawn flees. This level of the system is Reaction.

So: Awareness to Recognition to Reaction. Think of them as concentric rings centered on the deer. As the wolf enters these rings, its proximity triggers different responses. These distances can be written as a sequence of numbers, for instance [20/10/5], read as [Awareness/Recognition/Reaction].

Awareness will always be the largest number. Without being aware of something, the deer can neither recognize nor react to it.

Either recognition or reaction may be the next largest number, or they may be equal. In any case, neither of them may be larger than the Awareness number, although they may be equal to it.

So: [10/5/10] would be a “legal” description, but [5/10/5] would not.
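Here is a minimal sketch of the model so far, in TypeScript. The names are mine, for illustration; this is not from any particular engine.

```typescript
// Sketch of the three-ring model: [Awareness/Recognition/Reaction].
type Thresholds = {
  awareness: number;   // largest ring: "something is out there"
  recognition: number; // "I know what it is"
  reaction: number;    // "I am doing something about it"
};

// Awareness must be >= both other radii: [10/5/10] is legal, [5/10/5] is not.
function isLegal(t: Thresholds): boolean {
  return t.awareness >= t.recognition && t.awareness >= t.reaction;
}

// Which rings the other entity has crossed at a given distance.
function sense(t: Thresholds, distance: number) {
  return {
    aware: distance <= t.awareness,
    recognized: distance <= t.recognition,
    reacting: distance <= t.reaction,
  };
}

const deer: Thresholds = { awareness: 20, recognition: 10, reaction: 5 };
console.log(isLegal(deer));  // true
console.log(sense(deer, 8)); // aware and recognized, but not yet reacting
```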

Using this system a wide variety of behaviors may be put into place with little effort. The following are some examples:

[10/5/2] — Long-range awareness, medium-range recognition, short-range reaction. A semi-tame, slow-moving, not-too-bright animal. A farm cow, for instance. Knows you are there, knows who you are, doesn’t much care.

[10/5/10] — Long-range awareness and reaction, medium-range recognition. Guards at a gate. Something is out there, so immediately set out after it. Once they are close, it may be recognized and perhaps another action performed.

[10/5/1] — Long-range awareness, medium-range recognition, extremely short-range reaction. A bored, disaffected clerk at a store. Knows you are there, knows who/what you are, doesn’t do anything until you actually poke him in the shoulder.

[10/1/10] — Long-range awareness and reaction threshold, contact-range recognition. A rhinoceros during mating season, which charges anything it detects and only stops when, at extremely short range, it recognizes something like “Ooh! That’s a train!” Basically this is an unthinking reaction to the presence of another entity.
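Reusing the Thresholds type and sense function from the sketch above, these example profiles become plain data:

```typescript
// The four example profiles, using the Thresholds type and sense
// function from the earlier sketch.
const profiles: Record<string, Thresholds> = {
  cow:   { awareness: 10, recognition: 5, reaction: 2 },
  guard: { awareness: 10, recognition: 5, reaction: 10 },
  clerk: { awareness: 10, recognition: 5, reaction: 1 },
  rhino: { awareness: 10, recognition: 1, reaction: 10 },
};

for (const [name, t] of Object.entries(profiles)) {
  // At distance 7 the guard and rhino are already reacting, while the
  // cow and clerk merely know something is there and what it is.
  console.log(name, sense(t, 7));
}
```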

It occurs to me that this could be made less “broad” and more “deep” by changing to a two-level “awareness/reaction” and “recognition/reaction” system. For simple- to medium-complexity games, though, I like the three-level approach. In particular I like playing with the distance between “recognition” and “reaction”, which allows for simulating different levels of intelligence or bravery, as well as startle reactions: an enemy suddenly appearing well within the “reaction” threshold and causing a panic, or a particularly slow-witted (think “drunk”) critter standing around gob-smacked while being charged by a bull.

Note that this AI system is for an “initial contact” situation, where another entity is first entering into awareness range, or has been outside of awareness range long enough that the “deer” has forgotten the entity was there. Reactions when actually interacting with another entity will be discussed in an upcoming essay.

Posted in Programming | Tagged artificial intelligence, game development | Leave a comment

Life, or Something Like It

2002-01-08 (updated 2024-05-13) John Winkelman

I had a flash of insight today regarding the programming of simple artificial life experiments. The simplest would be a series of algorithms running in the background of an interface, with readouts of statistics: how many are left, which generation they are on, all that general stuff. Adding a graphic representation of the data improves the life metaphor, allowing the “creatures” to visibly interact with one another.

With fairly simple object-oriented programming the artificial life (AL) forms could be given rudimentary traits — aggression, speed, strength, reproduction, life span, etc., and be allowed to interact with one another. A sidebar could keep track of the averages in the population: average aggression, average age of the group, likelihood of breeding… and, based on random starting variables, after a few or a few hundred generations, evolution will have occurred.

With a little more programming mojo (but still in the realm of the simple) the ALife individuals could be made to “cannibalize” one another, and tests could be run to see which version of the life-form is most likely to succeed: the harmful or the helpful.
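Here is a minimal sketch of that kind of loop in TypeScript, with hypothetical trait names and a toy survival rule; a real experiment would want more interesting interactions between individuals.

```typescript
// Minimal ALife sketch: creatures with random starting traits, a toy
// survival rule, and mutation on breeding. The per-generation averages
// are the "sidebar readout" described above.
type Creature = { aggression: number; speed: number; lifespan: number };

const rand = () => Math.random();
const mutate = (x: number) => Math.min(1, Math.max(0, x + (rand() - 0.5) * 0.1));

let population: Creature[] = Array.from({ length: 100 }, () => ({
  aggression: rand(),
  speed: rand(),
  lifespan: rand(),
}));

const average = (pop: Creature[], key: keyof Creature) =>
  pop.reduce((sum, c) => sum + c[key], 0) / pop.length;

for (let generation = 1; generation <= 50; generation++) {
  // Toy selection pressure: faster, longer-lived creatures survive.
  const survivors = population.filter((c) => c.speed + c.lifespan > rand() * 2);
  if (survivors.length === 0) break; // the population died out
  // Survivors breed (with slight mutation) to refill the population.
  population = Array.from({ length: 100 }, () => {
    const parent = survivors[Math.floor(rand() * survivors.length)];
    return {
      aggression: mutate(parent.aggression),
      speed: mutate(parent.speed),
      lifespan: mutate(parent.lifespan),
    };
  });
  console.log(generation, "avg speed:", average(population, "speed").toFixed(3));
}
```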

The idea occurred to me while I was browsing the AI Depot.

In other news, I added three Flash mouse trailers to the tech section. Flash: I love it, I hate it.

[2024.05.13 UPDATE – fixed link to AI Depot]

Posted in Programming | Tagged artificial intelligence, artificial life, game development | Leave a comment
