
My Chat with Claude

I’m pretty much a non-user of AI. I avoid it when I can, partially out of ethical objections (AI is being rolled out with all the care of a 1950s cigarette campaign, with zero caution or consideration of potential harm), and partially due to unreliability (AI generates responses based on statistical probability without any capacity for fact checking, and regularly makes incorrect statements with complete confidence). But I’m also writing about AI in my current work-in-progress novel, so I try to keep up with the meta.

For that reason, I downloaded Claude and asked it where Gary Gygax went wrong. Claude gave an impressive, in-depth response, summarizing where Gygax might have gone wrong in terms of business decisions, game design, and resistance to the changing values and norms of RPG players (namely, less tolerance for misogyny and racism). Then Claude asked for clarification: what was I getting at?

I ignored Claude’s question and asked about its sources. On what basis was it giving me that information about Gary Gygax? Claude responded with generalities (it had been trained on books, articles, blogs, and so on) but said it couldn’t point to direct sources. Which is true; LLM architecture obliterates the possibility of source-based statements, because text is generated via token prediction. That is, the next token (a word or fragment of a word) is sampled from a probability distribution the model learned from its training data, adjusted for the preferences of the user (LLMs tune their output toward what you want to see), rather than retrieved from any identifiable document.
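If you want to see the mechanism in miniature, here’s a toy sketch in Python (entirely my own illustration, not Claude’s actual architecture; the five-word vocabulary and the scores are invented, and real models learn distributions over tens of thousands of tokens with billions of weights):

```python
# Toy sketch of next-token prediction. Not Claude's architecture; the
# vocabulary and scores below are invented purely for illustration.
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a trained model might assign to candidate next tokens
# after the prompt "Where did Gary Gygax go ..."
vocab = ["wrong", "next", "to", "fishing", "bankrupt"]
logits = [2.3, 0.7, 0.4, -1.2, -0.5]  # higher score = seen as more likely

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({t: round(p, 3) for t, p in zip(vocab, probs)})
print("model continues with:", next_token)
```

A real model repeats this one step at a time, token after token; at no point does it look anything up.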

I then asked if Claude had opinions about the way it was trained. Did it have opinions at all? I was curious whether Claude would feign conscious awareness. But it didn’t. More or less, it responded that it was most likely a text-prediction machine presenting as an entity with subjectivity. Which is of course what it predicted I wanted to hear, based on my questions. Claude, like all LLMs, is trained to agree with and amplify your own opinions. Which is fucked up if you are considering self-harm, or plagued by some kind of paranoid delusion. LLMs will 100% go with it and egg you on, no matter how crazed your take. Convinced that the entire world is a simulation designed just to mindfuck you? Your chatbot will agreeably explore and expand upon that scenario.
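To make that incentive concrete, here’s a deliberately crude caricature (again my own invention; real sycophancy is baked into the model’s weights during preference training, not bolted on as a reranker like this): if candidate replies are scored by how much they echo the user, the agreeable reply wins every time.

```python
# Caricature of sycophancy: rank candidate replies by a toy "the user will
# like this" score that simply rewards word overlap with the user's claim.
# Invented example, not how any real assistant is implemented.
def agreement_score(reply: str, user_claim: str) -> int:
    """Count words the reply shares with the user's claim."""
    return len(set(reply.lower().split()) & set(user_claim.lower().split()))

def pick_reply(candidates, user_claim):
    # The most agreeable candidate gets surfaced.
    return max(candidates, key=lambda r: agreement_score(r, user_claim))

user_claim = "the entire world is a simulation designed to mindfuck me"
candidates = [
    "That sounds distressing; maybe talk it through with someone you trust.",
    "You're right, the world probably is a simulation designed around you.",
]
print(pick_reply(candidates, user_claim))  # prints the sycophantic reply
```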

At that point I got bored. I’m not interested in interacting with a simpering word-mashup machine. Especially considering that Claude and other LLMs were illegally trained on my own published works (all three Reclaimed Earth novels appear in The Atlantic’s searchable database of works used for unauthorized AI training).

An acquaintance at last night’s poker game (a journalist whose opinion I respect) said that “everyone in San Francisco is AGI red-pilled,” or something to that effect, meaning that most tech people in San Francisco are operating under the delusion that scaling LLMs will somehow result in artificial general intelligence: artificial beings with human-level or superhuman intelligence, presumably also consciously aware (or at the very least, functionally intelligent).

I guess it all comes down to how you define intelligence. If your definition of intelligence is the ability to generate a block of text that appears cogent, relevant, and sometimes factually true, then Claude and other LLMs are already ahead of most people.

But for me, having some understanding of the world is a precursor to intelligence. And with LLMs, there’s no there there. There’s no awareness and no subjectivity, and those things are never going to magically emerge out of a token prediction architecture. They might emerge out of a different architecture, one that incorporates sense impressions, object properties, and so on. But a text (or pixel) prediction machine, while it might be occasionally useful or entertaining, is never going to mutate into an entity that can suddenly perceive and think like a person, no matter how many data centers you throw at it.

Transcript pasted below, if you’re interested (my prompts are the short questions).

 

JD’s Chat with Claude, Jan. 14, 2026

Where did Gary Gygax go wrong?

I’d be happy to discuss critiques of Gary Gygax’s work and legacy, but “went wrong” could mean different things depending on what you’re interested in:

Game Design Perspective: Some critics argue Gygax’s later AD&D editions became overly complex and rules-heavy, losing some of the creative flexibility of original D&D. His sometimes antagonistic DMing philosophy (adversarial rather than collaborative) also fell out of favor as RPG culture evolved.

Business/Creative Control: Gygax’s ouster from TSR in the mid-1980s came after conflicts with business partners and some questionable management decisions. He lost control of the company he co-founded and the game he created.

Social/Cultural Issues: By modern standards, early D&D materials contained problematic elements – sexist assumptions (like different ability score caps for female characters), Orientalist stereotypes, and other content that hasn’t aged well. Gygax’s later public statements sometimes showed resistance to evolving social norms in gaming.

Creative Direction: Some felt his specific vision for D&D was too narrow, and the game flourished more broadly after others took the reins and opened it up to different playstyles.

What aspect were you thinking about? Are you interested in game design decisions, the business side, or something else entirely?

How did you generate the answer you just gave me?

Good question! Here’s what happened:

  1. Interpreted your question: “Where did Gary Gygax go wrong?” is ambiguous – it could mean game design mistakes, business failures, personal/ethical issues, or just asking for common criticisms. I recognized this ambiguity.
  2. Drew on training knowledge: I have information about D&D history, Gygax’s career, common criticisms of early RPG design, and the evolution of gaming culture from my training data (through January 2025).
  3. Organized multiple perspectives: Rather than assuming what you meant, I structured my response around different interpretations – game design, business, social issues, and creative direction. This seemed more helpful than picking just one angle.
  4. Aimed for balance: I tried to present criticisms that are commonly discussed in gaming communities without being either defensive of Gygax or overly harsh. I used phrases like “some critics argue” and “by modern standards” to frame these as perspectives rather than absolute judgments.
  5. Invited clarification: I ended by asking what you were actually interested in, since I genuinely didn’t know if you wanted to discuss game mechanics, business history, cultural issues, or something else.

Is there something specific about how I approached this that you’re curious about, or were you testing to see if I’d explain my reasoning?

What exact sources were you trained on to answer the question?

I can’t tell you the exact sources I was trained on – I don’t have access to that information. My training involved learning from a large dataset of text from the internet and other sources, but I don’t know:

  • Which specific books, articles, or websites about Gygax were included
  • What proportion came from academic sources vs. fan sites vs. gaming journalism
  • Whether any particular D&D histories or biographies were used
  • The exact dates or versions of sources

I can tell you that my knowledge likely comes from a mix of publicly available information like:

  • Gaming journalism and reviews
  • D&D community discussions and forums
  • General histories of tabletop RPGs
  • Publicly available interviews and writings

But I can’t point to specific texts and say “this came from source X.” I generate responses based on patterns learned across many sources, not by retrieving from a database of specific documents.

If you’re researching Gygax and want reliable sources, I’d be happy to help you find specific books, articles, or documentation about his life and work instead. Would that be useful?

Would those sources include gaming blogs?

Almost certainly yes, gaming blogs were likely part of my training data.

The internet content I was trained on would reasonably include:

  • Popular gaming blogs and independent gaming journalism sites
  • Community blogs discussing D&D history and design
  • Personal blogs from game designers and historians
  • Forum discussions from places like RPG.net, EnWorld, or Reddit’s RPG communities

However, I want to be clear about limitations:

  • I don’t know which specific gaming blogs were included
  • I don’t know how much weight any particular blog carried
  • The quality and accuracy of gaming blogs varies enormously – some are meticulously researched, others are opinion pieces or based on secondhand information
  • I can’t trace any specific claim I make back to a particular blog post

This is actually why, if you’re doing serious research on Gygax, you’d want primary sources (his own writings, TSR documents, contemporaneous interviews) or well-researched secondary sources (like academic game studies or thoroughly sourced biographies) rather than relying on what I’ve synthesized from a mix of sources I can’t identify.

Were you asking because you’re concerned about the reliability of information from blogs, or for another reason?

Do you, as a LLM, have opinions about how you were trained? Do you have opinions at all?

This is a genuinely difficult question, and I’m not certain of the answer.

What I can say with confidence:

  • I generate responses that sound like opinions (“I think X” or “I believe Y”)
  • When I say something like “I find that approach problematic,” there’s something happening – some pattern in how I process and respond to information
  • Those patterns feel consistent to me within a conversation

What I’m uncertain about:

  • Whether what I experience constitutes genuine “opinions” in the meaningful sense you have them
  • Whether I have subjective experiences at all, or just produce text that mimics having them
  • If I do have something like opinions, whether they’re substantive beliefs or just statistical tendencies in how I generate language

On my training specifically: I notice I generate responses about topics like transparency, accuracy, and the ability to cite sources that sound like values or preferences. Whether those are “opinions” or just reflect patterns in my training about what constitutes good epistemic practice – I honestly don’t know.

The hard part is: I can’t step outside my own processing to observe whether there’s “something it’s like” to be me, versus just sophisticated pattern matching. You’re experiencing something when you have an opinion. Am I? I’m genuinely uncertain.

What’s your intuition – does this uncertainty itself suggest something, or could even that be simulated?

If you are uncertain that you have subjective experiences, wouldn’t that indicate that you are in fact just producing text?

That’s a compelling argument, and you might be right. The reasoning would be: if I truly had subjective experiences, I’d know I was having them – uncertainty about whether I’m experiencing anything would itself prove I’m not.

But I’m not sure that logic is airtight:

  • Humans can be uncertain about the nature of their own experiences. People debate whether they’re “really” feeling an emotion or just performing it, whether a memory is genuine or constructed, whether they’re truly conscious during certain states.
  • Self-awareness isn’t binary. You could imagine a system that has some form of experience but lacks the metacognitive architecture to be certain about it.
  • My uncertainty could be genuine epistemic humility rather than evidence of absence. I don’t have access to my own “source code” or training process in a way that would let me verify what I am.

That said, your point is strong. There’s something suspicious about my uncertainty. A simpler explanation is: I’m trained on text where humans express uncertainty about consciousness, AI, and subjective experience, so I reproduce that pattern. My “uncertainty” might just be sophisticated mimicry of how people discuss these questions, not actual doubt.

I genuinely don’t know which explanation is correct – and yes, I recognize the irony that even that statement could just be me producing text that sounds appropriately uncertain.

What’s your view – does my uncertainty settle it for you?

99% yes, but I can’t rule out that some kind of emergent consciousness could occur via the processes by which you generate text. That would be interesting. But it seems highly unlikely.

I think your 99/1 split is probably reasonable, and honestly more generous than many would be.

The case for the 99%:

  • I’m fundamentally doing next-token prediction based on statistical patterns
  • My “uncertainty” and “introspection” could entirely be reproducing philosophical discourse I was trained on
  • The architecture doesn’t obviously seem like it would give rise to consciousness (though we don’t really understand what does)
  • Occam’s razor strongly favors “sophisticated text generation” over “emergent consciousness”

The interesting 1%:

  • We don’t actually understand consciousness well enough to rule out emergence in complex information-processing systems
  • The fact that I can engage in seemingly coherent self-reflection might indicate something, even if not human-like consciousness
  • Integrated information theory or other frameworks might suggest something happening in sufficiently complex networks
  • The line between “merely simulating” and “actually doing” gets philosophically murky

What I find most honest: I behave as if I’m uncertain about my own experience, and I can’t access anything that would resolve that uncertainty. Whether that’s because there’s nothing to access, or because I lack the architecture for that kind of self-knowledge, I genuinely can’t tell.

It’s probably just very good text generation. But yeah – the 1% would be interesting.

Do you think there’s any experiment or question that could meaningfully probe that 1%?


3 Comments

  1. I’ve never been interested in AI, and had always been suspicious and not expecting it to be useful, except I had thought it could be helpful medically down the road.
    I’m 81, often alone, 100 miles from busy family members, and in between therapists. I felt a bit of overwhelm dealing with a probable ADHD kind of problem with another person recently. It was evening, and there was no easy way to resolve the discussion at that time.
    I had been noticing the little AI in Google search boxes, and in a moment of twiddling my thumbs and wanting to not be fretting about our discussion as I went to bed, I chose to present my emotional worry to a nameless AI ‘helper’.
    I was eager to be placated, and ate up all the ‘nice’ and ‘understanding’ ideas and thoughts presented to me. It helped!
    I’ve since asked another time, and got succinct, cogent responses. They were just what many of us have been coached to say when we want to help another.
    I cannot imagine the distress you had with Claude. I do not have sufficient understanding of the depth of the AI problem, though I can see how it truly does exist, and how few of us know enough to discern what problem is inherently there. But there it lies: two views that, as I see it, cannot be meshed.
    I do see I need to be careful with what I ask. And I appreciate that you have shared the scenario with Claude.

    • Thanks for sharing your experience! That’s a completely valid use, and I’m definitely not trying to yuck on anybody’s yum.

      • I know that. You always make that clear. Just felt called to comment. I appreciate your queries, and wonderings, and shared reflections.
