Behavior, Bias & Choices: What LLMs Know, and Don’t

New research shows large language models mirror many human decision patterns as the number of options grows, except in the messy stuff like satisfaction and knowing when to stop.

A new SSRN study from Katherine Hallen finds that large language models (LLMs) reproduce many human decision behaviors when presented with choice sets of different sizes (basically, how many options a person sees).

The experiments cover responses like stopping, tasting, purchasing, and satisfaction.

LLMs like GPT-5 act more rationally and more systematically than people do.

But they fail to mimic how humans feel about being overloaded, or when they decide they’ve seen “enough.” 

If you build products and bundles, option counts matter.

Too many versions, too many features, or too many bundles cause users to delay decisions, feel regret, or avoid picking anything at all.

Hallen shows that prompt design (framing options narratively, or using “persona” and “role-play” setups) helps LLMs behave more like humans.

That gives you a tool:

Simulate user reactions with varying option counts to detect the tipping points before launch. 
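Here’s a minimal sketch of what that simulation could look like, assuming the OpenAI Python client; the model name, persona wording, plan options, and response format are illustrative assumptions, not details from the study. The idea is simply to role-play a buyer, sweep the number of options, and watch where the model starts deferring or abandoning.

```python
# Minimal sketch: probe a persona-prompted LLM with growing choice sets.
# Assumptions: the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name, persona wording, and plan options below are illustrative only.
from openai import OpenAI

client = OpenAI()

# Illustrative option list: "Plan A", "Plan B", ... with made-up prices/features.
OPTIONS = [f"Plan {chr(65 + i)}: ${10 + 5 * i}/mo, {i + 1} extra feature(s)" for i in range(24)]

PERSONA = (
    "Role-play a busy first-time buyer comparing subscription plans. "
    "Reply with exactly one line: CHOOSE <plan name>, DEFER, or ABANDON, "
    "followed by a short reason."
)

def probe(option_count: int) -> str:
    """Show the persona a choice set of the given size and return its decision."""
    menu = "\n".join(OPTIONS[:option_count])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": f"Here are your options:\n{menu}\n\nWhat do you do?"},
        ],
        temperature=1.0,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Sweep option counts to look for the point where deferral or abandonment rises.
    for n in (3, 6, 12, 24):
        print(f"--- {n} options ---")
        print(probe(n))
```

Run the sweep many times per option count and tally the CHOOSE/DEFER/ABANDON rates; the count where deferrals start climbing is a rough signal of where your real lineup may start to overwhelm buyers.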

Bottom line: Don’t assume your customers will stay calm or happy when faced with a wall of options.

Even powerful AI models can’t replicate that human overload effect.

Start small, test your choices, and give buyers a clear, confident path instead of a maze.

Want to make your product irresistible? That’s what we do as product marketing consultants at Graphos Product, helping innovators turn need-driven ideas into market-ready successes.