
LLMorphism: When humans come to see themselves as language models

Source: Hacker News
Computer Science > Computers and Society
arXiv:2605.05419 (cs)
[Submitted on 6 May 2026]
Title: LLMorphism: When humans come to see themselves as language models
Authors: Valerio Capraro
Abstract: LLMorphism is the biased belief that human cognition works like a large language model. I argue that the rise of conversational LLMs may make this bias increasingly psychologically available. When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs. This inference is biased because similarity at the level of linguistic output does not imply similarity in cognitive architecture. Yet, LLMorphism may spread through two mechanisms: analogical transfer, whereby features of LLMs are projected onto humans, and metaphorical availability, whereby LLM vocabulary becomes a culturally salient vocabulary for describing thought. I distinguish LLMorphism from mechanomorphism, anthropomorphism, computationalism, dehumanization, objectification, and predictive-processing theories of mind. I outline its implications for work, education, responsibility, healthcare, communication, creativity, and human dignity, while also discussing boundary conditions and forms of resistance. I conclude that the public debate may be missing half of the problem: the issue is not only whether we are attributing too much mind to machines, but also whether we are beginning to attribute too little mind to humans.
Comments: 16 pages
Subjects: Computers and Society (cs.CY)
Cite as: arXiv:2605.05419 [cs.CY]
  (or arXiv:2605.05419v1 [cs.CY] for this version)
  https://doi.org/10.48550/arXiv.2605.05419 (arXiv-issued DOI via DataCite, pending registration)

Submission history

From: Valerio Capraro
[v1] Wed, 6 May 2026 20:27:15 UTC (226 KB)
