About
The Cognitive Integrity Lab explores how artificial intelligence is changing the way we think, learn, and build trust.
As AI outputs become indistinguishable from human work, the question of whose judgment lies behind them grows more urgent. Did the student wrestle with the essay, or did the model hand it to them? Did the doctor weigh the symptoms, or did the system generate the diagnosis while the doctor clicked through? When work can be produced without the reasoning it once guaranteed, trust becomes hollow.
We design protocols and systems that make it possible to verify how human and AI contributions were combined in a piece of work, without surveillance and without guessing provenance from writing style. For a short overview, see our Conversation article; for a fuller account, see our longer paper.
Questions our research addresses include:
• How can we tell when work is truly our own?
• How can technology support rather than replace authorship and reflection?
• What does trust mean when AI mediates our relationships with others and with our own thoughts?
We draw on philosophy, protocol design, and empirical work to develop principles for cognitive integrity: ways of making human and AI contributions visibly distinct and governable in AI-mediated work. This involves treating AI as part of the environments in which people think and learn, rather than as a simple external tool. Our work contributes to an emerging area of research focused on how humans and AI can interact while keeping the core activities of understanding, evaluation, and learning human-directed.
© 2025 Cognitive Integrity Lab · Temple University
