Research Foundations
The research that informs Connected Classroom's work spans cognitive science, developmental psychology, privacy law, and learning theory. This page organizes the key studies and frameworks behind the ideas published in The Algorithmic Mind column at Psychology Today, the Connected Classroom blog, and the Cognitive Privacy Project.
This is a living document. It is updated as new research emerges.
Friction and Why Learning Requires It
The central claim across all Connected Classroom work is that learning requires friction: struggle, confusion, error, and the cognitive effort of working through difficulty. AI tools are designed to eliminate friction. This creates a structural tension between how learning actually works and how these tools operate.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux. The brain defaults to effortless "System 1" processing. Effortful "System 2" thinking, the kind required for judgment and critical analysis, must be deliberately engaged. AI offloading caters to System 1 and starves System 2.
Carr, N. (2010). The shallows: What the Internet is doing to our brains. W. W. Norton & Company. Frictionless information architectures physically rewire neural pathways to prioritize shallow skimming over sustained, deep synthesis.
Hildebrandt, M. (2015). Smart technologies and the end(s) of law. Edward Elgar Publishing. Democratic citizens require "agonistic friction" (struggle against resistance) to develop autonomy. Frictionless technology environments produce passive subjects, not autonomous agents.
Frischmann, B., & Selinger, E. (2018). Re-engineering humanity. Cambridge University Press. Predictive systems that remove friction from daily life engage in "techno-social engineering," treating humans as stimulus-response machines and eroding the capacity for free will.
Gruber, M. J., Gelman, B. D., & Ranganath, C. (2014). States of curiosity modulate hippocampus-dependent learning via the dopaminergic circuit. Neuron, 84(2), 486-496. https://doi.org/10.1016/j.neuron.2014.08.060 Curiosity is a biological prerequisite for memory formation. Bypassing the intrinsic drive to know by delivering synthesized answers prevents the hippocampus from encoding information.
Immordino-Yang, M. H., Darling-Hammond, L., & Krone, C. R. (2019). Nurturing nature: How brain development is inherently social and emotional, and what this means for education. Educational Psychologist, 54(3), 185-204. https://doi.org/10.1080/00461520.2019.1633924 Emotion and cognition are biologically inseparable. Outsourcing the emotional struggle of learning prevents cognitive scaffolding from forming.
Cognitive Offloading and Critical Thinking
When people delegate thinking to AI, they practice thinking less. Skills that are not practiced atrophy in adults or never develop in children.
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006 A study of 666 participants found strong correlations: AI usage with cognitive offloading (r = +0.72), cognitive offloading with critical thinking (r = -0.75), and AI usage with critical thinking (r = -0.68). The findings are correlational, not causal. The age pattern is the critical detail: participants aged 17-25 showed the highest AI dependence and the lowest critical-thinking scores.
Shen, J. H., & Tamkin, A. (2026). How AI impacts skill formation. arXiv preprint, arXiv:2601.20245. https://doi.org/10.48550/arXiv.2601.20245 Programmers using AI assistance completed tasks faster but showed a 17% drop in conceptual comprehension. They could not identify errors in their own AI-assisted code. Productivity is not competence.
Ahmad, S. F., et al. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10, Article 311. https://doi.org/10.1057/s41599-023-01787-8 Survey evidence that AI use in education, for all its benefits, is associated with diminished human decision-making, increased cognitive laziness, and heightened privacy risk among students.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19. https://doi.org/10.1093/analys/58.1.7 Human cognition extends beyond the skull into external tools. When those tools are algorithmically controlled, the manipulation of the tool is a manipulation of thought itself.
Atrophy vs. Foreclosure: Why Children Are Different
Adults who offload cognitive tasks are choosing to skip work they already know how to do. The foundation exists and can theoretically be rebuilt. Children who offload are closing the door to building capacity in the first place. There is no foundation to return to.
Gopnik, A. (2016). The gardener and the carpenter. Farrar, Straus and Giroux. Children develop through "lantern consciousness," a broad, exploratory, seemingly inefficient mode of attention. AI tools optimize for "spotlight consciousness," the focused, goal-directed attention of adult productivity. Forcing children into spotlight mode too early forecloses the messy exploration required for cognitive flexibility.
Gopnik, A. (2009). The philosophical baby. Farrar, Straus and Giroux. Children's brains use simulated annealing: chaotic, wide-ranging exploration that settles into globally optimal configurations. Premature efficiency (handing children optimized AI outputs) is like cooling metal before it has been heated. The structure looks fine. It is brittle.
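Because the annealing metaphor does real work in this section, a toy sketch may help readers unfamiliar with the algorithm. This is a generic simulated-annealing demonstration, not code from Gopnik or any cited study; the objective function, schedule parameters, and step size are all invented for illustration.

```python
# Toy simulated annealing on an invented bumpy objective. High early
# "temperature" accepts bad moves, letting the search wander across basins
# before settling; cooling too fast makes the search greedy almost
# immediately, freezing it in a nearby local minimum -- the analogue of
# premature efficiency.
import math
import random

def bumpy(x: float) -> float:
    """Many local minima; the global minimum sits near x = -0.3."""
    return x * x + 3 * math.sin(5 * x)

def anneal(t_start: float, cooling: float, steps: int = 5000) -> float:
    x, t = 4.0, t_start
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)
        delta = bumpy(candidate) - bumpy(x)
        # Accept a worse move with probability exp(-delta / t): exploration.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = candidate
        t = max(t * cooling, 1e-9)
    return x

print(f"slow cooling: x = {anneal(5.0, 0.999):+.2f}")  # usually settles near the global minimum, x ~ -0.3
print(f"fast cooling: x = {anneal(5.0, 0.05):+.2f}")   # freezes in a nearby local minimum, far from it
```

The fast-cooled run typically "looks finished" in the sense that it stops moving, which is the brittleness Gopnik describes: a stable structure reached without enough exploration.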
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68-78. https://doi.org/10.1037/0003-066X.55.1.68 Autonomy, competence, and relatedness are basic psychological needs on which motivation and well-being depend. Surveillance and algorithmic mediation thwart autonomy directly.
Haidt, J. (2024). The anxious generation. Penguin Press. The shift from a play-based to a phone-based childhood has systematically rewired child development and damaged the mental health of a generation. AI tools accelerate this trajectory by removing what friction remains in digital interaction.
Internet Matters. (2025). Research on teen AI companion use. https://www.internetmatters.org/ 40% of teenagers who use AI companions trust their guidance without question. 36% are uncertain whether they should be concerned about AI advice at all.
Gilly, T. (2025). Children as canaries: The pediatric sentinel effect in algorithmic harm. Real Safety AI Foundation. https://ai-literacy-labs.org/ Developing minds are early indicators of algorithmic degradation. The cognitive foreclosure visible in children today predicts broader population effects.
Rooney, T. (2010). Trusting children: How do surveillance technologies alter a child's experience of trust, risk and responsibility? Surveillance & Society, 7(3/4), 344-355. https://doi.org/10.24908/ss.v7i3/4.4158 Surveillance denies children the "right to risk," eliminating the boundary-testing required to develop internal moral compasses.
Homogenization: When Everyone Thinks the Same Thing
Large language models converge toward the statistical mean of their training data. When these systems mediate how people engage with information at scale, variance compresses. The question is what happens to a society that loses cognitive diversity.
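The compression claim is easy to state quantitatively. The toy calculation below is illustrative only and is not drawn from any cited study: it simulates what happens to the spread of independently formed positions as a growing share of a population adopts a single mean-like output.

```python
# A toy numeric illustration of variance compression, not a model of any
# real system: as more people adopt a model's mean-like output, the spread
# of positions in the population collapses.
import random
import statistics

random.seed(1)
human_ideas = [random.gauss(mu=0.0, sigma=1.0) for _ in range(10_000)]
model_output = statistics.fmean(human_ideas)  # the "statistical mean" answer

for adoption in (0.0, 0.5, 0.9):
    n_adopt = int(adoption * len(human_ideas))
    mixed = [model_output] * n_adopt + human_ideas[n_adopt:]
    print(f"adoption {adoption:.0%}: stdev {statistics.stdev(mixed):.2f}")
# stdev falls from ~1.00 toward ~0.32 at 90% adoption: fewer distinct
# positions survive, even though every individual answer looks reasonable.
```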
Quattrociocchi, W., Capraro, V., & Perc, M. (2025). Epistemological fault lines between human and artificial intelligence. arXiv preprint, arXiv:2512.19466. https://doi.org/10.48550/arXiv.2512.19466 AI creates "epistemological fault lines" by replacing the friction of truth-seeking with plausibility-matching. The result is "epistemia": the feeling of knowledge without the cognitive labor of evaluation.
Ding, A. W., & Li, S. (2025). Generative AI lacks the human creativity to achieve scientific discovery from scratch. Scientific Reports, 15, Article 9587. https://doi.org/10.1038/s41598-025-93794-9 AI can write more creatively than the average human but cannot reach the output of highly creative individuals. Population-level reliance on AI compresses creative variance toward the mean.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press. Algorithmic systems reflect and amplify existing biases by presenting them as neutral, data-driven conclusions.
The Personalization Myth
AI companies market "personalized learning." What AI actually personalizes is content delivery: which information reaches a student, in what sequence, based on engagement signals. Personalization worthy of the name requires cultural context, perspective, lived experience, metacognition, the shared experience of struggle, and a relationship with the learner. No AI system has any of these.
Herrington, J., & Oliver, R. (2000). An instructional design framework for authentic learning environments. Educational Technology Research and Development, 48(3), 23-48. https://doi.org/10.1007/BF02319856 Authentic learning requires contexts that reflect how knowledge is used in real life, not algorithmically sequenced content delivery.
Noddings, N. (2013). Caring: A relational approach to ethics and moral education (2nd ed.). University of California Press. Genuine pedagogical relationships require trust. The caring relation between teacher and student depends on the student believing their vulnerability will be protected, not exploited.
Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd ed.). ASCD. Backward design emphasizes performance-based assessments that provide authentic evidence of understanding, not engagement metrics or standardized test scores.
Epistemic Justice and Algorithmic Bias
AI systems trained on historical data reproduce historical patterns of whose knowledge counts. This is not incidental bias. It is structural epistemic injustice operating at population scale through a single point of failure with no contestation mechanism.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press. Two forms of systematic knowledge exclusion: testimonial injustice (credibility deficits based on identity) and hermeneutical injustice (lack of interpretive frameworks for marginalized experience). AI automates both at scale.
Solove, D. J. (2025). Artificial intelligence and privacy. Florida Law Review, 77. AI systematizes bias, making it more pervasive and inescapable than individual human bias. Privacy law must shift from protecting inputs (data collected) to regulating outputs (inferences made).
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104, 671-732. Biases from past decisions become codified into formalized algorithmic rules, producing systematic discriminatory impacts.
Henrich, J. (2020). The WEIRDest people in the world. Farrar, Straus and Giroux. Western, educated, industrialized, rich, and democratic populations are psychologically unusual, not universal. AI training data overrepresents WEIRD perspectives and presents them as default human psychology.
Cognitive Privacy and Surveillance
Current privacy law protects data: what you said, what you purchased, where you went. Nothing protects the cognitive process itself: how you thought about it, what you struggled with, what you almost did but reconsidered.
Magee, P., Ienca, M., & Farahany, N. A. (2024). Beyond neural data: Cognitive biometrics and mental privacy. Neuron, 112(18), 3017-3028. https://doi.org/10.1016/j.neuron.2024.07.025 Proposes a "privacy floor" for cognitive biometric data built on informed consent, data minimization, data rights, and data security, and recommends edge processing as the default standard.
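Edge processing is a design choice rather than a legal doctrine, so a sketch may make it concrete. The sketch below is a hypothetical illustration of the principle, not code from the paper; every name in it is invented. The point is structural: raw cognitive-process data stays on the device, and only a coarse, non-reconstructable summary is ever eligible to leave it.

```python
# Hypothetical sketch of edge processing: raw cognitive-process data
# (keystroke timing, discarded drafts) never leaves the device; only a
# single coarse statistic is exposed for transmission.
from dataclasses import dataclass

@dataclass
class LocalSession:
    """Raw interaction data, held only in device memory."""
    keystroke_intervals_ms: list[float]
    discarded_drafts: list[str]

def summarize_on_device(session: LocalSession) -> dict:
    """Reduce the session to one aggregate before anything is uploaded."""
    n = max(len(session.keystroke_intervals_ms), 1)
    avg_pause = sum(session.keystroke_intervals_ms) / n
    # The drafts and per-keystroke timings are deliberately excluded:
    # the summary cannot be inverted back into the process that produced it.
    return {"avg_pause_ms": round(avg_pause)}

session = LocalSession([120.0, 95.0, 4300.0], ["first attempt", "second attempt"])
payload = summarize_on_device(session)  # the only artifact eligible to leave the device
del session                             # raw process data ends with the session
print(payload)
```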
Penney, J. W. (2016). Chilling effects: Online surveillance and Wikipedia use. Berkeley Technology Law Journal, 31(1), 117-182. Knowledge of surveillance changes behavior: Wikipedia traffic to sensitive articles dropped significantly after the Snowden revelations. If adults self-censor under observation, children in mandatory educational settings are more vulnerable still.
Foucault, M. (1977). Discipline and punish: The birth of the prison (A. Sheridan, Trans.). Pantheon Books. (Original work published 1975.) The panopticon principle: when people know they could be observed at any time, they internalize surveillance and police themselves. Digital monitoring systems in schools create the same dynamic.
Zuboff, S. (2019). The age of surveillance capitalism. PublicAffairs. Human experience is claimed as free raw material for behavioral prediction and modification. The extraction of behavioral data from daily life is the business model, not a side effect.
The Intelligence Suite: Proof of Concept
The Connected Classroom Intelligence Suite was built to demonstrate that AI tools for education can operate under different architectural principles: ephemeral processing (no data retention after session), no login required, no user profiling, and zero commercial interest. It exists to prove that the choice between "AI tools that capture cognitive data" and "no AI tools at all" is a false binary. A third option exists: tools designed to solve a specific teacher problem without extracting anything from the interaction.
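As one way to see what these principles mean in code, here is a minimal sketch of an ephemeral request handler. It illustrates the architecture's logic under stated assumptions and is not the Intelligence Suite's actual implementation: the word-count stand-in, the port, and the endpoint behavior are all invented for illustration.

```python
# Minimal sketch of ephemeral processing: the submission exists only inside
# this method call. No database, no log line, no cookie, no user identifier.
from http.server import BaseHTTPRequestHandler, HTTPServer

class EphemeralHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        text = self.rfile.read(length).decode("utf-8")  # lives only in this frame
        result = f"word count: {len(text.split())}"     # stand-in for real tool logic
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.encode("utf-8"))
        # When this method returns, the submitted text is unreachable:
        # nothing was written to disk and nothing identifies the sender.

    def log_message(self, *args):
        pass  # suppress even the default stderr request log

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), EphemeralHandler).serve_forever()
```

The design choice worth noticing is that privacy here is a property of the architecture, not a policy promise: there is no retained data to delete, breach, or subpoena.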
This page reflects the research informing Connected Classroom's published work as of March 2026. For the full Cognitive Privacy Project research framework, visit cognitiveprivacyproject.org.
© 2026 Timothy Cook / Connected Classroom. All rights reserved. Licensed under CC BY-NC-ND 4.0. You may share this work with attribution. Commercial use and derivatives require written permission.