Exploring Metacognition, Digital Divides, and Algorithmic Bias with Dr. Christy Hamilton

Updated: Sep 16

Dr. Christy Hamilton is an Assistant Professor in the Department of Communication at the University of California, Santa Barbara. Her research sits at the intersection of cognitive psychology and communication, with a focus on how people think about their own thinking (metacognition) in digital environments. She studies how technology influences self-awareness, decision-making, and learning, with particular attention to issues such as digital divides, misinformation, and user interaction with platforms. Her work bridges theory and practice, exploring both the risks of biased or manipulative systems and the potential for technology design to support equitable outcomes.

How do our devices shape the way we think about ourselves? In this interview, Fair Tech Policy Lab founder Alina Huang speaks with Dr. Christy Hamilton, Assistant Professor in the Department of Communication at UC Santa Barbara, about the role of metacognition in digital environments. The conversation explores how people navigate constant information flows, the emergence of new digital divides, the risks of over-reliance on familiar technologies, and the ethical implications of nudging and personalization in platform design. Together, they reflect on how technology shapes self-awareness in ways that can reinforce inequality, and discuss potential interventions.



Key Insights from this Interview:

  • How metacognition works in digital environments

  • The rise of the second-level digital divide (skills) and third-level divide (benefits)

  • Why people overestimate their abilities when using familiar devices and platforms

  • How experiencing technology as a “self-extension” can reinforce inequality

  • The ethical risks and opportunities of nudging and self-endorsement in platform design

  • Where interventions could be most impactful: misinformation and high-risk decision-making


Metacognition in the Digital Age

Alina: Hi Dr. Hamilton, thank you so much for taking the time to meet with me today. I’m Alina, the founder of a think tank called the Fair Tech Policy Lab. We’re focused on addressing algorithmic bias, especially how algorithms embedded in our everyday lives often cause real but invisible harm across demographics, whether in loan applications, medical queues, job applications, or even the content we see in our social media feeds. Through research, advocacy, and public engagement, we aim to raise awareness and shape more equitable digital systems. To start with a question on metacognition: could you explain the concept, how it works in digital spaces, and what it looks like when we’re constantly switching between tabs, apps, and algorithms?


Dr. Hamilton: Sounds good. Metacognition, in general, is thinking about your own thinking. To make decisions about media use, we need a moment of self-awareness, and metacognition is our capacity for that. We can enact it in two ways: metacognitive monitoring, where we judge how much we know and where that knowledge comes from, and metacognitive control, where we decide whether to keep going, stop, or switch.


Alina: And what does it look like in the digital era, when we’re constantly switching between tabs, apps, and algorithms?


Dr. Hamilton: Metacognition research originated in helping students understand how to study: making decisions about how much they know and when to stop studying. The same idea applies to managing media. But with technology, we now have always-available devices, rapid access to information, and other factors that can interfere with our cognitive processes. So the digital world can both help and hinder our metacognitive decisions.


Beyond Access: The Second Digital Divide

Alina: That’s really interesting. Beyond access to devices, I know you’ve also identified a second digital divide—where some people have the metacognitive tools to navigate digital systems effectively, and others don’t. Could you talk more about that and its real-world consequences?


Dr. Hamilton: Sure. The first digital divide was about access—who had internet or smartphones and who didn’t. That gap has narrowed in many Western countries. Now we’re facing a second-level divide: skill. Even if people have access, they don’t always have the skills to use technology well. This leads to a third-level divide—benefits. If you lack skills, you don’t get the same benefits from technology.


Devices as a “Self-Extension”

Alina: Thank you. I also wanted to ask about one of your experiments, where you showed that people feel more confident in their cognitive skills when using a familiar search engine like Google—especially on a mobile device. Could you tell me more about that and how this sense of technology as a “self-extension” might unintentionally reinforce bias or over-reliance on digital tools?


Dr. Hamilton: Totally. One goal of metacognition is good self-awareness—knowing what you do or don’t know. What I found is that people using seamless, familiar tech—especially their own mobile devices—tend to overestimate their knowledge. In the experiment, participants took a trivia quiz and were either allowed to use a device or not. Those who used their device, especially if it was mobile and personal, rated themselves as better at thinking and remembering—even though the device did the work. This creates a blurry boundary between self and technology. People may start thinking the tech’s answers reflect their own abilities.


Alina: That’s really interesting—it seems like this alignment between tech and cognition could influence how people view themselves. Do you think it also reinforces stereotypes or discrimination?


Dr. Hamilton: That’s a great question. I haven’t done research on that specifically, but it’s possible. If people experience biased outcomes from tech, they might internalize them—blaming themselves. Conversely, people who are advantaged might feel extra confident, believing the tech reflects their ability. If tech advantages certain groups and disadvantages others, it can reinforce inequalities in self-perception.


From Personalization to Persuasion: Self-Endorsement in Design

Alina: That’s powerful. On a more positive note, I know your research also looks at self-endorsement—how framing recommendations as personalized can influence behavior. What are some key takeaways from that, and what responsibilities do platforms have in using that strategy?


Dr. Hamilton: Yes, self-endorsement is when people feel that a tech recommendation reflects their own values. This started in VR research: when users saw their avatar wearing a branded shirt, they felt more favorably toward the brand. We extended this to text-based recommendation systems like Netflix or Spotify. We found that if users think a recommendation is based on their personal data, they’re more likely to accept it and even internalize the associated values, like sustainability. So yes, tech can encourage pro-social behaviors, but platforms have an ethical responsibility. It’s a form of manipulation. Until there are policies in place, we need designers to intentionally pursue pro-social outcomes.


Nudges, Interfaces, and Digital Literacy

Alina: That’s a really nuanced perspective. So do you think we can—or should—use tech to “nudge” people toward better decisions? Are there risks in doing that?


Dr. Hamilton: I do see it as a kind of manipulation, yes. And yes, there are risks—especially because there’s little policy in place to prevent companies from manipulating for their own gain. Ideally, we’d educate users to be more metacognitively aware—to understand when and how they’re being nudged. But it’s hard. Most people won’t spontaneously reflect. That’s why embedding cues into interfaces could help—like visual signals prompting reflection.


Alina: That’s different from traditional education, right? Instead of teaching digital literacy in a classroom, you’d build it into the tech interface?


Dr. Hamilton: Exactly. Most people won’t read lengthy disclaimers or terms. So the cues have to be subtle, visual, and relevant to their goals. Otherwise, people ignore them—a phenomenon called banner blindness. If people are focused on using ChatGPT for an assignment, they’ll ignore unrelated warnings unless those cues help with their task. Either we align cues with goals (top-down processing), or we use perceptual contrast—like colors or positioning (bottom-up processing)—to get attention.


Alina: That’s really interesting. What’s the research process like for exploring those two strategies—top-down and bottom-up attention?


Dr. Hamilton: There’s a lot of theoretical research on top-down processing—how goals drive attention. But it’s hard to apply that in real contexts like ChatGPT. We need more applied research to test how to actually align interface design with user goals. Unfortunately, researchers often aren’t incentivized to do applied work—especially not equity-focused work. Funding often comes from corporations with their own priorities.


Alina: That’s a challenge across many fields. Do you think there’s a way to better align academic research with implementation—especially in tech design?


Dr. Hamilton: Possibly. Some companies like IBM used to publish research questions for grant applications. But today, most corporate-funded research serves the company’s goals—not necessarily equity. So we’d need either a shift in corporate incentives or more public/government funding. But even government grants for equity-minded research are being cut.


Advice for the Fair Tech Policy Lab

Alina: That makes sense. So as a think tank, if we want to create public interventions that make algorithmic bias more visible and intuitive, what advice would you have for us?


Dr. Hamilton: I’d say two promising areas are: (1) misinformation and disinformation—people are starting to care more about this and want tools to navigate it, and (2) high-risk decision contexts, where algorithmic decisions have serious consequences. Those are easier to regulate and raise awareness about. Everyday algorithmic influence is harder to tackle, but still important.


Alina: Thank you so much. I think our two main takeaways will be: first, advancing public education on misinformation and metacognition, and second, continuing research and advocacy around algorithmic bias. Thank you so much for your time, Dr. Hamilton—this was such an insightful conversation.


Dr. Hamilton: Of course. It was a pleasure. Thank you.


Our Takeaway

This conversation with Dr. Hamilton highlights the importance of metacognitive awareness in navigating today’s algorithm-driven world. As she emphasizes, technologies do not only inform us but also shape how we perceive ourselves, and, left unchecked, they can reinforce existing inequalities. At the Fair Tech Policy Lab, we see public education on misinformation and metacognition, alongside advocacy around algorithmic bias, as critical next steps in building more equitable digital futures.
