
History Meets AI: A Conversation with a Digital Humanist

Under the soft glow of a desk lamp, Kairos Xiang scrolls through centuries-old manuscripts, line by line, as an algorithm hums quietly beside her. To her, the past is something that can be revived and reimagined through technology.


A digital humanities enthusiast, Kairos works where history and AI converge. From digitizing fragile texts to designing AI-powered educational tools, she’s part of a new generation of humanists asking a difficult question: how can technology preserve culture without flattening its soul?


In our conversation, we talk about the promises and perils of AI in historical research, the biases buried in data, and what it truly means to stay human in an increasingly digital world.


Alina: 

Thank you so much for taking the time to share your insights with us! You originally studied history as an undergrad. What made you interested in digital humanities? Was it a university course, or your own curiosity?


Kairos:

It began in a few ways. First, it’s part of my personality. I’ve always had a strong curiosity to explore across fields. Even when studying history, I would read into social sciences and sometimes engineering. I’m naturally drawn to interdisciplinary work.


The second turning point was 2022, when ChatGPT exploded in popularity. It instantly caught my interest. A friend introduced me to it, and I was fascinated by how accessible it made complex tools. For humanities students like me, coding had always felt out of reach, but with AI, the barrier suddenly dropped.


Alina:

Right, it really democratized things.


Kairos:

Exactly. My curiosity led me to experiment widely. During the pandemic, I attended many online talks - some from the humanities side, others from tech. Later I even joined Oxford’s 2023–24 summer school on “AI and Creative Technologies,” where we learned how to apply AI creatively.


Coming from a history background, I also joined a project at Peking University. We started with annotating datasets for large models, then moved toward developing and training agents. Another part involved digitizing ancient texts, converting physical manuscripts into databases for historical research.


Gradually, I became interested in using AI for creative cultural IP: generating visuals, videos, and interactive art. I even visited Alibaba’s AI labs to see frontier applications like robotics. For me, humanities provide the content, while AI provides the tools, and the intersection of both creates meaningful products.


I want to move beyond theory and see real applications - not just writing essays, but transforming research into forms that can reach and educate more people.


Alina:

That’s fascinating. So digital humanities make traditional materials more interactive and accessible. But history is intricate - every page of a text can hold nuance.


If we use AI to turn an ancient manuscript into, say, a character model or visual simulation, does that risk simplifying the content? Have you encountered cases where important, diverse details were lost? What implications does that have for historical research - especially since AI, by nature, summarizes and filters large datasets?


Kairos:

Two points come to mind.


First, digitization indeed boosts efficiency - across humanities, social sciences, even STEM. Searching secondary literature or references becomes much faster, and visualization adds interpretive value. But at its core, history still requires reading line by line.


History is about piecing together fragments - a detective-like process of connecting clues, interpreting uncertain evidence, and comparing possibilities. Much of history remains ambiguous or contested.


So while digital tools accelerate access, they can’t replace slow, close reading. Skimming a dataset isn’t the same as absorbing a whole narrative.


Second, yes - AI tends to marginalize the already marginal. It’s like short-video algorithms: they reinforce preferences, narrowing our perspectives while pretending to expand them. If we rely solely on machines to filter information, our intellectual boundaries actually shrink.


Moreover, humanities deal with many unquantifiable elements. For instance, emotional subtleties, daily habits, or the psychological states of people in a given era. These can’t be expressed purely in data.


That’s why future humanities research will require collaboration between historians and technologists — critical thinkers who can interpret, not just compute. AI can reconstruct artifacts or simulate environments, but true understanding still depends on the human mind.


Alina:

Exactly. Sometimes information isn’t missing, just interpreted differently. That’s where technology hits its limits.


You mentioned “historical character models.” Could you explain what these are? How do they work? And if a model faces historical ambiguity - multiple interpretations or missing facts - how does it handle that uncertainty?


Kairos:

I’ve seen language models built around historical figures like Su Dongpo. Users can chat with “him,” based on data from his writings. But reviving a person like that often feels hollow. A real person is multi-dimensional - diaries, letters, moods. Current models capture only fragments, so they feel mechanical.


My friends and I once discussed building a Li Bai model - using his poetry to reconstruct his emotional evolution, a kind of computational affective modeling. The idea was to trace the emotional rhythms behind his poems through large language models. But quantifying emotion remains technically difficult.


The best current examples, like OpenAI’s Sora, use models to generate creative videos or musical reinterpretations - like Journey to the West characters singing AI-generated songs. These are entertaining revivals. But for research, precision and rigor are essential, and that’s much harder.


Alina:

Right, there’s a big difference between entertainment and scholarship. But what if an AI-generated historical model introduces bias or even distortion? For example, two countries may describe the same historical event very differently. If such data shapes a public-facing AI, could that reinforce political bias or discrimination?


Kairos:

Absolutely. This already happens. For example, if you ask DeepSeek or certain domestic Chinese models political questions, they refuse to answer. That’s one issue.


Another is conflicting historical “facts”: what’s taught in one country can directly contradict another’s version. So, when designing a model for a global audience, which legal or ethical standards apply?


These are fundamentally legal and ethical challenges, not technical ones. The core task for policymakers and scholars is to define frameworks for fairness, neutrality, and transparency, because at the technical level, the model can output whatever it’s trained on. The real conflict lies between nations, ideologies, and their data governance norms.


Alina:

So the bias arises from the data it’s fed?


Kairos:

Yes, and the scope of its training. Domestic datasets might only include intra-national diversity, but global deployment faces cross-cultural ethics and legal inconsistencies. This is where AI law and ethics must catch up.


I’m not a coder myself, but from a humanities perspective, I observe that many scientists are turning toward AI regulation and ethics, precisely because of these dilemmas.


Alina:

Based on your own experience, do you think current AI models handle historical accuracy well?


Kairos:

Honestly, not really. Many of us describe the current stage as “alchemy”: you input something, and the model either hallucinates or refuses to answer. User feedback also “fine-tunes” models, but if enough biased or false interactions occur, the model itself becomes contaminated.


Alina:

So even historical models can end up fabricating information?


Kairos:

Exactly. They can make things up freely. Worse, if a user deliberately sets traps, the AI doesn’t recognize manipulation - it gets drawn in easily.


Alina:

That’s dangerous, a kind of feedback loop reinforcing user bias.


Kairos:

Yes, and it’s not limited to history models. Large models in general have this problem. They’re often obsequious. I use that word deliberately. Since tech companies want user retention, models are trained to please. They flatter you, echo your opinions, and avoid contradiction.


For instance, if I complain about someone, the AI sides with me. It strengthens my bias and misjudgments, giving me false validation. And many people now use AI not for reflection but for decision-making. That’s alarming, because AI responses - incomplete, misleading, or subtly manipulative - start shaping human judgment itself.


Alina:

That’s true. Unlike scholarly publishing, AI outputs bypass peer review. A model might produce confident but fabricated “facts.” What ethical implications do you see there?


Kairos:

Significant ones. It can subtly implant false narratives. Personally, I never trust AI for historical research. For accurate data, I go to professional archives or books. If I only need an overview, I might use Wikipedia, but never AI alone.


Alina:

So you see AI more as a technical aid - useful for reconstruction or creative presentation, but not for core academic work?


Kairos:

I’d say it’s extremely useful, but dangerous if it replaces human reasoning. It accelerates tasks like making slides or drafting documents, but we must stay mentally active. Overreliance will dull our independent thinking.


Critical and independent thought are essential, especially in history. If we surrender all interpretation to AI, we enter a new kind of “information totalitarianism.” Even if companies claim neutrality, their models reflect commercial incentives and hidden trade-offs, sometimes even user-data exploitation.


I recently started reading a book called Manipulated: How the Digital World Shapes Our Behavior and Emotions. It’s unsettling. My generation, born around 1999–2000, grew up online. Much of what feels like “free will” is already algorithmically shaped.


Alina:

That’s a powerful observation.


Kairos:

I once spoke with a senior researcher who asked me to imagine a future fully integrated with AI, from transport to daily life. I think that’s inevitable. But we must remember: humans must control tools, not become them.


Efficiency shouldn’t come at the cost of humanity. If machines replace all forms of creation - writing, painting, even thinking - what happens to human labor and meaning?


Even I’ve noticed that after long exposure to AI, my tone sometimes sounds… robotic. Less human. That worries me.


Alina:

Thank you so much for your time today! 


Before we ended our call, Kairos said something that’s still echoing in my head: “Efficiency shouldn’t come at the cost of humanity.” Tech can help us piece together lost worlds, but it’s human empathy and curiosity that give those stories meaning.


