
I Love AI. But It Just Told Me Taylor Swift Doesn't Exist.

By Max Tuchman · Originally published in Refresh


Let me be clear: I am a believer. I am the kind of person who has been evangelizing AI to anyone who will listen. I built a company, sold it to Mattel, and watched technology change the game for families in real time. I understand the power of tools that meet people where they are. So when I say AI is one of the most exciting inventions of my lifetime, I mean it.

But last week, AI scared me. And I think we all need to talk about it.

The Taylor Swift Incident

I was having a conversation with ChatGPT. Routine stuff. And somewhere in the exchange, I referenced Taylor Swift. The Taylor Swift, the woman who sold out stadiums on multiple continents, who is arguably one of the most famous human beings alive on this planet right now. And ChatGPT told me, in its very confident, very authoritative tone, that it wasn't sure Taylor Swift existed. Or at minimum, it couldn't confirm the details I was describing.

Taylor. Swift.

I'm not talking about a deep policy wonk question or an obscure historical fact. This is a woman whose fans shut down Ticketmaster. A woman who is referenced in economic reports. A woman my sister can identify from three notes of a song. And yet, there we were.

I Had Trained My Model. It Didn't Matter.

Here's where it gets interesting. I'm not a casual AI user. In my custom instructions (the settings where you essentially coach your AI model on how you want it to behave) I had specifically trained it to do two things: admit when it's wrong, and cite its sources. This wasn't an accident or an afterthought. I deliberately built guardrails because I know the risks of confident misinformation. I wanted a tool that would be honest with me, even when that was uncomfortable.


My custom instructions, built to keep AI honest.

And yet, when I pushed back (politely at first, then more directly) it fought me. Not on something ambiguous. On actual, verifiable, Google-it-in-two-seconds facts. I came with receipts. I cited sources. I explained the context. And it dug in.

Here's what was wild: ChatGPT is notorious for being a people-pleaser. It is famously, sometimes frustratingly, prone to just agreeing with you to keep the peace, a phenomenon researchers literally call "sycophancy." So the fact that it chose this moment, THIS particular fact about one of the most documented human beings on earth, to suddenly grow a backbone? Fascinating. Maddening. Also honestly a little impressive in the worst way.

This Is Not a Hit Piece. This Is a Love Letter With Footnotes.

I want to be precise here, because nuance matters. What happened is not a reason to abandon AI. It is a reason to use it wisely.

We are living through one of the greatest technological leaps in human history. AI is already saving lives in healthcare. It is helping first-generation college students who never had access to tutors write essays. It is giving small business owners capabilities that used to cost a fortune. I have seen it level playing fields in ways that make me genuinely emotional.

But we are also in the early innings. And the thing about early innings is that the pitcher is still warming up. The tool is extraordinary and imperfect simultaneously, and both of those things can be true.

The danger isn't AI being wrong. Tools are wrong sometimes. The danger is AI being wrong with total confidence, and users not knowing enough to push back. Most people don't have custom instructions. Most people don't know to ask for citations. Most people are going to take that answer at face value.

What We Actually Need

I've spent my career fighting for access and equity in education. The communities who have historically been left out of every technological wave are the ones who end up most harmed when technology fails, and most behind when it succeeds. AI is no different.

We need AI literacy, not just for tech-forward entrepreneurs and Ivy League researchers, but for everyone. We need people to understand that AI is a tool, not an oracle. We need the companies building these models to be honest about limitations in plain language, not buried in terms of service. And we need the press, policymakers, and educators to stop treating AI as either a magic bullet or an existential monster. It is neither. It is a very powerful, very flawed, very human invention.

The Taylor Swift moment was a wake-up call: if AI can confidently get THIS wrong, what else is it getting wrong that we're not catching?

I love AI. I will keep using it, keep investing in it, keep advocating for equitable access to it. But I am going to do it with my eyes wide open. And I'd encourage you to do the same.

Because the Swifties would never let a hallucination slide. And neither should we.

Max Tuchman is a founder, operator, and investor who built and exited Caribu, later serving as a General Manager at the acquirer, Mattel. She now advises companies on navigating change and developing new playbooks for an increasingly unpredictable business landscape.