Reese asked 10 women about AI. I asked 8,964.
8,964 comments, one dinner party, and what women are actually saying about AI.
I was at a women's event this weekend. A small one, only a few of us. AI came up. Most of the women were curious, a few were using it, one was sharply critical. She said something that made me want to ask her why she felt that way. I did not ask. I still think about it.
I am writing this piece, in part, because I did not ask her. Because the question I should have asked in person is the same question I ended up asking of 8,964 comments instead.
A few days earlier, Reese Witherspoon posted a short video on April 15 about learning AI. She said it was time. She said she was at book club with 10 women, and only 3 of them used AI, and only 1 of those 3 felt confident. She scaled that up to "70% of women are not keeping up" and told the women watching they did not want to be left behind.
The post went viral. It picked up a community note within 24 hours. It picked up 13,000 comments. I pulled 8,964 of them over four days.
Now, just to say the quiet part out loud. Ten women in a Hollywood book club is a dinner party, not a data set. My daughter came running into the room to tell me exactly that before I had even finished watching the video. She was right. One anecdote from one evening with ten friends is not "70% of women." It is ten women.
So I did what the post deserved. I pulled the comments, ran them through a sentiment and theme analysis, and looked at what women actually said when they had a chance to respond in their own words. Fittingly, I used AI to help. Claude classified every comment by cluster, and I read through hundreds of them myself to make sure the patterns held up.
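The workflow is simple enough to sketch. Below is a toy version of the theme-classification pass in Python. The real analysis used Claude to assign clusters; this keyword-matching stand-in only shows the shape of the pipeline, and the theme names and keyword lists are illustrative, not the actual rubric:

```python
# Toy illustration of the classification pass. The real analysis used an LLM
# to assign clusters; this keyword version only demonstrates the pipeline
# shape. Theme names and keyword lists are illustrative, not the real rubric.
from collections import Counter

THEMES = {
    "pro": ["count me in", "teach me", "let's go", "love this"],
    "anti_environment": ["water", "data center", "energy", "planet"],
    "anti_ethics": ["just say no", "refuse", "unethical"],
    "where_do_i_start": ["where do i start", "how do i begin"],
}

def classify(comment: str) -> str:
    """Return the first theme whose keywords appear in the comment,
    or 'noise' for emoji-only and generic replies."""
    text = comment.lower()
    for theme, keywords in THEMES.items():
        if any(k in text for k in keywords):
            return theme
    return "noise"

def tally(comments: list[str]) -> Counter:
    """Count how many comments land in each theme."""
    return Counter(classify(c) for c in comments)
```

A keyword matcher like this is far too blunt for real sentiment work, which is exactly why an LLM did the actual labeling and why I spot-checked hundreds of its calls by hand.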
Is 8,964 comments a representative sample of "women"? No. It is a representative sample of women who follow a 30-million-follower actress and chose to comment on one specific video. But it is three orders of magnitude more data than ten women at a book club. And it is the closest honest answer we have to what women were actually saying in the moment the post landed.
What I found was not what I expected. The ratio of pro-AI to anti-AI is only half the story, and it is the less interesting half.
The headline number, and the one underneath it.
Pro-AI comments beat anti-AI comments by about three to one. That is the headline. Out of the 6,521 comments with actual content in them (setting aside the 2,443 that were pure emoji or generic compliments), 2,629 were some form of "yes, count me in" and 814 were some form of "no, and here is why."
Reese has a big, supportive audience. That part is not a surprise.
But those numbers hide three things that are worth naming.
First, the love is wide but thin. Almost all of the pro comments are one-line fan support. "Yes!" "Count me in." "Let's go." Only a small fraction describe actually using AI, and a smaller fraction still add any nuance about risk or regulation. The pro column is a wave of enthusiasm, not a wave of argument.
Second, the pushback is narrow but sharp. The anti comments are longer on average, more specific, more likely to cite facts and name places. If you were grading these comments as essays, the anti side wins easily. They are not a fringe. They are the most considered voices in the thread.
Third, the mood kept shifting. The first 24 hours looked like a landslide. By day three, the pro-AI share had collapsed and the anti-AI share had doubled. I will show you the chart in a moment.
So yes, by count, the love won. But the love was loud, and the criticism was sharp, and by day three the criticism had pulled even. That is the real shape of this comment section.
What is surprising is the anti-AI column itself. Because it is not what I expected it to be. It is not mostly people afraid of losing their jobs. It is not mostly people worried about AI becoming sentient. It is not even mostly people worried about artists.
It is almost all one thing.
Environmental concern was the single biggest category by a wide margin. Water and data centers, specifically. Women cited specific places. Virginia wells running dry. Kearny, Arizona, on the verge of a water emergency. The Great Lakes. Desert communities watching their resources go to server farms. They named towns and regions.
If you add up the anti-AI themes that overlap with environmental concern (the "humanity and critical thinking" cluster frequently pairs with it), well over half of all pushback points at a single worry. That is not a fringe position. It is the dominant objection when women take the time to type more than one sentence.
And here is the thing that makes all of this land harder.
AI has the potential to cause job displacement, create deepfakes and misinformation, perpetuate bias, and pose significant environmental costs through high energy consumption. Experts also cite dangers such as lost critical thinking, reduced human accountability & security threats.
A single AI facility can consume as much power as 100k homes, threatening grid stability, increasing electric bills for communities. They also demand millions of gallons of water daily for cooling, causing shortages.
That is not a comment from the thread. That is the community note Instagram attached to the post itself. Notice the second paragraph. The one about power consumption and water for cooling. It reads like a summary of what the 435 environmental commenters have been saying in their own words for four days.
When the platform's own fact-check infrastructure makes the same argument the critics are making, that is not a coincidence. That is convergence.
The conversation changed by the day.
Instagram post comments are usually front-loaded. The first fans show up, the mood sets, the thread levels off. This one did something different.
In the first 24 hours, the comment section looked like a landslide. Pro-AI comments outnumbered anti-AI comments more than four to one. It felt like a friendly room.
Then the post traveled. The community note appeared. The comments kept coming. And the mood kept changing. By day three, pro and anti were nearly tied.
Look at that third panel. Day one, pro-AI was beating anti-AI by almost five to one. By day three it is a tie. That is not a rounding error. That is a conversation fundamentally reshaping itself in real time.
The people responding on days two and three were, on average, more critical, better argued, and more likely to cite specific facts. Several directly called out the community note.
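The day-by-day shift behind that chart is easy to compute once each comment carries a day and a stance label. A minimal sketch, assuming a list of dicts with "day" and "stance" fields (that data shape is my assumption for illustration, not the actual export format):

```python
# Sketch of the daily pro:anti tally. The comment dicts with "day" and
# "stance" fields are an assumed data shape for illustration only.
from collections import defaultdict

def daily_ratio(comments):
    """Map each day to its pro:anti ratio (None if no anti comments that day)."""
    counts = defaultdict(lambda: {"pro": 0, "anti": 0})
    for c in comments:
        if c["stance"] in ("pro", "anti"):
            counts[c["day"]][c["stance"]] += 1
    return {
        day: (d["pro"] / d["anti"] if d["anti"] else None)
        for day, d in sorted(counts.items())
    }
```

Run on this thread's labels, a function like this is what turns 8,964 rows into the "almost five to one on day one, tied by day three" picture.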
Those are not cherry-picked. Those are the comments the audience itself elevated. Every one of them is a critique, and every one of them is calm, considered, and built on an argument. This is what the room sounded like when the room was paying attention.
There is a third group, and nobody is talking to them.
The loudest voices in the comments were the two obvious ones. Women saying "yes, teach me." Women saying "absolutely not, this is destroying the planet." But in the middle of those, there was a third group. Smaller, quieter, and in my opinion, the most important one to understand.
These are women saying, in various phrasings, the same thing:
"I do not want to use AI. But I want to understand it. Because my kids are using it. Because it is being used on me. Because I need to have an informed opinion."
One of the most-liked nuanced takes in the entire thread put it this way: "Humans in the middle, guiding the narrative, always." That is not a pro-AI take. That is not an anti-AI take. It is a request for agency.
This is a smart, careful, ethical position. It is also the position that basically no one in the AI education space is speaking to. The standard pitch is "learn AI to get ahead." The counter-pitch is "don't use AI to protect yourself." Almost nobody is saying "learn AI so you can decide, on your own terms, whether to use it or not."
That is an access question. And it is a different kind of access than "can you afford the tools." It is "do you have enough real information to form your own view."
Four other threads worth naming.
Before I move on, there are four smaller patterns in the data that deserve a mention. They are not the main story, but they are real, and ignoring them would flatten what is actually a much more textured conversation.
The ethical refusers. A distinct cluster of commenters rejected AI on moral grounds, even after saying they understood it technically. One reply put it plainly: "Mindfully and ethically are already gone. Just say no." Another: "Understanding it does not equal using it. I understand it, which is why, given the choice, I prefer not to use it." This is not a knowledge gap. It is a values position. Any piece of content that assumes "more information will change their minds" will bounce off this group entirely.
The "I have no choice" group. A smaller but painful cluster described being forced to use AI by their employer or industry. One comment: "Many of us work exclusively in fields where saying no to it means being long-term unemployed. I'd love to take the moral high-ground and say no, but then who is going to pay my mortgage when my husband and I lose our jobs." This is the quietest part of the thread and, in some ways, the most important. The women who are using AI are not always the women who want to.
The tone critics. A small group pushed back not on the message but on the delivery. "If that in fact is what you're attempting, your delivery was way off. And clearly many of us seem to feel this way." This cluster is not anti-AI. They are readers who agreed AI is worth understanding but felt the framing was miscalibrated for an audience with mixed feelings and varying stakes.
The "is this an ad?" question. A smaller but sharply-engaged cluster read the post as paid promotion. "This is such a disappointing take, and I fear an undisclosed ad" drew 1,200 likes. "This feels like a set up for some highly sponsored future content" drew 1,385 likes on a verified account. "Did they pay you to say this?" drew 1,729. Whether the post was in fact sponsored is not something I can verify from comment data. What I can verify is that a meaningful portion of the audience read it as one, and that reading was driven at least partly by Reese's 2021 NFT and crypto promotion, which several commenters referenced by name. The lesson, for anyone who teaches AI for a living: audiences will pattern-match you against every tech trend that came before, whether that is fair or not.
I am not going to pretend I solved any of these threads in this piece. I am naming them because they exist, because they are real, and because anyone trying to talk to women about AI right now needs to know they are in the room.
The 243 women this piece could have been named after.
Buried in all those comments is a small, quiet signal. 243 women wrote some version of "where do I start." They did not ask Reese to convince them. They did not ask Reese to debate the ethics. They asked her to point them somewhere.
Pay close attention to what these women are saying. Because it is not what the framing of Reese's post would lead you to expect.
These women are not locked out. They already have access. What they do not have is awareness.
Those are two different things, and the conversation keeps collapsing them into one. Access is: can you get to the tools. Do you have the phone, the internet, the free tier, the five minutes. Awareness is: do you know what to do once you are there. Do you know which tool, for which job, in which order, with what expectations.
The story Reese told about her book club is actually the clearest proof of this distinction in her whole post, even if it was not framed that way. Ten women were in the room. These are women with access. They have the means, the devices, the time. Three of them use AI. One of them feels confident. That is not a gap in access. That is a gap in awareness.
I have to be honest here though. Saying "access is solved" is true for the women in that book club. It is true for most of the 243 women asking where to start in the comments. It is not true across the board.
For a woman on a rural data cap, access is a real problem. For a Spanish-first speaker whose AI tools default to English and miss nuance, access is a real problem. For a Black woman whose face is misidentified by computer vision systems, access is a real problem in a different direction. For a low-income woman whose job already got automated before she had a chance to learn the tools that did it, access is a real problem. Women and minorities hit both gaps, often at the same time, and the gaps do not solve the same way.
This piece is about the awareness gap, because that is what the 243 women in the comments are describing. The access gap across race, income, language, and geography is a separate and much bigger piece. I am not going to pretend to solve it in a paragraph. I will come back to it.
One more thing, while we are being careful. Awareness is not the same as agreement. A woman can be fully aware and still say no. She can learn what AI is, what it does, what it costs, and decide to refuse. That is still awareness working. That is still a real choice. "Learn, then decide" is the frame. Not "learn, then use."
The 243 women in the comment section are saying the same thing in their own words. They are not asking how to get to a tool. They are asking what to do now that they are standing in front of one.
This is the gap I keep writing about. Not the gap between women and men. Not just the gap between rich and poor, though that one is real too and deserves its own post. The gap between curiosity and a first useful step. For the women in Reese's book club and the 243 in the comments, the tools are free or close to free. The information is everywhere. None of that matters if there is no trusted person saying "start here, skip that, this is what actually matters."
Reese probably will not answer them, one at a time. Most public figures with 30 million followers cannot. But I think about who could.
Access is the easy part. Awareness is the work.
The "left behind" framing is where most of the heat came from in the comments. Reese used it. A lot of AI content for women uses it. I have probably used it myself.
The women who pushed back hardest pushed back on exactly that language. And they were right to. Not using AI is not a passive state. It is often an active, informed choice. When you tell a woman she is "behind" for making a considered decision, she hears you calling her uninformed. She is not wrong to bristle.
Access is about having real choice. Awareness is what makes the choice real.
When a woman understands what AI actually is, what it can do, what it costs the planet, what it does well, what it does badly, then whatever she decides becomes a real decision. Maybe she uses it heavily. Maybe she uses it for one specific thing. Maybe she refuses entirely and becomes a more effective advocate against it. All of those are legitimate outcomes of awareness.
What is not a legitimate outcome is a woman feeling like she does not have enough information to even pick a lane. That is what the 243 women in the comments are describing. That is the actual problem.
One honest note before we go further. This piece does not address the environmental impact of AI head-on. That is not because I think it does not matter. Look at that chart again. It matters more than any other concern in the data. It matters to the audience's most-liked voices. It matters enough that Instagram's own community note cited it. It is because I want to do it properly, which means giving it its own deep study instead of a paragraph. My daughter has been sending me articles about this for months. She is right. That study is coming next.
Read what women actually said.
The quotes in this piece are a small sample of the real comments in the dataset. Raw. Unedited.
We are in a Gutenberg moment.
Here is the part that has been sitting with me since I started this analysis.
Gutenberg's printing press did not teach people to read. It distributed the means of distribution. It made books possible at scale. But literacy was a separate, slower, messier, more unevenly distributed project that took centuries to reach most people. The press came first. The reading came later.
We are in that exact moment with AI.
The tools are spreading faster than the ability to use them well. The awareness gap is the literacy gap of our decade. And it is not going to close on its own.
Reese pointed at the press and said "it is time." That is not wrong. But pointing at the press is not the same as teaching someone to read. And scaling up from one book club does not tell you how widespread the literacy gap actually is.
The 8,964 comments do tell you something. They tell you there are thousands of women ready to learn. Hundreds asking out loud where to start. A growing group of women with serious questions that no one with a platform is answering carefully. And a conversation that is getting sharper by the day, not softer.
That is not a crisis. That is an opportunity. And it is the right kind of opportunity, because it is about helping people think well rather than selling them a shortcut.
Where I sit.
Before I close, I want to be honest about where I am in all of this. I use AI every day. I believe it can be a real tool when a human stays in the middle, guiding it. I believe we are in a Gutenberg moment, even as I know the stats and the critiques are ahead of the optimism right now. I am in the bubble, and I know I am in it.
I am writing this for the women who are curious, who want to learn, who are asking where to start. Those are the women I can help. And I am open to understanding more about the concerns and the fears, and I hope to continue the conversation.
I wrote my own post on this the day after Reese's went up. It is called "Only 1 out of 10 women feel confident using AI. Let's fix that."
In that piece, I wrote one line that I keep coming back to: "That is not just a tech problem. It is an awareness and access problem. And it is one we can close."
What I did not know when I wrote it is that there were 243 other women typing the same thing into the same comment section on the same day. I was not speculating about an awareness gap. It was right there.
The work, for me, is building real things for those women. Not "10 ways to use ChatGPT" listicles. Not fearmongering. Not hype. Something that treats a grown woman like someone who can make up her own mind if you give her enough real information.
If that sounds like what you need, come along. This is just the beginning.
Want to see where you actually stand with AI?
I built a free diagnostic at ready.camino5.com. It shows you how AI tools like ChatGPT, Claude, Perplexity, and Google's AI see your business right now.
Check your AI readiness →