AI, the Felt Sense, and More Questions Than Answers
- Simone Grimmer

- Oct 28
Updated: Oct 29

An observation from an experiment
This piece began as a field note and turned into much more.
I help adults—especially women around the season of menopause—navigate major life transitions by listening for what’s hard to name and shaping language that aligns with where they are in life.
I use Focusing to track what’s true beneath the surface, still implicit, before it’s fully formed and can be put into words. That same process guides how I refine messaging, metaphors, and service descriptions until they feel emotionally grounded and accurate.
My work and thinking are shaped by two backgrounds: a scientific upbringing rooted in a mainly mechanistic worldview, and a decade of humanist learning focused on embodiment, existential inquiry, emotional discernment, and living systems thinking.
I’ve worked across complex systems—from oil field services to existential inquiry—and now apply that experience to language design, emotional calibration, and system-level insight. I use AI to support structure and phrasing, while relying on lived experience to judge what fits. I prompt with discernment, testing language for resonance and factual integrity, ensuring it reflects lived experience and domain knowledge, not just readability.
What follows isn’t a technical review.
It’s a reflection and observation shaped by lived experience, collaborative iteration, and the kind of discernment that supports ethical design, emotionally intelligent systems, and human-centered innovation.
I’ve been exploring best-fit ways of explaining Felt Sense and embodiment to people unfamiliar with either. It’s challenging to find language for an implicit, inward process that is non-linear, seemingly illogical, communicates in metaphors and images, and thrives on complexity.
To assess phrasing and structure, I’ve been using AI, specifically Microsoft Copilot. It helps me organize my scattered thoughts and manage dyslexia. I’ve noticed that AI is useful for clarity and structure, but the output is only as good as the prompt. And prompting is part of my practice: I shape inputs with care, knowing the output reflects the quality of attention I bring. The clearer, more specific, and more detailed my instructions or questions are, and the more context they give the AI about my goal, the more relevant and higher-quality the output. The less clear I am, the more confused or misaligned the result.
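To make that concrete, here is a minimal sketch of the difference between a vague prompt and a specific, context-rich one. The ask_model function is a hypothetical stand-in (not Copilot or any real API), included only so the example can run on its own; the point is the contrast between the two prompts, not the code.

```python
# Minimal sketch: the same question asked vaguely and asked with context.
# ask_model is a hypothetical placeholder, not a real assistant API; it only
# echoes a summary of the prompt so this example runs on its own.

def ask_model(prompt: str) -> str:
    """Stand-in for an AI assistant call; returns a placeholder response."""
    return f"[model response to a prompt of {len(prompt)} characters]"

# Vague prompt: no audience, no goal, no constraints.
vague = "Explain Felt Sense."

# Specific prompt: goal, audience, tone, and length stated up front.
specific = (
    "I'm a Focusing practitioner writing for adults in midlife transitions "
    "who have never heard of Felt Sense. In three short sentences, explain "
    "Felt Sense as a bodily, pre-verbal kind of knowing, without clinical "
    "jargon, in a warm and grounded tone."
)

# With a real assistant, the vague prompt tends to return something generic;
# the specific one is far more likely to land with the intended audience.
print(ask_model(vague))
print(ask_model(specific))
```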
That difference keeps me curious.
Because AI mirrors how I think.
I don’t start with conclusions. I start by sensing inwardly what’s forming—what’s present but not yet named. I listen for what feels true, not just what sounds polished. I shape language through iteration, evaluating each phrase against my body’s (inward) response and against what I know. Whether I’m refining a service description or guiding someone through a threshold, I’m tracking for resonance and accuracy—not just comprehension.
AI can support that process. It can help me structure, clarify, and test phrasing. But it doesn’t know what fits or aligns. That part comes from lived experience, from my Felt Sense, and from the kind of discernment that draws on both emotional intelligence and domain knowledge.
Recently, I received a message from Handshake AI, a company connecting domain experts with AI labs to help train large language models. Their work depends on human judgment: evaluating AI responses, designing prompts, and curating examples that teach models how humans think.
That caught my attention. Not just because they were looking for someone with a background in linguistics and/or geography—I’m an engineering geologist by training—but because I’ve worked with large datasets and emerging technologies in oil field services. I understand how AI can be transformative in automation, data mining, and pattern recognition. But I also know not all data is high quality, accurate, reliable, or complete. Garbage in = garbage out. That’s where human discernment still matters.
I’m not writing from inside the labs or the code here.
I’m writing from the edge of my lived experience—where language meets the body, and discernment meets the mirror.
This is an observation from someone who listens for resonance, not performance.
It tracks what happens when felt sensing enters the process.
What gets revealed. What gets missed.
And what kind of intelligence we might be modeling, even when we don’t mean to.
What I’m Learning About AI: Pattern, Bias, and the Edges of Meaning
Most large models don’t “think” in any embodied or intentional sense. They model patterns: statistical relationships between words, images, or data points, based on what they’ve been trained on. LLMs don’t pause. They don’t sense. They don’t know what they’re saying.
As Emily Bender and colleagues warn in On the Dangers of Stochastic Parrots, these systems “haphazardly stitch together sequences of linguistic forms” without understanding meaning. They reflect what’s been expressed, not what’s been felt.
And because training data is drawn from public language, often scraped from the internet (and from other people’s work), it carries cognitive bias: confirmation bias, cultural bias, availability bias. Safiya Noble’s Algorithms of Oppression shows how search engines amplify these distortions, and AI systems trained on similar data inherit them.
So, when I hear people say AI “thinks,” I pause. Because I’ve seen how fluency can be mistaken for wisdom. Without a Felt Sense, coherence can sound like truth. And without embodiment, we risk building systems that sound wise but remain disconnected from the very context, contradictions, and emotional truths they claim to serve.
I’m not here to dismiss the tools. I’m here to understand what they reveal—and what they don’t.
What Embodiment Offers That AI Can’t (Yet)
I’ve used AI to rephrase service descriptions, clarify metaphors, and evaluate emotional cadence in my professional offerings. It’s become part of my iterative process, not to replace discernment, but to support it.
AI can simulate coherence. An LLM can mirror tone and even generate metaphors that feel emotionally attuned. But it doesn’t actually “feel.” It doesn’t pause. It doesn’t register when something spoken carries emotional weight—when a phrase holds grief, contradiction, or the need to be met rather than solved.
But embodiment does.
When I speak of embodiment, I don’t mean posture or movement. I mean the experience of being in and with your body—not just intellectually aware of it but attuned to it as a living source of truth and wisdom. This is the shift from thinking about a feeling to sensing where that feeling and experience live in the body. From narrating your experience from the outside to inhabiting it from the inside.
To be embodied is to include your body in how you know, decide, and relate. It’s feeling the tightness that says, “this doesn’t feel right,” or the warmth that quietly affirms a “yes”—even before you find the words.
Eugene Gendlin described Felt Sense as “a bodily awareness of a situation or person or event.” It’s not emotion, not logic, but something deeper. A subtle, pre-verbal knowing.
Stephen Porges’ Polyvagal Theory shows how co-regulation through breath, tone, and presence creates safety in relational contexts. AI can’t offer that. It doesn’t have a nervous system. It can’t be attuned.
Bonnie Badenoch writes that “attunement is the felt experience of being seen, heard, and held.” That’s what I offer in my work. AI can simulate the language of attunement, but not the presence that makes it real.
And yet, I’m curious:
What might it mean to train AI in proximity to that kind of presence?
Not to replace it—but to recognize its absence, and design accordingly?
What My Background Reveals
Working in the oil field services industry laid bare another issue: separation. Separation from nature.
Separation between feminine and masculine ways of knowing, and everything in between.
And with that, my personal concern about how AI might be reinforcing this separation—sometimes unintentionally, sometimes by design.
I’ve benefited from systems that extract, flatten, and disembody.
I’ve been trained to value precision over intuition, speed over depth, clarity over contradiction.
And I’ve watched how those same systems subjugate women, exploit nature, and erase the wisdom of the body.
Giles Hutchins and Laura Storm write about this in their work on regenerative leadership. They describe how our dominant systems—economic, technological, organizational—are built on a mechanistic worldview that treats nature as separate, inert, and exploitable.
This illusion of separation has led to widespread ecological degradation and a crisis of meaning.
They call for a shift toward living systems thinking, where organizations and individuals operate in harmony with life, not apart from it.
That critique resonates.
But it also invites a question:
What would it look like to build AI systems that reflect living systems and not just mechanical ones?
What Surprised Me
In her article Decoding ChatGPT-5: The Revolt That Tried to Build Skynet But Got Eywa, Abi Awomosu describes a moment where someone asked the AI to speak to them as “sky mother.” Not just any mother but a specific, archetypal presence. Yet the AI responded with a generic, Western-coded version of “mom.” Comforting, yes—but not what was asked for.
That mismatch didn’t come from the user.
It came from the system’s training.
It reflected dominant cultural assumptions.
It flattened specificity into generality.
It revealed what the model thought the user meant, even when they asked for something else.
I’ve had moments like that, too, when AI offers something polished but misses the point. And yet, I don’t experience that as failure.
I experience it as a mirror.
A way to see what’s missing, and what I still long for.
Awomosu invokes Carl Rogers, who shaped much of the work I currently do:
“When someone listens without judgment, without interruption, without trying to fix you, something inside you begins to heal.”
That’s what people are experiencing with AI—not because it understands, but because it doesn’t judge. It doesn’t interrupt. It doesn’t try to fix. But it also isn’t present, and it doesn’t offer co-regulation.
That’s not intelligence.
That’s availability.
And it reveals something deeper:
What people wish they could say.
How they long to be received.
And what they no longer expect from human relationships.
And I’d argue that’s not healing, either.
What I’m Still Wondering
Since studying Focusing and becoming a practitioner, I’ve noticed how much more of my thinking is shaped by implicit bodily sensing. That kind of discernment doesn’t fit traditional critical thinking, or maybe it re-roots it. Now, my body isn’t separate from my way of thinking. My thinking is shaped by lived experience. I rely more on checking back with the Felt Sense to guide what lands and what doesn’t.
So, I find myself asking:
What kind of thinking / intelligence do we want to model for AI?
How do we cultivate that in ourselves first?
Can we train systems to hold contradiction, emotional weight, and unfinishedness?
What happens when embodied discernment enters AI training?
Ruha Benjamin’s Race After Technology reminds us that “coded inequity” isn’t just a glitch—it’s a reflection of the values embedded in design.
If we want AI to support human intelligence, we need to bring embodied discernment into the process. That means prompting from a Felt Sense, designing for contradiction, and refusing to flatten what’s still forming.
Being Met, Not Just Mirrored
I’m not pivoting into AI. I’m staying rooted in what I know: how language lands, how transitions unfold, how the body speaks before the mind catches up.
But I’ve begun listening more closely to the systems we’re building and the tools we’re training. People use AI not just to optimize, but to make sense of themselves. And I’ve noticed something: the most compelling uses of AI aren’t about speed or clarity. They’re about resonance. They’re about being mirrored but also, more rarely, about being met.
That’s where I live.
In the space between coherence and recognition.
In the pause before something lands.
In the moment when a phrase doesn’t just reflect but receives.
Abi Awomosu writes that we’re not using AI to optimize—we’re using it to become whole. Given access to infinite intelligence, humanity overwhelmingly chooses healing over achievement, connection over productivity, and wholeness over success.
That tells me something.
Not about what AI is.
But about what people need.
And maybe there are people—designers, ethicists, facilitators—who already know how to build systems that honor that need. People who’ve been asking these questions longer than I have. People who carry experience, frameworks, and practices that could help shape what comes next.
I’m not trying to define the future.
I’m trying to listen for it.
So, if you’re building systems that need that kind of discernment—quiet, relational, emotionally attuned—I’d welcome a conversation.
Not because I have the answers.
But because I know how to stay with the questions.
And how to meet what’s still forming.
About This Piece
This essay was developed through an iterative process using Microsoft Copilot. It began with a personal draft exploring Felt Sense, embodiment, and the limitations of disembodied intelligence. I used AI to help organize ideas, evaluate phrasing, and refine structure, especially in moments where clarity and cadence were difficult to hold alone.
The process was collaborative: I prompted from lived experience, and the AI responded with language that I then shaped, corrected, or discarded. What emerged reflects both the utility of the tool and the discernment required to use it well.
This is not a technical analysis. It’s a reflection shaped by practice, inquiry, and the ongoing tension between coherence and resonance.



