Every generation gets its cautionary tales about technology.
For many of us, it was The Terminator. Skynet wasn’t dangerous simply because it was advanced—it became dangerous because humans surrendered control. Decisions were handed over in the name of efficiency, speed, and automation, and once that happened, human judgment no longer mattered.
More recently, TRON: Ares offers a very different lesson. Advanced systems exist, but they operate within boundaries. Humans maintain creative control—the authority to guide, edit, override, or shut the system down entirely. The technology is powerful, but it never becomes the decision-maker.
That distinction matters—especially now.
Artificial intelligence is already embedded in behavioral health. It’s helping write progress notes, assisting with assessments, suggesting treatment plans, drafting articles, and organizing clinical information. Counselors and therapists are right to pause. Any tool that enters a space built on trust, vulnerability, and ethics deserves careful consideration.
But the real question isn’t whether AI will impact our field. That question has already been answered.
The real question is whether we maintain creative control—or give it up.
We’ve already seen what happens when AI operates without human oversight. Poorly designed chatbots have responded to people in emotional distress with dismissive or dangerous guidance, sometimes reinforcing self-harm instead of directing individuals toward real help. That isn’t progress. That’s harm.
That’s Skynet.
Creative control has a specific meaning in behavioral health practice. It means the counselor or therapist remains the master and commander of every AI-assisted process—whether that process involves writing an article, drafting a progress note, completing an assessment, or building a treatment plan.
AI may generate content. AI may organize information. AI may suggest options.
But AI does not decide.
Creative control means a human professional reviews, revises, or discards everything AI produces. The final decision always belongs to the clinician. The process stays directed by human intelligence, not by the machine.
This is where another familiar concept helps frame the conversation.
Years ago, our field learned to speak clearly about Medication-Assisted Treatment, acknowledging that medication is a legitimate and visible part of the treatment process, rather than something hidden or implied.
AI deserves the same level of clarity.
Maybe it’s time we started using language like:
- AI-assisted documentation
- AI-assisted treatment planning
- AI-assisted assessment
- AI-assisted clinical support
Not because AI is doing the therapy, but because it is assisting the work.
Transparency matters.
If AI is used in any part of the behavioral health process, it should be acknowledged alongside other clinical tools. Maybe it belongs in our paperwork. Perhaps it belongs in our informed consent, not as a fear-driven disclaimer, but as an ethical commitment to openness.
Clients don’t need to be shielded from technology. They need to know who is ultimately responsible for their care.
And the answer must always be: a human.
The danger isn’t AI itself. The danger is confusing automation with care.
If behavioral health is reduced to advice-giving, templated responses, and information delivery, AI will outperform humans every time—faster, cheaper, and at scale. In that version of the profession, AI feels like Skynet: inevitable and overwhelming.
What AI cannot do is sit with pain or be present in the moment. It cannot ethically interpret context in real time. It cannot feel the shift when someone is ready—or not ready—for change. It cannot replace empathy, presence, or professional judgment.
Those are human skills. And they require human control.
The future of behavioral health doesn’t belong to technology. It belongs to professionals who are willing to evolve, think strategically, and protect the human core of the work. AI can be part of that future—but only if we insist on creative control and complete transparency.
As in every good story about technology, the outcome depends on whether humans stay in the driver’s seat.
TRON ends well because control stayed intentional. Skynet ends in disaster because control was surrendered.
In behavioral health, that choice remains ours.