AI and Mental Health: What You Need to Know
This e-interview is with Art Abal, the Australia-based co-founder of Vana, a data and AI infrastructure company that emerged from the MIT Media Lab and is focused on how people can collectively own, govern, and benefit from the data used to train AI systems. Abal has also worked in large-scale data sourcing and human subjects research, including policy-adjacent field research through Harvard, which gave him “early exposure to how technological systems, incentives, and measurement frameworks shape real human behaviour—often in ways that designers don’t anticipate”. He shares that he co-founded the Wellbeing Society at the Harvard Kennedy School to focus on how policymakers can design institutions and policies that prioritise human wellbeing, not just economic output or speed. “That work highlighted a persistent structural issue: systems scale faster than human capacity, and the emotional and cognitive costs are usually externalised onto individuals,” he tells us.
The Mental Health & Wellness Connect
Interestingly, Abal also teaches yoga and practises mindfulness and meditation, which inform how he thinks about “attention, nervous system regulation, and the difference between short-term emotional relief and long-term psychological resilience—particularly relevant when people turn to AI tools while distressed or vulnerable”, as he puts it (a scenario that has emerged as a huge concern).
1. Can you share some of the biggest concerns when it comes to the (now widespread) use of GenAI / LLMs for mental health and emotional health support?
One of my biggest concerns is that most GenAI tools being used for mental and emotional health simply aren’t designed, at a foundational level, to work with human emotion.
Large language models aren’t trained to understand emotional nuance in the way humans or trained clinicians do. From an architectural standpoint, they’re pattern-matching systems trained on text, not relational agents trained to build rapport. In psychology, the relationship between a patient and a care provider—the trust, attunement, and shared context—is a major driver of healing. It’s hard to imagine a base LLM, or a lightly fine-tuned chatbot layered on top of one, reliably replicating that dynamic.
Even when models are given additional training to behave in more “therapeutic” ways, there are still serious limitations. Some of these tools can absolutely be helpful as aids—particularly for accessibility. They lower barriers for people who can’t easily access therapy, and they’re often more responsive than generic self-help books. That’s real value, and it shouldn’t be dismissed.
The risk is when these tools start to provide a false sense of security. Mental health is highly individual. It’s shaped by personal history, trauma, worldview, and context in ways that are subtle and hard to surface, even for trained professionals. If people begin to treat AI systems as substitutes for therapy, human relationships, or deeper forms of self-development, that’s where things can go wrong.
There’s also a serious data governance issue that often gets overlooked. When people disclose sensitive information to a licensed therapist, there are strict ethical and legal frameworks governing confidentiality. Those protections don’t exist in the same way for most AI tools. If someone is sharing deeply personal or distressing information with a general-purpose chatbot, there are open questions about how that data is stored, used, or potentially monetised—especially given the tech industry’s track record of exploiting behavioural data. That’s not a hypothetical concern.
Finally, there’s the issue of error and unpredictability. Generative AI systems are, by nature, probabilistic. That randomness is what makes them creative and engaging—but in mental health contexts, you often don’t have room for error. A poorly framed response, a missed signal, or an inappropriate suggestion can cause real harm. Trained psychologists are explicitly taught to recognise and mitigate those risks. AI systems aren’t.
So for me, the core issues are structural and ethical: models not built for emotional care, lack of standardisation and safeguards, unresolved data governance risks, and the danger of treating probabilistic systems as if they were accountable care providers. Used carefully, these tools can support people—but framing and guardrails matter enormously.
2. There have been tragic cases reported globally, including loss of life; OpenAI has said that they’ve changed how ChatGPT will respond to questions around mental health and that there are safeguards being put in place. Can you comment on this and share what the guardrails should be across the board?
When companies like OpenAI talk about putting safeguards or guardrails in place, it’s important to understand what that actually means at a systems level. At a high level, large language models can sometimes generate outputs that could be harmful to the user or to others. To reduce that risk, commercial systems layer a second set of models or filters on top of the base model. These filters use AI to detect problematic content and intervene before a response reaches the user.
The key point is that both layers—the base model and the filtering system—are probabilistic. They’re subject to the same kinds of randomness and edge cases. What companies are really doing is increasing the sensitivity and specificity of detection so that the margin for error becomes very small. That’s meaningful progress, but it’s important not to overstate it: this doesn’t eliminate risk, it reduces it.
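To make the two-layer setup described above concrete, here is a minimal, hypothetical sketch in Python of how a moderation filter can sit between a base model and the user. All of the names (generate_draft, moderation_score, RISK_THRESHOLD, SAFE_FALLBACK) and the toy keyword check are illustrative assumptions, not any vendor's actual implementation; real filters are themselves trained models, and, as noted above, both layers stay probabilistic.

```python
# Illustrative sketch of a "base model + safety filter" pipeline.
# Both stages are stand-ins: real systems use trained classifiers, not
# keyword lists, and neither stage can guarantee a safe output.

RISK_THRESHOLD = 0.5  # assumed tolerance; tuning it trades missed harms against over-blocking

SAFE_FALLBACK = (
    "I can't help with that directly. If you're in distress, please consider "
    "contacting a crisis line or a mental health professional."
)


def generate_draft(prompt: str) -> str:
    """Stand-in for the base LLM call (probabilistic layer one)."""
    return f"[model-generated reply to: {prompt!r}]"


def moderation_score(text: str) -> float:
    """Stand-in for a safety classifier returning an estimated risk in [0, 1]
    (probabilistic layer two)."""
    flagged_phrases = ("can't go on", "want to give up")  # toy placeholder only
    return 1.0 if any(p in text.lower() for p in flagged_phrases) else 0.0


def guarded_reply(user_message: str) -> str:
    """Return the model's draft only if both the input and the draft pass the filter."""
    if moderation_score(user_message) > RISK_THRESHOLD:
        return SAFE_FALLBACK

    draft = generate_draft(user_message)

    # The filter reduces, but does not eliminate, the chance of a harmful reply:
    # both the generator and the classifier can miss edge cases.
    if moderation_score(draft) > RISK_THRESHOLD:
        return SAFE_FALLBACK

    return draft
```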
One could argue that the remaining margin for error is smaller than what exists in some human-led mental health interactions, and that may be true. But the presence of guardrails shouldn’t be confused with a guarantee of safety.
Where I think the conversation is still lacking is around accountability. In traditional psychological or medical practice, if a practitioner causes serious harm through negligence, ethical breach, or reckless conduct, they’re held to account. They can lose their licence, be barred from practising, and in serious cases face legal or medical liability. There’s a clear chain of responsibility.
If we’re treating AI systems as functionally equivalent aids in mental health contexts, then they should be held to comparable standards. At the moment, most platforms rely heavily on terms of service and disclaimers to contract away responsibility. That wouldn’t be acceptable in healthcare, and it shouldn’t be acceptable here if we’re using a like-for-like framework.
I actually think introducing clearer liability and accountability standards would be a healthy forcing function. It would push companies to be far more rigorous about how these systems are designed, tested, deployed, and monitored. You shouldn’t be able to provide something that meaningfully intervenes in someone’s mental health, and then disclaim responsibility when it goes wrong.
More broadly, if AI tools are positioned as supporting mental health, they should be evaluated against the same evidence-based standards we apply to psychological tools. Right now, there’s limited empirical research showing that routine use of these systems reliably improves human wellbeing. If they do demonstrate benefit, then they should also be subject to the ethical, scientific, and data governance obligations that come with providing any form of healthcare or psychological support.
Ultimately, guardrails can’t just be technical. They need to be scientific, ethical, and legal. And they need to answer the same basic question we ask of any mental health intervention: who is responsible when harm occurs?
3. We recently had an Indian mental health expert say that the headlines and the cases being documented largely don’t take into account what’s happening outside the US – would you agree, and what are the bigger concerns here?
I’m currently based in Australia, and from what I’m seeing, many of the issues being reported internationally aren’t fundamentally different from the concerns we’re grappling with here.
The headlines and cases emerging globally—particularly around young people, emotional distress, and the unintended consequences of technology—mirror a lot of the challenges being discussed in Australia. That’s part of why the Australian government has taken relatively strong positions on technology use among younger populations, including increased scrutiny around platform design, safeguards, and exposure.
From my perspective, this isn’t an issue that cleanly maps onto a US versus non-US divide. The underlying dynamics—rapid adoption, limited guardrails, and tools being used in ways they weren’t originally designed for—are fairly consistent across jurisdictions. What differs is how quickly governments and institutions are willing to intervene.
So while it’s important to keep a global lens, I don’t think the risks are uniquely American. If anything, the similarities suggest this is a structural issue with how these technologies are being deployed, rather than a cultural anomaly tied to any one country.
4. There are tailored AI x mental health applications and products – do you have concerns about these as well?
My concerns around AI-native mental health applications are really about standards and evidence, rather than intent.
Some of these tools are genuinely well-intentioned and thoughtfully designed, and a few are more carefully fine-tuned than general-purpose chatbots. But the biggest red flag for me is the lack of standardisation. Unlike psychometric instruments or therapeutic methods used in psychology, most of these products aren’t evaluated against agreed-upon minimum performance or safety standards.
In mental health research, tools are expected to be empirically tested. They’re validated using established scientific methods, and their limitations are clearly understood and documented. With many AI-driven wellbeing tools, that foundation simply doesn’t exist yet. We don’t have robust evidence showing how effective they are, for whom they work, or where they might cause harm. Without that, it’s very easy to over-attribute authority or therapeutic value to systems that haven’t earned it.
The second concern is around consistency and governance. There’s no shared baseline for how these tools should handle user data, how they’re allowed to interact with vulnerable users, or what safeguards are required when sensitive psychological information is involved. That variability is risky, particularly when users may assume they’re engaging with something that meets healthcare-level standards.
If these tools are going to be positioned as mental health supports, then they should be held to the same expectations as other psychological tools: empirical validation, transparent limitations, ethical data practices, and clear boundaries around use. Without that, we risk blurring the line between experimentation and care in a space where the consequences of getting it wrong are very real.
5. Is there a broader concern about people over-relying on GenAI, and what are the implications here?
I do think there’s a broader concern about over-reliance on generative AI, and it goes beyond productivity or convenience.
There was research published by MIT this year looking at how people’s cognitive engagement changes when they rely on generative AI for specific tasks. One of the key findings was that, over time, people offload not just execution but thinking itself—their brains become less likely to engage deeply with certain types of problem-solving when AI is consistently doing that work for them. That doesn’t make AI bad, but it does tell us something important about how these tools shape human cognition.
I see a strong parallel here with earlier technological shifts. Convenience foods after World War II and ultra-processed foods in the 1990s brought real benefits—time savings, accessibility, affordability—but we eventually had to learn, often the hard way, how and when to consume them to maintain long-term health. Generative AI feels like we’re at a similar inflection point, except the system it’s acting on isn’t our metabolism, it’s our brains.
What feels more concerning is that these impacts directly affect cognition, attention, and our perception of reality. Unlike previous tools, generative AI doesn’t just change what we do—it changes how we think, and potentially whether we think at all in certain contexts. That makes the question of intentional use much more urgent.
The incentives here also matter a lot. We’ve already seen with social media that platforms are often more profitable when users are more dependent, more habituated, and less reflective. If similar incentives apply to generative AI—and there’s every reason to think they might—then there’s a real risk that companies benefit economically from users becoming less mentally engaged over time.
We don’t yet understand the long-term consequences of that: what it means for cognitive ageing, mental health, susceptibility to misinformation, or neurodegenerative disease later in life. Those are slow-moving effects, and historically, they’re the ones societies are worst at anticipating.
So for me, this isn’t about asking whether generative AI is “good” or “bad.” It’s about treating it as an incentives problem. Whether the tools are developed by for-profit companies, as in the US, or state-backed systems, as in places like China, we need to be honest about where their incentives might diverge from human wellbeing—and design guardrails, norms, and usage patterns accordingly.
6. What can you share about the need for guardrails (and what should these be) more broadly in this race towards superintelligence?
My primary guardrail for any technology—especially something as powerful as superintelligence—is simple: it has to be human-first.
I consider myself a futurist. I genuinely believe technology can help humans live more meaningful, fulfilled lives. But that only holds if humans are prioritised over profit, ego, or power. Otherwise, we’re running directly against the very justification for building technology in the first place. If a system doesn’t materially improve human wellbeing—or worse, undermines it—then it’s not progress.
That’s why, when it comes to superintelligence, I think the most important question isn’t technical capability, it’s incentives. Who is building these systems? What pressures are they under? And are they genuinely aligned with a human-first outcome, or are they optimising for market dominance, geopolitical leverage, or speed at all costs?
We can talk about specific guardrails—principles like “do no harm,” alignment research, containment strategies, or oversight mechanisms—and those are all important. Philosophers and technologists have been debating those for decades. But without a clear commitment to human wellbeing as the North Star, those guardrails tend to get eroded the moment they become inconvenient.
The last 10 to 15 years have given us plenty of examples where technology was framed as being “for the betterment of humanity” while its actual incentives pushed in the opposite direction—towards extraction, addiction, and concentration of power. That history should make us sceptical: not cynical, but cautious.
So for me, the foundational guardrail is this: technology is only justified to the extent that it demonstrably advances human wellbeing. If that principle isn’t embedded at the incentive level—not just the marketing level—then everything else becomes window dressing.
7. Especially heading into the new year, what would some of your tips be or advice you’d give your friends and loved ones about the use of AI tools, especially when it comes to mental health and support?
I really like this question, because it’s where all of this becomes very human and very practical.
I do some coaching around how people interact with technology, and the framework I tend to use most is addiction—not because technology is inherently bad, but because it’s one of the clearest ways to understand dependency, overuse, and loss of agency. We’ve seen this play out repeatedly with new technologies, and generative AI is no different.
My starting point is presence. Whatever tools you’re using, the question is whether they’re helping you be more human or slowly pulling you away from that. A simple but surprisingly revealing exercise is to ask: What happens if I take this away? If the thought of removing a tool triggers a strong visceral reaction—anxiety, irritation, fear of falling behind—that’s often a signal worth paying attention to. Not always a problem, but a signal.
One thing I often suggest is short, low-stakes experiments. A day without your phone. A weekend without certain apps. Not as a moral stance, but as an information-gathering exercise. How does your body react? How does your mood change? What happens to your attention and your relationships? If the reaction looks a lot like withdrawal, that’s usually a sign that some support—from a coach or a mental health professional—might be helpful, because these patterns do start to affect decision-making, wellbeing, and connection over time.
More broadly, I encourage people to be mindful about consumption, especially over-consumption. We’re in a very experimental phase with AI tools, and I think rigid yes-or-no rules miss the point. What matters more is intentionality: trying tools consciously, noticing how they affect your psyche, and being willing to step back if something isn’t serving you.
A good example is AI products designed for children. I don’t think there’s a single rule that applies to every child or every family. Personally, I wouldn’t give an AI toy to my own child today, largely because of unresolved data and ethical issues. But more broadly, we don’t yet have solid empirical evidence showing long-term benefits or harms. That means experimentation—done carefully, in constrained environments, with strong emotional support—matters.
So my advice to friends and loved ones is this: stay curious, but stay grounded. Consume technology mindfully, watch how it shapes your inner life, and don’t outsource your intuition. We don’t need panic or prohibition—but we do need awareness, restraint, and the willingness to put human wellbeing first as we navigate this next phase.
8. Do you have any resolutions of your own in this area more broadly?
One of my main resolutions is to invest more deliberately in collaboration and research in this space, rather than just commentary.
Through my work with the Vana Foundation and Open Data Labs, we sit on real, permissioned human data that gives us a unique lens into how algorithms and digital systems shape people’s behaviour, wellbeing, and sense of self. I want to spend more time leveraging that responsibly—to produce evidence-based insights about how AI and algorithmic systems actually affect people’s lives, not just how we assume they do. My goal is to contribute more original research and public-facing work that helps move this conversation beyond speculation.
On a more personal level, I’m also committed to continuing to lead by example in how I relate to technology. I work with coaching clients and mentees, and I don’t think it’s credible to talk about mindful technology use unless I’m practising it myself. For me, that means a daily meditation practice and regular, intentional tech detoxes—not as an escape, but as a way to stay attuned to how my body, attention, and thinking are being shaped. That’s not easy when you’re a founder running a large protocol, but I think demonstrating that it’s possible matters. If I can do it, others can too.
One area I’m particularly interested in exploring more deeply this year is how rapid technological change is affecting identity—especially for men. I believe social progress has, overall, moved in a positive direction. But alongside that, there are many men who are struggling to understand their place in a changing world, and technology often amplifies that confusion rather than resolving it. We’ve seen how loneliness, detachment, and anger can manifest when people feel unmoored, and that’s a risk to both individual wellbeing and broader social stability.
I’m interested in how we can help reframe healthier, more grounded forms of identity and masculinity that allow men to be fully expressed in a technologically mediated world—without retreating into reactionary or harmful spaces. That’s not an argument against progress; it’s an argument for making sure people aren’t reacting negatively towards it.
You can find out more about Vana at Vana.org
Disclaimer: Views expressed are personal and do not necessarily reflect the views of The Health Collective.