TORONTO — Kids Help Phone says it’s turning to artificial intelligence to help respond to the “enormous need” as more and more young people reach out for mental health help and support.
“Young people are changing fast and technology is changing faster,” said Michael Cole, senior vice-president and chief information officer for Kids Help Phone.
The helpline is partnering with the Toronto-based Vector Institute, which bills itself as a consultant that helps organizations, businesses and governments develop and adopt “responsible” AI programs.
The planned AI will recognize key words and speech patterns from young people who reach out to Kids Help Phone, helping busy counsellors zero in on what callers need and tailor their support accordingly.
But Kids Help Phone says it’s keenly aware that the term “artificial intelligence” could alarm people as they conjure up images of a computer or chatbot, rather than a human, on the other end of the helpline.
That’s not how its AI program will work, said Katherine Hay, the organization’s president and CEO.
“It’s always human to human,” Hay said. “It’s not taking the place of a human-to-human approach.”
Instead, the information gathered by AI will be available to human counsellors as they work with the young person on the other side of the call or text exchange, she said.
The 24-7 national support line for kids and adults has had a huge rise in demand for its services since the COVID-19 pandemic began. After receiving about 1.9 million calls, texts, live chats or visits to its website during 2019, Kids Help Phone has seen that number jump to more than 15 million since 2020, according to numbers provided by the organization.
The organization is already using some AI technology to help triage texts, Hay said.
For example, if someone uses trigger words or phrases such as “‘I feel hopeless, I think I want to die,’ or something along those lines, it will put that conversation at the front of the line (to speak with a counsellor),” she said.
Roxana Sultan, Vector’s chief data officer and vice-president of its health division, said treating AI as a tool, not a replacement for humans, is a critical part of using the technology responsibly in health care.
“We’ve been very clear with all of our partners that the tools that we are developing are always meant to be a support to the clinicians. They are never meant to replace clinician judgment, clinician engagement,” Sultan said.
The Kids Help Phone AI tool will use “natural language processing” to identify “keywords or trigger words that correlate with specific types of issues,” she said.
“If a young person uses a specific word in their communication that is correlated with or connected to a specific issue or concern, it will be flagged by this model and it will alert the professional staff,” Sultan said.
For example, AI can be trained to recognize words that suggest a possible eating disorder, allowing a counsellor to turn the conversation in that direction and offer up specific resources and supports.
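The keyword-flagging and triage approach described above can be sketched in a few lines. The trigger phrases, categories and function names below are illustrative assumptions for the sake of the sketch; the organization's actual tool uses trained natural language processing models rather than a fixed word list.

```python
# Minimal sketch of keyword flagging and triage, as described in the article.
# The phrases, categories and names are hypothetical, not the real system.

# Hypothetical mapping of trigger phrases to issue categories.
TRIGGER_PHRASES = {
    "want to die": "suicide-risk",
    "feel hopeless": "suicide-risk",
    "stopped eating": "eating-disorder",
}

def flag_message(text: str) -> list[str]:
    """Return the issue categories whose trigger phrases appear in the text."""
    lowered = text.lower()
    flags = {category for phrase, category in TRIGGER_PHRASES.items()
             if phrase in lowered}
    return sorted(flags)

def triage(messages: list[str]) -> list[str]:
    """Order messages so flagged conversations go to the front of the line,
    mirroring how a flagged text is prioritized for a counsellor."""
    return sorted(messages, key=lambda m: 0 if flag_message(m) else 1)
```

In practice the flags would be surfaced to the counsellor alongside the conversation, not acted on automatically, consistent with the human-to-human model Hay describes.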
AI can also be trained to identify new words and trends related to situations that are causing distress and anxiety, such as a pandemic, climate change, wildfires or a mass shooting.
“It’s really meant to augment the services that the professional staff are providing,” Sultan said. “(It) helps them to be more efficient and effective in terms of how they then manage the issues that are arising over the course of the conversation.”
The key, Sultan said, is to make sure the AI tools are thoroughly tested by clinicians before launch. Kids Help Phone and Vector expect to launch the new technology sometime in 2024.
After it launches, it’s critical that the front-line staff using it constantly evaluate the information they get.
“You really cannot blindly follow what any algorithm is informing you to do in practice. So as high quality as the model may be, as well-trained as it may be, it is never intended to replace your judgment and your experience as a clinician,” Sultan said.
If the AI is generating something that seems “a little off,” that should be flagged and investigated, she said.
Another concern people may have about AI is the confidentiality of their personal information, she said.
“It’s really important to be clear that all of the data that are used to train the models are de-identified,” Sultan said.
“So there is no risk of knowing somebody’s name; any identifying factors are all removed upfront.”
AI’s use in mental health is on the rise across the country, said Maureen Abbott, a manager in the access to quality mental health services department at the Mental Health Commission of Canada.
Its current applications vary from individual services to monitoring social trends to driving a growth in mental health apps, she said.
“AI is being used in speech recognition to pick up a cadence in a person’s voice and help diagnose manic episodes and depression,” Abbott said.
“It’s used in chatbots for machine-led learning and also in social media to identify trends for suicidal ideation, for example, to scan for phrases and words.”
Abbott said there’s a need to develop and implement standards that govern the use of AI in mental health in Canada to catch up with its rapidly increasing prevalence.
“AI is being used already in our everyday life whether we’re seeing it or not,” Abbott said.
“So it’s only natural that it’s happening for mental health as well and it’s happening quickly.”
This report by The Canadian Press was first published July 5, 2023.
Canadian Press health coverage receives support through a partnership with the Canadian Medical Association. CP is solely responsible for this content.