A former Liberal cabinet minister says the government was close to introducing legislation on the internal use of artificial intelligence (AI) but ran out of time before the change of government that followed the Progressive Conservatives' win in the fall general election.
“I’m very sad that my efforts towards legislation did not get fulfilled before the government changed, but that’s how it works,” Sarah Stoodley told The Independent in a phone interview last week.
Stoodley, who was re-elected as MHA for the St. John’s riding of Mount Scio in October and previously led the Liberal government’s Department of Modernization and Service Delivery, said the province had been waiting to see what would come of proposed federal legislation that, if passed, would have regulated the use of AI in the public and private sectors.
The federal Liberals introduced Bill C-27 in June 2022. Among other things, the legislation would have enacted the Artificial Intelligence and Data Act, which would, to an extent, have regulated the use of AI systems in the public and private sectors. The bill died on the order paper in January 2025, after its second reading and after the House of Commons Standing Committee on Industry and Technology had paused its review of the proposed legislation.
Stoodley said there was a sense that Newfoundland and Labrador and other provinces would be guided by the federal government’s work on AI policies and legislation. “I think everyone, all the provinces—because I used to go to the [federal, provincial, territorial] meetings with all the provinces and [heard] the federal ministers talking about artificial intelligence—and I think a lot of people had our eggs in the federal government C-27 bill.
“We have not yet seen what the new federal government is going to do, if they’re going to refresh that or how they’re gonna change it,” she added.
In the absence of federal legislation, and in the wake of two scandals involving AI-generated errors in major government-commissioned policy reports, Stoodley said the province needs its own AI legislation. She pointed to her government’s April 2025 implementation of its Responsible Use of Artificial Intelligence (AI) Technology policy for public service employees.
The policy requires government employees to obtain approval before using AI programs. It also states that “[a]ny information generated by approved AI technologies must be validated by an individual for accuracy before use and noted when used.”
In September Radio-Canada reported that at least 15 AI-generated errors appeared in the province’s new Education Accord, which was released in late August, more than four months after the government’s AI policy came into effect. Accord co-chairs Karen Goodnough and Anne Burke, both Memorial University professors, later told CBC the errors must have been added to the report after it had been submitted to government.
Last week, the Department of Education and Early Childhood Development hinted at the government’s own culpability in the scandal. Media Relations Manager Lynn Robinson shared a statement with The Independent saying “[t]he inaccurate citations and references generated by AI was unacceptable,” and that the government “will ensure that any application of AI within government is subject to strict review, human verification, and transparent quality controls.”
On Nov. 22 The Independent reported that another major government report contained errors likely generated by AI. This time, the Health Human Resources Plan, released last May—after the government’s AI policy was put in place—contained at least four fake citations. The 500-plus-page document, commissioned from Deloitte at a cost of $1.6 million, cited sources The Independent confirmed do not exist to support claims related to recruitment strategies, monetary recruitment and retention incentives, virtual care, and the impacts of the COVID-19 pandemic on healthcare workers.
While Deloitte has admitted fault for the errors, the report’s May release came a week after Stoodley announced that government had introduced AI training “for all government employees.”
The second AI scandal in under three months prompted NDP Leader Jim Dinn to call on the government to develop AI regulations. Premier Tony Wakeham’s office then directed Minister of Government Services Mike Goosney to “undertake a review of what guidelines should be put in place to stop this from happening in the future.”
In a statement to The Independent, Goosney said government “will ensure that any application of AI within government is subject to strict review, human verification, and transparent quality controls,” and that his department will “engage the Office of the Privacy Commissioner to seek their input and guidance on the use of artificial intelligence.”
“Our priority is to maintain public trust,” Goosney said. “That means ensuring the accuracy and protection of all government information, regardless of whether they are prepared using traditional methods or assisted by new technologies. We remain committed to embracing innovation responsibly, with proper oversight and accountability at every step forward.”
Asked if he is reviewing the draft legislation initiated under Stoodley’s leadership, Goosney would only say that he will be consulting the Office of the Privacy Commissioner to “assess the need for AI legislation and regulations around responsible use of artificial intelligence within departments.”