The context in which we understand privacy is shifting in the current landscape of big data analytics, machine learning, and artificial intelligence. What was generally considered a broad topic with varying normative understandings is now at the forefront of debates and policy work as varied stakeholders attempt to narrow its scope in the Canadian context. Simultaneously, an increasing quantity and variety of data is being given to and collected by both private and public bodies in the name of technological innovations, most of which are fueled by data, making privacy concerns ever more acute.
Resulting from this dichotomous relationship is a narrative in which privacy is pitted against innovation, and privacy protection is seen as a check on innovation. This is a dangerous discursive framing of the issue because it implies that unless individuals want to remove themselves from this new form of social contract, they must give up control over their personal information. AI represents a paradigm shift in technology: rather than an incremental expansion of existing methods and practices, we are seeing a revolution in Big Data, which has already had widespread and, in many cases, deleterious implications for privacy rights. The intersection of machine learning and Big Data has the potential to fundamentally alter what it means to have a ‘reasonable expectation of privacy.’ The implications of AI transcend any meaningful distinction between the public and private sectors, including the privacy laws that govern each. Crucially, history has shown that when new privacy-impacting technologies are adopted ahead of corresponding changes to the laws and regulations governing privacy, it can be difficult to ‘roll back’ established practice and undo erosions of privacy rights. Technology moves quickly from being novel and extraordinary to routine and normalized. This calls for a precautionary approach, one that involves restrictions on the adoption and use of privacy-impacting AI until robust and updated legal and regulatory frameworks are in place.
We urge policy makers to keep PIPEDA (and other legislation) technology-neutral. Privacy remains a broad concept that is not limited to technological contexts and can transcend any particular technology. Rather than drafting new legislation targeted specifically at AI, we maintain that existing privacy laws should be significantly strengthened and their scope expanded so that they can govern AI within the existing legal framework.
We are pleased to see that the Office of the Privacy Commissioner (OPC) is moving towards a human rights-based approach to privacy. This framing no longer treats the interplay between privacy and innovation as zero-sum; rather, it emphasizes the foundational role of trust, built through privacy, in supporting the digital economy. Building on this notion, we emphasize the importance of keeping meaningful consent and transparency central to data privacy and AI governance.
Lastly, we fully support the numerous calls to expand the Information and Privacy Commissioners’ powers to support a robust enforcement regime in which they can use order-making powers and administrative monetary penalties to enforce compliance. We have been calling for these changes for over a decade and believe that, given the move towards greater automation and the increasing sophistication of data processing methods, they are needed now more than ever.