A joint investigation by federal and provincial privacy watchdogs has concluded that OpenAI violated Canadian privacy laws during the initial launch and training of its ChatGPT chatbot. The probe, initiated in 2023, was led by federal Privacy Commissioner Philippe Dufresne and his counterparts from British Columbia, Alberta, and Quebec. The investigation's findings were presented in a report released today.
The privacy commissioners determined that OpenAI's methods of collecting data to train its AI models were overly broad, resulting in the collection and use of Canadians' sensitive personal information. This included data scraped from social media, discussion forums, and other publicly accessible websites. The watchdogs noted that OpenAI failed to adequately inform users that personal information gathered from these sources would be used in this manner.
According to the report, the collected data included sensitive details such as individuals' health conditions, political views, and information about children. The privacy commissioners also raised concerns about the potential for inaccuracies in ChatGPT's responses and the company's failure to adequately warn users about them. Commissioner Dufresne criticized OpenAI's lack of accountability for launching a product that did not comply with Canadian law.
While acknowledging the violations, the Privacy Commissioner of Canada stated that the complaint is considered "valid, but conditionally resolved," as OpenAI is reportedly engaging in good faith to address the identified issues. The commissioners highlighted the need to update Canada's privacy laws to address the challenges that AI and the internet pose to meeting current consent requirements. The report also noted that OpenAI had faced earlier scrutiny in Canada following a mass shooting in Tumbler Ridge, B.C., after it was revealed that the shooter had interacted with ChatGPT prior to the event.