Snap AI chatbot investigation launched in UK over teen privacy concerns

The U.K.’s data protection regulator has opened an investigation into Snap’s artificial intelligence chatbot over potential privacy concerns.

The Information Commissioner’s Office (ICO) issued a preliminary enforcement notice on Friday, alleging that the chatbot, My AI, may pose privacy risks to Snapchat users.

John Edwards, the Information Commissioner, said in a release that the preliminary findings of his investigation indicate Snap did not adequately identify and assess privacy risks before launching My AI.

Snap will have the opportunity to address the provisional concerns before a final decision is made. If the ICO’s provisional findings are upheld, Snap may have to stop offering the chatbot to U.K. users until the privacy concerns are resolved.

“We are closely reviewing the ICO’s provisional decision. Like the ICO, we are committed to protecting user privacy,” a spokesperson told CNBC. “In line with our standard approach to product development, My AI has undergone a rigorous legal and privacy review process.”

Snap said it will continue working with the ICO to ensure the regulator is comfortable with its risk-assessment procedures. My AI, which is powered by OpenAI’s ChatGPT, alerts parents when their children have used the chatbot, and Snap’s general guidelines require its bots to refrain from making offensive comments.

The ICO previously published a “Guide to Artificial Intelligence and Data Protection,” followed by a general notice in April listing questions that developers and users should ask when dealing with artificial intelligence.