By now ChatGPT hardly needs an introduction. The chatbot, based on an advanced AI model (GPT-3.5) developed by OpenAI, has brought generative AI into the public consciousness and has become inescapable.
In the midst of all this excitement, on 31 March 2023 Italy's data protection authority (Garante per la Protezione dei Dati Personali, the 'Garante') announced that it would temporarily block access to ChatGPT with immediate effect and would launch an investigation into ChatGPT's data collection practices. While the ban remained in place, many users rushed to access ChatGPT via virtual private networks (VPNs). This highlights the difficulty that countries and regulators around the world face when seeking to ban or limit access to websites, not least because companies like OpenAI serve their substantial user base from servers across the world (making it easier to 'hop around' a ban using a VPN).

The ban, and its consequences, does not affect OpenAI or ChatGPT users alone. It underscores issues that companies and developers at all stages of an AI's lifecycle need to reckon with in view of incoming regulation across jurisdictions. (...)

What has been the effect of the ban?

Italy's move to ban ChatGPT on privacy and data protection grounds has emboldened other data protection authorities within the EU to follow suit. Authorities in Sweden, France, Germany and Ireland have also launched investigations into how OpenAI collects and then uses data. On 13 April 2023, the European Data Protection Board (EDPB) announced that it was launching a dedicated taskforce to ensure that data protection authorities in Member States could cooperate and exchange information on possible enforcement actions. This was a sensible move by the EDPB: without coordination across the EU (while we await the implementation of the EU AI Act), there is a risk that different data protection authorities could impose different requirements on companies like OpenAI, undermining harmonisation across the EU and creating confusion for businesses developing and deploying AI products used by EU users.
Further afield, Canada's Privacy Commissioner has launched a similar investigation.

How does this impact OpenAI and, more widely, other companies developing similar websites and apps?

It is important to note that OpenAI does not have an office in the EU. However, in line with the upcoming EU AI Act (and the extra-territorial scope of the GDPR), what matters from the EU's point of view is whether the outputs produced by an AI system are used in the EU. Nor is OpenAI the sole target of data protection authorities. Companies and developers at all stages of an AI's lifecycle should therefore ensure that they comply with data protection rules in every jurisdiction in which they operate, for example by ensuring that data is collected and processed in accordance with the GDPR. It will be interesting to see whether the Garante, or indeed any other data protection authority, also moves to regulate companies operating in the metaverse on similar grounds (notably, Meta's Horizon has had issues with age verification).

Although the EU AI Act has yet to come into force, that is not stopping data protection authorities and regulators from using their powers under the GDPR to challenge the way companies collect and process personal data to train their algorithms. Whether these bans and investigations prove effective in prompting companies like OpenAI and Replika to be more transparent and to respect data protection rules will be an important test case for companies, users and regulators across the world. Stay tuned for an update following the 30 April 2023 deadline.