26 June 2024

Data Diaries - June 2024

To The Point
(5-7 min read)

The Spanish Data Protection Agency (DPA) and three regional DPAs have issued guidelines on the use of Wi-Fi tracking in public spaces, shops, and events for capacity monitoring and marketing. The dissolution of the UK Parliament ahead of the general election on 4 July stopped the Data Protection and Digital Information Bill (DPDIB) and the AI Regulation Bill. The Conservative manifesto focuses on AI investment without regulation, while Labour plans to regulate powerful AI models and enhance online safety. The EU AI Act, adopted on 21 May, will ban unacceptable-risk AI systems from early 2025. The Seoul AI Summit agreements emphasise international cooperation on AI safety. The European Data Protection Board (EDPB) and UK Information Commissioner's Office (ICO) addressed the "consent or pay" model for online platforms, which requires either consent to tracking or a free alternative. The European Data Protection Supervisor (EDPS) found that the European Commission's use of Microsoft 365 violated data protection laws, stressing the need for clear data processing purposes and compliance with international transfer rules.

Click on the links below to read more:

 

Spanish DPA issues guidance on Wi-Fi tracking technologies

In May, the Spanish DPA, together with three regional DPAs, published a series of recommendations addressed to data controllers that are using, or intend to use, Wi-Fi tracking technologies in public spaces, shops and/or mass events for purposes such as monitoring maximum authorised capacity or deploying marketing engagement initiatives within the premises. The Guidance provides useful insights from the Spanish DPA on, among others, the following issues:

  • A DPIA must be carried out in all cases, given that these tracking technologies may constitute large-scale systematic observation and do not usually offer data subjects the possibility to exercise their rights.
  • Anonymisation should be applied immediately, as close as possible to the point of collection and, if technically feasible, on the capture device itself.
  • When a fingerprint is used to identify a user device (e.g. data exposed through network communication protocols), the metadata must be masked at the time of capture to prevent identification of the relevant data subject.
  • Legitimate interest will not operate as a legal basis if the processing involves larger areas of coverage or regular visits, as opt-out mechanisms would be difficult to implement.
  • Data subjects could be informed about the processing of their personal data by means of visible dashboards or public signage.
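To illustrate the masking-at-capture point above, here is a minimal, hypothetical sketch (not taken from the Guidance): a captured device identifier such as a MAC address is replaced on the capture device with a truncated keyed hash, using a regularly rotated secret so that masked identifiers cannot be reversed or linked across rotation periods. The function and parameter names are our own.

```python
import hashlib
import hmac
import secrets

def mask_mac(mac: str, rotating_key: bytes, digest_bytes: int = 8) -> str:
    """Return a truncated keyed hash of a MAC address.

    Hypothetical illustration of masking at the point of capture:
    a keyed hash (HMAC) prevents trivial reversal of the identifier,
    and truncating the digest further reduces identifiability.
    """
    # Canonicalise formatting so "AA-BB-..." and "aa:bb:..." mask identically
    canonical = mac.lower().replace("-", ":").encode()
    digest = hmac.new(rotating_key, canonical, hashlib.sha256).digest()
    return digest[:digest_bytes].hex()

# The key would be rotated regularly (e.g. daily) and never stored
# alongside the masked identifiers, so tokens from different rotation
# periods cannot be linked to each other or to the original device.
key = secrets.token_bytes(32)
token = mask_mac("AA:BB:CC:DD:EE:FF", key)
```

Whether such a technique amounts to anonymisation or merely pseudonymisation under the GDPR depends on the wider context (key management, retention, linkability), which is precisely what the DPIA should assess.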
So what?

Before implementing any Wi-Fi tracking technology in Spain, retailers must carry out a proper data protection impact assessment, in particular to evaluate whether the proposed pseudonymisation and anonymisation techniques are sufficient to safeguard data subject rights, and to properly document retention periods. It is also essential to implement an adequate due diligence/audit scheme for any data processor providing the platforms and/or devices required to operate the relevant Wi-Fi tracking technology.

Any interplay with artificial intelligence systems would also require reviewing the EU AI Act, for example on the information to be provided about the impact of AI on decision-making. The Spanish DPA will continue to monitor the use of these technologies, and random inspections of the retail sector may be among its next steps.


 

UK general election: What have the parties said about data protection and AI?

Following the Prime Minister's announcement of a general election on 4 July, the dissolution of Parliament meant that the Data Protection and Digital Information Bill (DPDIB) and the Artificial Intelligence (Regulation) Private Members' Bill did not pass. This has left privacy professionals wondering about what changes the new government may introduce in these areas. While the outgoing Conservative government had previously stated that it would reintroduce the DPDIB if it were re-elected, neither the Conservative nor the Labour manifesto refers to data protection reform.

Turning to AI, the Conservative government's position was that it did not intend to introduce regulation at this time, although there were rumours that DSIT (the Department for Science, Innovation and Technology) staff were secretly drafting AI legislation. The Conservative manifesto refers to investing in and accelerating AI development, but does not refer to AI regulation. The Labour manifesto states that a Labour government would introduce regulation on companies developing the most powerful AI models to promote safe development and use of AI models. It also refers to banning the creation of sexually explicit deepfakes and building on the Online Safety Act, exploring further measures to keep people safe online, particularly when using social media.

So what?

While UK organisations may be relieved that the DPDIB will not progress at this time, meaning that UK data protection law remains closely aligned with EU law, there is now uncertainty around the likely change of government and what this may mean for data protection and, in particular, AI regulation. It will be interesting to see how Labour's plans to regulate powerful AI models align with the EU AI Act. AG's experts will monitor the position closely and be ready to support your business once the position becomes clearer.


 

EU AI Act formally adopted: What happens next?

The EU AI Act was formally adopted on 21 May, and it has been reported that it will be published in the EU Official Journal on 12 July, starting the implementation timetable. The prohibition on unacceptable risk AI systems will become applicable in early 2025, the rules on general purpose AI become applicable in summer 2025, and some of the rules on high-risk AI systems become applicable in summer 2026.

In another AI-related development, the AI Summit (the follow-up to the 2023 Bletchley Park summit) took place in Seoul on 21 and 22 May and was co-hosted by the Republic of Korea and the UK. The key developments from the Summit are:

  • 16 AI tech companies including Amazon, Google, Meta, Microsoft, OpenAI and Zhipu.ai (China) signed up to the Frontier AI Safety Commitments. The Commitments include publishing safety frameworks on how they will measure risks of their frontier AI models, such as examining the risk of misuse of technology by bad actors. In extreme circumstances, the companies will not develop or deploy systems which involve "intolerable risks".
  • The EU plus Australia, Canada, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the USA and the UK have signed the Seoul Statement of Intent toward International Cooperation on AI Safety Science, an agreement to establish an international network of publicly backed AI Safety Institutes, and the Seoul Declaration, a commitment to work together to ensure that AI advances human wellbeing and helps to address the world's greatest challenges in a trustworthy and responsible way.
  • 27 countries (including the UK, Spain, the EU, and the USA) have signed the Seoul Ministerial Statement agreeing to develop shared risk thresholds for frontier AI development and deployment, including agreeing when model capabilities could pose ‘severe risks’ without appropriate mitigations.
So what?

The agreements signed at the Seoul Summit show widespread support for cooperation at an international level. Adoption and publication of the EU AI Act means that the Act's prohibition of "unacceptable risk" AI systems and its requirements for general purpose AI will apply from next year. Organisations that provide, import, or distribute AI systems or general-purpose AI models placed on the EU market, or that use such systems or models in the EU, need to take action to identify which provisions they must comply with and how to achieve compliance. Some key actions to consider are:

  • Identify whether the relevant system is an "AI system" within the meaning of the Act.
  • Identify which category of AI system it falls into to work out which obligations apply and when.
  • Identify the relevant parties and their relationships to work out which obligations apply to which parties. Note that organisations using AI systems ("deployers") have obligations, in particular in relation to high-risk systems.
  • Organise a team that includes the necessary expertise, both legal and technical, to draw up and implement a compliance plan.

The European AI Office was launched on 29 May and is expected to start to issue guidance to help organisations understand the EU AI Act's requirements and how to achieve compliance. Watch out for updates in future issues of Data Diaries.


 

Consent or pay: The EDPB opinion and the ICO call for views

"Consent or pay", also known as "pay or OK", refers to a subscription model under which users can choose to pay to access a website or app without being tracked for the purpose of personalised advertising, or can receive access free of charge in return for accepting such tracking. In October 2023, following a Court of Justice of the European Union (CJEU) decision in July 2023 and a request by the Norwegian data protection authority (DPA), the European Data Protection Board (EDPB) adopted an urgent binding decision that Meta could not rely on the lawful bases of contractual necessity or legitimate interest for behavioural advertising, effectively leaving consent as the only available option. Meta responded by adopting a "consent or pay" model for Facebook and Instagram in the EEA.

Following a request by several data protection authorities, the EDPB published its long-awaited "consent or pay" Opinion on 17 April. In the UK, the ICO launched a call for views on its emerging thinking on 4 March. While these will both primarily impact platforms which sell behavioural advertising services, they will also impact on organisations which currently use such services to promote their products.

The EDPB Opinion's scope is limited to "large online platforms". While this term is not defined in legislation, it is defined (broadly) in the Opinion and links to the terms "very large online platforms" in the Digital Services Act and "gatekeepers" in the Digital Markets Act. The Opinion states that: "In most cases, it will not be possible for large online platforms to comply with the requirements for valid consent if they confront users only with a binary choice between consenting to processing of personal data for behavioural advertising purposes and paying a fee." The Opinion explains that large online platforms should consider offering a "Free Alternative Without Behavioural Advertising". This means a further alternative that is free of charge and does not involve behavioural advertising but could include a form of advertising involving the processing of less (or no) personal data, such as contextual advertising or advertising based on topics the individual has selected from a list.

The Opinion also focuses on the requirements that must be fulfilled for consent to be lawful, including the need for specific and granular consent, ensuring that consent is freely given, meaning that the individual must not suffer a detriment if they refuse consent, and refreshing consent at appropriate intervals.

The ICO call for views is not limited to large online platforms and so does not include the requirement for a "Free Alternative Without Behavioural Advertising". The ICO states that, while data protection law does not prohibit consent or pay models, any organisation considering such a model must ensure that consent to processing of personal information for personalised advertising has been freely given and is fully informed, as well as capable of being withdrawn without detriment. This is broadly consistent with the EDPB Opinion and emphasises that organisations should consider the following:

  • The power balance between the service provider and its users – does the user have a choice about whether to use the service or not?
  • Equivalence – are the ad-funded service and the paid-for service basically the same?
  • Is the fee reasonable? Can it be justified?
  • Privacy by design – are the choices presented clearly, fairly and equally?
  • Transparency – organisations must inform people about how they intend to use their personal information as payment for the service they receive, and what it means if they say no, now or in the future.
  • Consent – Organisations must offer individuals easy ways to withdraw consent at any time. It must be as easy for people to withdraw consent as it is to give it.
So what?

While the EDPB Opinion is limited in scope to large online platforms, it is much more detailed than the ICO's emerging thinking set out in its call for views and provides a useful indication of the EDPB's views on the pay or consent model and on behavioural advertising more generally. The EDPB is expected to publish guidelines of wider application later this year.  While EDPB guidelines are no longer directly relevant to the UK regime, they will apply to UK organisations trading in the EEA and still provide helpful guidance. In addition, the ICO may change its position in the light of the EDPB Opinion and the feedback that it received during the consultation period.

"Consent or pay" is one part of a wider review of the AdTech industry, which includes real time bidding and the removal of cookies from Google Chrome, intended to be replaced with its Privacy Sandbox, subject to competition concerns being resolved. Organisations should continue to assess their marketing strategy in the light of these developments. Those that currently rely on behavioural advertising may wish to consider alternatives, such as contextual advertising based on the subject matter of the webpage or app, and focusing on first-party data, meaning data collected directly from the individual.


 

Use of Microsoft 365 and international data transfers: EDPS action against the European Commission and Microsoft

Following a long investigation, on 8 March the European Data Protection Supervisor (EDPS) ordered the European Commission to take steps to bring its use of Microsoft 365 into compliance with data protection law. The EDPS found that the Commission had breached the purpose limitation principle and the rules on international transfers and failed to prevent unauthorised disclosures of personal data. Many of the Commission's breaches were caused by its failure to clearly identify what data was to be processed under its contract with Microsoft for which purposes, and to which third parties this data was to be transferred. As the controller, the Commission was responsible for ensuring that these actions were taken and that the contract reflected the processing and transfers that were taking place. The Commission and Microsoft have both appealed the EDPS's decision.

So what?

The list of actions that the EDPS required the Commission to take provides useful guidance to organisations sharing personal data with Microsoft or other suppliers of software products and related services on how to ensure that such use is lawful. The essential steps include clearly and specifically identifying what data is being processed, for what purposes, and with whom it is shared; putting in place a contract which reflects this; and conducting all necessary risk assessments before permitting international transfers of personal data:

  • carry out a transfer-mapping exercise identifying what personal data is transferred to which recipients in which third countries, for which purposes, and subject to which safeguards, including any onward transfers; and
  • ensure, by way of contractual provisions and other technical and organisational measures, that:
    • all personal data is collected for explicit and specified purposes;
    • the types of personal data are sufficiently determined in relation to the purposes for which they are processed;
    • any processing by a processor, its affiliates or sub-processors is carried out only on the controller's documented instructions (subject to limited exceptions required by law);
    • no personal data is processed in a manner incompatible with the purpose for which it was collected, in accordance with the applicable criteria (in the GDPR/UK GDPR, these are set out in Recital 50); and
    • all transfers of personal data outside the EEA are made in accordance with the GDPR and the additional safeguards required by the Schrems II judgment.

If you would like advice on sharing personal data with software providers, or on any of the legal issues covered in this issue of Data Diaries, please contact one of our specialist data lawyers.

Our team

Partner, IS and Technology, Data Protection & Intellectual Property
France

Partner, Commercial and Data Protection & Head of Data
Edinburgh, UK

Partner, Intellectual Property, Data Protection & IT, Commercial
Germany

Partner, Commercial & Data Protection
Aberdeen, UK

Partner, Commercial and Data Protection
London

Partner, Commercial and Data Protection
Manchester

Partner, IP/IT & Data Protection
Dublin, Ireland
