While that sounds fantastic for employers, it's important to keep in mind that AI can also give rise to risk. For example:
- Biases and discrimination - AI systems can perpetuate biases present in the data they are trained on, leading to discriminatory outcomes such as biased hiring decisions, unequal treatment of employees, or unfair targeting of certain groups. It wasn't long ago that Amazon had to abandon its AI-based recruiting tool, which had taught itself to favour male candidates (the tool had been trained on past resumes, predominantly from male applicants, which led to biased results).
- Lack of transparency and explainability - AI algorithms can operate as "black boxes," making it difficult to understand how they arrive at their decisions. This lack of transparency can be problematic when it comes to accountability, as it becomes challenging to explain or challenge AI-generated outcomes.
- Job displacement - Using AI to automate more and more tasks is likely to result in job changes, potential job losses, or the need for employees to acquire new skills to remain employable. The retail industry has already experienced some job losses as AI-powered self-checkout systems and automated kiosks become more prevalent, reducing the need for human cashiers and customer service staff.
- Data privacy and security - AI systems often rely on vast amounts of data, raising concerns about privacy and security. Mishandling or unauthorised access to sensitive data can have severe consequences, including breaches of confidentiality, identity theft, or manipulation of personal information. In 2018, it was revealed that Cambridge Analytica, a political consulting firm, had obtained personal data from millions of Facebook users without their consent. While not directly related to AI, this incident highlighted the potential misuse of personal information collected through social media platforms, which can be utilised in AI-driven targeted advertising campaigns.
- Overreliance and dependency - Overreliance on AI systems without human oversight can lead to errors or incorrect decisions. Blindly trusting AI-generated outputs without critical evaluation can have negative consequences, especially in critical areas such as healthcare, finance, or legal decision-making. Striking the right balance is important.
You may have heard that Lord Justice Birss, a Court of Appeal judge, recently admitted to asking ChatGPT to provide him with a summary of a particular area of law to include in his judgment. He emphasised that he took full responsibility for the judgment and had simply asked the AI to perform a task he was about to do anyway, on a point to which he already knew the answer and whose output he could recognise as acceptable. Subsequently, the Judicial Office has issued official guidance to thousands of judges confirming that AI can be useful for summarising large amounts of text or for administrative tasks. It warns, however, that AI chatbots must not be used to conduct legal research or undertake legal analysis, as they are prone to inventing fictitious cases or legal texts. Judges were also advised not to enter private information into a chatbot, as it could end up in the public domain.
- Ethical considerations - Dilemmas may arise when AI systems make critical decisions, necessitating careful consideration of responsibility and accountability. For instance, if an autonomous vehicle is faced with an unavoidable collision, it may have to make split-second decisions about protecting its passengers versus minimising harm to pedestrians or other vehicles. Whilst extreme, this raises interesting questions about the balance between human judgement and AI decision-making and could pose future challenges for employers seeking to utilise autonomous machinery or vehicles.
- Technical limitations and errors - AI systems are not infallible and can make mistakes or encounter technical limitations. These errors can lead to incorrect results, faulty predictions, or unreliable recommendations, potentially impacting business operations or decision-making processes.
Without legal protections, the adoption of AI in the workplace can lead to a range of negative consequences.
In December 2023, MEPs and the Council of the EU reached agreement on proposed legislation for AI regulation in the European Union. It is intended that AI systems will be categorised into four groups: unacceptable, high, limited, or minimal risk. This grouping will be based on the potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. The final text of the European AI Act is expected during 2024, followed by a phased implementation period. (See our article here).
To date, the UK has adopted a more 'light-touch' approach, noting in its March 2023 White Paper that, for the time being, it will devolve the work of regulation to sector-specific regulators, such as the ICO, CMA, HSE, EHRC and the FCA. Regulators are expected to start publishing guidance in 2024 explaining how the White Paper's underpinning principles of (1) safety, security and robustness, (2) appropriate transparency and explainability, (3) fairness, (4) accountability and governance, and (5) contestability and redress will apply within their respective remits.
In September 2023, the TUC announced the launch of an AI taskforce aiming to establish new legal protections for employees and employers. The taskforce plans to publish a draft "AI and Employment Bill" by early 2024, which is expected to propose measures such as:
- A legal duty on employers to consult trade unions on the use of high-risk AI.
- A legal right for workers to have a human review of decisions made by AI systems.
- Amendments to existing legislation to guard against discriminatory algorithms.