In January of this year, the Government released its Artificial Intelligence (AI) Opportunities Action Plan. The plan did not depart from the Government’s ‘pro-innovation’ approach to AI regulation, providing minimal detail of any regulatory proposals or incoming safeguards. Given the lack of Government guidance or UK-specific laws, how should employers responsibly govern AI and implement it in recruitment contexts?
Why is proper AI governance important?
AI algorithms can only be as unprejudiced as the data they are fed: they learn imperfectly from historical outcomes and datasets, and cannot reliably correct for biases or learnt discriminatory behaviour. Back in October 2023, the Government set out its concerns around bias, fairness and representational harms in its discussion paper ahead of the AI Safety Summit, stating that:
AI models can contain and magnify biases ingrained in the data they are trained on, reflecting societal and historical inequalities and stereotypes. These biases, often subtle and deeply embedded, compromise the equitable and ethical use of AI systems, making it difficult for AI to improve fairness in decisions. Removing attributes like race and gender from training data has generally proven ineffective as a remedy for algorithmic bias, as models can infer these attributes from other information such as names, locations, and other seemingly unrelated factors. […] Frontier AI systems have been found to not only replicate but also to perpetuate the biases ingrained in their training data.
Under section 39(1) of the Equality Act 2010, an employer’s obligation not to discriminate extends to job applicants. The use of automated decision-making and machine learning in advertising and applicant selection carries substantial risks for diversity. Issues may arise where algorithms are not trained to assess a diverse candidate pool fairly; for example, AI might filter out candidates with protected characteristics because of prejudices in its training data, or inappropriately scrape candidates’ personal data from external sources.
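The proxy problem described in the Government’s discussion paper can be made concrete with a toy sketch. The data and the screening rule below are entirely hypothetical: the rule never sees the protected characteristic, yet because it relies on a correlated proxy (postcode), it still produces different selection rates for the two groups.

```python
# Illustrative sketch (synthetic data, hypothetical screening rule): even when
# a protected characteristic is excluded from the inputs, a rule or model that
# relies on a correlated proxy (here, postcode) can reproduce the same bias.

# Each candidate: (postcode_area, group). 'group' is the protected
# characteristic, which the screening rule below never sees.
candidates = [
    ("N1", "A"), ("N1", "A"), ("N1", "B"),
    ("E2", "B"), ("E2", "B"), ("E2", "B"), ("E2", "A"),
]

# Hypothetical rule learnt from historical hires: it favours postcode N1,
# an apparently neutral feature.
def shortlisted(postcode: str) -> bool:
    return postcode == "N1"

def selection_rate(group: str) -> float:
    pool = [c for c in candidates if c[1] == group]
    return sum(shortlisted(c[0]) for c in pool) / len(pool)

rate_a = selection_rate("A")  # group A mostly lives in N1
rate_b = selection_rate("B")  # group B mostly lives in E2
print(f"Selection rate, group A: {rate_a:.2f}")  # 0.67
print(f"Selection rate, group B: {rate_b:.2f}")  # 0.25
```

Removing the `group` column from this dataset would change nothing: the disparity enters through the postcode, which is exactly why stripping attributes like race and gender from training data has proven an ineffective remedy.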
The risks of automated decision-making in the workplace are demonstrated by the key case of an Uber Eats driver, Pa Edrissa Manjang. Mr Manjang was barred from the Microsoft-powered Uber Eats app, which he needed in order to work for the company, after facial verification checks failed to recognise him. He received a financial settlement following allegations that the facial-recognition checks and AI used by Uber Eats were racially discriminatory.
How things stand in the UK
We still await any laws governing how businesses use AI in the UK. This lacuna provides all the more reason for employers to understand how existing data protection and confidentiality laws affect how AI can be used in the workplace, and to be aware of the discriminatory risks stemming from new technologies. As for when we can expect to see the status quo change, the Secretary of State for Science, Innovation and Technology has suggested the UK Government will introduce a comprehensive AI regulation bill in the next parliamentary session (likely mid-2026).
In the context of AI use in recruitment, there is a selection of useful resources to draw on in the interim. The Department for Science, Innovation and Technology’s Responsible AI in Recruitment Guide, coupled with its Responsible AI Toolkit, provides a good reference point for businesses operating in the UK, as does the ICO’s outcomes report on its audit of AI-powered tools used for sourcing, screening and selection in recruitment. The ICO also focused on AI and automated decision-making in recruitment in its AI and biometrics strategy, released on 5 June 2025.
In particular, the outcomes report offers recommendations for both AI providers and recruiters and pays particular attention to jobseekers’ rights; it highlights the potential risks of AI for discrimination or bias in recruitment, as discussed in the above section.
The report’s key recommendations can be considered a wider guide for good practice. They include: ensuring fairness in data processing; transparently informing applicants that AI is being used and how; permitting decisions to be challenged and being able to explain the metrics for decision-making; using the minimum personal data required to ‘develop, train, test, and operate each element of the AI’; and conducting Data Protection Impact Assessments early on, ideally at the procurement stage of an AI tool’s adoption. As a complement to these recommendations, the ICO’s six key questions ‘organisations should ask when procuring AI tools to help with their employee recruitment’ are a useful aid for responsible implementation.
What the EU is doing
For businesses operating in, or from, the EU market, there is greater clarity regarding the regulation of workplace technology through the combination of the General Data Protection Regulation, the EU AI Act, local legislation, and the Platform Work Directive.
The European Commission has published a General-Purpose AI Code of Practice to support the general-purpose AI model obligations of the EU AI Act, which came into effect on 2 August 2025. Although the Code is voluntary, adhering to it will be a means for signatories to demonstrate compliance with those obligations.
The European Parliament’s Committee on Employment and Social Affairs also published a draft report on 12 June 2025, designed to address perceived regulatory voids left by the EU AI Act in employment scenarios and to properly regulate ‘algorithmic management’ systems being used in the workplace that fall outside the definition of ‘AI’ in the EU AI Act. The report was made against a backdrop of increasing concerns regarding AI’s use in recruitment, productivity tracking and algorithmic decision-making, and made proposals for specific regulation of the technological impact of AI on the workforce. The report specifically calls for legislation to align AI deployment with fundamental rights, and for a comprehensive framework to govern AI specifically in a workplace context.
What can employers do now?
Aligning priorities
Workplace AI training and policies can go some way towards reducing the risks posed by AI, especially where those risks stem from a lack of training or understanding of how AI tools work.
For example, the use of ‘shadow AI’, whereby employees use AI tools at work in an unregulated and unmonitored way, is a growing concern for employers. Shadow AI exposes both employers and employees to risk, including data breaches arising from feeding personal data into AI tools, and intellectual property issues where licensed material is incorporated into prompts.
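One technical control that a governance framework might mandate is a redaction pass over text before it reaches an external AI tool. The sketch below is a minimal, hypothetical illustration of that idea, assuming simple patterns for email addresses and UK-style phone numbers; a real data-protection control would need far broader coverage.

```python
import re

# Minimal sketch of a pre-prompt redaction filter: strip obvious personal
# data (emails, UK-style phone numbers) before text is sent to an external
# AI tool. The patterns here are illustrative assumptions, not exhaustive.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"(?:\+44|0)(?:[\s-]?\d){9,10}")

def redact(text: str) -> str:
    """Replace recognised personal data with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Summarise the CV of jane.doe@example.com, tel 020 7946 0958."
print(redact(prompt))
# → Summarise the CV of [EMAIL], tel [PHONE].
```

Even with such a filter in place, policy and training remain essential: redaction reduces, but does not eliminate, the data-breach and confidentiality risks of shadow AI.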
In the absence of clear legal boundaries, employers would benefit from a governance framework that mitigates legal risk and supports AI literacy throughout the business: clearly articulating AI policy, engaging with AI use cases, and providing training in responsible AI use. Given AI’s potential to boost productivity, building trust between employer and employee around AI’s use goes hand in hand with both parties reaping the benefits of innovation.
Using AI in recruitment contexts
Given that bias can manifest in AI outputs in extremely subtle and complex ways, human oversight will continue to be necessary as a counterweight and sense-check when using technology in employment processes: it is vital that any decisions taken using AI, or made with input from AI tools, are robust and can be properly explained. Envisage a redundancy selection scenario: how have the company’s decisions been reached? Can the reasoning be followed? The Employment Tribunal will want to see this evidence, and may draw adverse inferences if sufficient working cannot be produced.
If your business uses AI for HR processes, consider the following practical measures: investigating software that reduces bias relating to protected characteristics under the Equality Act 2010; asking your technology providers how the AI tools you use have been developed and tested to withstand bias, and seeking indemnities in respect of discriminatory outcomes where necessary; auditing your new and existing technologies regularly to flush out issues; and revising your EDI policies to safeguard against the risks posed by technology and to heighten technological awareness in the workplace.
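A regular audit can start with a very simple check. The sketch below computes the ‘four-fifths’ adverse-impact ratio, a US-origin rule of thumb (not a test under the Equality Act 2010) that flags a tool whose selection rate for any group falls below 80% of the best-performing group’s rate; the audit figures are hypothetical.

```python
# Illustrative audit sketch: the 'four-fifths' adverse-impact ratio.
# This is a US-origin heuristic, not a legal test under the Equality Act
# 2010, but it is a cheap first-pass check on a screening tool's outputs.

def adverse_impact(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (shortlisted, applied); returns each group's
    selection rate as a fraction of the highest group's rate."""
    rates = {g: s / a for g, (s, a) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical audit figures from a screening tool.
ratios = adverse_impact({"group_a": (40, 100), "group_b": (18, 90)})
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
# → group_a: impact ratio 1.00 -> ok
# → group_b: impact ratio 0.50 -> REVIEW
```

A flag from a check like this is a prompt for human investigation, not proof of discrimination; any follow-up should feed into the DPIA and EDI processes described above.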