AI and lawtech policy
Learn more about the UK government’s latest AI regulations and how these could affect your legal practice.
Artificial intelligence (AI) and lawtech are changing the way our sector operates. Each potential benefit, such as increased productivity, brings risks, such as threats to data security and privacy.
Government policy and regulation play a central role in how technology is adopted across legal services.
Whether it’s rules about generative AI, or measures to combat deepfakes and fraud, the UK government’s decisions will affect how you and your firm practise.
Our view
The UK government should adopt a balanced approach to AI regulation.
Policy should support innovation while safeguarding ethical standards and the rule of law.
Our research on how lawtech is being adopted shows a growing willingness among legal professionals to embrace technology.
We will continue working with the government to make sure AI benefits both firms and clients.
Our AI Strategy establishes the three key outcomes we are working towards to support our members:
- innovation: AI is used across the legal sector in ways that benefit both firms and clients in legal service delivery
- impact: there is an effective AI regulatory landscape that has been informed and influenced by the legal sector
- integrity: AI is used responsibly and ethically to support the rule of law and access to justice
Share your views
What are your current experiences of AI and lawtech? Are there issues you think we should raise with government?
Is there support and guidance you think is currently missing?
Let us know by emailing janis.wong@lawsociety.org.uk.
What we’re doing
We’re working hard to influence the government and ensure our members’ views and challenges are reflected in AI and lawtech policy.
In April 2025, we responded to the Ministry of Justice’s call for evidence on the use of evidence generated by software in criminal proceedings.
The call for evidence proposes updating rules about using computer-generated evidence in court. The proposals focus on ensuring these rules are fit for purpose in the modern world.
The Post Office Horizon scandal demonstrated that computer evidence isn’t always reliable. In the most serious cases, this can lead to people being wrongly accused and convicted.
Updates to the law
Our submission stresses that existing laws must keep pace with technological advances. Failure to do so may damage public confidence in the justice system.
There should be assurance, regular review and disclosure of a system’s technical standards and performance across the board. This will ensure that the use of computer evidence does not lead to miscarriages of justice.
Definition of computer evidence
The government defines ‘computer evidence’ as evidence generated by software or a device. Its definition does not include evidence captured or recorded by software or a device.
Experts should define these terms in a judicial context. Without this, there may be legal uncertainty about what constitutes computer evidence.
Using AI in court
Our response stressed that AI and generative AI outputs need to be carefully considered when used as evidence in court.
Any changes to the law on evidence should follow existing regulation, best industry practice and automated decision-making transparency requirements, which clearly explain how a computerised decision is made.
In April 2025, the Law Society and Bar Council submitted a joint response to the Home Office’s ransomware consultation.
The consultation proposes new legislation to reduce the threat of ransomware.
New rules about ransomware payments and reporting data breaches will directly impact the legal profession.
We agree with the government that breaking the cycle of ransomware payments is important. However, decisions on whether to pay ransoms must be made on a case-by-case basis. There are often existing contractual and external factors to consider.
In our response, we called for:
- more guidance to support the government’s proposals. This includes guidance for compliance and management processes following a cyberattack
- cyber risk plans to respect the rights of law firms and businesses to control their own data and systems
- more evidence gathering to establish whether banning ransomware payments is an effective way to deter cyber criminals
- support for the legal profession to strengthen cybersecurity and prevent ransomware attacks
- more resources for small and medium-sized enterprises (SMEs) to help them comply with new requirements
Download our full ransomware consultation response (PDF 201 KB)
In February 2025, we responded to the UK government’s consultation on copyright and artificial intelligence.
The consultation proposed ways to encourage AI innovation while still protecting intellectual property (IP) rights.
Changes to the copyright regime directly affect the legal profession. Many solicitors and law firms use AI to deliver services and so may be rights holders themselves.
AI-generated content must be regulated to meet industry standards. Without appropriate regulation, trust in the legal profession and the quality of legal advice risks being undermined.
In our response, we called for:
- legal clarity around AI regulation and copyright law
- proportionate legal consequences for ignoring rights reservations
- a new compensation fund for copyright owners
- greater transparency from AI developers
- greater standardisation of rights reservation protocols and compliance measures
- rights awareness training for small and medium-sized enterprise (SME) AI developers
- AI model training to be allowed on certain publicly available datasets. For example, annual reporting disclosures of public companies
In January 2025, we provided feedback on the Department for Science, Innovation and Technology’s new AI Management Essentials (AIME) tool.
The tool provides practical steps to help organisations implement AI systems and processes. It offers advice for small and medium-sized enterprises (SMEs) and tips for avoiding potential AI risks and harms.
In our response, we emphasised the need for AI training in the legal profession. We believe this tool will be a helpful resource for improving AI literacy in the sector.
We called for:
- improvements to the tool's structure and questions to better support effective AI management
- enhancements to the risk management section. This includes considerations for client requirements and financial impacts
- improvements to the communication section to ensure organisations follow best practices and regulatory requirements
In November 2024, we responded to a consultation on the UK government’s new industrial strategy, Invest 2035.
The strategy sets out the government's 10-year plan for achieving economic growth.
In our response, we emphasised the critical role of lawtech in the legal services sector. We also highlighted the UK's position as a global lawtech leader.
We called for:
- government support to boost the lawtech sector, including grants and schemes for small and medium-sized law firms
- AI and lawtech training through the new growth and skills levy
- support for smaller firms to enhance their cybersecurity measures
In October 2024, we gave evidence to the House of Lords inquiry into interpreting and translation services in the courts.
The inquiry examines the effectiveness of interpreting services in the UK courts and explores the use of technology to improve them.
Head of Justice at the Law Society, Richard Miller, gave evidence to the inquiry.
He highlighted the potential benefits of AI and technology, including:
- automated translation tools to enhance interpreting services and help overcome language barriers
- remote interpreting technology to improve access to qualified interpreters and reduce delays in court proceedings
Richard also warned the UK government to consider:
- how to maintain high standards of quality and accuracy in AI-driven interpreting services
- training and support for interpreters to use AI and technology in their work
- the ethical implications of using AI in interpreting services. For example, data privacy and the potential for bias in AI algorithms
In March 2024, we responded to the Information Commissioner’s Office (ICO) consultation on the lawful basis for web scraping to train generative AI models.
We highlighted the need to ensure responsible and ethical use of web-scraped data for training generative AI.
In our response, we called on the ICO to carefully consider data protection, intellectual property, and regulatory collaboration.
In June 2023, we responded to the UK government’s white paper on regulating the use of artificial intelligence (AI) technology.
The paper sets out how the government will support innovation in AI technologies. It also provides a framework for regulation.
In our response, we urged the government to adopt a nuanced, balanced approach to AI development and application in legal services.
Tasks and decisions that should be handled by humans must be clearly defined, along with those that can be delegated to technology. This will allow the legal profession to benefit fully from new technologies while ensuring accountability is maintained.
We also raised concerns about how AI regulation differs across different sectors and jurisdictions.