How Can Artificial Intelligence Be Used In The Legal Profession?


Artificial Intelligence can help people process large volumes of text and documents, including commercial contracts. AI can also use its logical reasoning capabilities to assist legal processes, which in turn raises a number of questions: how do you determine a fair sentence for a particular person using an AI system, and can you have an AI judge? In this article, we cover the various aspects of AI in the law and the legal profession.

AI can help with both main branches of the law: commercial law, which concerns contracts and other legal documents of a commercial nature, and criminal law, which concerns offences and their prosecution.

Commercial transactions are generally framed and underpinned by a contract between two or more parties, who have agreed to it and to abide by its content, fine print included. AI can help make better sense of contractual obligations, spot loopholes, potential pitfalls and risks, and look out for terms that are unfair to one of the parties. AI can also point out such issues across huge volumes of contracts far faster than humans can. This is especially important in situations like due diligence exercises, where there are a lot of documents to go through, analyse and summarise.
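The kind of contract triage described above can be sketched, in very reduced form, as a pattern-based clause flagger. The risk categories and phrases below are illustrative assumptions, not the method of any real legal-AI product; production systems use trained language models rather than simple regular expressions, but the flag-and-report workflow is similar.

```python
import re

# Illustrative risk patterns a contract-review tool might look for.
# These categories and phrases are assumptions made for this sketch.
RISK_PATTERNS = {
    "unlimited liability": re.compile(r"unlimited liability", re.IGNORECASE),
    "auto-renewal": re.compile(r"automatically renew", re.IGNORECASE),
    "unilateral termination": re.compile(r"may terminate .* at any time", re.IGNORECASE),
}

def flag_clauses(contract_text):
    """Return a list of (category, matched text) pairs found in the contract."""
    findings = []
    for category, pattern in RISK_PATTERNS.items():
        for match in pattern.finditer(contract_text):
            findings.append((category, match.group(0)))
    return findings

sample = ("The Supplier accepts unlimited liability for all losses. "
          "This agreement shall automatically renew each year.")
print(flag_clauses(sample))
```

A human reviewer would then work through the flagged list rather than reading every contract line by line, which is where the time saving in a due diligence exercise comes from.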

I see AI helping the legal profession by performing a role similar to that of a junior lawyer: spotting and flagging issues, suggesting improvements to specific clauses, and checking the relevant statutes and case law for a particular jurisdiction to point out strengths and weaknesses. Issues that are flagged can then be checked by humans and resolved collaboratively. AI can cut down on the manual effort involved in compliance checks and reduce the growing regulatory burdens of the modern world, for example those arising from corporate governance and anti-money laundering obligations, helping everyone comply and carry out routine transactions more easily and cheaply. This benefits everyone: costs go down across the board, and the system can be tuned to focus on the items that really matter.


In criminal law, AI is being used to judge cases, to find better case law and to build a fairer understanding of the issues at hand. When it comes to matters like judgements and sentencing, there have been a number of cases in which the use of AI led to the identification of biases in the AI system itself. AI was previously seen as an unfailingly unbiased, logical system, so why does AI suffer from bias, and where does that bias come from?

Current AI systems are based on learning from training data, learning by example, with less emphasis on the logical reasoning that characterised earlier AI systems. Unfortunately, people who suffered from various biases and prejudices have ended up producing the very data that is now fed to AI systems as training examples. The AI system assumes that its training data is unbiased, yet in practice it is not, and this is something we really must guard against. For example, old judgements were unfortunately biased against people of a particular skin colour or race. That bias trickled down into the training data, and the AI system unwittingly picked up the same trend, since it was trained on biased data in the first place. As a society we must make sure that our AI does not end up learning these biases and applying them in the future, because otherwise we will end up with permanently biased judgements. If unbiased training data is simply not available, then the AI system should have some way of detecting the bias and either flagging it up clearly or, preferably, correcting for it or even eliminating it from its internal model.

I strongly believe that when decisions impact people's lives, AI should be a tool that does not make the final decision and whose output is always double-checked by a human. This stems from the concept of AI as a tool that helps and assists people rather than replacing them.
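One very simple way a system could flag the kind of bias described above is to compare favourable-outcome rates across groups in its historical training data. The sketch below is a toy demographic-parity check on clearly hypothetical records; the field names, the numbers and the 0.8 threshold (borrowed from the "four-fifths rule" used in US employment law) are illustrative assumptions, and real fairness audits use far richer methods.

```python
def favourable_rate(records, group):
    """Fraction of records in the given group with a favourable outcome."""
    group_records = [r for r in records if r["group"] == group]
    favourable = sum(1 for r in group_records if r["favourable_outcome"])
    return favourable / len(group_records)

def disparate_impact(records, group_a, group_b):
    """Ratio of favourable-outcome rates; values well below 1.0 suggest bias."""
    return favourable_rate(records, group_a) / favourable_rate(records, group_b)

# Hypothetical historical records, fabricated purely for this sketch:
# group A receives favourable outcomes 30% of the time, group B 60%.
records = (
    [{"group": "A", "favourable_outcome": True}] * 30
    + [{"group": "A", "favourable_outcome": False}] * 70
    + [{"group": "B", "favourable_outcome": True}] * 60
    + [{"group": "B", "favourable_outcome": False}] * 40
)

ratio = disparate_impact(records, "A", "B")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative threshold
    print("warning: training data may encode bias against group A")
```

A check like this only detects one narrow kind of bias; the point is that a flag is raised for a human to investigate, rather than the system silently learning the historical pattern.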

Bias in AI is a very important topic, because if we do not detect it and protect against it, it will lead to very unfair results and a general backlash against AI. The broader concept of trustworthy AI, which can explain itself and earn users' trust through transparent collaboration, is increasingly being recognised in various countries and has been included in OECD, EU and US guidelines as best practice for AI system development.

AI can help the legal profession in many ways, on both the commercial and the criminal side. I think that the worldwide application of AI will help us have better-written laws, better application and interpretation of those laws, and faster and fairer conflict resolution. Explainable AI that can tell you the basis of a decision and how it came to a particular conclusion will also help us achieve more trustworthy and better AI for everyone.

Angelo Dalli is a serial entrepreneur and super-angel investor with a tech background and AI expertise - and a part-time marathon runner :-)