Artificial intelligence is becoming a familiar feature of board governance. Until recently, that has largely meant tools for managing board papers, summarising documents or assisting with minute‑taking. A recent Financial Times article asks a more provocative question: should boards be inviting AI into the boardroom itself?
Some technology providers are now offering “AI board members” or digital advisers trained on board materials, external data and market research, capable of offering different specialist viewpoints to directors ahead of meetings.
While the language may sound radical, the reality is more evolutionary than revolutionary.
What is actually happening?
In practice, these AI tools are not sitting at the board table or taking part in live debate, and under English law an AI system cannot be appointed as a director. Instead, boards are using them primarily as preparation tools: helping directors to analyse information, test assumptions, review historic decisions and identify risks or omissions in board papers before a meeting takes place.
For boards dealing with increasing workloads and regulatory complexity, the attraction is obvious. AI can process large volumes of material quickly and prompt questions that might not otherwise be asked. Provided the tools are selected with care, there is nothing wrong with using secure, private AI platforms for board work. Doing so reduces the risk of confidential documents being run through public AI tools, which can create data protection and confidentiality issues.
An opportunity, but not a substitute
Used sensibly, AI has the potential to improve the quality of board decision‑making. It can act as a sounding board, challenge prevailing assumptions and help directors focus on what really matters.
That said, AI must not be a decision‑maker. Under English law, directors owe duties under the Companies Act 2006 and must exercise independent judgment and personally discharge their statutory duties. Such duties, including the duty to promote the success of the company and the duty to exercise reasonable care, skill and diligence, cannot be delegated to a machine.
Put simply, an AI tool may inform a decision, but responsibility for that decision remains firmly with the directors acting collectively as a board.
What are the risks?
As boards integrate AI into their governance processes, they need to understand how these tools work and what risks they introduce. This includes questions around data sources, bias, cybersecurity, record‑keeping and how AI outputs are used and challenged.
Boards should also be mindful of accountability. Allowing AI to influence board decisions to a material degree is risky, not least because these tools tend to flatter the user and to present incorrect information with confidence. Directors will need to be able to explain how they exercised their own judgment rather than simply following an automated recommendation.
Clear governance frameworks and transparency around the role of AI will be essential.
What should boards be doing now?
For English companies, the key point is balance. AI can be a powerful support tool, but it should be treated with caution. Directors should use it to enhance insight, not to dilute responsibility. AI’s arrival in the boardroom is inevitable, but boards must be disciplined about where it genuinely adds value and crystal clear about its limits.