21st January 2026

4 min read

AI Governance Without the Guesswork

A practical approach to responsible AI adoption across organisations

Phil Coleman

Chief Information Officer

Last week, Zak, Iken’s Chief Product Officer, shared some interesting thoughts about the opportunities and considerations of artificial intelligence in case management. That piece centred on a guiding principle: understanding the purpose of incorporating AI into products and processes, and measuring the outcome and impact on productivity.

This week, I want to broaden that out and look at the wider challenges and potential for AI across an organisation.

Artificial intelligence, often framed as the next big vision… or villain, has been discussed for many years. What has changed recently is public awareness and widespread usage. Over the last couple of years, AI tools have become far more accessible, which means organisations, large or small, can no longer ignore how AI will affect them.

Not all companies will be looking to harness these capabilities directly. However, employees in most organisations will end up using some common tools, whether management or security teams like it or not. That reality means organisations should consider how AI will be used, managed, or restricted in order to protect and enable the company, its staff, and its customers.

Learning from recent history

The urgency here is comparable to the “race to remote working” during the early days of the COVID pandemic. Organisations didn’t have the usual months or years to review policies, evaluate options, and plan implementations. Much of that work had to be done quickly, and sometimes retrospectively.

AI presents a similar challenge. Tools such as ChatGPT and Microsoft Copilot are already available to staff and are being used for business purposes, whether organisations feel ready or not. With that in mind, I want to outline the steps Iken has taken to create a safe and secure framework to understand, evaluate, and educate on the use of AI… ensuring it remains a supportive tool rather than a security risk.

“Responsible AI adoption starts with governance, values, and purpose, not tools.”

Phil Coleman

Chief Information Officer

Start with governance, not tools

Iken already had a comprehensive set of policies in place covering information security, data protection, and corporate governance, supported by core values that underpin how we work and the products we deliver. Before reviewing specific tools or technologies, we defined an AI Policy aligned with those values and existing considerations around intellectual property, data protection and privacy, data sovereignty, and reputation.

While that policy-level work is important, it needs to translate into practical guidance. We therefore shared four clear principles to guide how AI should be used day to day:

Purpose

Privacy

Confidentiality

Accuracy

AI should be a supporting resource and should always have appropriate human oversight or validation.

These principles help ensure that any tools we approve can deliver value, extend capability, and do so accurately and reliably.

Creating space for informed experimentation

In recent months, we have introduced an AI Working Group, bringing together colleagues with an interest in AI alongside those responsible for compliance and security. This has provided a forum for constructive discussion and pragmatic decision-making, while remaining aligned with our existing change management processes. Over time, I expect this to become part of normal change management as AI becomes more commonplace.

This approach has allowed us to introduce AI-supported development tools safely to support our delivery teams. We have approved a limited set of generative AI tools for wider use to evaluate their potential to improve productivity, while remaining within existing data classification and protection rules and maintaining our voice and values in all internal and external communication. More recently, we have undertaken a proof of concept exploring how AI could be integrated into our products, which we will share more about in a future post.

Confidence comes from clarity

In summary, AI within your organisation isn't something you can ignore, but it also shouldn't be feared. With appropriate controls, alignment to existing policies and values, and engagement from the right people across the organisation, AI can be evaluated and adopted in a way that enhances productivity while remaining ethical, transparent, and accurate.

“AI isn’t something organisations can ignore… but with the right controls, it also isn’t something to fear.”

Phil Coleman

Chief Information Officer

Learn More 

If you’d like to learn more about how Iken supports local authority legal and governance teams, get in touch or explore our latest case studies from councils across the UK.

© 2025 Iken Business Ltd. First Floor, PS21, 21 Prince Street, Bristol, BS1 4PH, UK
Registered in England & Wales | Company Registration Number: 2776536 | VAT Registration Number: GB 609526041
