AI adoption and ambition in the public sector

7 insights for moving from experimentation to trustworthy transformation

As public sector organisations move from AI experimentation to operational reality, we explore how governments can deploy AI responsibly, at scale and with trust, drawing on insights from senior leaders in Scottish Government and Welsh Government.

Public sector organisations are moving rapidly from AI experimentation toward operational adoption and are now confronting the same question: how do we unlock AI’s potential while protecting trust, equity and inclusion?  

The central question is no longer ‘should we use AI?’ but ‘how do we deploy it responsibly, at scale, and with trust?’.

At a recent panel discussion with senior digital leaders from Scottish Government and Welsh Government, the emerging answer was clear – AI is no longer about experimentation at the edges. It’s now about readiness: organisational, skills, ethical and informational.

Here we highlight the key insights from that conversation, offering a strategic lens for government technology leaders maturing their AI plans.

The full panel discussion – featuring Cassandra Bisset (Objective Information Intelligence), Eilidh McLaughlin (Scottish Government) and Glyn Jones (Welsh Government) – can be viewed here.


1. Understanding your organisation’s AI persona

Governments are not approaching AI from a single vantage point. Panellists discussed three organisational ‘personas’ – highlighted in recent Gartner research – that are shaping AI ambition:

  1. Productivity pursuers: the most common persona today – organisations aiming to relieve workforce burden, streamline processes and respond to rising service demand with the same (or fewer) resources.
  2. Not in front of my customers: cautious service stewards – teams unwilling to place AI directly in front of citizens until trust, ethics and infrastructure are in place and proven.
  3. AI-first/everywhere: teams actively seeking AI-led transformation, where automation and augmentation redefine the operating model and reimagine services.

Many public sector organisations sit in category two – ‘Not in front of my customers’. Though excited about AI productivity gains, they are hesitant to expose AI-powered services to the public until they can demonstrate safety, fairness and reliability.


2. The productivity imperative and its limits

Both speakers from Scottish Government and Welsh Government described a similar internal driver – rising expectations paired with a pressurised civil service workforce. With more scrutiny, more parliamentary activity, and more demand for responsive services, AI becomes a means to:

  • free frontline staff from administrative burden;
  • speed policy drafting, research and analysis;
  • accelerate casework and decision support; and
  • improve consistency in routine but high-volume tasks.

But this is not about replacing people.

As the panel emphasised, the value of the public sector is human value: judgement, empathy, contextual knowledge, and situational awareness. The goal is not workforce reduction. It is to elevate human judgement by removing administrative drag.

AI’s role is to elevate staff into work that matters, not to degrade their expertise.


3. Trust is the non-negotiable foundation

If one theme dominated the panel, it was that trust is the critical input to, and enabler of, any successful government AI programme. Trust is a prerequisite.

Trust is required in three places for successful AI adoption:

  1. Public trust: citizens must trust how their information is used, how decisions are made, and how AI interacts with their personal circumstances. Without transparency and accountability, uptake will stall.
  2. Workforce trust: civil servants must trust that AI is a tool, not a threat. They need clarity on what data is safe to use, how outputs are audited, the limits of automated decision-making, and how accountability remains firmly with humans.
  3. Data trust: as Eilidh McLaughlin noted in reference to the Scottish Government’s work on race and ethnicity data, AI is reflective of the dataset behind it. Historical technical structures and legacy systems can encode bias unintentionally. AI ethics is now data ethics, at scale.

4. Skills, confidence and the AI-ready workforce

The panel described an observable gap between two groups inside government: those energised by AI tools, and those anxious about using them incorrectly – or at all.

Investing in foundational AI education, and in more specialised digital professions across the whole organisation rather than just IT, has been key to scaling efficiently around the productivity promise of AI. Skills programmes must target both the enthusiastic early adopters and the hesitant majority, recognising that cross-organisational working groups will be important to the pace of change.

The role of knowledge and information management is also expanding. Information is the fuel for AI-generated insight.

Future roles on the horizon include:

  • prompt specialists;
  • AI curation and dataset stewards;
  • algorithmic accountability advisers; and
  • AI-enabled records and FOI specialists.

AI literacy is becoming a baseline competency across the public service.


5. Data scaffolding: the unseen enabler of AI success

Perhaps the strongest message for technology leaders – AI cannot outperform the quality and structure of the information behind it. AI success depends on high-quality, structured, well-governed information.

Governments have spent decades building structured data warehouses, yet estimates indicate that around 80% of the information held in an organisation is unstructured, decentralised and often poorly described.

The panel offered three strategic imperatives:

  1. Integrate records management and AI pipelines: AI must be fed by secure, well-curated repositories, not uncontrolled file systems or personal drives.
  2. Automate the mundane to elevate the critical: Generative AI is now capable of high-quality ‘dull AI’ tasks such as FOI document discovery, suggested redactions, summarisation and transcription, archival preparation, and metadata generation. Automating these tasks frees teams to focus on decisions rather than the historically manual work of document handling and preparation.
  3. Maintain security, provenance and permissioning: As records are surfaced to AI models, governments must maintain auditability, access control and clear rules on what information can be used. This is where technology scaffolding becomes essential; without it, governments risk serious governance failures that undermine trust both internally and externally.
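To make the ‘dull AI’ idea concrete, here is a minimal sketch of a suggested-redactions workflow in Python. It is purely illustrative – the regex patterns, function names and masking format are assumptions, not any tooling discussed by the panel, and real redaction systems use far richer classifiers – but it shows the human-in-the-loop shape: the system suggests, a person reviews and approves, and only then are spans masked.

```python
import re

# Illustrative patterns only – stand-ins for the kinds of sensitive spans a
# real system would flag as 'suggested redactions' for human review.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_PHONE": re.compile(r"\b(?:0|\+44)\d[\d ]{8,11}\b"),
}

def suggest_redactions(text: str) -> list[dict]:
    """Return candidate redactions for human review, not automatic removal."""
    suggestions = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            suggestions.append({"label": label, "start": match.start(),
                                "end": match.end(), "text": match.group()})
    return suggestions

def apply_redactions(text: str, approved: list[dict]) -> str:
    """Mask only the approved spans, working backwards so offsets stay valid."""
    for s in sorted(approved, key=lambda s: s["start"], reverse=True):
        text = text[:s["start"]] + f"[REDACTED:{s['label']}]" + text[s["end"]:]
    return text
```

Keeping suggestion and application as separate steps mirrors the panel’s point: AI handles the document handling, while the decision to redact stays with a person.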

6. The future day-in-the-life of a civil servant

The panellists were asked what the workplace could look like in five to ten years if AI is implemented well and we ‘get it right’. The short answer was a more motivated, more effective, more connected workforce. 

Imagine a day that starts with an AI-curated list of priorities, where AI suggests the key partners, stakeholders and experts to contact to complete them. Where AI drafts memos, speeches and analysis using approved internal content, summarises meetings, applies the correct governance steps and files outputs automatically. And where you can be connected instantly to knowledge held across government, not just in your own team.

This all supports faster policy cycles, increased transparency, and a less burdened workforce.


7. Evolving, not reinventing, governance

Governments do not need to reinvent governance but rather adapt and scale it.

Key principles emerging include:

  • Risk-tiered governance for different AI use cases: lightweight controls for low-risk use, robust oversight for high-risk applications.
  • Stronger audit trails: capturing prompts, outputs, and human-in-the-loop decisions.
  • Reusing what exists: applying existing Data Protection Impact Assessments (DPIAs), ethical frameworks, FOI standards and records policies to AI capability.
  • AI-augmented compliance: where AI assists with evidence gathering, classification, policy assurance and processes such as redaction for sensitive data masking.

AI governance must be practical, proportionate and embedded.


Conclusion: AI becomes part of the fabric

The panel ended with a unanimous vision: AI should become as embedded and unremarkable as the internet – part of the organisational fabric, not a separate initiative. This leads to AI powering better decisions and better outcomes.

To get there, governments must invest in skills and confidence, build robust data foundations, embed ethics and transparency, create communities of practice across sectors, and place trust – from both the public and the workforce – at the centre of their AI strategies.

The opportunity is huge when AI is harnessed for the right initiatives, not just personal productivity. The challenge is equally significant: ensuring AI enhances trust, delivers financial benefits and closes gaps in service delivery. Human-centred design has a critical role in focusing that investment, and in clarifying where to elevate people, process and technology across an organisation.

Public sector technology leaders now stand at the pivotal moment to shape the future of public service.

The biggest risk isn’t using AI; it’s using it without governance!


Find out how Objective can help support your future AI strategy or speak to a member of the team today.