

May 01, 2024

What to consider when scaling AI in public sector (Part two)

Varun Sarin
Huzaifa Chishti


According to Bloomberg Intelligence, the Generative AI (GenAI) market is expected to reach $1.3 trillion by 2030, up from a market size of just $40 billion in 2022. Yet approximately 85% of AI projects fail to deliver on their intended outcomes. The data suggests that while we are excited about the growth and potential of AI, we struggle to turn ideas into reality.

Industries like legal, healthcare, and financial services are rapidly growing their AI capabilities to drive down cost, increase productivity, and organise knowledge. In the public sector, however, a variety of factors have slowed adoption. For one, public trust in AI is sinking, having dropped to 53% (down from 61% five years ago). At the same time, there is a skills gap: 7 out of 10 government bodies state that they do not have the ability to attract and retain staff with the right capabilities.

Moving forward, here are examples of foundational capabilities the public sector needs to address when it comes to scaling AI:

  • Strategy & Roadmap with First Principles

  • Technology with Governance

  • Data with Privacy & Bias

  • Delivery Model with Risk Management

  • Talent with Compliance

  • Adoption & Scaling with Transparency

This blog will cover the first three foundational capabilities, leaving the rest open for future discussion. We will start by taking a look at Strategy & Roadmap.

At Credera, we regularly develop AI roadmaps for clients, and the process begins by examining the organisation's strategy without AI in the picture.

Why? Because it avoids the scenario “AI is the answer, now what’s the question?”

In parallel, we look at the organisation's operating model, again setting AI aside at the start. Once you have both, you are ready to consider first principles for your use of AI. This involves asking: where are you prepared to use AI, and where are you not prepared to use it? These are what we call the First Principles.

The Gates Foundation has published one of the simplest, yet most powerful, examples of what First Principles look like for AI. These are:

  • Adhere to our core values

  • Promote co-design and inclusivity

  • Proceed responsibly

  • Address privacy and security

  • Build for equitable access

  • Ensure transparency

After establishing first principles, evaluate how AI can optimise different facets of your strategy and roadmap, including boosting productivity, enhancing personalisation, and refining knowledge management. We also recommend picking one of these to begin your journey in AI.

Now, moving on to Technology.

Technology is typically the capability that many organisations start with. The problem with starting with a particular solution is that it misses opportunities to fix broader problems. Forget missing the wood for the trees - this is focussing on a specific root of a specific tree while ignoring the much wider forest around you.

What usually ends up happening is that the tools are not sanctioned for use at scale, and therefore end up as dead-end experiments or are built in an environment that has no chance of getting governance approval. Hence, it is imperative to think about technology with governance guardrails in mind.

Choosing between off-the-shelf products and custom development presents a fundamental dilemma for public sector organisations as well. Whilst building custom solutions offers heightened security, it is also time-consuming and reliant on specialised talent.

Now, we will look at the third and final critical capability: Data.

Making sure the data an organisation holds is properly managed is also a challenge. Without evidence drawn from data, organisations can develop policies and services that do not address people’s real concerns.

We have previously helped create user-focused operating models for data analytics that enabled rapid response to market conditions. This brought together teams, tooling, and data, resulting in a more reliable, user-focused, and performant platform.

To implement data best practices, organisations may need specialist roles such as:

  • Data architect: Develops data vision and design to meet user needs

  • Data scientist: Understands existing data and target problems

  • Data engineer: Integrates delivery into business systems

  • Ethicist: Provides ethical judgments on inputs

  • Domain knowledge expert: Understands the deployment environment

  • Engineer: Supports production with dev-ops, infrastructure, and security knowledge

In the end, a great AI team is cross-functional and has a strong grasp of what the end user wants, how secure the environment is, and what technology is needed. It is critical to plan for and develop approaches to encourage adoption from day one.

In closing, we have explored three of the six pivotal points laid out at the beginning of this discussion on scaling AI adoption in the public sector, laying a foundation for a broader conversation. The remaining points hold the potential for future discussion.
