How Different Sectors Approach AI Adoption

Public Sector: Cautious but Transforming

Public sector organisations (government agencies, public services) often take a cautious, deliberate approach to AI. These bodies operate under intense public scrutiny and accountability, which fosters a culture of prudence and risk-aversion. Failure can lead to public outcry, so experimentation tends to be measured. It’s not that laws outright ban AI – indeed, many governments are exploring AI for better citizen services – but bureaucratic culture and legacy processes slow things down. Government leaders must balance innovation with mandates for transparency, fairness, and security, so AI projects tend to advance slowly and under close oversight.

Cultural barriers loom large. Public sector employees may worry that AI will disrupt civil service jobs or that they lack the skills to work with advanced tech. Comprehensive change management is crucial for successful AI adoption in government, as leaders and employees often face cultural resistance or skill gaps when incorporating AI into daily workflows. In practice, we see many agencies start with pilot programs or public-private partnerships to build confidence and expertise gradually.

Yet change is afoot. Most public sector leaders now see AI as pivotal to the next phase of digital transformation. To turn that belief into reality, agencies are investing in training and upskilling programs, workshops, and ongoing support so that AI isn’t an alien intrusion but “woven into the organisational fabric”. By focusing on a mission-driven narrative (e.g. how AI can improve public safety, health outcomes, or service efficiency) and creating a safe environment for experimentation, some public organisations are beginning to overcome the inertia. The cultural shift is from “we’ve always done it this way” to a mindset of continuous improvement and responsible innovation in service of the public. Progress may be gradual, but with the right leadership and change strategy, even cautious bureaucracies can harness AI in meaningful ways.

Traditional Regulated Companies: Innovation Under Compliance

Heavily regulated industries – think banking, insurance, healthcare, utilities – are by necessity risk-conscious and process-driven. These traditional companies have long histories, complex structures, and strict compliance obligations. But while regulations shape what they must consider (privacy, safety, etc.), it’s often the internal culture and governance that determine how fast they move on AI. Many regulated firms have a culture of prudence: decisions go through layers of approval, and there is low tolerance for failure or anything that might threaten stability. Innovation happens, but incrementally.

For example, a large bank might be legally free to deploy AI in customer service or fraud detection, yet its board and executives could slow-roll the initiative out of an abundance of caution. Directors in these companies often prioritise traditional risks (compliance, financial controls) over “the risks – and opportunities – associated with innovation and disruption”. The result? AI adoption projects compete with legacy priorities and can stall unless clearly tied to risk reduction or efficiency gains. Indeed, a major study of Australian corporate boards found that many boards weren’t prioritising innovation or digital disruption as much as their international counterparts, contributing to a risk-averse corporate culture. While that study was Australia-specific, the pattern rings true in many regulated environments globally.

Yet being regulated doesn’t have to mean being rigid. Some forward-looking banks and hospitals are carving out innovation teams or “digital garages” to experiment with AI in a sandbox environment that doesn’t threaten core operations. Leadership makes the difference – when top executives champion AI with a clear vision and allocate resources, it signals to the organisation that innovation is not only allowed but expected. These companies often take a “trust but verify” approach: adopt AI solutions but layer them with strong governance, ethics reviews, and compliance checks. The cultural challenge is to maintain their strengths (safety, reliability, trust) while shedding the “not invented here” syndrome and fear of change that can plague long-established firms. Those that succeed find they can both comply with regulations and compete on innovation.

Traditional Unregulated Companies: Breaking Out of Legacy Mindsets

Not all traditional companies are in tightly regulated sectors. Manufacturers, retailers, consumer goods firms, and others often face fewer direct regulatory hurdles to AI adoption. In theory, this gives them freedom to innovate; in practice, many still struggle if they carry a legacy culture. An established retail chain or manufacturing giant might not have a regulator looking over its shoulder on AI usage, yet it can be just as slow as a bank to implement AI if its leadership is change-averse or unconvinced of the ROI.

Many of these firms are “born-traditional” companies trying to become more digital, with mixed results. They may have siloed data, outdated IT systems, and employees who’ve “always done it this way.” If past technology projects fizzled out or if the workforce is unfamiliar with AI, skepticism can run high. On the other hand, competitive pressure in unregulated markets can be intense – if your rival uses AI to optimise supply chains or personalise e-commerce, you can’t afford to sit still. This creates a tension between the urgency to innovate and the inertia of old habits.

Here again, culture is the tipping point. Organisations that encourage experimentation and learning from failure tend to adopt AI faster, even without regulatory pressure. A culture of experimentation “eliminates the fear of failure and is essential for fostering innovation… ensuring that employees are willing to embrace new technologies, including AI, to achieve business goals”. Companies that cultivate this mindset – perhaps inspired by lean startup principles or after hiring fresh digital talent – will pilot AI in one part of the business, prove the value, and then scale it. Those stuck in rigid hierarchies, by contrast, might endlessly deliberate or wait for a “perfect” solution.

We often see unregulated incumbents forming partnerships or acquiring startups to jump-start their AI capabilities, effectively importing a more innovative culture. Others create cross-functional “AI task forces” internally to break down silos and spark new ideas. The bottom line: freedom from regulation doesn’t automatically speed up AI adoption – it’s the willingness to challenge legacy thinking and empower teams that does. Traditional companies that break out of their own comfort zones can level the playing field with more tech-native rivals.

Digital-Native Businesses: Innovating by Default

On the opposite end of the spectrum are digital-native businesses – companies born in the internet era (or later) with technology in their DNA. For these organisations, adopting AI isn’t a bold leap; it’s the natural next step. Their cultures typically celebrate innovation, speed, and “fail-fast” learning, which gives them a major head start in extracting value from AI. A recent study contrasts “born-digital” companies with “born-traditional” ones and finds significant differences in mindset. For example, 83% of born-digital companies say that adopting new technologies will drive growth for their business, and 80% have fully digitised their customer journey – versus only 20% of born-traditional companies. In short, digital-natives view AI as central to their strategy, not just a tool to try out.

This makes sense: many digital-natives are tech companies (think e-commerce, fintech, software firms) or disruptive startups attacking old industries. They often build AI into their products from day one – whether it’s a recommendation engine, an intelligent chatbot, or analytics baked into a service. AI is “woven into the very fabric” of these organisations’ business models. Moreover, such companies tend to have flatter hierarchies and a culture that empowers employees to experiment without endless approval chains. Failure is seen as a learning opportunity rather than a career-ending blunder. “Digital natives build a culture where it’s ok that not every investment will pay off, and where failures are just as valuable as successes,” as one analysis noted. This environment makes it much easier to spin up an AI pilot, iterate quickly, and scale what works.

Another advantage: digital-natives often manage talent differently, attracting and nurturing people with strong AI and data skills. They invest in training their teams and can “adapt and retool” their platforms when technology takes a leap forward. In other words, they expect change. Even these companies aren’t immune to challenges – they can still make tech bets that don’t pan out, or hit roadblocks with data quality – but their inherent agility means they rebound faster. For incumbent businesses, the lesson from digital-natives is less about copying specific tech and more about embracing a culture that values agility, continuous learning, and the strategic use of AI at every level. The gap between digital-native and traditional organisations is wide, but it’s crossable if the latter are willing to transform how they think and operate.
