AI is already changing how work gets done, but the firms that benefit most will be the ones that treat it as disciplined operational change.
Roland Emmans, Head of Technology Sector & Growth Lending at HSBC, spends his time speaking with technology businesses at every stage of growth, from disruptive start-ups to large corporates.
That vantage point gives him a useful read on where AI is genuinely helping, where businesses are still feeling their way through adoption, and where trust, governance, and human judgement are starting to matter more than the tools themselves.
In this episode of the Sense of Identity podcast, Roland discusses how the AI conversation has matured.
You can listen to this episode on Spotify or Apple Podcasts.
The question is no longer whether businesses should pay attention to AI, but how to adopt these systems in ways that improve productivity, protect sensitive data, and keep decision-making accountable.
AI Has Moved From Curiosity to Operating Reality
One reason Roland’s perspective is useful is that it is grounded in repeated exposure to real operating decisions.
He describes a market where businesses are already working out how deeply AI should be embedded into coding, workflow design, customer service, analysis, and internal operations.
The Adoption Story Is Broad, but Uneven
Recent ONS figures show that more than a quarter of UK businesses reported using at least one AI technology in March 2026, rising to 45% among firms with 250 or more employees.
In UK financial services, the picture is even more advanced, with a Bank of England and FCA survey finding that 75% of respondent firms were already using AI, while another 10% planned to do so within three years.
Boards don't need a grand AI manifesto before they act, but they do need a clear view of where automation, assistance, and machine-generated output are already entering the organisation through approved tools, third-party software, and employee behaviour.
The Hard Part Is Not the Model - It Is the Management
AI may be technical in how it is built, but it becomes a human issue in how it is adopted, supervised, trusted, and improved.
Better Management Usually Means Better Adoption
A recent ONS study found that firms with stronger management practices were more likely to adopt advanced technologies, more likely to plan AI adoption, and more likely to follow through on those plans.
The same research found that the most common barrier to AI adoption was difficulty identifying useful activities and business use cases.
That lines up neatly with Roland’s argument that sensible adoption starts with a specific problem statement, not with a vague ambition to “do AI”.
"You can delegate work to AI, but you can't abdicate responsibility."
Roland Emmans, Head of Technology Sector & Growth Lending, HSBC
The World Economic Forum's Future of Jobs Report says employers expect 39% of workers' core skills to change by 2030, while the OECD's work on AI in the workplace suggests training and worker consultation are associated with better outcomes.
This is why underinvesting in people is a false economy.
If a firm spends heavily on licences, models, and pilots, but leaves managers and frontline staff to improvise, the likely result is patchy use, avoidable risk, and a lot of private scepticism behind the scenes.
Value Appears Fastest Where the Work Is Boring, Repetitive, or Slow
Many of the early wins are unglamorous.
The strongest use cases are the chores that soak up skilled human time without adding much value in return.
Start With Friction, Not Fantasy
Roland points to coding support, document-heavy reconciliation, and routine productivity work inside familiar software as examples of where AI is already earning its keep.
That fits a broader pattern seen in the OECD’s research on changing job tasks, where AI can shift some work away from scanning, sorting, and checking, and towards customer interaction, judgement, and exception-handling.
Don't begin with the most sensitive, strategic, or open-ended process in the business.
Begin where there is friction, repetition, inconsistency, or delay.
Drafting and first-pass creation - where humans still review, refine, and approve.
Matching and reconciliation - where machines can spot patterns and exceptions faster than people.
Triage and prioritisation - where AI can help teams focus attention on what needs human judgement.
Workflow support inside existing tools - where adoption feels less like a disruptive leap and more like an incremental gain.
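The reconciliation and triage patterns above can be sketched in a few lines. This is a minimal, hypothetical illustration (the field names, tolerance, and data are assumptions, not a real system): the machine auto-clears exact matches and routes everything else to a person.

```python
# Minimal sketch of machine-assisted reconciliation with human review.
# Field names, tolerance, and sample data are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Invoice:
    ref: str
    amount: float


@dataclass
class Payment:
    ref: str
    amount: float


def reconcile(invoices, payments, tolerance=0.01):
    """Auto-match invoices to payments; route everything else to human review."""
    payments_by_ref = {p.ref: p for p in payments}
    matched, exceptions = [], []
    for inv in invoices:
        pay = payments_by_ref.get(inv.ref)
        if pay and abs(pay.amount - inv.amount) <= tolerance:
            matched.append((inv, pay))  # safe to auto-clear
        else:
            exceptions.append(inv)      # needs human judgement
    return matched, exceptions


invoices = [Invoice("INV-001", 120.00), Invoice("INV-002", 80.00)]
payments = [Payment("INV-001", 120.00), Payment("INV-002", 79.50)]
matched, exceptions = reconcile(invoices, payments)
print(len(matched), len(exceptions))  # 1 auto-cleared, 1 exception for review
```

The design choice mirrors the point of the list: the machine handles pattern-matching at speed, while anything ambiguous lands in an exception queue where human judgement still sits.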
This also helps organisations build trust internally, because people can see where the tool helps and where human oversight still sits.
The Risks in Regulated Environments Are Practical, Not Abstract
Inside real organisations, the risks sit in data handling, procurement, supplier exposure, poor prompting, weak review, and staff using tools that the business has not properly governed.
Shadow AI Is the Governance Problem Hiding in Plain Sight
If employees believe AI helps them get work done faster, they will look for ways to use it, whether the organisation is ready or not.
That is why Roland’s warning about unofficial use is so important.
When firms do not provide safe, approved routes, staff often route around the gap with consumer tools, copied prompts, and untracked workarounds.
The ICO’s guidance and the NCSC’s guidance on AI and cyber security both reinforce the same broad point: organisations need to think about lawful use of data, security, explainability, and risk management before AI is treated as routine infrastructure.
For regulated firms, that means asking basic but often neglected questions about what data enters the system, where it is stored, who can access outputs, how logs are kept, and what review sits between generated content and real-world action.
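Those review questions can be made concrete with a thin governance wrapper around any model call. The sketch below is hypothetical (`call_model` is a stand-in for a real provider API, and the record fields are illustrative): every output is logged with who asked, what was generated, and a review status, so nothing moves from draft to action without sign-off.

```python
# Hypothetical sketch of a review gate between model output and real-world action.
# `call_model` stands in for any provider API; field names are assumptions.

import datetime

AUDIT_LOG = []  # in practice, an append-only store with access controls


def call_model(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"DRAFT reply to: {prompt}"


def governed_generate(prompt: str, user: str) -> dict:
    """Generate a draft, log it, and mark it as pending human review."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "output": call_model(prompt),
        "status": "pending_review",  # nothing leaves without sign-off
    }
    AUDIT_LOG.append(record)
    return record


def approve(record: dict, reviewer: str) -> dict:
    """Human sign-off: recorded alongside the output it approves."""
    record["status"] = "approved"
    record["reviewer"] = reviewer
    return record


draft = governed_generate("Summarise the client meeting", user="analyst1")
approve(draft, reviewer="team_lead")
print(draft["status"])  # approved
```

The point is not the code but the shape: prompt, output, reviewer, and status live in one auditable record, which is exactly what the "basic but often neglected questions" above are asking for.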
The NCSC's guidelines, in particular, frame AI security as a lifecycle issue that runs from secure design through development, deployment, and ongoing operation and maintenance.
That is especially relevant for businesses leaning on third-party tools, because the Bank of England and FCA survey also found substantial and increasing third-party exposure inside financial services.
In other words, AI risk is not just about what your model does.
It is also about who built it, how it is connected, how it is monitored, and what assumptions staff make about its reliability.
Trust Is Becoming a Critical Commercial Layer
Roland’s comments on deepfakes, impersonation, and synthetic media point to a wider shift.
As generated content becomes cheaper and more convincing, organisations need better ways to prove origin, intent, and accountability.
That shift affects customer communications, executive identity, approval chains, and any environment where a convincing fake could trigger a payment, a disclosure, or reputational damage.
Provenance Will Matter More Than People Think
Emerging approaches such as the C2PA standard are designed to attach provenance information, including the origin and edit history of digital media, to the content itself.
They are not a silver bullet, and they don't remove the need for scepticism, but they do point towards a world where authenticity signals become part of normal digital hygiene.
For regulated sectors, that broader trust layer matters because the cost of a believable fake is rarely confined to embarrassment.
It can spill into fraud losses, service disruption, legal exposure, and customer mistrust.
The Next Competitive Shift May Be Less About Interface, More About Data and Control
One of Roland’s more interesting observations is that a truly agentic future could change what makes software valuable.
If digital assistants increasingly move across systems on a user’s behalf, the premium may shift away from polished interface alone and towards data quality, permissions, interoperability, and auditable actions.
Agentic Systems Raise the Bar for Governance
The UK government’s AI Opportunities Action Plan argues for wider adoption across the economy, not a slower wait-and-see approach.
That raises the pressure on businesses to work out where they want autonomy, where they want assistance, and where they still want hard human sign-off.
In practice, that means access control, role design, auditability, and exception management become strategic capabilities.
The businesses that benefit most are unlikely to be the ones with the noisiest demos.
They are more likely to be the ones with the cleanest data, the clearest rules, and the strongest decision rights.
What Sensible Leaders Should Do Next
Roland’s overall message is neither anti-AI nor blindly enthusiastic.
AI can create value, but only when organisations make deliberate choices about use cases, ownership, controls, and training.
Five Moves That Usually Make Sense
Pick one painful workflow and define the outcome before choosing the tool.
Give staff approved options so productivity does not drift into shadow AI.
Set review rules early for prompts, outputs, approvals, logging, and escalation.
Train managers as well as users, because adoption failure is usually managerial before it is technical.
Measure quality, speed, and risk reduction together, rather than chasing novelty or vanity metrics.
AI doesn't remove the need for leadership.
Curiosity still matters, but so do judgement, restraint, and the willingness to put adults in the room when the technology gets ahead of the process.
FAQs
Is AI About to Wipe Out Most Jobs in UK Tech?
The evidence is more mixed than that.
Recent labour market work points more towards job redesign, task change, and reskilling pressure than a simple one-way collapse in employment.
What Is Shadow AI?
It is the use of AI tools outside approved company controls.
That can include staff pasting work into consumer tools, using personal accounts, or relying on outputs that are not properly reviewed or logged.
Where Should a Regulated Firm Start With AI?
Start with a bounded use case that has clear ownership, low ambiguity, measurable outcomes, and strong review.
Routine drafting, reconciliation, and triage work are often better starting points than high-stakes decisioning.
Does Provenance Technology Solve the Trust Problem?
No. It helps by adding evidence about origin and edits, but it still needs to sit alongside identity checks, secure communications, monitoring, and human oversight.
Paul, CEO and Founder of Beyond Encryption, is an expert in digital identity, fintech, cybersecurity, and business. He developed Webline, a leading UK comparison engine, and now drives Mailock, Nigel, and AssureScore to help regulated businesses secure customer data.