In the past year or so, agentic AI has gone from a niche concept to one of the most talked-about frontiers in automation. It’s not just another chatbot trend. It’s a shift in how we think about software, automation and even the role of ‘digital workforces’ inside organisations.
IBM succinctly defines AI agents as “AI systems that are designed to autonomously make decisions and act, with the ability to pursue complex goals with limited supervision.”
If you’ve only ever used AI tools that respond to your prompts, think of agentic AI as the next evolutionary step – autonomous intelligence that can figure things out and take action.
Instead of waiting for instructions, agentic AI systems are proactive, goal-driven assistants. They can set objectives, plan steps, act and adapt to changes in real time – often with minimal human intervention.
This leap in capability opens incredible possibilities across industries. But it also raises new operational, ethical and governance challenges that organisations can’t afford to ignore.
Traditional automation follows a script you write in advance; agentic AI flips that on its head. You can give an agent a broad goal – for example, “Reduce customer support ticket resolution time and improve satisfaction” – and it will figure out the intermediate steps, find the data it needs, take relevant actions and escalate to a human when necessary.
Imagine a customer support scenario where the AI triages incoming tickets, gathers the data it needs, takes the appropriate actions and escalates the tricky cases to a human – all without a single line of code from your team.
That’s the power – and the challenge – of agentic AI. You’re no longer just managing scripts or bots: you’re supervising a 24/7, never-tiring, highly capable digital coworker.
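The goal-driven loop described above – plan the steps, act on each one, escalate when confidence is low – can be sketched in a few lines. This is a minimal, hypothetical illustration: names such as `plan_steps`, `execute` and `CONFIDENCE_THRESHOLD` are assumptions for the sketch, not a real agent framework's API; in practice a language model would do the planning and the actions would call real systems.

```python
# Hypothetical sketch of an agentic loop: plan, act, escalate on low confidence.
CONFIDENCE_THRESHOLD = 0.8

def plan_steps(goal):
    # A real agent would decompose the goal dynamically; here the plan is fixed.
    return ["classify_ticket", "draft_reply", "send_or_escalate"]

def execute(step, ticket):
    # Placeholder action: returns a (result, confidence) pair.
    return f"{step} done for {ticket['id']}", 0.9

def run_agent(goal, ticket):
    results = []
    for step in plan_steps(goal):
        result, confidence = execute(step, ticket)
        if confidence < CONFIDENCE_THRESHOLD:
            # Low confidence: hand this step to a human instead of acting.
            results.append((step, "escalated to human"))
        else:
            results.append((step, result))
    return results

outcome = run_agent("Reduce ticket resolution time", {"id": "T-123"})
```

The point of the sketch is the shape, not the detail: the human gives a goal once, and the loop decides step by step whether to act or hand over.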
The common thread? These agents don’t just automate a single step – they manage entire workflows, adapt to new information and work alongside humans.
The hype around agentic AI is justified – but so is caution. Unlike traditional automation, agentic workflows are non-deterministic. That means outcomes can vary each time, even with the same input. As you scale up, this introduces unique risks:
These aren’t hypothetical concerns – they’re already emerging in the first rollouts of the technology.
If agentic AI is the next leap in workplace automation, the way we adopt it matters as much as the technology itself. Introducing agentic co-workers into your organisation means new processes, evolved job roles, an updated technology stack and different ways of serving customers. This is a design and change-management challenge, and approaching it as such will reap rewards in efficiency, adoption and capability development.
1. Start small, scale responsibly
Select a contained, non-critical use case with clear success measures. Run pilots with human oversight before handing over more autonomy. Learn and adapt.
2. Keep humans in the loop
Even when the AI can act independently, build escalation protocols for high-stakes decisions. In other words: automation should never mean abdication.
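An escalation protocol for high-stakes decisions can be as simple as a gate that blocks certain actions until a named human approves them. The sketch below is illustrative only – the action names and the `approved_by` parameter are assumptions, and a production gate would sit inside your workflow engine with audit logging attached.

```python
# Hypothetical human-in-the-loop gate: high-stakes actions need explicit approval.
HIGH_STAKES = {"issue_refund", "close_account"}

def requires_approval(action):
    return action in HIGH_STAKES

def perform(action, approved_by=None):
    if requires_approval(action) and approved_by is None:
        # Automation, not abdication: park the action until a human signs off.
        return "pending_human_approval"
    return "executed"
```

Routine actions flow straight through; anything on the high-stakes list stops and waits. The value is less in the code than in the discipline of deciding, up front, which actions belong on that list.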
3. Plan for governance early
Establish cross-functional AI governance committees, with representation from compliance, legal, operations and ethics. Define policies for:
4. Design with people in mind
Map the user journey, including employees, customers and partners. Factor in accessibility standards and cultural differences. Use frameworks that make AI reasoning visible so users can trust decisions.
5. Test like it matters
Use diverse, structured testing with users of different profiles, roles and needs. Include scenario-based ‘red-teaming’ to stress-test against failures, biases or adversarial inputs.
6. Monitor and adapt
Deploy monitoring systems to track AI decisions and performance over time. Hold regular stakeholder reviews and be prepared to pause or roll back functionality if risks emerge.
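One simple monitoring pattern is to record every agent decision and raise a flag when a health signal – such as the escalation rate over a recent window – drifts past an agreed threshold. The sketch below is an assumption-laden illustration, not a production monitoring stack: the class name, window size and threshold are all placeholders for whatever your stakeholder reviews agree on.

```python
# Illustrative monitor: track recent decisions, flag drift in the escalation rate.
from collections import deque

class DecisionMonitor:
    def __init__(self, window=100, max_escalation_rate=0.2):
        # Keep only the most recent `window` decisions.
        self.decisions = deque(maxlen=window)
        self.max_escalation_rate = max_escalation_rate

    def record(self, decision):
        self.decisions.append(decision)

    def escalation_rate(self):
        if not self.decisions:
            return 0.0
        return sum(d == "escalated" for d in self.decisions) / len(self.decisions)

    def needs_review(self):
        # True when escalations exceed the agreed threshold: time to pause or roll back.
        return self.escalation_rate() > self.max_escalation_rate

monitor = DecisionMonitor(window=10, max_escalation_rate=0.2)
for d in ["resolved"] * 7 + ["escalated"] * 3:
    monitor.record(d)
```

A rising escalation rate is exactly the kind of signal a stakeholder review should see early, so the pause-or-roll-back decision is made on evidence rather than anecdote.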
One creative tool we’ve explored is the ‘AI-sona’ – a twist on the design persona concept, used to explain the role and capabilities of AI agents to non-technical stakeholders. This simple reframing can prevent over- or under-estimating what the AI can do and helps teams think about agents as part of the workforce.
The AI maturity curve: know where you stand
Not every organisation is ready to hand over mission-critical decisions to AI. Think of adoption as a maturity curve:
The higher you go, the more value you can unlock – but also the heavier the compliance and risk-management burden.
For organisations, the challenge is not just what’s possible – it’s what’s responsible.
Technology will keep accelerating. Regulations, ethical frameworks and standards are racing to catch up, but they’re not there yet.
That means the burden is on today’s leaders to:
If you’d like to explore how AI-sonas, governance frameworks or pilot programmes can help your organisation adopt agentic AI responsibly, we’d love to connect and share ideas.
Email clientservices@opencastsoftware.com and we’ll come back to you.
This post is based on a presentation by Gordon and Marianne delivered at the TechNExt 2025 festival in Newcastle on 18 June 2025.