The 2026 executive guide to using ChatGPT, Copilot, Gemini and agentic AI at work

Adam and Theo at Growcreate

By 2026, AI at work stops being an experiment.

ChatGPT, Microsoft Copilot and Google Gemini have moved from novelty tools to everyday infrastructure. Agentic AI – software that can plan, decide and act across multiple steps – is beginning to coordinate whole workflows.

The question for leaders is no longer whether to use AI. It is how AI fits into your operating model, who is accountable, and where you are comfortable letting software act on your behalf.

McKinsey estimates that generative AI could add between $2.6 trillion and $4.4 trillion in value annually, with most of that coming from improvements inside existing workflows rather than net-new products (Source: McKinsey). This is an operating model conversation, not a tooling trend.

This guide sets out practical guidance for SME leaders, CMOs and CTOs on how to:

  • Treat AI as a leadership and governance topic
  • Define decision boundaries and escalation rules for AI
  • Integrate AI with CRM, CMS and BI systems
  • Prepare for agentic AI without increasing risk

Growcreate’s perspective is simple: tools matter, but leadership intent, governance and integration with your systems of record matter more.

Why AI at work is now a leadership question

Most organisations now use some form of AI, yet maturity still lags. EY notes that while generative AI is the top opportunity for technology businesses, around 90% of organisations remain in the earliest stages of AI maturity and need clear leadership structures to manage it (Source: EY).

For SME leaders, the core questions have shifted:

  • Where does AI sit inside our operating model, not just our tech stack?
  • Which decisions should AI inform, which should it execute, and which stay firmly with humans?
  • How do we govern AI at the speed of delivery, without creating bureaucracy that blocks progress?
  • What happens when AI systems stop responding and start acting across workflows?

These are leadership topics. They touch on accountability, trust, operating rhythm and regulatory exposure.

Business value takeaway

Treating AI as an operating model decision creates clarity on where value, accountability and risk sit.

From assistants to agents in 2026

Leaders are comfortable with AI assistants.

  • ChatGPT drafts copy, analyses data and tests scenarios
  • Copilot summarises meetings and surfaces actions inside Microsoft 365
  • Gemini reasons over and connects live information across formats

These tools still work in a request–response pattern. You ask, they respond.

Agentic AI changes the pattern.

Agentic systems pursue goals by breaking them into tasks, calling tools and APIs, updating systems and iterating until they reach an outcome. Wikipedia's entry on agentic AI highlights attributes such as independent action, tool use, memory and orchestration across components, typically driven by large language models (Source: Wikipedia).

In practice, this means:

  • An AI agent monitors inbound leads, qualifies them, updates CRM fields and books meetings
  • A support agent triages tickets, drafts responses and escalates exceptions
  • An operations agent monitors alerts, suggests fixes and triggers runbooks
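
The lead example above can be sketched as a simple goal-pursuit loop: assess state, act through a tool, re-check, repeat until the goal is reached or a step budget runs out. Everything here is illustrative – the tool names (`qualify_lead`, `update_crm`, `book_meeting`) are hypothetical stand-ins for real integrations, and a production agent would typically be driven by an LLM planner rather than hard-coded steps.

```python
# Illustrative agentic loop: pursue a goal by iterating over tool calls.
# Tool names below are hypothetical placeholders, not a real API.

def run_lead_agent(lead, tools, max_steps=5):
    """Work towards 'lead handled' by checking state and acting each step."""
    for _ in range(max_steps):
        if not lead.get("qualified"):
            lead["qualified"] = tools["qualify_lead"](lead)
        elif not lead.get("crm_updated"):
            lead["crm_updated"] = tools["update_crm"](lead)
        elif not lead.get("meeting_booked"):
            lead["meeting_booked"] = tools["book_meeting"](lead)
        else:
            return lead  # goal reached
    return lead  # step budget exhausted; a human reviews the remainder

# Toy tool implementations standing in for CRM and calendar integrations
tools = {
    "qualify_lead": lambda l: l["score"] >= 50,
    "update_crm": lambda l: True,
    "book_meeting": lambda l: l["qualified"],
}
result = run_lead_agent({"score": 72}, tools)
```

The step budget matters: it is one of the simplest controls that stops an agent looping indefinitely on a task it cannot complete.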

Gartner expects agentic capabilities to feature in around one‑third of enterprise software and to autonomously handle a meaningful share of daily business decisions by 2028, while also warning that over 40% of early agentic projects may be cancelled due to unclear value and high cost (Source: Reuters).

Business value takeaway

The move from assistants to agents is a shift in how work flows and who is accountable, not just a new feature set.

How SME leaders should govern AI across the organisation

Effective AI governance does not slow teams – it lets them move quickly with confidence.

Two things are now clear:

  1. Regulation is real
  2. Boards are expected to show control


The EU AI Act introduces a risk‑based framework for AI systems, with obligations that phase in between 2025 and 2027, especially for general‑purpose and high‑risk use cases (Source: European Commission). In the UK, the ICO’s guidance on AI and data protection makes it explicit that organisations remain accountable for the outcomes of AI systems, even when decisions are automated (Source: ICO).

For SME leaders, a workable governance model usually rests on three layers.

1. Foundation models – what intelligence you rely on

You choose underlying models such as GPT‑class systems, Copilot models or Gemini variants.

Key leadership responsibilities:

  • Define which models are allowed for which risk levels
  • Ensure contractual and technical controls cover data residency, security and use of your data for training
  • Align model choice with your industry’s regulatory expectations

OpenAI, for example, states that ChatGPT Enterprise does not train on customer inputs and offers SOC 2‑aligned controls and encryption for data in transit and at rest (Source: OpenAI).

2. Orchestration and tooling – how AI touches systems

This is where AI calls your APIs, CRMs, CMS platforms, analytics tools and custom applications.

Leadership focus:

  • Data quality and access – what data can models read, and at what level of granularity
  • Permission models – how AI respects existing role‑based access rules
  • Integration standards – how AI services are exposed, monitored and versioned
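
One way to make "AI respects existing role-based access rules" concrete is to route every agent read through the same permission check your applications already enforce. A minimal sketch, with illustrative roles and resource names:

```python
# Sketch of permission-aware data access for AI: the agent only reads what
# the calling role is already allowed to see. Roles and resources are
# illustrative, not a real schema.

PERMISSIONS = {
    "sales_agent": {"crm.contact", "crm.notes"},
    "support_agent": {"crm.contact", "tickets"},
}

def read_for_ai(role, resource, store):
    """Return data only if the role's existing permissions allow it."""
    if resource not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not read {resource}")
    return store[resource]

store = {"crm.contact": {"name": "Acme"}, "crm.notes": ["call on Monday"]}
contact = read_for_ai("sales_agent", "crm.contact", store)
```

The design point is that the AI layer inherits permissions rather than defining its own – a second permission model is where most leakage risk comes from.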

This is also where many organisations fall into tool sprawl. A Zapier survey found that 70% of enterprises had not moved beyond basic AI integration and that three‑quarters had experienced at least one negative outcome due to disconnected AI, such as security or compliance issues (Source: Zapier).

3. Governance and intent – who owns outcomes

This is the true leadership layer.

  • Executive sponsor – accountable for business outcomes
  • Business owner – defines value, guardrails and success metrics
  • Technology lead – designs architecture and integration
  • Risk or compliance lead – aligns usage with policy and regulation

BCG highlights that responsible AI frameworks, when applied consistently, both reduce risk and increase adoption and value from AI programmes (Source: BCG). Boards that frame governance as an enabler, not a blocker, see stronger results.

Business value takeaway

Governance that is clear, light and embedded into delivery lets you scale AI faster and with fewer surprises.

ChatGPT, Copilot and Gemini as a platform stack

Rather than asking “Which AI tool should we back?”, a better question is “What role does each platform play in our stack?”

ChatGPT – reasoning and experimentation layer

ChatGPT excels at synthesis, abstraction and scenario testing.

Best used for:

  • Strategic thinking, options analysis and planning support
  • Drafting content, communications and internal documentation
  • Early experimentation with prompts, personas and workflows

ChatGPT Enterprise is positioned as a secure, managed environment for this kind of work, with controls for data isolation, encryption and enterprise administration (Source: OpenAI).

Microsoft Copilot – productivity and collaboration layer

Copilot sits inside Microsoft 365 and is often the safest starting point for scaled AI, because it respects existing permissions across OneDrive, SharePoint, Teams and Outlook.

Microsoft emphasises that Copilot conversations and uploaded files are protected by the same privacy and security commitments that apply across Microsoft services, with options to control training use and retention of shared files (Source: Microsoft).

Best used for:

  • Day‑to‑day productivity and meeting summaries
  • Document drafting, analysis and comparison
  • Cross‑team coordination using data you already store in Microsoft 365

Gemini – context and multimodal insight layer

Gemini is a family of multimodal models from Google, designed to work across text, images, audio, video and code, and is tightly integrated with Google products, including Workspace (Source: Wikipedia).

Best used for:

  • Tasks where information freshness matters
  • Combining different inputs such as screenshots, PDFs and web content
  • Multi‑step actions across Google services and compatible apps

Business value takeaway

Clear platform roles reduce duplication, simplify training and make it easier to explain where sensitive data is, and is not, allowed.

What governance framework enables fast, safe AI delivery?

A good AI governance framework is short, concrete and easy to apply.

At Growcreate we often anchor leadership discussions on four simple questions:

  1. Data access – What data can AI read, write or infer from?
  2. Decision rights – Which decisions can AI recommend, and which can it execute?
  3. Human oversight – Where must humans review, approve or override?
  4. Auditability – How are AI actions logged, explained and reviewed over time?


This aligns well with regulatory expectations. The European Commission’s approach to AI stresses transparency, risk classification and traceability, particularly for higher‑risk and general‑purpose systems (Source: European Commission). The ICO expects organisations to be able to explain how AI systems have reached decisions that affect individuals (Source: ICO).

A practical structure for SMEs looks like this:

  • Policy – short AI usage principles that cover acceptable use, data use and sensitive topics
  • Process – standard design and steps for new AI use cases
  • People – a small cross‑functional group that can approve, pause or retire AI services
  • Platform – a preference for a small number of strategic AI platforms over many disconnected tools

For a deeper look at how governance and auditability support SME adoption, see Growcreate’s guide on AI governance and auditability, which links these controls to ISO 27001‑style practices and SME‑ready oversight structures.

Business value takeaway

Governance frameworks should help teams ship with confidence, not add months to every AI decision.

How to integrate AI with CRM, CMS and BI systems

AI only creates durable value when it connects to systems of record.

These typically include:

  • CRM platforms such as Dynamics, Salesforce or HubSpot
  • CMS and digital experience platforms such as Umbraco and Optimizely
  • Analytics and BI tools such as Power BI or Looker
  • Finance or ERP systems

Common integration patterns

For SME leaders, the technical detail can stay in the background. What matters is recognising the patterns your teams will use.

  1. Retrieval‑augmented assistants – AI reads from your content stores (for example, CRM notes, CMS articles or BI reports) and answers questions with references.
  2. Workflow automation – AI triggers events such as creating tasks, sending alerts or updating fields when conditions are met.
  3. Agentic orchestration – AI agents monitor data in real time, move work between stages and call multiple systems in one flow.
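
The first pattern – a retrieval-augmented assistant – can be illustrated with a toy example. The keyword-overlap scoring below is a deliberate simplification to show the shape of the pattern; real implementations use embeddings for retrieval and an LLM to compose the answer:

```python
# Toy retrieval-augmented pattern: pull matching snippets from content
# stores, then answer with references back to the source records.
# Document IDs and texts are illustrative.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(d["text"].lower().split())), d) for d in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]

def answer_with_refs(query, documents):
    """Assemble context for an answer, keeping references to sources."""
    hits = retrieve(query, documents)
    return {
        "context": " ".join(h["text"] for h in hits),
        "references": [h["id"] for h in hits],
    }

docs = [
    {"id": "crm-101", "text": "renewal pricing discussed with Acme"},
    {"id": "cms-7", "text": "blog post on onboarding"},
]
result = answer_with_refs("Acme renewal pricing", docs)
```

The references are the governance-relevant part: answers that cite the CRM note or BI report they came from are far easier to audit than free-floating text.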


BCG highlights that in revenue operations, agentic AI is already being used to qualify leads, keep CRM records up to date and optimise contact strategies, improving pipeline efficiency and conversion (Source: BCG).

A simple systems view

  • CRM – Assistant‑level AI: summarise accounts, suggest next best actions. Agentic extension: auto‑update fields, manage cadences, schedule follow‑ups. Example metrics: win rate, sales cycle length, data completeness.
  • CMS – Assistant‑level AI: draft content, suggest metadata, summarise pages. Agentic extension: run content tests, route approvals, trigger translations. Example metrics: time to publish, engagement, localisation cost.
  • BI – Assistant‑level AI: explain dashboards, surface anomalies. Agentic extension: trigger alerts, open tickets, adjust thresholds. Example metrics: time to insight, issue resolution time.


Growcreate’s AI development services focus on exactly this kind of integration for Microsoft and .NET estates – adding agents, assistants and automation into existing applications without a rebuild.

Business value takeaway

Integrating AI with CRM, CMS and BI systems turns insight into action and lets you measure value directly in business metrics.

What agentic AI is and how it changes workflows

Agentic AI is not “AI on autopilot”. It is structured autonomy within clear rules.

Most agentic systems share five components:

  1. Goal – what the agent is trying to achieve
  2. Planning – how it breaks work into steps
  3. Tools – the APIs, databases and applications it can call
  4. Controls – what it is allowed to do without human approval
  5. Feedback – how outcomes are logged, reviewed and adjusted
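
The five components can be captured as a small specification object that a team reviews before an agent goes live. Field names and values below are illustrative sketches, not a product API:

```python
# Sketch of the five agent components (goal, planning, tools, controls,
# feedback) as a reviewable specification. All values are illustrative.

from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    goal: str        # what the agent is trying to achieve
    plan: list       # how it breaks work into steps
    tools: list      # APIs, databases and applications it can call
    controls: dict   # what it may do without human approval
    feedback_log: list = field(default_factory=list)  # logged outcomes

    def needs_approval(self, action, amount=0):
        """True when the action falls outside the agent's autonomy limits."""
        over_threshold = amount > self.controls.get("max_amount", 0)
        restricted = action in self.controls.get("always_approve", set())
        return over_threshold or restricted

spec = AgentSpec(
    goal="keep CRM records current",
    plan=["detect stale record", "fetch source data", "update field"],
    tools=["crm_api", "enrichment_api"],
    controls={"max_amount": 0, "always_approve": {"delete_record"}},
)
```

Writing the spec down in one place makes the later governance questions – who approved this goal, these tools, these limits – answerable.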


BCG’s work on agentic AI in enterprise platforms shows agents already orchestrating workflows in areas such as IT service, customer claims and finance, with early adopters reporting faster cycles and lower back‑office cost (Source: BCG).

For SMEs, high‑value agentic patterns tend to appear in three areas:

  • Revenue operations – monitoring inbound leads, updating CRM, nudging account teams
  • Customer support – triaging tickets, drafting answers, escalating edge cases
  • IT and operations – spotting incidents, suggesting fixes, opening and closing tickets

Gartner expects most customer service leaders to be piloting conversational AI within their operations, with AI handling greater portions of case resolution while human roles evolve rather than disappear (Source: Gartner).

Business value takeaway

Agentic AI pays off when it reduces coordination overhead and keeps humans focused on judgement, not status updates.

Decision boundaries and escalation rules for AI agents

When AI agents can act, decision boundaries become essential.

A simple way to think about this is by level of autonomy.

  • Level 0 – Assist – AI suggests, humans decide and act
  • Level 1 – Co‑pilot – AI drafts and pre‑configures, humans approve and submit
  • Level 2 – Agent – AI acts within defined limits, humans review exceptions and patterns

Leaders should define, in plain language:

  • Which workflows are eligible for Level 2 autonomy
  • Monetary or risk thresholds that always require human approval
  • Topics or customer segments where AI is advisory only
  • Escalation rules – for example, after a certain number of failed attempts or low‑confidence answers
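
Plain-language boundaries like these translate naturally into a pre-action check the agent runs before executing anything. The thresholds, topics and limits below are illustrative:

```python
# Sketch of decision boundaries encoded as a pre-action check.
# Thresholds and topic names are illustrative examples only.

RULES = {
    "max_refund": 200,                          # above this, a human approves
    "advisory_only_topics": {"legal", "contract"},  # AI advises, never acts
    "max_failed_attempts": 3,                   # escalate after repeated failures
}

def decide(action, rules=RULES):
    """Return 'act' or an escalation reason for a proposed agent action."""
    if action.get("topic") in rules["advisory_only_topics"]:
        return "escalate: advisory-only topic"
    if action.get("amount", 0) > rules["max_refund"]:
        return "escalate: over monetary threshold"
    if action.get("failed_attempts", 0) >= rules["max_failed_attempts"]:
        return "escalate: repeated low-confidence attempts"
    return "act"

outcome = decide({"topic": "billing", "amount": 50, "failed_attempts": 0})
```

Because the rules are data rather than code, Legal and Risk can read, challenge and sign off the actual boundaries the agent runs under.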

Wavestone’s analysis of AI guardrails found that many large organisations are pausing or shelving AI pilots not due to model limitations, but because Legal and Risk teams cannot sign off in the absence of clear governance and control frameworks (Source: Wavestone). Clear decision boundaries address this directly.

From there, invest early in:

  • Audit trails – logs that show what the agent saw, decided and did
  • Monitoring – alerts when behaviour drifts or error rates change
  • Review cycles – regular human review of agent decisions and rules
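
An audit trail entry can be as simple as a structured record of what the agent saw, decided and did, appended to a log humans review on a regular cycle. The schema below is a sketch, not a standard:

```python
# Sketch of an audit trail for agent actions: each entry records what the
# agent saw, decided and did. The field layout is illustrative.

import datetime
import json

def log_agent_action(log, observed, decision, action):
    """Append a structured, timestamped record of one agent action."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "observed": observed,   # inputs the agent saw
        "decision": decision,   # what it chose, and why
        "action": action,       # what it actually did
    }
    log.append(json.dumps(entry))  # serialised entries are easy to ship to SIEM tools
    return entry

audit_log = []
entry = log_agent_action(
    audit_log,
    observed={"ticket": "T-42", "confidence": 0.91},
    decision="auto-reply within policy",
    action="sent draft response",
)
```

Recording the decision alongside the action is what makes the ICO-style expectation – being able to explain how an outcome was reached – practical to meet.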

Growcreate’s AI governance guidance explores practical logging and oversight patterns aligned with ISO 27001 and GDPR that fit SME teams.

Business value takeaway

Well‑defined decision boundaries and escalation rules turn agentic AI from a risk story into a reliability story.

A practical AI roadmap for leaders to 2026

You do not need a perfect three‑year plan. You do need a clear next step.

Research from MIT CISR shows that financial performance improves at each stage of AI maturity, but that only a minority of organisations have reached higher stages where AI is embedded into core processes (Source: MIT CISR). Oxford Economics similarly finds that enterprise AI use is still nascent and that leaders are distinguished less by experimentation and more by integration and governance (Source: Oxford Economics).

A simple roadmap for SME leadership teams:

0–12 months – fluency and platform consolidation

  • Choose core platforms (for example, Copilot, ChatGPT Enterprise, Gemini)
  • Publish AI usage principles and a light governance process
  • Roll out productivity use cases in Office tools and core collaboration spaces
  • Start measuring time saved and satisfaction across a few teams

12–24 months – integration with systems of record

  • Connect AI to CRM, CMS and analytics systems
  • Prioritise a small number of high‑value, low‑risk workflows in revenue, CX and operations
  • Formalise joint business–technology ownership and a regular AI forum
  • Introduce monitoring, logging and access controls aligned to your regulatory footprint

24–36 months – agentic enablement

  • Design agents for well‑defined processes with clear decision boundaries
  • Put escalation rules, audit trails and human oversight in place from day one
  • Extend AI beyond cost and productivity into resilience, service quality and new propositions
  • Treat AI as part of your core digital and data strategy, not a bolt‑on initiative

Business value takeaway

A staged roadmap lets you prove ROI early while laying the foundations for more autonomous workflows.

How Growcreate helps leaders move from pilots to agentic AI

Growcreate works with SME and mid‑market leadership teams that want AI to feel as dependable as their existing digital platforms.

Our work typically includes:

  • Leadership alignment – clarifying where AI fits in your value chain and operating model
  • Governance and readiness – mapping data, access, risk and regulatory context
  • Platform and architecture – designing AI on top of Microsoft Azure, .NET and your current systems
  • Integration and engineering – connecting AI to CRM, CMS, BI and custom platforms with Azure‑first patterns
  • Agent design – creating agents with clear goals, decision boundaries and escalation paths

You can see how this comes to life in our AI development services and wider AI consulting services, as well as in our guides on secure Azure deployment and AI governance and auditability.

Business value takeaway

A structured partner helps you move from scattered experimentation to measurable, leadership‑grade adoption.

Executive checklist for 2026 and beyond

Before you sign off on your AI strategy for the next planning cycle, test it against these questions:

  • Do we have a clear view of which AI platforms we standardise on and why?
  • Can we point to specific parts of our value chain where AI already affects revenue, cost or risk?
  • Are our decision boundaries, escalation rules and audit trails defined for every agent and high‑impact use case?
  • Do business and technology leaders share ownership of AI outcomes, not just tools?
  • Are we aligning AI adoption with GDPR, the EU AI Act and ICO expectations, rather than reacting late?

If any answer is unclear, that is a practical starting point for leadership discussion.

When you are ready to move from pilots to an AI operating model that fits your organisation, Growcreate is ready to help you plan, govern and deliver.

Is your Umbraco platform end‑of‑life ready?

Your CMS and cloud estate are the foundation for any AI programme.

Take the test to see whether your Umbraco platform is ready for what comes next.