Deloitte’s guide to agentic AI stresses governance

    January 28, 2026


A new report from Deloitte warns that businesses are deploying AI agents faster than their safety protocols and safeguards can keep pace, raising serious concerns around security, data privacy, and accountability.

    According to the survey, agentic systems are moving from pilot to production so quickly that traditional risk controls, which were designed for more human-centred operations, are struggling to meet security demands.

Just 21% of organisations have implemented stringent governance or oversight for AI agents, despite the accelerating rate of adoption. Whilst only 23% of companies say they currently use AI agents, that figure is expected to rise to 74% within the next two years, and the share of businesses yet to adopt the technology is expected to fall from 25% to just 5% over the same period.

    Poor governance is the threat

Deloitte does not portray AI agents as inherently dangerous; the real risks, it states, stem from poor context and weak governance. If agents operate as their own entities, their decisions and actions can easily become opaque. Without robust governance, they become difficult to manage and their mistakes almost impossible to insure against.


According to Ali Sarrafi, CEO and founder of Kovant, the answer is governed autonomy. “Well-designed agents with clear boundaries, policies and definitions, managed the same way as an enterprise manages any worker, can move fast on low-risk work inside clear guardrails, but escalate to humans when actions cross defined risk thresholds.”

    “With detailed action logs, observability, and human gatekeeping for high-impact decisions, agents stop being mysterious bots and become systems you can inspect, audit, and trust.”
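To make the governed-autonomy pattern Sarrafi describes concrete, here is a minimal Python sketch: a policy gate that auto-approves low-risk actions, escalates higher-risk ones to a human, and logs every decision. All names here (PolicyGate, risk_threshold, and so on) are illustrative assumptions, not taken from Deloitte’s report or any vendor product.

```python
# Illustrative sketch of "governed autonomy": every proposed action passes
# through a policy gate that auto-approves low-risk work and escalates the
# rest to a human. All names and thresholds are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Action:
    name: str
    risk_score: float  # 0.0 (harmless) .. 1.0 (high impact)

@dataclass
class PolicyGate:
    risk_threshold: float = 0.3          # actions above this need a human
    log: list = field(default_factory=list)

    def submit(self, agent_id: str, action: Action) -> str:
        decision = ("auto-approved" if action.risk_score <= self.risk_threshold
                    else "escalated-to-human")
        # Detailed action log: who, what, when, and why the decision was made.
        self.log.append({
            "agent": agent_id,
            "action": action.name,
            "risk": action.risk_score,
            "decision": decision,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return decision

gate = PolicyGate()
print(gate.submit("billing-agent", Action("send_reminder_email", 0.1)))  # auto-approved
print(gate.submit("billing-agent", Action("issue_refund", 0.8)))         # escalated-to-human
```

The point of the sketch is that the guardrail sits outside the agent: the gate, not the model, decides when a human must sign off, and the log records both outcomes.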

As Deloitte’s report suggests, AI agent adoption is set to accelerate in the coming years, and the upper hand will go to the companies that deploy the technology with visibility and control, not to those that deploy it quickest.

    Why AI agents require robust guardrails

    AI agents may perform well in controlled demos, but they struggle in real-world business settings where systems can be fragmented and data may be inconsistent.

    Sarrafi commented on the unpredictable nature of AI agents in these scenarios. “When an agent is given too much context or scope at once, it becomes prone to hallucinations and unpredictable behaviour.”

    “By contrast, production-grade systems limit the decision and context scope that models work with. They decompose operations into narrower, focused tasks for individual agents, making behaviour more predictable and easier to control. This structure also enables traceability and intervention, so failures can be detected early and escalated appropriately rather than causing cascading errors.”
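A short sketch of the scope-limiting idea Sarrafi describes, assuming a simple allow-list helper: each narrow task receives only the context fields it needs, so no single agent sees the whole operation. The functions and field names below are hypothetical.

```python
# Illustrative sketch of limiting decision and context scope: each narrow
# task gets only the fields on its explicit allow-list.
def scoped(context: dict, allowed: set) -> dict:
    """Return a copy of context restricted to an explicit allow-list."""
    return {k: v for k, v in context.items() if k in allowed}

def validate_items(ctx):   # narrow task: structural checks only
    return all(i["qty"] > 0 for i in ctx["items"])

def price_order(ctx):      # narrow task: pricing only, no customer data
    return sum(i["qty"] * i["unit_price"] for i in ctx["items"])

order = {
    "customer_id": "c-42", "email": "a@example.com",
    "items": [{"sku": "x1", "qty": 2, "unit_price": 9.5}],
}
# Each step sees only what it needs; the customer email never reaches pricing.
ok = validate_items(scoped(order, {"items"}))
total = price_order(scoped(order, {"items"}))
print(ok, total)  # True 19.0
```

Decomposed this way, a failure surfaces at a specific, inspectable step rather than cascading through one large, opaque agent run.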

    Accountability for insurable AI

When agents take real actions in business systems, practices such as keeping detailed action logs change how risk and compliance are viewed. With every action recorded, agents’ activities become clear and evaluable, letting organisations inspect them in detail.

Such transparency is crucial for insurers, who are reluctant to cover opaque AI systems. Detailed records show insurers what agents have done and which controls were involved, making risk easier to assess. With human oversight for risk-critical actions and auditable, replayable workflows, organisations can build systems that insurers and risk teams can properly evaluate.
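As a rough illustration of an auditable, replayable action log, the sketch below records who did what, with which inputs and outputs, and which human (if any) approved it. The schema is invented for illustration, not drawn from any insurer’s requirements.

```python
# Illustrative sketch of an append-only, replayable audit log for agent
# actions. Field names and the record shape are hypothetical.
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self):
        self.records = []

    def record(self, agent, action, inputs, output, approved_by=None):
        self.records.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "inputs": inputs,
            "output": output,
            "approved_by": approved_by,   # human gatekeeper, if any
        })

    def replay(self):
        """Yield recorded steps in order so reviewers can inspect a run."""
        yield from self.records

log = AuditLog()
log.record("claims-agent", "classify_claim", {"claim_id": 17}, "low-risk")
log.record("claims-agent", "approve_payout", {"claim_id": 17, "amount": 120},
           "paid", approved_by="j.doe")
print(json.dumps(list(log.replay()), indent=2))
```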

AAIF standards: a good first step

Shared standards, like those being developed by the Agentic AI Foundation (AAIF), help businesses integrate different agent systems, but current standardisation efforts focus on what is simplest to build, not on what larger organisations need to operate agentic systems safely.

Sarrafi says enterprises require standards that support operational control, including “access permissions, approval workflows for high-impact actions, and auditable logs and observability, so teams can monitor behaviour, investigate incidents, and prove compliance.”
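One way such a standard might be expressed, purely as an assumption for illustration, is a declarative per-agent policy covering the three elements Sarrafi lists: permissions, approval workflows, and observability. The schema below is invented, not part of any AAIF specification.

```python
# Hypothetical per-agent operational-control policy covering permissions,
# approval workflows, and observability. The schema is illustrative only.
AGENT_POLICY = {
    "agent": "procurement-agent",
    "permissions": ["read:catalog", "write:purchase_order"],  # least privilege
    "approval_workflow": {
        "auto_approve_below_usd": 500,        # low-impact actions run freely
        "human_approvers": ["finance-team"],  # high-impact actions escalate
    },
    "observability": {
        "log_every_action": True,
        "retain_days": 365,                   # long enough to support audits
    },
}
```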

Identity and permissions: the first line of defence

    Limiting what AI agents can access and the actions they can perform is important to ensure safety in real business environments. Sarrafi said, “When agents are given broad privileges or too much context, they become unpredictable and pose security or compliance risks.”

    Visibility and monitoring are important to keep agents operating inside limits. Only then can stakeholders have confidence in the adoption of the technology. If every action is logged and manageable, teams can then see what has happened, identify issues, and better understand why events occurred.

    Sarrafi continued, “This visibility, combined with human supervision where it matters, turns AI agents from inscrutable components into systems that can be inspected, replayed and audited. It also allows rapid investigation and correction when issues arise, which boosts trust among operators, risk teams and insurers alike.”
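A minimal sketch of that first line of defence, assuming each agent identity carries an explicit scope list: every tool call is checked, and denials are logged rather than silently dropped. The scope names and helper function are hypothetical.

```python
# Illustrative sketch of identity-scoped enforcement: every tool call
# checks the calling agent's declared scopes before running.
AGENT_SCOPES = {"support-agent": {"read:tickets", "write:replies"}}

def call_tool(agent_id: str, scope: str, tool, *args):
    scopes = AGENT_SCOPES.get(agent_id, set())
    if scope not in scopes:
        # Denials are logged so reviewers can spot out-of-bounds attempts.
        print(f"DENIED {agent_id} -> {scope}")
        raise PermissionError(scope)
    return tool(*args)

call_tool("support-agent", "write:replies", print, "Reply sent.")  # allowed
# call_tool("support-agent", "delete:tickets", print)  # would raise PermissionError
```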

    Deloitte’s blueprint

Deloitte’s strategy for safe AI agent governance sets out defined boundaries for the decisions agentic systems can make. For instance, agents might operate with tiered autonomy: at first they can only view information or offer suggestions; next they can take limited actions, subject to human approval; and once they have proven reliable in low-risk areas, they can be allowed to act autonomously.
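That tiered model might look like the following sketch, where agents graduate from suggesting, to acting with approval, to acting autonomously in low-risk areas only. The tier names and promotion rule are illustrative, not Deloitte’s exact framework.

```python
# Illustrative sketch of tiered autonomy: the tier, not the agent, decides
# whether an action may execute. Names and rules are hypothetical.
from enum import Enum

class Tier(Enum):
    SUGGEST = 1            # may only read information and recommend
    ACT_WITH_APPROVAL = 2  # may act, but a human must sign off
    ACT_AUTONOMOUS = 3     # may act alone, in low-risk areas only

def may_execute(tier: Tier, human_approved: bool, low_risk: bool) -> bool:
    if tier is Tier.SUGGEST:
        return False
    if tier is Tier.ACT_WITH_APPROVAL:
        return human_approved
    return low_risk or human_approved  # autonomous only where risk is low

print(may_execute(Tier.SUGGEST, True, True))             # False
print(may_execute(Tier.ACT_WITH_APPROVAL, False, True))  # False
print(may_execute(Tier.ACT_AUTONOMOUS, False, True))     # True
```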

Deloitte’s “Cyber AI Blueprints” recommend governance layers, embedding policies and compliance capability roadmaps into organisational controls. Ultimately, governance structures that track AI use and risk, together with oversight embedded into daily operations, are essential for safe agentic AI use.

    Readying workforces with training is another aspect of safe governance. Deloitte recommends training employees on what they shouldn’t share with AI systems, what to do if agents go off track, and how to spot unusual, potentially dangerous behaviour. If employees fail to understand how AI systems work and their potential risks, they may weaken security controls, albeit unintentionally.

Robust governance and control, alongside shared AI literacy, are fundamental to the safe deployment and operation of AI agents, enabling secure, compliant, and accountable performance in real-world environments.

    (Image source: “Global Hawk, NASA’s New Remote-Controlled Plane” by NASA Goddard Photo and Video is licensed under CC BY 2.0. )

     

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

    AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.


