
Confidence in generative AI is soaring across industries, yet investment in governance, transparency, and ethical oversight remains dangerously inadequate, experts warn.
Cary, North Carolina : October 7, 2025
A sweeping new global study has found that while trust in generative AI (GenAI) is reaching record highs, the safeguards needed to ensure its ethical and secure use are not keeping pace.
The research — conducted jointly by SAS and IDC — surveyed over 2,300 business and technology leaders worldwide and revealed a startling paradox: organizations increasingly trust GenAI to make decisions, write code, and automate workflows, yet most admit they have little to no investment in responsible AI governance or ethical oversight.
The findings highlight a widening global gap between trust and trustworthiness — where enthusiasm for AI’s potential far outpaces institutional readiness to manage its risks.
Rising Trust in GenAI Across Sectors
The study, which covered participants from the U.S., Europe, Asia-Pacific, and the Middle East, found that 48% of respondents say they have “complete trust” in GenAI, compared to just 33% for agent-based AI and 18% for traditional machine learning systems.
More than eight in ten organizations (81%) have already deployed some form of GenAI in daily operations — from customer support chatbots to content generation, health diagnostics, and predictive maintenance.
This widespread adoption signals not only the democratization of AI technology but also a surge in user confidence. For many companies, the conversational and human-like quality of large language models (LLMs) has made GenAI appear more reliable and intelligent than earlier forms of automation.
“Generative AI is redefining trust because it feels human,” said Jennifer Chase, Chief Marketing Officer at SAS. “But the problem is that perceived intelligence doesn’t equal safe intelligence.”
The Safeguard Deficit: A Growing Risk
Despite this growing confidence, only four out of ten organizations surveyed reported active investment in AI governance, auditing tools, or ethical oversight frameworks.
Worse still, many firms admit to using GenAI systems without clear accountability or security controls. Even among respondents with no formal AI safety programs, GenAI is still rated as twice as trustworthy as legacy AI systems.
This overconfidence, experts warn, could be setting the stage for a series of governance and reputational crises.
“There’s a dangerous assumption that new equals better,” said David Schubmehl, Research Director at IDC. “Organizations are deploying GenAI faster than they can ensure it behaves safely or fairly.”
The trust–safeguard gap is most evident in the following areas:
- Data Privacy: 62% of respondents cited concerns about GenAI accessing sensitive or unregulated data.
- Transparency: 57% said they struggle to explain how AI systems make decisions.
- Ethical Use: 56% raised worries about bias, hallucination, and misuse of generated content.
- Security: 49% said they lack tools to monitor for prompt injection or data exfiltration risks.
Despite these warnings, most companies are expanding GenAI projects without pausing to develop internal “AI responsibility teams” or model validation frameworks.
The Business Paradox: Trust Pays, but Only With Oversight
Interestingly, the SAS-IDC study found that organizations investing in responsible AI reap substantially higher returns.
Businesses classified as “trustworthy AI leaders” — those that prioritize explainability, data governance, and fairness — are 60% more likely to double their ROI on AI projects compared to peers who neglect such practices.
This suggests that ethical AI is not only good for society but also good for business.
“When customers and employees trust your AI, adoption follows naturally,” said Reggie Townsend, VP of Data Ethics at SAS. “Trust without safety is fragile. But trust built on transparency lasts.”
Global Disparities: Emerging Economies Lead in AI Optimism
A related global survey by KPMG and the University of Melbourne earlier this year revealed that emerging economies exhibit higher trust in AI than developed nations.
In countries like India, Brazil, and Indonesia, three in five people said they trusted AI systems to make decisions, compared with two in five in North America and Europe.
Analysts believe this reflects differing experiences: in emerging markets, AI is seen as a pathway to inclusion and growth, whereas in advanced economies, the public tends to be more cautious due to privacy, misinformation, and job displacement concerns.
However, public trust does not equal institutional safety. Many of the same countries with high AI optimism still lack comprehensive AI regulation frameworks or ethical oversight committees, underscoring that trust ≠ preparedness.
Why AI Governance Still Lags
Experts identify several reasons for the slow pace of AI safeguards:
- Regulatory uncertainty: With no global consensus on AI laws, organizations lack clear compliance roadmaps.
- Skills shortage: There are too few professionals trained in AI ethics, model auditing, or algorithmic risk management.
- Speed vs. safety: Companies fear losing competitive advantage if they pause to implement governance layers.
- Cost concerns: Building responsible AI infrastructure can be expensive, and short-term ROI pressures often override ethical priorities.
- Cultural inertia: In many firms, AI ethics is treated as a PR issue rather than a core operational requirement.
As a result, many businesses are rushing adoption while leaving critical safety and accountability measures to future planning — a strategy experts compare to “building airplanes in midair.”
The Trust vs. Trust worthiness Dilemma
The global GenAI landscape is now defined by a tension between perceived trust and earned trust.
Users trust AI tools because they deliver speed, fluency, and convenience — but these are surface-level indicators. True trustworthiness requires auditable data practices, bias mitigation, human oversight, and clear accountability.
This mismatch between perception and infrastructure is what scholars call the “trust gap” — a structural weakness that could undermine AI’s long-term social acceptance.
A recent paper from Stanford’s Human-Centered AI Institute warned that “unverified confidence” in GenAI systems could result in widespread “delegation of critical decisions to opaque algorithms”, especially in healthcare, defense, and financial services.
What Needs to Change
To close the trust gap, experts and policymakers are calling for a coordinated push toward responsible AI governance. The SAS-IDC report and AI policy analysts recommend several urgent steps:
- Integrate ethics by design: Embed fairness, explainability, and accountability into the earliest stages of AI model development.
- Establish global AI standards: Similar to ISO certifications, an international framework for AI safety and audit could create consistency.
- Invest in training: Upskill workforces in AI risk management, compliance, and interpretability.
- Mandate transparency: Require companies to disclose AI model usage, data sources, and risk assessments.
- Strengthen regulatory guardrails: Governments must harmonize data privacy laws and develop dedicated AI oversight agencies.
- Foster human–AI collaboration: Encourage systems that keep human decision-makers in the loop, especially for critical sectors.
“It’s not enough for AI to be impressive — it must also be accountable,” said Dr. Monica Rogati, AI ethics consultant and former LinkedIn data scientist. “The faster we trust AI, the faster we must ensure it deserves that trust.”
A Defining Moment for Responsible AI
The surge in AI trust marks a historic inflection point. Yet as this latest study shows, the world’s confidence in AI far outpaces its preparedness to manage it.
If businesses, regulators, and technologists fail to close the safeguards gap, today’s optimism could easily give way to tomorrow’s crisis — from misinformation and bias to systemic misuse.
The message from experts is clear:
Building trustworthy AI isn’t about slowing innovation; it’s about securing its future.
Source;
