Agentic commerce is reshaping the way people shop online. Instead of manually browsing and comparing products, prices, and delivery options, shoppers are experimenting with delegating decisions to AI-powered agents – systems that can place orders, book services, and make payments on their behalf.
But if an AI agent is going to spend someone’s money, it needs to earn their trust.
Tools like ChatGPT, Gemini, and Claude already influence purchasing decisions. As they move from suggesting to acting, confidence becomes the deciding factor. The question is simple: do people trust these models enough to allow autonomous action?
Right now, that trust is fragile. Many shoppers are open to help from AI, but without clear signals of safety and control, brands face a patchwork of challenges:
- No agreed standard on how merchants authenticate agents acting on a person's behalf
- Limited audit trails, making disputes harder to resolve
- Uncertainty around liability, permissions, and protections
- A fragmented ecosystem of competing protocols and risk models
Many of the insights in this article come from our latest research, Peak Season 2025: The debut of agentic commerce, which explores how consumers are beginning to engage with AI agents and where trust is still missing.
Adoption is happening. But adoption alone does not create scale. Trust does. The challenge for businesses now is clear: design AI-led experiences that are secure, transparent, and easy to understand. If customers are going to say yes, they need to know exactly what they are saying yes to.
Trust and transparency matter most
Consumers already rely on AI in everyday shopping journeys. From voice assistants to image search and virtual try-ons, AI is part of how people browse, compare, and discover products, deals, and delivery options. These tools are useful, but they’re still seen as helpers, not decision makers.
For familiar, low-risk tasks, our research shows that AI already fits comfortably into daily life:
- Comparing product reviews: 25% of consumers
- Finding gift ideas: 24% of consumers
- Finding discount codes: 19% of consumers
- Checking delivery times: 15% of consumers
Consumers are most comfortable delegating everyday essentials, such as household items or weekly food shops. These are predictable, lower-risk purchases where the stakes feel manageable.

Delegating specific, familiar tasks feels manageable. Handing over purchasing authority altogether feels different.
The divide becomes clearer when consumers are asked directly how comfortable they would feel allowing an AI agent to make a purchase on their behalf. Willingness varies significantly by age, with younger shoppers far more open to delegation than older generations.

The generational gap highlights where early adoption is likely to accelerate. Younger consumers are testing the boundaries of delegation, while older shoppers remain more cautious.
Our research shows that trust and transparency are the biggest barriers to adoption:
- 42% of consumers fear losing control over what’s purchased
- 28% of consumers cite a lack of transparency
These concerns point to three simple questions:
- What’s the agent doing?
- Why is it doing that?
- And is it acting in my best interest, or the brand’s?
Trust breaks down when the logic behind a decision feels hidden or overly commercial. If people can’t see how an outcome was reached, they won’t trust the process that got them there.
What it takes to make agentic commerce feel safe
To scale agentic commerce, you’ll need to build a stack that earns trust continuously. And it starts with control. AI agents depend on users being willing to cede some decision-making power, so systems should be designed to balance how much agency users, developers, and agents retain. That’s crucial, with “losing control over what’s purchased” emerging as the top consumer concern.
Agents should operate within clear limits, including spending caps, approval prompts, and easy ways for users to review what was bought, and why.
Our research shows that consumers already have clear financial boundaries in mind when it comes to delegated purchasing. While interest in agentic commerce is growing, willingness to spend is not unlimited.

On average, UK consumers say they would be comfortable allowing AI to spend £204.53 on their behalf, compared with $233 in the US. These figures are meaningful. They show that consumers are not rejecting agent-led purchasing, but they are placing defined limits around it.
The closer a transaction moves toward these thresholds, the more visible control mechanisms need to be. Configurable caps and approval checkpoints are fundamental to earning trust. Decisions also need to be explainable, with override options in place so users can intervene if agents behave unexpectedly. The ability to step in is just as important as the decision itself.
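As an illustration, here is a minimal sketch of how a delegation policy with configurable caps and approval checkpoints might be enforced before an agent completes a purchase. The names and thresholds are hypothetical, not any particular platform’s API:

```typescript
// Illustrative sketch only: names and thresholds are hypothetical,
// not any specific platform's API.
interface DelegationPolicy {
  perPurchaseCapMinor: number;    // e.g. 20453 = £204.53, in minor units
  approvalThresholdMinor: number; // above this, require explicit approval
  paused: boolean;                // user can suspend delegation at any time
}

type Decision =
  | { action: "auto_approve" }
  | { action: "request_approval"; reason: string }
  | { action: "block"; reason: string };

function evaluatePurchase(policy: DelegationPolicy, amountMinor: number): Decision {
  if (policy.paused) {
    return { action: "block", reason: "Delegation is paused by the user" };
  }
  if (amountMinor > policy.perPurchaseCapMinor) {
    return { action: "block", reason: "Amount exceeds the per-purchase cap" };
  }
  if (amountMinor > policy.approvalThresholdMinor) {
    // The closer a purchase gets to the cap, the more visible the control:
    // escalate to an explicit approval prompt instead of acting silently.
    return { action: "request_approval", reason: "High-value purchase" };
  }
  return { action: "auto_approve" };
}
```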
Control is the foundation of trust. People want to stay informed and involved. Permission prompts and spend limits are essential, but transparency matters just as much. Brands have to show how decisions are made, what is prioritized, and what trade-offs are involved. In fact, 28% of consumers say a lack of transparency is one of their biggest concerns. AI systems can build confidence by surfacing their reasoning and the data behind their choices.
Then there’s identity and authentication. Security is central to trust. Many consumers are uneasy about the data implications of AI. Okta’s 2025 Customer Identity Trends Report found that 60% of respondents are either “concerned” or “very concerned” about AI’s impact on their digital privacy and security. Similarly, Omnisend reports that 58% of surveyed consumers are worried about how their personal information is being handled.
That concern is amplified by a shifting fraud landscape. Fraud is becoming more sophisticated, with deepfakes already capable of bypassing traditional ID checks, and Deloitte’s Center for Financial Services predicts generative AI could push US fraud losses to $40 billion by 2027.
Trust and security are inseparable. While both consumers and developers want agentic AI to be safe, many remain unconvinced it will fully meet their privacy expectations.
The answer likely lies in more advanced trust infrastructure: fraud detection, tokenization, biometrics, and models that can flag anomalies early. Trust cannot be layered on at the end. It has to be embedded, through face authentication, real-time ID checks, least-privilege access, and payment partners with proven tokenization and fraud-detection tools.
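Flagging anomalies early can start with something as simple as comparing a proposed purchase against the user’s typical spend. The sketch below is a toy statistical check with made-up thresholds; production fraud models combine many more signals, such as device, merchant, and velocity data:

```typescript
// Toy anomaly flag: thresholds and cold-start limit are hypothetical.
function isAnomalous(historyMinor: number[], amountMinor: number): boolean {
  // Cold start: with little history, fall back to a conservative fixed limit.
  if (historyMinor.length < 5) return amountMinor > 10000;

  const mean = historyMinor.reduce((a, b) => a + b, 0) / historyMinor.length;
  const variance =
    historyMinor.reduce((a, b) => a + (b - mean) ** 2, 0) / historyMinor.length;
  const stdDev = Math.sqrt(variance);

  // Flag anything more than three standard deviations above typical spend.
  return amountMinor > mean + 3 * stdDev;
}
```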
The businesses consumers trust to protect their data are more likely to thrive in agentic environments. Trust here is tied directly to who handles data, payments, and identity behind the scenes. That means working with partners who bring reliable safeguards, and ensuring agents act only on behalf of verified users.
Familiarity builds confidence
Consumers are more likely to trust AI agents provided by familiar brands than general-purpose platforms. That’s largely because the relationship is already established. People have shared their preferences, addresses, and payment details with these businesses – so when those same brands introduce agent-led experiences, it feels like a natural extension, not a new risk.
This is reflected in trust scores. In the US, UK, and Brazil, retailer-provided agents are rated significantly higher than personal AI tools. Years of consistent service, clear accountability, and established support systems create a foundation that newer entrants don’t yet have. Brand-led agents feel more reliable, and therefore easier to trust.
Consistency also matters. Like any trusted partner, AI agents feel more dependable when they demonstrate memory and context. When agents forget preferences or behave inconsistently, confidence can erode quickly.
And while speed and efficiency are important, comfort often depends on how human the interaction feels. Smooth and reliable experiences are fundamental to building trust, especially in high-stakes or high-value moments, like booking international travel or making a large one-off purchase. For many businesses, this is where agentic commerce will be won or lost.
What brands can do today
The first step is visibility. If an agent is acting on a user’s behalf, say so clearly. Label the action, show the reasoning, and use language that reflects the user’s inputs – such as “Selected based on your preferred delivery time and price range.” Clarity around how decisions are made, and what data was used, helps users feel informed rather than bypassed. Silence creates doubt. Transparency builds confidence.
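One way to make that visible is to record every agent action together with the criteria behind it, and render the explanation in the user’s own terms. The structure below is a hypothetical sketch, not a standard schema:

```typescript
// Hypothetical shape for an explainable agent action; field names are ours.
interface AgentActionRecord {
  action: "purchase";
  item: string;
  priceMinor: number;
  criteria: string[]; // the user inputs the decision was based on
  dataUsed: string[]; // what data informed the choice, surfaced for transparency
}

function explain(record: AgentActionRecord): string {
  // Render the reasoning in language that reflects the user's inputs.
  return `Selected based on your ${record.criteria.join(" and ")}.`;
}

const example: AgentActionRecord = {
  action: "purchase",
  item: "Standing desk",
  priceMinor: 18999,
  criteria: ["preferred delivery time", "price range"],
  dataUsed: ["saved delivery preferences", "budget set in profile"],
};

console.log(explain(example));
// "Selected based on your preferred delivery time and price range."
```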
Starting in familiar environments can also accelerate trust. When agentic capabilities are introduced by brands consumers already rely on, it feels like a natural extension of an existing relationship. That’s why many experts recommend embedding agents into apps and experiences where users already have data stored and expectations set. Trust moves faster where credibility already exists.
Optionality matters too. Letting users opt in or out, set rules, or pause permissions can make automation feel safer. Checkpoints, approvals for high-value purchases, and access to agent history give users meaningful ways to stay in control, without losing the benefits of delegation.
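Access to agent history also addresses the audit-trail gap noted earlier: if every action is logged in a reviewable form, disputes become easier to resolve. A minimal, hypothetical sketch:

```typescript
// Sketch of a reviewable agent history; names are illustrative only.
interface AuditEntry {
  timestamp: string; // ISO 8601
  agentId: string;
  action: string;    // e.g. "purchase", "approval_requested"
  amountMinor?: number;
  outcome: "completed" | "approved_by_user" | "declined" | "overridden";
}

const history: AuditEntry[] = [];

function log(entry: AuditEntry): void {
  history.push(entry); // in practice: append-only, tamper-evident storage
}

// Users (and support teams resolving disputes) can review what happened and why.
function recentActivity(limit = 10): AuditEntry[] {
  return history.slice(-limit).reverse();
}
```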
Security needs to be just as visible. Consumers want to know what’s being verified, how it’s protected, and why it matters. That means making privacy cues, tokenization, and authentication steps part of the user experience – not hidden behind it.
Some solutions are already emerging to standardize trust. The Agentic Commerce Protocol (ACP), for example, enables agents to collect payment details and request delegated tokens on a customer’s behalf. It also includes built-in controls like spend caps and user permissions, while keeping the merchant of record model intact – preserving existing processes for refunds, disputes, and accountability.
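To show the general shape of that model, here is an illustrative delegated-token request. The field names are ours, not quoted from the ACP specification, which should be consulted for the actual schema:

```typescript
// Illustrative only: field names are not quoted from the ACP spec.
interface DelegatedTokenRequest {
  paymentMethodRef: string; // reference to the customer's vaulted payment method
  allowance: {
    maxAmountMinor: number; // hard spend cap baked into the token
    currency: string;
    expiresAt: string;      // short-lived by design
    merchantId: string;     // usable only at this merchant of record
  };
}

// The issued token can only be used within its allowance, so the merchant
// of record model (refunds, disputes, accountability) stays intact.
```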
Other network-led initiatives are also gaining traction. Visa Intelligent Commerce (VIC), for example, links AI agents to secure, tokenized credentials and verifies their identity across the payment network using agentic tokens. It also provides spending insights and supports user mandates, helping agents make safer, more personalized decisions.
Mastercard’s Agent Pay (MAP) takes a registration-first approach, requiring agents to be verified before transacting. It, too, secures payments with agentic tokens and ensures that every stakeholder can see who or what initiated a transaction.
It’s also worth designing with the agent in mind, not just the user. A well-branded product might still be invisible to AI if the data behind it is poorly structured. In an agentic world, discoverability is technical. Visibility increasingly depends on how well catalogues are formatted and exposed via APIs.
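One widely used way to make catalogue data legible to agents is schema.org structured data. The sketch below builds a Product description as a TypeScript object that would be serialized into a JSON-LD script tag; the vocabulary is real, the product values are placeholders:

```typescript
// A product described with schema.org structured data, here as a TypeScript
// object that would be serialized into a JSON-LD <script> tag.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example running shoe",
  description: "Lightweight trail running shoe",
  sku: "EX-123",
  offers: {
    "@type": "Offer",
    price: "89.99",
    priceCurrency: "GBP",
    availability: "https://schema.org/InStock",
  },
};
// Well-structured, consistently exposed data like this is what makes a
// product discoverable to an agent, regardless of how strong the brand is.
```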
For early adopters, particularly Millennials and Gen Z, smoother, more integrated agentic journeys may already feel intuitive. Low-risk use cases like tokenized reorders or AI-assisted subscriptions can be a practical starting point. They build familiarity and help trust grow over time.
Together, these approaches reflect a shared goal: embedding trust into the mechanics of agent-led commerce, from identity to execution.
Building for trust in an agentic future
Agentic commerce is still taking shape, but one thing is already clear. Adoption is not the finish line. For AI agents to play a meaningful role in commerce, consumers need to trust them with decisions and, ultimately, money.
That trust is built in layers. People want control over what agents can do, transparency into how decisions are made, and reassurance that agents are acting in their best interest. They want security they can see, safeguards they can understand, clear accountability if something goes wrong, and the reassurance that agents will behave consistently and respectfully. Trust is cumulative. Every interaction either strengthens it or weakens it.
Design plays a critical role. Trusted systems should be explainable and anchored in familiar environments. They need to include identity verification, real-time fraud detection, tokenized payments, and mechanisms for consumer choice and oversight – from spend caps to permission prompts. The most successful agentic experiences will start small, with low-risk use cases, and earn trust gradually over time.
At Checkout.com, we’re building the infrastructure to support this shift. In November 2025, we announced support for ACP, an open standard backed by OpenAI. ACP enables agents to make purchases within AI-native environments like ChatGPT, while preserving merchant control, user permissions, and tokenized security. We also support Google’s Universal Commerce Protocol (UCP) and Agent Payments Protocol (AP2), ensuring merchants can capture high-intent demand across Google’s ecosystem. We’re also working with Visa, Mastercard, and a growing network of enterprise players to help define the standards that will shape secure, intelligent commerce.
Earning trust is what will determine the long-term success of agentic commerce. It has to be considered from the start, and built into every layer of the system.