
In the early days of digital identity, trust was a starting point: you logged in once, verified your credentials and got on with your journey. But in today’s rapidly evolving threat landscape, where AI-generated deepfakes, identity fraud and third-party breaches are growing exponentially, trust must be earned continuously.
Adam Preis, director of product-solution marketing at Ping Identity, reveals how digital trust is evolving, and what it takes to build meaningful, secure digital relationships with customers, partners or AI agents.
How has the concept of digital identity evolved?
Digital identity has transformed dramatically over the past two decades.
Initially, identity and access management (IAM) was simply about allowing the workforce to log into systems and applications. But around 12 years ago, the introduction of identity federation standards revolutionised access by enabling logins through third-party identity providers, such as Google and Facebook.
Put simply, users no longer needed to create, store and use unique credentials for each application. This shift enabled more seamless and secure access, paving the way for frictionless digital experiences and expanding the role of identity in consumer-facing applications.
How do you define ‘verified trust’ in today’s digital landscape and why does it matter?
Verified trust has shifted from a ‘trust then verify’ approach to a ‘verify first, then trust’ model.
With the emergence of sophisticated attack vectors (including fraud, deepfake content and AI-powered impersonation), organisations must be able to verify a user’s identity with greater certainty, especially as risk levels fluctuate across the user journey. These evolving threats make fraud detection, response and prevention significantly more challenging.
In the age of AI, verification should involve matching identities against trusted ‘anchors’ (such as government digital identity schemes, bank ID systems or decentralised wallet-based verified-credential schemes) and implementing robust liveness detection. This means not only verifying credentials at onboarding but also continuously assessing risk signals throughout the user journey and introducing step-up identity verification when needed.
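As a rough illustration, this kind of continuous, risk-based step-up decision can be sketched as a simple scoring policy. The signal names, weights and threshold below are all invented for the example; a real deployment would combine far richer first- and third-party signals in real time:

```python
# Hypothetical weights for a handful of risk signals; a production system
# would draw on many more real-time signals than this.
SIGNAL_WEIGHTS = {
    "new_device": 30,
    "impossible_travel": 50,
    "failed_liveness_check": 60,
    "high_value_transaction": 25,
}

STEP_UP_THRESHOLD = 50  # illustrative cut-off, tuned per deployment


def risk_score(signals: set[str]) -> int:
    """Sum the weights of the risk signals observed for this request."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)


def requires_step_up(signals: set[str]) -> bool:
    """Decide whether to trigger step-up identity verification,
    e.g. re-checking against a trust anchor plus liveness detection."""
    return risk_score(signals) >= STEP_UP_THRESHOLD


# A routine action from a known context passes silently...
assert not requires_step_up({"high_value_transaction"})
# ...while a new device combined with impossible travel triggers step-up.
assert requires_step_up({"new_device", "impossible_travel"})
```

The point of the sketch is the shape of the decision: the strength of verification scales with the risk observed at that moment in the journey, rather than being fixed once at login.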
Today, identity is quickly becoming the security perimeter, whether you’re a worker logging into secure corporate systems or a consumer accessing sensitive services such as banking or healthcare.
What role does identity play in customer retention and churn?
IAM directly impacts customer experience because poor verification processes create friction that frustrates users. In fact, complex onboarding, multiple login requirements and inconsistent authentication across channels, such as mobile apps, web platforms, call centres and hybrid physical-digital touchpoints, can drive customers away. Businesses that can create smooth, secure and flexible identity experiences, with options for verification and clear data-privacy controls, are more likely to retain customers.
How do AI-generated deepfakes challenge traditional IAM strategies?
AI-generated deepfakes have dramatically increased the sophistication and prevalence of identity fraud. Malicious actors can now create highly convincing fake identities, voices and even entire video conferences. A notable example is a Hong Kong case where criminals used deepfake technology to convince executives to transfer $25m by impersonating company leadership.
Traditional IAM strategies that rely on static credentials or simple multi-factor authentication are therefore no longer sufficient. Organisations must instead consider advanced verification technologies that can detect, respond to and prevent AI-generated fakes, drawing on a wide range of first- and third-party risk signals in real time across the entire user journey.
How should organisations approach the rise of AI agents with digital identities?
Organisations must treat AI agents as entities with distinct identities that act on behalf of users, requiring clear, user-granted authorisation to access specific identity attributes and personal data for narrowly defined tasks within a limited timeframe.
This should include giving AI agents specific, time-limited access to defined data sets, implementing granular authorisation controls, securing API access for AI interactions and continuously monitoring and verifying AI-agent activities.
Customers should also have control, such as granting an AI agent access to only a limited portion of their profile for a specific task, for a period of time. As an example, identity-enabled AI agents in consumer banking will allow these agents to perform specific tasks, such as finding better mortgage rates or recommending savings products, all while respecting the customer’s defined permissions.
More than anything, you should create a framework that enables AI agents to operate effectively while maintaining robust security and privacy controls.
How can businesses introduce verification without adding friction?
The key is to understand that poorly designed identity experiences can drive customers away. Businesses that recognise this are the ones successfully finding the right balance – it’s not about removing all friction but using it wisely and only where it adds value or trust.
Strategies such as introducing delegated authorisation and policy-based access control are helping to redefine this balance. For example, if I want to give my daughter access to my current account but limit her to a certain spending amount, I should be able to do that securely and seamlessly.
This concept can be extended to more complex scenarios, such as delegating authority through a power-of-attorney arrangement in healthcare or legal contexts. It also applies to, for instance, mortgage or insurance processes, enabling seamless identity delegation between customers, brokers and providers without requiring users to navigate cumbersome verification steps.
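A toy policy check along those lines, using the current-account example with illustrative names and a per-transaction cap, could look like:

```python
from dataclasses import dataclass


@dataclass
class Delegation:
    """A policy-style delegation: the account holder grants a delegate
    limited rights over one account (names and fields are illustrative)."""
    delegator: str
    delegate: str
    account: str
    allowed_actions: frozenset[str]  # e.g. {"pay"}
    spend_limit: float               # per-transaction cap


def authorise(d: Delegation, user: str, account: str,
              action: str, amount: float) -> bool:
    """Policy check: right delegate, right account, permitted action,
    and the amount stays within the delegated spending limit."""
    return (user == d.delegate
            and account == d.account
            and action in d.allowed_actions
            and amount <= d.spend_limit)


# I delegate payments on my current account to my daughter, capped at 50.
d = Delegation("me", "daughter", "current-acct", frozenset({"pay"}), 50.0)
assert authorise(d, "daughter", "current-acct", "pay", 25.0)           # within limit
assert not authorise(d, "daughter", "current-acct", "pay", 200.0)      # over the cap
assert not authorise(d, "daughter", "current-acct", "transfer", 10.0)  # not granted
```

The same shape generalises to the broker and power-of-attorney scenarios: the delegation object carries who may act, on what, and within which bounds, and the policy engine enforces it without the user repeating verification steps.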
What’s next for digital trust – are we moving towards a trust-as-a-service model?
We’re seeing the early signs of a shift in that direction. Many organisations, particularly in the financial sector, are starting to realise that their trust and identity-verification capabilities can be monetised.
Some banks in Europe are exploring decentralised identity, particularly wallet-based technologies. These systems enable users to hold verified credentials on their devices, proving their identity during transactions without relying on a centralised database. With a trust anchor in place, users can, for example, confirm their age or address, without revealing personal details. This provides them with greater control over how they share and revoke access to their credentials while reducing the risk of data breaches and fraud.
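A heavily simplified sketch of that selective-disclosure idea follows. Real schemes in this space (such as SD-JWT) use salted claim digests under an issuer's asymmetric signature; here a shared symmetric key stands in for the issuer's signature, purely to show the wallet revealing only a subset of individually tagged claims:

```python
import hashlib
import hmac
import json
import secrets


def _claim_tag(key: bytes, name: str, value, salt: str) -> str:
    """Bind one claim (name, value, salt) to the issuer's key."""
    msg = json.dumps([name, value, salt]).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()


# --- Issuer (a trust anchor, e.g. a bank) ---
issuer_key = secrets.token_bytes(32)  # stand-in for an asymmetric signing key


def issue(claims: dict) -> dict:
    """Tag each claim separately so the holder can disclose any subset."""
    cred = {}
    for name, value in claims.items():
        salt = secrets.token_hex(8)
        cred[name] = {"value": value, "salt": salt,
                      "tag": _claim_tag(issuer_key, name, value, salt)}
    return cred


# --- Holder (the user's wallet) ---
def present(cred: dict, disclose: set[str]) -> dict:
    """Reveal only the requested claims; the rest never leave the device."""
    return {n: c for n, c in cred.items() if n in disclose}


# --- Verifier (the relying party) ---
def verify(presentation: dict) -> bool:
    return all(c["tag"] == _claim_tag(issuer_key, n, c["value"], c["salt"])
               for n, c in presentation.items())


wallet = issue({"over_18": True, "address": "12 High St", "name": "A. N. Other"})
# Prove age only; address and name are never sent to the verifier.
proof = present(wallet, {"over_18"})
assert verify(proof) and proof["over_18"]["value"] is True
assert "address" not in proof
```

The design choice worth noting is that each claim is tagged independently: the holder decides per transaction which claims to release, and the verifier can check authenticity of exactly what was disclosed, nothing more.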
In this emerging model, banks and other trusted institutions can function as trust anchors. That means they can both verify a person’s identity and offer those verification services to third parties. This enables new systems that deliver verification-as-a-service.
Regulatory momentum is accelerating this trend. In Europe, new regulations such as eIDAS 2.0 and the European Digital Identity Wallet will soon come into force. By November 2027, businesses must be able to accept the digital wallet for customer identification and authentication.
This is going to be a huge push towards decentralised identity and will likely open the door for trust-as-a-service models. Over the next three years, as both the public and private sectors align on this vision, we will see a shift towards more secure, seamless experiences. Verified trust will play a central role in safeguarding consumer, workforce and third-party identities, driving more secure and frictionless interactions across all digital touchpoints.
For more information please visit pingidentity.com