Hybrid AI-Human Models Sharpen Fraud Response

Highlights

i2c views AI not as a replacement for human roles but as a tool to enhance decision making, improve operational efficiency and support functions like fraud detection, compliance and customer service.

i2c has built a unified infrastructure to prevent data silos and uses iterative models — like SecureAuth 3.0 — to analyze real-time behavioral signals and deliver dynamic risk scores.

The company prioritizes real-world effectiveness over flashy AI applications, achieving results like higher compliance rates and lower fraud decline rates, all while ensuring human oversight.

In tech innovation, flash can often overshadow function. Yet in payments and banking, artificial intelligence is starting to prove that it doesn’t need to shout to make an impact.

“We’ve been building our AI capabilities for over a decade now,” John Bresnahan, global head of operations at i2c, told PYMNTS. “This isn’t a new phrase for us.”

“We really think of AI as augmented intelligence,” Bresnahan added. “We don’t view it as something that takes over human roles but as something that really enhances them.”

That ethos of viewing AI not as a replacement but as a force multiplier can unlock efficiencies across all elements of operations, from fraud detection to compliance monitoring to customer service analytics.

“Our clients aren’t looking for the novelty of AI,” Bresnahan said. “They’re looking for impact.”

Building AI That Adds Up for Payments and Finance

While much of the financial services sector may still be experimenting with generative AI or grappling with legacy tech, i2c has deployed mature, iterative models that are transforming how risk and service are managed at scale, Bresnahan said.

Today, the company has more than 50 dedicated AI analysts and a cross-functional strategy that integrates data science into engineering, compliance and product design.

Still, as the volume and complexity of financial data grow, one of the biggest challenges AI teams are facing is avoiding data silos. i2c has addressed this by building a unified infrastructure that allows data to move freely across fraud models, dispute workflows, token events and human-generated decisions.
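
The article doesn’t describe i2c’s internal data model, but the idea of a unified signal layer can be sketched. Below is a minimal, hypothetical Python example (the Signal record, publish function and STREAM store are all invented for illustration) of how fraud-model outputs, dispute events, token events and human decisions might be normalized into one shared stream instead of per-team silos.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

# Hypothetical unified signal record: fraud-model scores, dispute events,
# token events and human analyst decisions all normalized into one shape,
# so any downstream model can consume any upstream signal.
@dataclass
class Signal:
    source: str                     # e.g. "fraud_model", "dispute_workflow",
                                    # "token_event" or "human_review"
    account_id: str
    event_type: str                 # e.g. "risk_score", "chargeback_filed"
    payload: dict[str, Any] = field(default_factory=dict)
    decided_by: str = "system"      # "system" or "analyst"; human decisions
                                    # feed the models too, per the article
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

STREAM: list[Signal] = []           # stand-in for a shared event bus

def publish(signal: Signal) -> None:
    """Append to the shared stream instead of a per-team silo."""
    STREAM.append(signal)

# Machine and human decisions land in the same stream:
publish(Signal("fraud_model", "acct-42", "risk_score", {"score": 0.87}))
publish(Signal("human_review", "acct-42", "fraud_confirmed",
               decided_by="analyst"))
```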

“All relevant signals inform our fraud models,” Bresnahan said. “That includes decisions made by AI and by humans.”

One of i2c’s flagship AI tools is SecureAuth 3.0, the latest iteration of its fraud detection model. The system is now responsible for identifying 40% of fraud volume and 30% of fraud value across the platform — all while maintaining a fraud decline rate of 0.5%, Bresnahan said.

“It’s a difficult balance,” he said. “You want to catch more fraud without creating more friction for legitimate users.”

AI excels at crunching hundreds of behavioral signals in real time, from token provisioning and cardholder history to contextual nuances specific to a financial program, to generate a dynamic risk score for every transaction, he said. Unlike static rule-based systems, these scores evolve as new data is ingested, offering the flexibility to detect threats that might otherwise go unnoticed.
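
i2c hasn’t published how those signals are combined, so the following Python sketch is purely illustrative: a logistic combination of a few invented features, with assumed weights and bias, showing how many weak behavioral signals can fold into a single 0-to-1 risk score per transaction.

```python
import math

# Illustrative feature weights; the real model, features and bias are
# not disclosed in the article. Feature values are normalized to 0..1.
WEIGHTS = {
    "new_token_provisioned": 1.4,   # token was provisioned very recently
    "amount_vs_history": 0.9,       # amount is unusual vs. cardholder history
    "program_context_risk": 0.6,    # context specific to this card program
}
BIAS = -3.0

def risk_score(features: dict[str, float]) -> float:
    """Map feature values (0..1) to a 0..1 risk score via logistic regression."""
    z = BIAS + sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

print(round(risk_score({"new_token_provisioned": 1.0,
                        "amount_vs_history": 0.8}), 3))
# 0.293: elevated, but not clear-cut
```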

Another key differentiator is that i2c retrains its AI models every three to four months, compared to industry standards that often lag behind on a 12-month refresh cycle, he said. This helps the company mitigate model drift, a problem that plagues many financial institutions relying on stale or inflexible systems.
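
The article doesn’t say how i2c decides a model has drifted. One common industry technique, not confirmed as i2c’s, is the Population Stability Index (PSI), which compares the score distribution a model was trained on against what it sees in production; values above roughly 0.2 are conventionally read as a signal to retrain. A sketch:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score samples."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range scores
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)         # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.beta(2, 8, 10_000)          # distribution at training time
live_scores = rng.beta(3, 7, 10_000)           # live traffic has shifted
if psi(train_scores, live_scores) > 0.2:       # conventional PSI threshold
    print("drift detected: schedule retraining")
```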

“Going forward, that cycle might get even shorter,” Bresnahan said. “And we’re always focused on explainability. If a decision can’t be explained, it doesn’t belong in our system.”

The AI risk score model doesn’t just sit atop i2c’s fraud stack; it collaborates with other systems, automating responses for high-risk events while ensuring that gray areas are routed to human analysts.

“If the decision isn’t clear-cut, it escalates to a human,” Bresnahan said. “That’s where human judgment becomes more important.”

These layered, hybrid workflows are the heart of i2c’s approach to what Bresnahan called “commonsense intelligence.” It’s not about AI acting alone, but about AI highlighting where action is needed — and letting human experts decide how best to proceed.
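
In code, that layered workflow reduces to a pair of thresholds. The cutoffs below are invented for illustration, since i2c hasn’t disclosed its real ones, but they capture the shape of the approach: automate the clear-cut ends of the score range and queue everything in between for an analyst.

```python
# Hypothetical thresholds for a hybrid AI-human decision workflow.
AUTO_APPROVE_BELOW = 0.10
AUTO_DECLINE_ABOVE = 0.90

def route(score: float) -> str:
    if score >= AUTO_DECLINE_ABOVE:
        return "auto_decline"       # high-risk: automated response
    if score <= AUTO_APPROVE_BELOW:
        return "auto_approve"       # clearly legitimate: no added friction
    return "human_review"           # gray area: human judgment decides

for s in (0.03, 0.55, 0.97):
    print(f"score={s:.2f} -> {route(s)}")
# score=0.03 -> auto_approve
# score=0.55 -> human_review
# score=0.97 -> auto_decline
```

The width of that middle band is the operational lever: widening it sends more transactions to analysts, trading workload for fewer automated mistakes.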

Quiet Confidence, Real Results

Ultimately, AI is not about grabbing headlines. It’s about building systems that work.

“Our compliance has gone up tremendously [thanks to AI],” Bresnahan said, adding that in addition to the compliance benefits, i2c has been able to improve the service delivered to its clients’ customers.

AI allows the company to identify issues in real time and apply corrective actions immediately, he said. When combined with thorough human oversight, this creates a feedback loop that ensures no compliance gaps go unnoticed — and no agents or systems go untrained.

Looking ahead, Bresnahan said he sees two primary areas of focus: expanding the breadth of AI signals used in decision-making and shortening model retraining cycles even further.

As more data sources become available and regulatory expectations grow more stringent, the demand for AI that’s powerful and explainable will likely increase.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.