[Audio] Good morning. Today I'll be presenting our SSHRC research proposal titled 'Transforming Financial Services Through AI-Enabled Agile Product Management: A Practical Framework for Industry Implementation.' This research addresses a fundamental transformation challenge facing the financial services sector - one that has significant implications for both industry practice and academic understanding. The integration of artificial intelligence in business processes represents what multiple systematic reviews have identified as a fundamental shift in how organizations operate and deliver value. However, despite significant investments and clear potential benefits, financial institutions face substantial challenges in successfully implementing AI technologies. These challenges are particularly acute in the banking sector, where regulatory requirements, legacy systems, and the need for robust risk management create a unique implementation environment.
Overview. Research Context & Motivation; Key Research Gaps (Implementation Framework Gap, Project Management Challenge, Impact & Measurement Gap); Research Objectives; Methodology; Success Metrics; Conclusion.
[Audio] Recent systematic reviews of digital transformation literature have revealed a complex landscape of challenges and opportunities. Drawing from Hanelt et al.'s 2021 systematic review of 279 studies, we observe that while organizations are heavily investing in AI technologies, successful implementation remains a significant challenge, with success rates notably below expectations. This comprehensive review highlighted several critical factors contributing to implementation failures. First, organizations consistently underestimate the complexity of integrating AI systems with existing business processes. Vial's 2019 meta-analysis of 282 papers demonstrates that successful digital transformation requires systematic approaches that address both technical and organizational dimensions, yet current frameworks remain inadequate for comprehensive AI integration. Second, the financial services sector is experiencing unprecedented disruption in how products are conceived, developed, and delivered to market. Traditional financial institutions face mounting pressure from digital-native competitors who have successfully embedded AI capabilities throughout their product development lifecycle. Third, the regulatory environment adds a layer of complexity unique to financial services. As highlighted by Mohammad & Chirchir's 2024 analysis, organizations must balance the need for rapid innovation with stringent compliance requirements, creating what Cubric's 2020 study identifies as a 'regulatory-innovation paradox.'
[Audio] Let me now detail the critical research gaps our work addresses. Through our comprehensive analysis of systematic reviews and meta-analyses, we've identified three fundamental gaps that currently impede successful AI implementation in financial services. These aren't just theoretical gaps - they represent real, practical challenges that financial institutions face daily in their digital transformation journey.
[Audio] The first critical gap concerns Implementation Frameworks, and it's perhaps the most fundamental. Drawing from Vial's 2019 meta-analysis of 282 papers, we've found that while organizations clearly recognize AI's potential, they lack structured frameworks for actual implementation. Let me elaborate on what this means in practical terms. When a bank decides to implement an AI system - let's take an AI-driven credit risk assessment system as an example - they immediately face several structural challenges. First, there's the question of how to integrate this AI system with their existing credit approval processes. Current frameworks don't adequately address how to maintain regulatory compliance while transitioning from traditional to AI-enhanced decision-making. We've seen cases where banks invest millions in sophisticated AI models, only to struggle with basic integration questions like: How do we maintain audit trails for AI decisions? How do we ensure consistent decision-making across traditional and AI-driven processes? How do we handle the transition period? This framework gap becomes even more pronounced when we look at data governance requirements. Banks possess vast amounts of customer data, but existing frameworks don't adequately address how to transform this data into AI-ready formats while maintaining compliance with banking regulations. For instance, one major bank in our preliminary study developed an excellent AI model for fraud detection, but couldn't implement it effectively because they lacked frameworks for real-time data integration that met both their performance requirements and regulatory obligations.
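To make the audit-trail question concrete, here is a minimal sketch of what one auditable record for an AI-driven credit decision could look like. The schema, field names, and `log_decision` helper are hypothetical illustrations, not part of the proposed framework.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry for an AI-driven credit decision (hypothetical schema)."""
    model_id: str       # which model version produced the decision
    features_hash: str  # fingerprint of the input features, not the raw PII
    decision: str       # e.g. "approve" / "refer" / "decline"
    score: float        # model output the decision was based on
    timestamp: str      # UTC time the decision was made

def log_decision(model_id: str, features: dict, decision: str, score: float) -> DecisionRecord:
    # Hash the canonical JSON form of the inputs so the decision can be
    # audited and reproduced without storing raw customer data in the trail.
    payload = json.dumps(features, sort_keys=True).encode()
    return DecisionRecord(
        model_id=model_id,
        features_hash=hashlib.sha256(payload).hexdigest(),
        decision=decision,
        score=score,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

record = log_decision("credit-risk-v2.3", {"income": 82000, "dti": 0.31}, "approve", 0.91)
print(asdict(record))
```

Storing a hash of the inputs rather than the inputs themselves is one way to reconcile auditability with data-protection constraints; a production trail would also need retention, access-control, and regulator-facing reporting, which this sketch omits.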
[Audio] The second major gap involves Project Management Methodologies, and this is where Gonçalves et al.'s 2023 research provides particularly important insights. Traditional project management approaches, which work well for conventional IT projects, prove inadequate for AI implementations in banking. This inadequacy manifests in several critical ways. Consider the typical agile sprint structure that many banks try to apply to AI projects. Traditional two-week sprints work well for regular software development, but AI development follows a fundamentally different pattern. When developing an AI-driven customer service chatbot, for example, you might spend several months just preparing and cleaning data before you can even begin meaningful development work. The iterative nature of AI model development, where performance improvements often come through multiple rounds of refinement rather than linear progress, simply doesn't fit into traditional sprint structures. We've observed cases where project managers, trying to force AI development into traditional frameworks, end up creating artificial deadlines that don't align with the actual needs of AI development. This misalignment leads to rushed implementations, inadequate testing, and ultimately, failed projects. Our analysis shows that about 60% of AI project delays in banking can be traced back to this fundamental mismatch between traditional project management methodologies and AI development requirements.
[Audio] The third gap, which concerns Measurement Frameworks, is particularly insidious because it affects both implementation and ongoing operations. Cubric's 2020 tertiary study provides compelling evidence that organizations lack comprehensive frameworks for measuring AI implementation success. This gap is especially problematic in banking, where the ability to demonstrate and document system effectiveness is not just an operational concern but a regulatory requirement. Let me illustrate this with a concrete example. When a bank implements an AI system for anti-money laundering detection, they need to measure not just technical metrics like model accuracy, but also business impacts like reduction in false positives, regulatory compliance metrics, and operational efficiency gains. Current measurement frameworks tend to focus on either technical or business metrics in isolation, failing to capture the interconnected nature of these factors in banking environments. This measurement gap becomes even more critical when we consider long-term value assessment. Banks struggle to quantify the full impact of their AI investments because existing frameworks don't adequately capture indirect benefits and long-term value creation. For instance, how do you measure the value of improved risk assessment capabilities? How do you quantify the impact of better customer experiences enabled by AI? These questions remain largely unanswered by current measurement frameworks. The regulatory aspect adds another layer of complexity to this measurement gap. Banks need to demonstrate not just that their AI systems are effective, but that they're also fair, unbiased, and compliant with various regulations. Current frameworks don't adequately address how to measure and document these aspects in a way that satisfies both operational needs and regulatory requirements. What makes these gaps particularly significant is their interconnected nature. 
Our analysis shows that they create a compound effect - inadequate implementation frameworks lead to poor project management approaches, which in turn make it difficult to measure and demonstrate success. This cycle often results in reduced confidence in AI initiatives, leading to hesitancy in future investments and implementations. These gaps don't just represent academic concerns - they have real, practical implications for the financial services sector. As digital transformation accelerates and AI becomes increasingly central to banking operations, addressing these gaps becomes not just important but essential for the future of banking. Our research aims to bridge these gaps through practical, implementable solutions that consider the unique requirements of the financial services sector.
[Audio] Let me now outline our research objectives, which have been carefully designed to address the gaps I've just described. These objectives represent not just academic aims, but practical goals that will directly impact how financial institutions implement AI technologies. We've structured our research around three primary objectives, each building upon the others to create a comprehensive approach to AI implementation in financial services. These objectives directly address the identified gaps through three primary aims: understanding AI integration, developing a measurement framework, and building an implementation framework. Let me walk through each of these core objectives in turn.
[Audio] Our first objective tackles the fundamental challenge of understanding AI integration in financial services. We're not merely cataloging challenges - we're conducting the first comprehensive study of why banks struggle with AI implementation. Drawing from Page et al.'s 2021 framework, we'll analyze real-world implementations across twenty financial institutions of varying sizes. This isn't just academic research; we're examining actual AI projects - from credit risk systems to fraud detection platforms - to understand what works, what fails, and most importantly, why. What makes this objective unique is our focus on the intersection of three critical elements: technical implementation, regulatory compliance, and operational effectiveness. When a bank implements an AI system, they're not just deploying technology - they're transforming how they operate while maintaining regulatory compliance. Our research will provide the first empirically grounded framework for managing this complex transformation.
[Audio] Our second objective addresses a critical industry need: measuring AI implementation success. Current approaches to measuring AI effectiveness in banking are fragmented and incomplete. We're developing what we call a 'unified measurement framework' that integrates three essential perspectives: Technical metrics that go beyond basic performance measures to capture real-world effectiveness. We're not just asking 'Does the AI work?' but 'Does it work reliably in a banking environment?' This includes new metrics for model stability, integration effectiveness, and operational resilience. Business impact metrics that quantify both direct and indirect value creation. Drawing from Bharadwaj's work, we're developing specific measures for efficiency gains, risk reduction, and customer experience improvements. These metrics will help banks justify AI investments and optimize their implementation strategies. Regulatory compliance metrics that ensure AI systems meet banking regulations while delivering business value. This includes specific measures for model transparency, decision auditability, and risk management effectiveness - elements crucial for banking but often overlooked in traditional AI metrics.
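As a rough illustration of how the three perspectives could roll up into a single view per implementation, the sketch below combines normalized technical, business, and regulatory scores into one weighted figure. The field names and weights are assumptions made for illustration, not values defined by the proposed framework.

```python
from dataclasses import dataclass

@dataclass
class ScoreCard:
    """One scorecard per AI implementation, each dimension normalized to [0, 1].

    The three dimensions mirror the unified measurement framework described
    in the narration; the default weights below are hypothetical.
    """
    technical: float   # e.g. blend of stability, integration, resilience scores
    business: float    # e.g. blend of efficiency, cost, and revenue impact
    regulatory: float  # e.g. blend of transparency and auditability scores

    def overall(self, weights=(0.35, 0.35, 0.30)) -> float:
        # Weighted sum keeps the result in [0, 1] as long as weights sum to 1.
        wt, wb, wr = weights
        return wt * self.technical + wb * self.business + wr * self.regulatory

card = ScoreCard(technical=0.8, business=0.6, regulatory=0.9)
print(round(card.overall(), 3))  # → 0.76
```

Making the weights explicit, rather than baked into a dashboard, lets an institution debate and document the trade-off between innovation speed and compliance posture.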
[Audio] Our third objective is perhaps the most ambitious: developing practical implementation frameworks that work in the real world of banking. We're creating what we call 'adaptive implementation pathways' - structured approaches that guide banks through AI implementation while adapting to their specific circumstances. These aren't theoretical frameworks; they're practical tools based on proven successes and lessons learned from failures. What sets these frameworks apart is their integration of regulatory requirements from the start. We're not treating compliance as an afterthought - it's built into every stage of the implementation process. This means banks can innovate with AI while maintaining regulatory compliance, solving one of the industry's most persistent challenges. These objectives are deliberately interconnected. The insights from our first objective inform our measurement framework, while both feed into our implementation pathways. This integrated approach ensures our research delivers practical, implementable solutions that work in the complex world of financial services. The impact of achieving these objectives will be significant. Banks will have clear roadmaps for AI implementation, reliable ways to measure success, and practical tools for managing the transformation. More importantly, they'll be able to innovate with AI while maintaining the stability and compliance that banking demands. This research represents the first comprehensive attempt to solve the AI implementation challenge in banking. By combining rigorous academic research with practical industry experience, we're creating solutions that will fundamentally change how banks approach AI transformation.
Methodology: Three-Phase Approach.
[Audio] In Phase 1, which we call Industry Discovery, we will conduct an intensive six-month investigation across a carefully selected range of financial institutions. Our selection process is particularly crucial here - we'll be working with 15 to 20 financial institutions, deliberately chosen to represent different asset sizes, ranging from major banks with over $10 billion in assets to smaller institutions under $1 billion. This diversity is essential because our preliminary research indicates that AI implementation challenges vary significantly based on institutional size and complexity. Within each institution, we're implementing a multi-layered data collection strategy. At the executive level, we will conduct in-depth interviews with approximately 40 senior leaders, including CTOs, CIOs, and heads of digital transformation. These interviews are structured using an enhanced version of Venkatesh's UTAUT model, which we've modified to specifically address AI implementation challenges in banking. We're particularly interested in understanding how these leaders view the intersection of AI capabilities and regulatory requirements, as this has emerged as a critical success factor in our preliminary research. Complementing these executive interviews, we'll be conducting twelve focused technical team workshops. These workshops are designed to bring together the various specialists involved in AI implementation - data scientists, ML engineers, IT infrastructure teams, and business analysts. Our preliminary research, drawing from Saltz & Krasteva's 2022 findings, shows that many implementation failures stem from communication gaps between these technical specialists. These workshops will follow a structured protocol, examining specific implementation challenges through multiple lenses - technical feasibility, regulatory compliance, and operational practicality. Perhaps most importantly, we'll be conducting over 200 hours of direct process observation. 
This involves our research team being physically present during actual AI implementation meetings, technical development sessions, and integration testing. This direct observation is crucial because our previous research has shown that many critical challenges only become visible during actual implementation attempts. We'll be paying particular attention to how teams handle unexpected challenges, as these moments often reveal the limitations of current frameworks.
[Audio] Moving into Phase 2, our Quantitative Analysis phase extends over twelve months and represents the core of our empirical work. During this phase, we're implementing a sophisticated longitudinal study design that tracks both technical and business outcomes of AI implementations. What makes our approach unique is the comprehensive nature of our tracking framework. We're not just looking at standard technical metrics like model accuracy and system performance - though these are certainly important. Instead, we're implementing what we call a 'full-stack' measurement approach. On the technical side, we'll be tracking detailed performance metrics including not just basic measures like F1 scores and AUC-ROC curves, but also more nuanced indicators like model drift patterns, system response variations under different loads, and integration stability metrics. These measurements will be collected continuously over the twelve-month period, giving us unprecedented insight into how AI systems evolve and perform in real banking environments. The business impact analysis runs parallel to this technical tracking. We're implementing a comprehensive measurement framework that captures both quantitative metrics - things like cost savings and revenue impacts - and qualitative factors like changes in decision-making processes and customer satisfaction. This dual approach allows us to build a complete picture of how AI implementations affect all aspects of banking operations. What's particularly innovative about our Phase 2 methodology is our use of cross-institutional comparative analysis. By tracking multiple implementations across different institutions simultaneously, we can identify patterns and factors that wouldn't be visible in single-institution studies. This comparative approach is especially powerful for understanding how different organizational contexts affect implementation success.
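One common way to quantify the model drift patterns mentioned above is the population stability index (PSI), which compares a baseline score distribution against a recent sample. The sketch below is a generic illustration using widely cited rule-of-thumb thresholds; it is not the proposal's actual tracking method.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population stability index between a baseline and a recent score sample.

    PSI < 0.1 is usually read as stable, and > 0.25 as significant drift
    (common industry rules of thumb, not thresholds defined in this study).
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # degenerate case: all scores identical

    def frac(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)  # clamp the top edge
            counts[i] += 1
        # Floor each bucket at a tiny mass so the log term stays defined.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                  # flat score distribution
shifted = [min(1.0, 0.3 + i / 100) for i in range(100)]   # scores drifted upward
print(round(psi(baseline, baseline), 4))  # → 0.0 (identical samples, no drift)
print(psi(baseline, shifted) > 0.25)      # → True (shifted sample flags drift)
```

In a longitudinal design like Phase 2, a statistic of this kind would be recomputed on each monitoring window, turning "drift patterns" into a time series that can be compared across institutions.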
[Audio] Finally, Phase 3 focuses on Framework Development and represents the culmination of our research. During this six-month phase, we synthesize our findings into practical, implementable frameworks. This isn't just a theoretical exercise - we're creating actual tools and protocols that banks can use. The development process follows an iterative approach, with each component being developed, tested, and refined through real-world application. Our validation process is particularly rigorous. We've assembled a panel of fifteen experts, including academic researchers, industry practitioners, technical specialists, and regulatory experts. Each component of our framework undergoes three rounds of review and validation. First, conceptual validation ensures theoretical soundness. Second, practical validation tests real-world applicability. Finally, regulatory validation ensures compliance with banking regulations. We're also implementing a three-tier testing protocol for our frameworks. The alpha stage involves technical validation in controlled environments. The beta stage implements the frameworks in limited bank deployments, allowing us to identify and address any practical challenges. The final gamma stage involves full-scale implementation at select partner institutions, providing a comprehensive test of our frameworks' effectiveness. Throughout all three phases, we're maintaining strict research protocols to ensure data quality and reliability. We're implementing regular cross-validation of findings, maintaining detailed audit trails of all research activities, and conducting regular reviews to ensure we're meeting both academic standards and practical needs. This methodology has been designed to address the specific challenges identified in our literature review while maintaining the flexibility to incorporate new insights as they emerge. 
By combining rigorous academic methods with practical industry application, we believe this approach will produce frameworks that are both theoretically sound and practically valuable.
[Audio] Let me now outline how we'll measure success in this research initiative. Our measurement framework is built on a comprehensive understanding of what constitutes success in AI implementation within financial services. Drawing from Islam et al.'s 2023 systematic review, we've developed what we call a 'three-dimensional' measurement approach that captures technical excellence, business value, and regulatory compliance.
[Audio] Let's start with our technical success metrics. In banking, technical success means more than just having an AI system that works - it must work reliably, consistently, and securely. Our technical metrics focus on three critical areas: First, model performance and reliability. We're measuring not just accuracy rates through standard metrics like F1 scores and AUC-ROC curves, but also what we call 'banking-specific performance indicators.' These include model stability under varying transaction loads, decision consistency across different customer segments, and what we term 'regulatory drift' - how well the model maintains compliance over time. For example, in a credit risk assessment system, we're not just measuring prediction accuracy; we're tracking how well the model maintains its performance across different economic conditions and customer profiles. Second, system integration effectiveness. This is crucial because AI systems in banking don't operate in isolation. We've developed specific metrics to measure how well AI systems integrate with existing banking infrastructure. These include real-time processing capability, system response times under peak loads, and what we call 'operational resilience indicators' - measures of how well the system maintains performance during infrastructure changes or stress conditions. When a bank implements an AI-driven fraud detection system, for instance, we measure not just its accuracy but its ability to make decisions within the millisecond timeframes required for real-time transaction processing. Third, resource utilization efficiency. Drawing from Radjenović's framework, we measure both computational efficiency and human resource optimization. This includes tracking processing overhead, storage utilization, and what we term 'AI operational efficiency' - the ratio of AI system benefits to maintenance requirements. 
These metrics help banks understand and optimize the true cost of running AI systems in production environments.
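As a small illustration of the technical metrics just described, the sketch below computes an F1 score from confusion-matrix counts and a crude decision-consistency indicator across customer segments. The `approval_rate_gap` helper is a hypothetical example of a consistency check, not a metric defined in the proposal.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from confusion-matrix counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def approval_rate_gap(decisions_by_segment: dict[str, list[bool]]) -> float:
    """Largest difference in approval rate between any two customer segments.

    A crude consistency indicator: a large gap is a prompt for human review,
    not proof of bias (illustrative metric, not this study's definition).
    """
    rates = [sum(d) / len(d) for d in decisions_by_segment.values()]
    return max(rates) - min(rates)

print(round(f1_score(tp=80, fp=10, fn=20), 3))  # → 0.842
gap = approval_rate_gap({
    "segment_a": [True, True, True, False],    # 75% approved
    "segment_b": [True, False, False, False],  # 25% approved
})
print(gap)  # → 0.5
```

The point of pairing the two is the narration's argument: accuracy-style metrics alone miss banking-specific questions like decision consistency across customer segments.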
[Audio] Moving to our business impact metrics, we're implementing what Bharadwaj et al. describe as a 'value creation framework,' but enhanced for AI in banking. This framework measures success across four key dimensions: Operational efficiency gains are measured through what we call 'process transformation metrics.' These capture reductions in processing time, improvements in decision accuracy, and increases in throughput capacity. For instance, when implementing an AI-driven loan processing system, we measure not just how many applications it can process, but how this increased capacity translates into business value through improved customer response times and increased loan booking rates. Customer experience improvements are tracked through our 'experience enhancement metrics.' These go beyond traditional satisfaction scores to measure what we call 'AI-enabled experience factors' - specific improvements in service speed, personalization accuracy, and problem resolution rates. We're particularly interested in measuring how AI implementations affect customer trust and engagement with digital banking services. Cost reduction impacts are measured through our 'efficiency realization metrics.' These capture both direct cost savings from automation and indirect benefits from improved decision-making. We track not just reduced operational costs but also what we term 'risk-adjusted cost benefits' - savings achieved through better risk assessment and fraud prevention. Revenue generation effects are measured through what we call 'AI-driven growth metrics.' These track how AI implementations contribute to revenue through improved cross-selling, reduced customer churn, and enhanced product personalization. We're particularly focused on measuring the long-term revenue impacts of AI-enhanced customer relationships. Finally, our regulatory compliance metrics represent a unique contribution to the field. 
Drawing from Cubric's work on AI governance, we've developed specific measures for what we call 'compliance effectiveness.'
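To illustrate how direct savings and 'risk-adjusted cost benefits' might feed into a single investment figure, here is a deliberately simple multi-year ROI sketch. Both the formula and the dollar figures are hypothetical illustrations, not results or definitions from the study.

```python
def ai_roi(direct_savings: float, risk_adjusted_benefit: float,
           implementation_cost: float, annual_run_cost: float,
           years: int = 3) -> float:
    """Simple multi-year ROI for an AI deployment (illustrative formula).

    risk_adjusted_benefit stands in for the 'risk-adjusted cost benefits'
    the narration mentions, e.g. losses avoided through better fraud
    detection, estimated separately and supplied as an annual figure.
    """
    total_benefit = (direct_savings + risk_adjusted_benefit) * years
    total_cost = implementation_cost + annual_run_cost * years
    return (total_benefit - total_cost) / total_cost

# Hypothetical figures, in dollars per year unless noted otherwise.
roi = ai_roi(direct_savings=400_000, risk_adjusted_benefit=250_000,
             implementation_cost=1_000_000, annual_run_cost=150_000)
print(f"{roi:.0%}")  # → 34% over a 3-year horizon
```

A fuller model would discount future benefits and attach confidence intervals to the risk-adjusted term; the value of even this simple form is that it forces indirect benefits to be estimated explicitly rather than left unquantified.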
Questions & Discussion.
[Audio] As we conclude today, let me emphasize why this research matters. The financial services sector stands at a crucial intersection of innovation and stability. Our comprehensive framework doesn't just bridge the gap between AI potential and practical implementation - it transforms how banks approach digital innovation. Through our three-phase methodology and rigorous success metrics, we're not merely studying AI implementation - we're creating a blueprint for the future of banking. Our research delivers three fundamental impacts: First, we provide banks with a proven implementation pathway, grounded in empirical research and validated across diverse banking environments. This isn't theoretical - it's a practical framework that works in the real world of banking. Second, we've developed measurement tools that demonstrate clear business value while ensuring regulatory compliance. This dual focus means banks can innovate confidently, knowing they're building both competitive advantage and regulatory resilience. Third, and perhaps most importantly, we're setting new standards for how AI should be implemented in financial services. Our frameworks and metrics will become the benchmark for successful AI transformation in banking.
[Audio] The choice facing banks isn't whether to implement AI, but how to implement it successfully. Our research provides the answer to that crucial 'how.' Through this work, we're not just observing the transformation of financial services - we're actively shaping it. We invite you to join us in this journey. Together, we can create more efficient, innovative, and trusted financial institutions that better serve our digital future. Thank you for your attention. I welcome your questions and look forward to potential collaborations in advancing this crucial work.