From Legacy Systems to AI-Driven CRM: Saurav Pal’s View on Enterprise Transformation

    You’ve spent 18 years working inside the CRM function at major organizations across banking, manufacturing, and automotive. What does working across those three industries teach you about how large enterprises manage customer relationships at scale?

    Across banking, manufacturing, and automotive, large enterprises converge on a few core truths about managing customer relationships at scale. Everyone starts by digitizing touchpoints like the branch, web, dealer, call center, and app. Real maturity comes when we design a canonical customer and interaction model and make every channel a projection of that model, not a mini-CRM of its own.

    Banking is forced by regulation (KYC/AML) to invest early in golden records and survivorship rules, while manufacturing and automotive learn the hard way that without strong matching and de-duplication for person, business, vehicle, asset, and dealer, every downstream initiative suffers: loyalty, targeting, service, lead management, marketing. In banking, the lead-to-opportunity-to-account flow is tightly governed; SLAs and compliance are baked in. In automotive and manufacturing, the emphasis falls on complex B2B2C journeys involving dealer networks, distributors, and partners, where a “case” or “opportunity” might span multiple legal entities and systems.
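    The golden-record idea can be made concrete with a minimal survivorship sketch: duplicate records for the same customer are merged by taking each field from the most trusted source, with the most recent update breaking ties. The source names, priorities, and fields below are hypothetical illustrations, not any specific platform's rules.

    ```python
    from datetime import date

    # Lower number = more trusted source for survivorship (hypothetical ranking).
    SOURCE_PRIORITY = {"core_banking": 0, "los": 1, "dealer_crm": 2}

    def golden_record(duplicates: list[dict]) -> dict:
        """Merge duplicate records field by field into one golden record."""
        fields = {k for rec in duplicates for k in rec if k not in ("source", "updated")}
        merged = {}
        for field in fields:
            candidates = [r for r in duplicates if r.get(field) not in (None, "")]
            if not candidates:
                continue
            # Most trusted source wins; newest update breaks ties within a source tier.
            best = min(candidates,
                       key=lambda r: (SOURCE_PRIORITY.get(r["source"], 99),
                                      -r["updated"].toordinal()))
            merged[field] = best[field]
        return merged

    dupes = [
        {"source": "dealer_crm", "updated": date(2024, 5, 1),
         "email": "a@old.com", "vin": "1HGCM82633A"},
        {"source": "core_banking", "updated": date(2023, 1, 1),
         "email": "a@new.com", "vin": None},
    ]
    record = golden_record(dupes)
    # email survives from core_banking (more trusted); vin survives from dealer_crm
    # (the only source that has it).
    ```

    The key design point is that survivorship is per field, not per record: a less trusted source can still contribute attributes the trusted source lacks.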

    At scale, CRM is mostly an integration problem. Core systems such as core banking, ERP, DMS, telematics, manufacturing execution, contact center, and marketing clouds must exchange clean events and reference data. In banking, privacy, consent, and auditability are first-class citizens; we architect CRM around entitlements, purpose limitation, and explainability. In automotive, privacy is rising fast with location, telematics, and biometrics in the car, but historically, dealers and OEMs treated data more opportunistically.

    In all three domains, CRM eventually becomes a shared enterprise platform. The only sustainable way to run it is through a product operating model with clear product ownership, roadmap, and intake instead of ad-hoc projects, plus governance for data and configuration like fields, picklists, workflows, message types, and integrations so local needs don’t shatter global consistency.

    Industries like automotive and retail lending are undergoing significant digital transformation, yet both carry decades of legacy infrastructure and established ways of working. Where does that tension show up most concretely when you’re trying to modernize how those organizations manage customer relationships?

    The tension shows up mostly where new CRM behavior has to coexist with old risk, data, and operating models.

    On the system-of-record versus system-of-engagement front, the concrete pain is that you stand up Salesforce or a similar system as “the customer 360,” but underwriting, contract, and servicing data still live in LOS/core banking/mainframe or DMS/legacy CRM. This manifests as multiple truths for customer, account, and asset (vehicle/loan) state. Sales and service advisors can see offers and journeys in CRM that can’t be cleanly actioned in the legacy Loan Origination System or Dealer Management System.

    Legacy product and process models clash with modern journeys. Legacy systems are organized around products and transactions like auto loan, lease, and service RO, while modern CX is organized around customer journeys. Journey orchestration needs cross-product events like application status, funding, first payment, delinquency, and service visit, but those events are buried in batch jobs and COBOL tables. Simple experience asks like “show me all relationships for this household or fleet” or “tell me every open promise to pay and service commitment for this customer” require stitching together 5 to 10 systems.

    The batch, file-based world collides with real-time engagement. Marketing and CRM teams want triggered, real-time journeys, but the source systems are nightly SFTP drops, mainframe jobs, or dealer batch feeds. “Abandoned application” or “in-equity” campaigns run days late, losing conversion. Dealers or loan officers see stale lead states because status updates return in flat files rather than events. Any change to timing or logic implies re-engineering decades-old jobs, which is high-risk and slow.
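    One common bridge for this collision is to diff each batch drop against the last known state and synthesize journey events from the changes, rather than re-engineering the mainframe job itself. The sketch below is illustrative; the field names, statuses, and event shape are assumptions, and in practice this logic would live in an integration layer with durable state.

    ```python
    # Diff a nightly flat-file feed against last known state and emit
    # "application_status_changed" events for journey orchestration.
    def diff_to_events(previous: dict, feed_rows: list[dict]) -> list[dict]:
        events = []
        for row in feed_rows:
            app_id, status = row["app_id"], row["status"]
            if previous.get(app_id) != status:
                events.append({
                    "type": "application_status_changed",
                    "app_id": app_id,
                    "from": previous.get(app_id),  # None for first sighting
                    "to": status,
                })
                previous[app_id] = status  # advance the known state
        return events

    state = {"A-100": "SUBMITTED"}
    rows = [{"app_id": "A-100", "status": "FUNDED"},
            {"app_id": "A-200", "status": "SUBMITTED"}]
    events = diff_to_events(state, rows)
    # Yields two events: A-100 SUBMITTED -> FUNDED, and A-200 appearing as SUBMITTED.
    ```

    This narrows the latency problem to the feed cadence itself, which is why the next step is usually pushing the source system toward change-data-capture or real events.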

    Data quality and lineage can’t support personalization or AI. Data that was “good enough” for operations and regulatory reporting is not good enough for next-best-action, scoring, or personalization. Duplicate customers across LOS, collections, and dealer CRM cause conflicting risk views and mis-targeted offers, like promoting a refi to someone in early delinquency. Missing or dirty fields like VIN, BAC/branch, income, co-borrowers, and consent flags block ML models and propensity scoring or create compliance exposure.

    Risk, compliance, and controls push back against agile change. Lending and auto finance are heavily regulated; old systems implicitly bake in approval flows, audit trails, and segregation of duties. CRM teams want rapid A/B tests and dynamic UI, but risk and compliance push back because the control framework lives in the legacy stack. Moving decisions or workflows into the CRM for pricing, approvals, hardship, and repossession triggers requires re-proving controls, model governance, and SOX or SOX-like evidence.

    Channel and identity fragmentation adds another layer. Web, mobile, call center, dealer/branch, and partner channels each have their own identity and consent models. A customer authenticates with one ID for online banking or OEM app, another at the dealer, another in legacy servicing; CRM has to link them probabilistically. Consent and communication preferences differ by system, making it hard to guarantee channel- and product-consistent regulatory compliance around TCPA, CAN-SPAM, FCRA, and GDPR/CCPA.
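    Probabilistic linking across those channel identities often starts as weighted agreement scoring: two records link if enough high-value fields match. The weights and threshold below are purely illustrative assumptions; production identity resolution uses trained match models, normalization, and blocking keys.

    ```python
    # Hypothetical field weights for scoring agreement between two identity records.
    WEIGHTS = {"email": 0.5, "phone": 0.3, "last_name": 0.2}

    def match_score(a: dict, b: dict) -> float:
        """Sum weights of fields that agree (case/whitespace-insensitive)."""
        score = 0.0
        for field, weight in WEIGHTS.items():
            va, vb = a.get(field), b.get(field)
            if va and vb and va.strip().lower() == vb.strip().lower():
                score += weight
        return score

    online_banking = {"email": "jo@x.com", "phone": "555-0100", "last_name": "Ng"}
    dealer_record  = {"email": "JO@X.COM", "phone": "555-0199", "last_name": "Ng"}

    score = match_score(online_banking, dealer_record)  # email + last_name agree
    linked = score >= 0.6  # hypothetical link threshold
    ```

    Because links are probabilistic, consent checks have to be conservative: a linked profile should only inherit the most restrictive communication preferences among its constituent records.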

    AI is being built directly into enterprise CRM platforms, and organizations are moving quickly to adopt it. From your experience deploying these capabilities inside major enterprises, what does responsible adoption require at the organizational level?

    Responsible adoption at enterprise scale is less about the features in the CRM and more about whether the org treats AI as a governed, cross-functional capability.

    Clear ownership and governance come first. Stand up a formal AI/CRM governance group or plug into an existing AI council with Legal/Privacy, InfoSec, Data Governance, Risk/Compliance, CX, and CRM product owners represented. Define a RACI for who can propose AI use cases in CRM, who approves them, including risk/impact assessment, and who owns outcomes and ongoing monitoring.

    Data readiness, cataloging, and lineage matter enormously. Treat AI CRM features as consumers of governed data products, not raw objects. Catalog data sets, features, and models with owners, classifications, and lineage. Ensure source systems for training and inference data are under basic data quality SLAs covering completeness, timeliness, deduplication, and keys for household/relationship.
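    A data quality SLA like the one described can be checked mechanically. The sketch below covers three of the named dimensions (completeness, deduplication, timeliness) for a hypothetical customer feed; the field names and thresholds are assumptions for illustration.

    ```python
    from datetime import date

    def dq_report(rows: list[dict], key: str, required: list[str],
                  max_age_days: int, today: date) -> dict:
        """Report completeness per required field, duplicate keys, and stale rows."""
        keys = [r[key] for r in rows]
        return {
            "completeness": {
                f: sum(1 for r in rows if r.get(f)) / len(rows) for f in required
            },
            "duplicate_keys": len(keys) - len(set(keys)),
            "stale_rows": sum(1 for r in rows
                              if (today - r["updated"]).days > max_age_days),
        }

    rows = [
        {"cust_id": "C1", "consent": True, "vin": "1HGCM82633A",
         "updated": date(2024, 6, 1)},
        {"cust_id": "C1", "consent": None, "vin": "2T1BURHE5JC",
         "updated": date(2024, 1, 1)},  # duplicate key, missing consent, stale
    ]
    report = dq_report(rows, "cust_id", ["consent", "vin"],
                       max_age_days=90, today=date(2024, 6, 15))
    # Flags one duplicate key, 50% consent completeness, and one stale row.
    ```

    Wiring a report like this into the feed pipeline is what turns "basic data quality SLAs" from a policy statement into something a CRM team can actually gate AI features on.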

    Policy, guardrails, and patterns need to be concrete. Translate high-level AI principles like fairness, accountability, privacy, and explainability into concrete CRM patterns. Implement technical guardrails: role- and purpose-based access in CRM for AI features covering who can see suggestions, who can override, who can configure. Set content policies for generative features, including banned topics, required disclaimers, and mandatory templates for regulated disclosures.

    Model and feature lifecycle management applies even when models are “out-of-the-box” from Salesforce or another vendor. Maintain a registry of enabled AI features, the data they consume, and their configurations. Define triggers for re-evaluation like drift in predictions, user feedback, or regulatory changes. Build monitoring dashboards for model performance, bias, unexpected outcomes, and user override rates. Require impact assessments before major config changes or new feature rollouts.

    Human-in-the-loop and override design must be intentional. Make the AI assistive by default: surface recommendations, but let the human decide. Build explicit override paths for front-line users and design feedback loops so their corrections inform future behavior. Log overrides with rationale; treat patterns as signals to retrain or constrain the model. For high-stakes decisions around credit, pricing, and adverse actions, mandate human review and approval regardless of the model’s confidence.

    Explainability and transparency can’t be afterthoughts. For every AI feature in CRM, define what level of explainability is required based on the decision type. Surface model reasoning to users in simple language: “recommended because of prior purchase history” or “flagged due to delinquency risk.” Provide audit trails for decisions: who saw which recommendation, whether they acted, when, and why they overrode it. Make these records accessible to Compliance, Risk, and customer-facing teams for investigations.
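    The audit trail described above can be sketched as a simple structured record per recommendation. The schema below is a hypothetical illustration; a real implementation would write to an append-only, access-controlled store rather than an in-memory list.

    ```python
    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone

    @dataclass
    class RecommendationAudit:
        """One auditable AI recommendation event: who saw what, and what they did."""
        user: str
        customer_id: str
        recommendation: str
        reason_shown: str                  # plain-language explanation surfaced to the user
        action: str = "pending"            # pending | accepted | overridden
        override_rationale: str | None = None
        at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    audit_log: list[dict] = []

    entry = RecommendationAudit(
        user="advisor_42",
        customer_id="C-100",
        recommendation="offer_refi",
        reason_shown="recommended because of prior purchase history",
    )
    # Front-line user overrides the suggestion and records why.
    entry.action = "overridden"
    entry.override_rationale = "customer in early delinquency"
    audit_log.append(asdict(entry))
    ```

    Capturing the override rationale in the same record is what lets Compliance answer "who saw which recommendation and why did they reject it" without reconstructing history from multiple systems.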

    Training and adoption programs need to shift the culture. Educate business users and front-line teams on what AI does, what it doesn’t do, and how to challenge it. Run pilots with co-design: involve real users in shaping AI features so they trust them and feel a sense of ownership. Create playbooks for common AI failure modes like hallucinations, outdated data, or bias, and how to escalate and fix them. Celebrate teams that identify and fix bad outcomes, not just those that ship new AI features.

    Privacy, consent, and transparency toward customers require active management. Make customers aware when they’re interacting with AI, especially in generative chat or voice. Honor opt-outs and consent preferences by giving customers granular control over AI-driven personalization. Ensure data minimization: don’t pull in more customer data than the feature needs, and respect purpose limitation. Build mechanisms for customers to request explanations for decisions and challenge outcomes.

    Migrating from a system like Siebel to Salesforce is a multi-year undertaking that touches nearly every part of how an organization operates. Beyond the technical complexity, what does a project like that reveal about how an organization actually functions?

    A migration like that is a forcing function for organizational self-awareness. You learn things that were invisible or politely ignored.

    The patterns that appear almost every time start with the process-versus-reality gap. Documented “lead-to-cash” or “case handling” flows often don’t match real behavior. A CRM migration forces choices: which path is the real process, and which are workarounds, exceptions, or power-user hacks?

    You find out who really runs the business. You discover where decisions are actually made: in the field, in spreadsheets, by one ops manager, or via formal governance. The org chart might say one thing; the approvals, escalations, and “call this person to get it done” patterns say another.

    Data literacy and shared definitions get stress-tested. A cloud CRM makes ambiguity painful. What is a qualified lead? When is an opportunity created or closed lost? What counts as active customer or enrolled dealer? If teams can’t agree on these, they don’t really have a shared understanding of their own business.

    Customer journey fragmentation becomes visible. Migration reveals how disconnected marketing, sales, service, dealer networks, and finance truly are. Every handoff you model and every integration you build shows where customers fall into cracks no one “owns.”

    The migration reveals what the organization actually values. Where they are willing to change the process versus forcing the system to bend tells you their priorities. If they’ll re-engineer around a cleaner data model, they value scalability and analytics. If they keep customizing to mimic old screens, they value local comfort and continuity over standardization.

    You carry both architectural responsibility and leadership of large engineering teams. How does that combination shape your approach to digital transformation projects?

    It makes me treat tech choices as leadership decisions, not just design decisions.

    Concretely, that combination changes my approach in a few ways. Architecture is framed in business language. Target architectures are explained as changes to how work flows, who decides what, and which metrics will move, not as diagrams.

    I set guardrails, not blueprints. As an architect, I define a small set of non-negotiables: domain boundaries, integration contracts, data ownership, and security/compliance constraints. As a leader, I let teams experiment inside those guardrails so they feel ownership instead of “central architecture is blocking us.”

    The roadmap is shaped by team capacity and talent, not just ambition. I sequence modernization around where I have leaders and teams who can actually absorb change, not just where the architecture is ugliest. “Can we run this safely with the people we have?” becomes as important as “Is this the cleanest design?”

    Org design is part of the architecture. If I define domains like Leads, Dealer Data, and Campaigns, I try to line teams and product owners up to those domains. Conway’s Law is assumed, not ignored. Where the org chart can’t move, I expect integration and governance frictions and design explicitly for them.

    Trade-offs are made in the open. Instead of hiding compromises under “we’ll refactor later,” I surface them as explicit decisions with business stakeholders around standardization versus local flexibility, speed versus robustness, and time-to-market versus technical debt. That keeps architecture honest and gives teams cover when we choose “good enough now.”

    Change management is treated as a first-class system. Rollout, training, feature flags, migration waves, hypercare, and feedback loops are designed with the same rigor as APIs and data models. I expect resistance and design “escape hatches” like shadow reporting, dual-running, and opt-in pilots rather than assuming one big-bang cutover.

    Measurement is end-to-end. As an architect, I define the data we must capture; as a leader, I insist we actually use it in QBRs and team rituals around adoption, cycle time, defect rates, rework, and business KPIs. If a transformation doesn’t show up in those numbers, we treat it as unfinished regardless of how “done” the tech looks.

    As AI agents become more embedded in how enterprises manage customer relationships, what do you think the next few years will require from business leaders who want to use those capabilities without losing sight of the human side of customer engagement?

    Over the next few years, the leaders who do this well will set clear guardrails for AI in customer touchpoints. Decide where AI can act autonomously, where it only assists, and when a human must take over. Codify things like “right to a human,” escalation rules, and “no dark patterns” in journeys and SLAs, not just in policy slides.

    They’ll redefine what “good” customer engagement means. Balance efficiency metrics like deflection, AHT, and cost/contact with human metrics, including trust, sentiment, effort, repeat contacts, and quality of resolution. Treat relationship health as a first-class KPI alongside revenue and cost.

    Design for human-centered handoffs, not AI silos. Ensure the agent, whether human or AI, always has context: prior interactions, promises made, and emotional tone. Make handoffs feel like one continuous conversation, not a restart every time a bot hits its limit.

    Invest in frontline roles instead of hollowing them out. Move people up the value chain toward complex judgment, negotiation, empathy, recovery, and exception handling. Train them to supervise AI outputs, challenge recommendations, and feed back patterns and edge cases.

    Own data ethics and personalization boundaries. Be explicit about what data is used, how, and where the line is between “helpful” and “creepy.” Give customers real controls over personalization depth and channel preferences; use that as a design constraint.

    Build cross-functional AI governance, not shadow projects. Stand up small, durable squads with business, CX, legal, risk, data, and engineering that own AI use cases end-to-end. Require every AI initiative to answer: What problem, for whom, what human is accountable, and how will we know if this harms trust?

    Model the behavior they want the org to adopt. Use AI themselves for briefs, analysis, and scenario planning, but be transparent about where human judgment steps in. Celebrate teams that fix a bad AI outcome quickly and learn from it, not just teams that launch the flashiest new bot.