Challenges of GenAI in Finance


Summary

Generative AI (GenAI) is transforming the finance industry by automating processes, identifying trends, and unlocking new efficiencies. However, its adoption comes with challenges, including data biases, over-reliance on AI at the expense of human judgment, and security vulnerabilities that can lead to fraud or misuse.

  • Focus on data quality: Ensure that your AI models are trained on comprehensive and accurate historical data to avoid skewed predictions and poor decision-making.
  • Combine AI with human judgment: Use AI as a tool to enhance analysis, but involve human oversight for critical thinking and to address bias, uncertainty, or unique scenarios.
  • Strengthen security measures: Develop robust systems for fraud detection and prevention, including advanced monitoring of user behavior and securing data from unauthorized AI usage.
Summarized by AI based on LinkedIn member posts
  • John Glasgow

    CEO & CFO @ Campfire | Modern Accounting Software | Ex-Finance Leader @ Bill.com & Adobe | Sharing Finance & Accounting News, Strategies & Best Practices

    13,653 followers

    Harvard Business Review just found that executives using GenAI for stock forecasts made less accurate predictions. The study found that:

    • Executives consulting ChatGPT raised their stock price estimates by ~$5.
    • Those who discussed with peers lowered their estimates by ~$2.
    • Both groups were too optimistic overall, but the AI group performed worse.

    Why? Because GenAI encourages overconfidence. Executives trusted its confident tone and detail-rich analysis, even though it lacked real-time context or intuition. In contrast, peer discussions injected caution and a healthy fear of being wrong.

    AI is a powerful resource. It can process massive amounts of data in seconds, spot patterns we'd otherwise miss, and automate manual workflows, freeing up finance teams to focus on strategic work. I don't think the problem is AI; it's how we use it. As finance leaders, it's on us to ensure that we, and our teams, use it responsibly.

    When I was a finance leader, I always asked for the financial model alongside the board slides. It was important to dig in, review the work, and understand the key drivers and assumptions before sending the slides to the board. My advice is the same for finance leaders integrating AI into their day-to-day: lead with transparency and accountability.

    1/ AI should be a superpower, not an oracle. AI should help you organize your thoughts and analyze data, not replace your reasoning. Ask it why it predicts what it does, and how it might be wrong.

    2/ Combine AI insights with human discussion. AI is fast and thorough. Peers bring critical thinking, lived experience, and institutional knowledge. Use both to avoid blind spots.

    3/ Trust, but verify. Treat AI like a member of your team. Have it create a first draft, but always check its work, add your own conclusions, and never delegate final judgment.

    4/ Reverse roles: use it to check your work. Use AI for what it does best: challenging assumptions, spotting patterns, and stress-testing your own conclusions, not dictating them.

    We provide extensive AI within Campfire, for automations and reporting and in our conversational interface, Ember. But we believe that AI should amplify human judgment, not override it. That's why in everything we build, you can see the underlying data and logic behind AI outputs. Trust comes from transparency, and from knowing that final judgment always rests with you.

    How are you integrating AI into your finance workflows? Where has it helped, and where has it fallen short? Would love to hear in the comments 👇

  • Bryan Lapidus, FPAC

    Director, FP&A Practice at the Association for Financial Professionals (AFP)

    16,860 followers

    From Vineet Jain: Historically, forecasting, budgeting, and variance analysis have relied heavily on manual effort and historical data. With changing markets and growing complexity, the need for more agile, data-driven approaches has become paramount. AI can process information from diverse sources, identify hidden trends, and generate predictions beyond human capability, but only if you solve...

    COMMON CHALLENGES AND LIMITATIONS OF AI-DRIVEN FINANCIAL ANALYSIS

    ➡ Data Quality and Quantity: In financial analysis, data quality and quantity are critical, and AI models rely heavily on data for accurate predictions. Inaccurate or incomplete data leads to flawed outcomes, insights, and predictions. AI models also require a significant amount of historical data to train effectively, and assembling such a dataset can be a limitation for many businesses.

    ➡ Model Overfitting: Overfitting occurs when an AI model performs exceptionally well on the training data but fails to generalize to new, unseen data. This can happen when the model captures noise or anomalies in the training data, or when the new data is skewed relative to what the model has seen. Financial data often contains noise from extraordinary, time-specific transactions, and without careful regularization and validation, AI models can produce misleading results (see the validation sketch after this post).

    ➡ Volatility and Uncertainty: Financial markets are inherently volatile and subject to sudden shifts from black swan events, economic shocks, or geopolitical factors. AI models may struggle to predict extreme events or abrupt changes that fall outside the patterns in historical data.

    ➡ Bias and Interpretability: Biases in historical data can lead to biased predictions and financial forecasts. Many AI models, particularly deep learning algorithms, operate as "black boxes," meaning their decision-making process is complex and hard to understand. Understanding why a model made a particular prediction is crucial for risk assessment and regulatory compliance, and opaque, potentially biased models undermine confidence in the forecast.

    ➡ Human Expertise and Judgment: While AI can process vast amounts of data, human expertise and judgment remain invaluable. AI may not match the analytical capability humans bring to particular situations, and nuanced financial decisions and context-specific judgment calls can be a struggle for AI models.

    ...for the rest of Vineet's list, check out the full article here: https://lnkd.in/gUJVSmf3
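
To make the regularization-and-validation point concrete, here is a minimal sketch of walk-forward (time-series) validation for a financial forecast. It is illustrative only and not from Vineet's article; the synthetic data, the Ridge model, and the alpha value are all assumptions.

```python
# Hypothetical walk-forward validation for a finance forecast (illustrative data only).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 8))                        # e.g. 240 months of 8 driver metrics
y = 0.5 * X[:, 0] + rng.normal(scale=1.0, size=240)  # noisy target, e.g. revenue growth

model = Ridge(alpha=1.0)  # L2 regularization guards against fitting noise
train_err, test_err = [], []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model.fit(X[train_idx], y[train_idx])
    train_err.append(mean_absolute_error(y[train_idx], model.predict(X[train_idx])))
    test_err.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

# A large gap between in-sample and out-of-sample error is the overfitting signal.
print(f"train MAE {np.mean(train_err):.2f} vs holdout MAE {np.mean(test_err):.2f}")
```

A holdout error that is much worse than the in-sample error is the classic sign that the model has fit noise, such as one-off transactions, rather than signal.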

  • André F.

    Co-Founder, CEO, Incognia | Fraud prevention | Authentication | Identity | Computer science

    23,442 followers

    A recent TechCrunch article stuck out to me: "GenAI could make KYC effectively useless."

    This is something I've been vocal about: the rise of deepfakes and their implications for fraud prevention. Many companies, including financial institutions and marketplaces, rely on document scanning and facial recognition for identity verification. But here's the hard truth: creating fake documents is incredibly easy, and GenAI makes it even easier for fraudsters.

    The bigger concern? Facial recognition can be easily duped. Our faces, often publicly available on social media and various websites, can be used by fraudsters to create masks and bypass facial recognition software. Even liveness detection isn't foolproof anymore; GenAI has become sophisticated enough to bypass both facial recognition and liveness tests.

    Relying on public information for identity verification is no longer effective. Sure, it might check the compliance box 🤷🏻‍♂️ But it's not stopping fraud. The same goes for PII verification: with the sheer number of data breaches, much of this data is effectively public.

    Document verification, facial recognition, PII verification: all of these methods are vulnerable in the age of GenAI. This isn't just a temporary challenge; it's the future of fraud prevention. So if your company is using these traditional methods for KYC and IDV, it's time to rethink your strategy. At Incognia, we're ahead of the curve, developing solutions that address these evolving challenges.

  • Soups Ranjan

    Co-founder, CEO @ Sardine | Payments, Fraud, Compliance

    36,140 followers

    The adoption of Real-Time Payments will feel slow, then sudden, especially in B2B payments. $18.9 trillion is a conservative estimate for RTP volume.

    The ROI calculation for criminals has improved dramatically since the dawn of GenAI, and RTP compounds the problem. GenAI reduces the cost of creating convincing phishing emails, scams, and deepfakes, while the payoff for a B2B payment can run from the low six to the mid seven figures for a single transaction. We've already seen a spike in stolen business credentials from data leaks and hacks, leading to:

    👉 Sophisticated business email compromise: believable emails from what appears to be a company's tech support staff.
    👉 Remote access attacks: the "tech support team" taking over a screen and sending a transaction to the wrong recipient while "fixing the employee's computer."
    👉 Targeted deepfakes: finance ops teams are now directly attacked with fakes of internal staff, CFOs, and leadership.

    Our clients tell us they regularly see generated documents and deepfake attacks during their onboarding process. The volume has exploded in the past 12 months. GenAI plus faster payments makes B2B payments a critical potential vulnerability, one that gets ignored because it was once a sleepy backwater and not seen as high risk.

    That's why it's critical to:

    🐟 Watch device and behavior signals before, during, and after every single customer interaction. If you can monitor device and behavior, you can detect deepfakes and stop a transaction before it happens if the risk is high enough (a simplified sketch follows this post).
    🐟 Implement real-time transaction monitoring. If you only review transactions for fraud at cut-off windows and in batch, you'll be vulnerable to RTP fraud and AML schemes.
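
To illustrate what an inline (pre-clearing) check on a real-time payment might look like, here is a hypothetical sketch. It is not Sardine's product; the field names, weights, and thresholds are invented for illustration, and a production system would use far richer signals and a trained model.

```python
# Hypothetical inline risk check for a real-time B2B payment (illustrative only).
from dataclasses import dataclass

@dataclass
class PaymentContext:
    amount: float
    new_payee: bool            # first payment to this beneficiary
    remote_access_tool: bool   # device signal: screen-sharing software detected
    typing_anomaly: float      # 0-1 deviation from the user's usual behavior
    session_age_seconds: int   # how long the session existed before the payment

def score(ctx: PaymentContext) -> float:
    """Combine device, behavior, and payment signals into a 0-1 risk score."""
    risk = 0.0
    risk += 0.4 if ctx.remote_access_tool else 0.0
    risk += 0.3 * ctx.typing_anomaly
    risk += 0.2 if ctx.new_payee and ctx.amount > 50_000 else 0.0
    risk += 0.1 if ctx.session_age_seconds < 60 else 0.0
    return min(risk, 1.0)

def decide(ctx: PaymentContext) -> str:
    """Decide before funds move: block, step up, or allow."""
    s = score(ctx)
    if s >= 0.7:
        return "block"     # hold the payment and alert the fraud team
    if s >= 0.4:
        return "step_up"   # require out-of-band confirmation, e.g. a call-back
    return "allow"

print(decide(PaymentContext(250_000, True, True, 0.8, 45)))  # -> "block"
```

The point of the sketch is the timing: because the decision runs before the transfer clears, a risky payment can be held or stepped up, which a batch review after the cut-off window cannot do for irrevocable real-time rails.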

  • Suyesh Karki

    #girldad #tech-exec #blaugrana

    4,216 followers

    A major financial services firm recently faced regulatory fines after an analyst, unable to access approved AI tools, uploaded sensitive customer data to a personal GenAI account just to meet a tight deadline. The external platform stored the data overseas, violating company policy and privacy laws, and the result was both financial and reputational damage.

    This isn't an isolated incident. I read somewhere that 45.4% of sensitive exposures now happen through personal accounts, not out of carelessness but out of necessity. When corporate tools can't keep up, employees turn to Shadow AI, and data risks multiply.

    To address this, organizations must provide secure, approved GenAI tools, deploy real-time DLP and CASB controls for data monitoring, automate Shadow AI discovery and governance, and train employees on safe AI use (a toy outbound-screening sketch follows this post). Let's move beyond patchwork fixes and build unified, adaptive security that keeps pace with how people really work.
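
As a toy illustration of the kind of outbound check a DLP control can apply before a prompt leaves the network, here is a minimal sketch. It is an assumption for illustration, not a description of any specific DLP or CASB product, and the regex patterns are deliberately simplistic; real tools combine classification, exact-data matching, and policy engines.

```python
# Toy outbound screening of a GenAI prompt for sensitive identifiers (illustrative only).
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); block the request if customer data is detected."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (len(findings) == 0, findings)

allowed, findings = screen_prompt("Summarize account 123-45-6789 payment history")
if not allowed:
    print(f"Blocked: prompt contains {findings}; route to an approved internal tool")
```

Pairing a check like this with an approved internal GenAI tool gives employees a sanctioned path instead of simply blocking them, which is what pushes work into Shadow AI in the first place.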
