
Navigating AI Adoption: How Fear is Holding Companies Back from Unlocking Generative AI’s Full Potential

Anupam Kundu

Image generated by Dall-E 2024

One year ago, I met Amanda, a mid-level manager at a leading financial services company in Paris. She told me that earlier that day her boss had pulled her into an urgent meeting.


The company's leadership had just made a major decision: ChatGPT and all other generative AI tools were banned due to concerns over confidentiality and data privacy. While Amanda understood the fear of exposing sensitive customer data to external systems, she couldn't help but feel frustrated. Her small team had been experimenting with these tools to automate mundane tasks and improve decision-making speed, and it was working. But now it felt like the plug had been pulled on something that could genuinely transform their workflow.



Despite the ban, Amanda’s team continued experimenting with generative AI—under the radar. They had already seen its potential to help sift through customer data faster than ever and identify patterns that could be useful for future strategy sessions. However, the secrecy and uncertainty created a cloud of fear. 


Employees whispered about AI potentially replacing their jobs, and Amanda struggled with one looming question: how could they provide the necessary metrics to justify these initiatives, while complying with stringent EU regulations on data privacy and confidentiality?




Fear and Apprehension vs. Curiosity and Innovation


Amanda’s story is not unique. In organizations across industries, we are seeing a pattern. 

Instead of embracing AI as a tool for innovation and growth, companies are trapped in a state of limbo. Leadership is aware of AI’s inevitability, but fear holds them back. Meanwhile, employees like Amanda are left wondering whether their efforts will be acknowledged—or if they’ll be seen as threats to their own job security.


The rise of powerful tools like ChatGPT and other generative AI models has generated excitement about what AI can do, but for many organizations, particularly in the EU, this excitement has been tempered by caution. Strict regulations such as the General Data Protection Regulation (GDPR) and the growing movement toward AI sovereignty have left companies unsure about how to proceed.


While many small teams are experimenting with AI tools within large companies, the fear of non-compliance with data protection laws or inadvertently sharing confidential information externally leads to a fragmented approach. Leadership often opts for outright bans to mitigate risk, leaving their organizations in a state of paralysis. The result? A divided organization where fear takes precedence over the curiosity and collaboration that drive real progress.


In contrast, a forward-thinking approach would be to adopt “guardrails” that ensure AI tools are used responsibly while maintaining compliance with EU laws. These guardrails include:


  1. Data Minimization: Ensuring that AI tools only access the minimal amount of data necessary to perform tasks, reducing the risk of exposure (a brief sketch of this guardrail follows the list).

  2. AI Sovereignty: Exploring Europe-based AI solutions that comply with local and regional data laws, ensuring data doesn’t leave EU borders.

  3. Privacy by Design: Implementing AI systems that are built with GDPR compliance in mind, ensuring that customer and employee data remains secure throughout the AI process.
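To make the first guardrail concrete, here is a minimal sketch, assuming a Python stack, of what data minimization can look like before a prompt is sent to any external generative AI service. The regular-expression patterns and the call_generative_model placeholder are illustrative assumptions, not a prescribed implementation; a production system would pair this with a proper PII-detection service and a DPIA-approved data allowlist.

```python
import re

# Rough patterns for common PII; real deployments would rely on a dedicated
# PII-detection service rather than hand-rolled regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def minimize(text: str) -> str:
    """Replace detected PII with neutral placeholders before external use."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def call_generative_model(prompt: str) -> str:
    # Placeholder for whichever approved (ideally EU-hosted) model endpoint
    # the organization decides to use.
    raise NotImplementedError

if __name__ == "__main__":
    raw = ("Summarise the complaint from jane.doe@example.com, "
           "IBAN FR7630006000011234567890189.")
    print(minimize(raw))
    # -> Summarise the complaint from [EMAIL], IBAN [IBAN].
```

The point is not the specific patterns but the ordering: minimization happens on the company's side, before any data crosses an organizational or EU boundary.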


Instead of approaching AI adoption with fear, companies need to shift toward responsible experimentation, with an emphasis on privacy, transparency, and control.



The Role of Metrics: What Do You Measure When You Don’t Know the Destination?


In the midst of all this, leadership teams demand one thing: metrics. Amanda told me about several occasions when her boss asked her for a report on the success of her team's AI pilot project.


The problem? No one had defined success in the first place. How could she provide metrics for a tool they weren’t even supposed to be using?


This scenario plays out in countless organizations. Mid-level managers are asked to justify AI experiments with hard data, but the reality is that traditional metrics don’t apply here. Unlike more established business processes, AI adoption doesn’t come with a clear roadmap or predefined KPIs. This can lead to frustration for managers like Amanda, who see the potential of these tools but are unable to prove it in ways their superiors understand. 


The speed of learning is perhaps the most important thing to measure. In the EU context, enterprise companies need to rethink how they measure AI success, placing the emphasis on learning, compliance, and early-stage experimentation. Rather than focusing solely on outputs like cost savings or efficiency gains, companies should explore metrics that capture learning and experimentation (a minimal tracking sketch follows the list), such as:


  1. Experimentation Volume: How many AI-driven pilot projects are being initiated across the organization?

  2. Employee Engagement: Are employees curious and eager to learn about AI, or is fear stifling experimentation?

  3. Use Case Identification: How many new workflows or processes are being identified where AI can bring meaningful value?

  4. Skills Development: Are employees upskilling in AI? Are there internal programs to help staff integrate AI into their work?

  5. GDPR Compliance: Are the AI tools and processes compliant with GDPR regulations? Are data protection impact assessments (DPIAs) being conducted to evaluate privacy risks?

  6. Data Privacy Safeguards: How effectively are data minimization strategies being applied? Are anonymization and encryption protocols being used to protect sensitive data?

  7. AI Sovereignty: Is the company leveraging AI solutions that ensure data sovereignty within EU borders? Are local AI vendors or cloud providers being considered to mitigate cross-border data risks?
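To make these exploratory metrics less abstract, the sketch below shows one way a team could record its pilots and derive a couple of the measures above (experimentation volume and DPIA coverage). This is an assumption about how such tracking might look in Python; the field names and example pilots are hypothetical, not a prescribed schema.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIPilot:
    """One generative AI experiment, tracked for learning rather than ROI."""
    name: str
    started: date
    use_cases_identified: int = 0      # metric 3
    employees_trained: int = 0         # metric 4
    dpia_completed: bool = False       # metric 5 (GDPR compliance)
    data_stays_in_eu: bool = False     # metric 7 (AI sovereignty)
    lessons: list[str] = field(default_factory=list)

def experimentation_volume(pilots: list[AIPilot]) -> int:
    """Metric 1: how many pilots are actually running."""
    return len(pilots)

def dpia_coverage(pilots: list[AIPilot]) -> float:
    """Share of pilots with a completed data protection impact assessment."""
    return sum(p.dpia_completed for p in pilots) / len(pilots) if pilots else 0.0

if __name__ == "__main__":
    pilots = [
        AIPilot("claims-summarisation", date(2024, 3, 1),
                use_cases_identified=3, dpia_completed=True, data_stays_in_eu=True),
        AIPilot("meeting-notes", date(2024, 5, 15), employees_trained=8),
    ]
    print(experimentation_volume(pilots), f"{dpia_coverage(pilots):.0%}")
    # -> 2 50%
```

Even a simple record like this shifts the conversation from "what did the pilot save?" to "what did we learn, and did we learn it safely?"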


By shifting the focus to these exploratory metrics, companies can start gaining a clearer picture of their AI journey without forcing teams to report on traditional KPIs too early in the process.


Note: I have written before about early-stage progress measurement here and about introducing AI in large enterprises here.


Championing AI Inside Your Organization

For Amanda, the biggest source of anxiety wasn’t AI itself—it was the fear it generated within her team. While AI had proven its ability to save time and streamline processes, her team worried about what this meant for their job security. They had seen headlines about AI displacing workers, and that fear was palpable.


This fear is echoed in enterprises everywhere. But the truth is that AI, especially generative AI, doesn’t have to be a threat to jobs. Instead of replacing human workers, it can take over repetitive tasks, freeing employees to focus on more meaningful work—tasks that require human creativity, critical thinking, and emotional intelligence.

Leadership teams need to help shift this narrative.


The role of leadership in AI adoption cannot be overstated. Leaders need to be the bridge between cautious optimism and responsible AI experimentation. In Amanda’s case, her team’s productivity gains were significant, but they lacked the guidance and confidence to proceed safely within the company’s guardrails.


Leaders should aim to create a vision that encourages experimentation while respecting data privacy and compliance requirements. By ensuring a transparent process for AI experimentation, offering employees the necessary training, and focusing on data sovereignty, leaders can steer their organizations toward growth while addressing employee concerns.


Successful companies will be those that not only define what AI success looks like but also make room for learning and failure as essential components of progress.


What do you think? How are you adopting AI / Gen AI in your enterprise?


