Generative AI (GenAI) presents a transformative, once-in-a-generation opportunity for enterprises to redefine how they operate, innovate, and compete. Its applications span three core areas: driving internal productivity gains, enhancing customer experiences through AI integration with Software as a Service (SaaS) products, and enabling advanced, domain-specific AI agents tailored to vertical industries.
According to the Applied Generative AI Market Report, enterprises have moved only 15% of GenAI initiatives into full production. Many enterprises, especially those in highly regulated industries, still struggle to adopt a strategic approach that unlocks the technology's full potential. Unclear ROI pathways, regulatory complexities, and operational risks often stall decision-makers on the journey from experimentation to enterprise-wide implementation.

Fig 1: GenAI adoption maturity curve vs. Value delivered
In this blog post, we will explore the key barriers that enterprises encounter when rolling out GenAI initiatives. Grounded in invaluable lessons from our customer conversations, we’ll deliver actionable insights that empower leaders to tackle challenges head-on and craft a roadmap for sustainable success with GenAI.
Key Barriers
- Doubting Return on Investment: While most tech teams in large enterprises have run some form of GenAI proof of concept (PoC), convincing CIOs and CTOs to move these initiatives into production remains a significant challenge. The substantial cost of AI infrastructure, scarcity of AI talent, and labor-intensive data preparation often yield benefits that fail to justify the investment, making ROI a critical business consideration for GenAI initiatives. The primary hurdle is quantifying that ROI, which is often difficult to articulate. Meanwhile, business stakeholders tend to prioritize maintaining existing operations over investing in new technologies. As a result, these initiatives are frequently relegated to 'nice-to-have' status and fail to gain executive buy-in unless the true ROI can be clearly demonstrated.
- Build vs. Buy: This is another significant challenge that often causes projects to stall indefinitely, as organizations struggle to make a decisive choice between purchasing an existing GenAI solution or building one from the ground up. The lack of clarity on factors such as cost, scalability, time to market, and alignment with business needs further complicates the decision-making process. This indecision not only delays progress but also risks losing competitive advantage in rapidly evolving markets where agility is key.
- Lack of Mindshare: In culturally risk-averse firms, employees fear being replaced by AI. This internal resistance often leads to missed opportunities to use AI for innovation and efficiency. Without a clear understanding of how AI can augment rather than replace human roles, organizations struggle to build trust and enthusiasm among employees, further stalling progress.
- Lack of governance (Operating model): Although every business unit is eager to incorporate GenAI in various ways, as outlined in the maturity curve in Figure 1, technology stakeholders within individual business divisions often seek autonomy in implementation. However, central tech and InfoSec teams frequently impose restrictions with a one-size-fits-all approach, creating roadblocks. The absence of a clear, organization-wide governance framework further exacerbates the issue, resulting in many projects stalling after the proof-of-concept (PoC) stage.
- Lack of Quality AI-ready data: Historically siloed organizations often fail to fully capitalize on high-quality AI results. For instance, the inability to integrate unstructured data, such as reports and product documentation, with structured reference data stored in separate systems creates significant challenges. This disjointed approach can cause even a simple Retrieval-Augmented Generation (RAG) chatbot to fail to contextualize the semantic meaning of a specific business line's operations. In regulated industries and jurisdictions with strict data privacy laws (such as GDPR), curating AI-ready data is even more complicated. Using sensitive datasets, like clinical or patient information, to fine-tune models requires rigorous adherence to PHI and HIPAA compliance, making the process both more complex and higher-stakes.
- Model parameter size as the silver bullet: In many enterprise discussions, teams often fixate on choosing the largest model as a guaranteed solution to their challenges. Questions like 'Should we use Llama 3 8B, Llama 3 405B, or wait for Llama 4 2T?' are not uncommon, reflecting a mindset that equates larger model sizes with better results. However, this approach overlooks critical factors such as task alignment, computational efficiency, and cost-effectiveness. Without a nuanced understanding of when and why larger models are necessary, enterprises risk overinvesting in scale while neglecting the importance of optimizing for their actual needs.
- Model selection (Analysis paralysis): With newer models getting released every other week, enterprises are speculating endlessly about which model to adopt. The initial analysis phase often turns into paralysis. Industry benchmarks and competitive pressure amplify this 'fear of missing out' (FOMO), pushing organizations to chase trends rather than focus on their specific needs. At the same time, competitors move forward with smaller, purpose-built models tailored to their immediate goals. This cycle of indecision not only delays innovation but also risks wasting resources on models that may not align with business priorities.
- Sensitive Data leak and Shadow AI: Last but certainly not least is the fear of sensitive data leakage. GenAI tools are becoming increasingly accessible, and with the rise of Shadow AI, leaders worry about employees unintentionally exposing sensitive data (e.g., PHI, financial records). Some enterprises respond by blocking access to AI tools altogether. Such leaks expose organizations to regulatory penalties (GDPR, HIPAA), reputational damage, and intellectual property theft.
A Strategic Approach
With the right strategies in place, enterprises can transform these challenges into opportunities for growth and innovation and unlock GenAI's full potential.
Identifying and Prioritizing High-Value Use Cases
The first step in achieving success with GenAI is to identify and prioritize high-value use cases that can drive measurable impact. This involves quantifying the potential outcomes and selecting use cases that can deliver tangible results. To do this, consider the following:
- What are the key business challenges that GenAI can help address?
- What are the metrics that will be used to measure success?
- What is the potential return on investment (ROI) for each use case?
- What are the risks related to compliance, security, reputation and other legal implications?
- What are the technical challenges?
As a strawman example, consider two use cases in a financial advisor's workflow:
Use case 1: Automated Financial Reporting:
- Net Profit: Savings from reduced labor costs and error correction.
- Investment Cost: Cost of implementing and maintaining the AI system.
- ROI: (Savings − Investment Cost) / Investment Cost × 100
Use case 2: Personalized Investment Recommendations:
- Net Profit: Increased revenue from higher client returns and improved retention.
- Investment Cost: Cost of implementing and maintaining the AI system.
- ROI: (Increased Revenue − Investment Cost) / Investment Cost × 100
| Use Case | Business Impact | Complexity | ROI |
| --- | --- | --- | --- |
| Automated Financial Reporting | Medium | Low | 150% |
| Personalized Investment Recommendations | High | Medium | 200% |
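The ROI formula above can be sketched as a quick back-of-the-envelope calculation. The dollar figures below are hypothetical placeholders chosen only to reproduce the percentages in the table; real inputs must come from your own cost and benefit analysis.

```python
# Hypothetical figures for illustration only -- actual savings, revenue,
# and investment costs must come from your own business analysis.

def roi_percent(net_benefit: float, investment_cost: float) -> float:
    """ROI = (Net Benefit - Investment Cost) / Investment Cost x 100."""
    return (net_benefit - investment_cost) / investment_cost * 100

# Use case 1: Automated Financial Reporting
# net benefit = savings from reduced labor costs and error correction
reporting_roi = roi_percent(net_benefit=250_000, investment_cost=100_000)

# Use case 2: Personalized Investment Recommendations
# net benefit = increased revenue from higher client returns and retention
recommendations_roi = roi_percent(net_benefit=300_000, investment_cost=100_000)

print(f"Automated Financial Reporting ROI: {reporting_roi:.0f}%")                  # 150%
print(f"Personalized Investment Recommendations ROI: {recommendations_roi:.0f}%")  # 200%
```

Even a simple model like this forces the conversation onto measurable inputs, which is exactly what executive buy-in requires.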
By prioritizing high-value use cases, business teams can ensure that their AI initiatives are not only successful but also strategically beneficial, driving growth and innovation in the long run. Here is an example highlighted by The Wall Street Journal: Johnson & Johnson's revised AI strategy underscores the importance of prioritizing high-value business cases that deliver measurable ROI, rather than investing in broad, unproven projects.
Securing Stakeholder Buy-In
Building GenAI solutions without stakeholder buy-in can lead to wasted resources and failed projects. It's essential to engage with stakeholders and ensure that everyone is aligned with the project's goals and objectives. This includes:
- Communicating the value proposition of GenAI to business stakeholders
- Addressing concerns and misconceptions about GenAI
- Ensuring that stakeholders are involved in the decision-making process
By securing stakeholder buy-in early on, technology teams can ensure that the project has the necessary support and resources to succeed.
Building a Cross-Functional Team
Assembling a team with the right mix of skills and expertise is critical to the success of the GenAI project. This includes:
- Business subject matter experts (SMEs) who understand the business domain deeply and can translate it into requirements for technical stakeholders
- Product Owner who manages the entire lifecycle of AI products, including overseeing technical design and deployment, defining and tracking KPIs, optimizing performance, and ensuring successful delivery and execution
- Data Engineers who can develop and maintain the data pipelines. Ingesting governed and high-quality data is equally, if not more important than the model itself, especially for fine-tuning use cases and evaluating the responses generated by the model.
- Machine Learning engineers proficient in developing, fine-tuning, and deploying GenAI applications.
- UX Designers who design seamless interfaces, enable human-AI collaboration, ensure explainability, and address ethical considerations to create engaging and trustworthy experiences
- Compliance and security experts to protect sensitive data, secure infrastructure, and monitor for threats, enabling safe and compliant GenAI adoption that aligns with regulatory requirements, mitigates risks like data privacy violations, and establishes governance frameworks for ethical AI use.
Adopting a Crawl-Walk-Run Approach
Implementing GenAI solutions can be complex and time-consuming. To mitigate this risk, it's essential to adopt a crawl-walk-run approach, which involves:
- Setting clear goals and milestones to guide the development
- Developing a proof of concept (PoC) to test the feasibility of the solution
- Utilizing methods such as Prompt Engineering, RAG, or a simple tool calling agent
- Understanding the technology capabilities and building up the team’s skillset
- Building a Minimum Viable Product (MVP) to validate the solution with stakeholders
- Trying various Frontier models, such as Anthropic Claude 3.7 Sonnet, to determine the viability of the product
- Working closely with SMEs to evaluate the outcomes by a human
- Releasing a beta version to gain early signals and refine the solution based on user feedback
- If application quality falls short or model costs grow as you scale:
  - Consider fine-tuning your own domain-specific model. Start with a small model (7B or 8B parameters) and work your way up as needed.
  - Evaluate multiple models to determine the right fit and keep model-serving costs low
- Developing a Go-To-Market (GTM) strategy to deploy to a wider user base at scale and more advanced GenAI applications
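As a concrete illustration of the model-evaluation steps above, here is a minimal offline evaluation harness for comparing candidate models on an SME-curated golden set. The model functions, dataset, and keyword-based scoring are all hypothetical stand-ins: in practice you would call your model-serving endpoints and score with SME review or an LLM-as-judge framework rather than keyword matching.

```python
# A minimal sketch of an offline evaluation harness (assumptions: toy golden
# set, naive keyword scoring, and stub model functions standing in for real
# model-serving calls).
from typing import Callable

# Golden dataset curated with business SMEs: (prompt, required keywords)
GOLDEN_SET = [
    ("Summarize Q3 revenue drivers", {"revenue", "q3"}),
    ("List top portfolio risks", {"risk"}),
]

def score(answer: str, required: set[str]) -> float:
    """Fraction of required keywords present in the answer (case-insensitive)."""
    text = answer.lower()
    hits = sum(1 for kw in required if kw in text)
    return hits / len(required)

def evaluate(model: Callable[[str], str]) -> float:
    """Average keyword-coverage score of a model over the golden set."""
    scores = [score(model(prompt), required) for prompt, required in GOLDEN_SET]
    return sum(scores) / len(scores)

# Hypothetical candidates: a small fine-tuned model vs. a larger general model
def small_model(prompt: str) -> str:
    return "Q3 revenue grew on fee income; the key risk is rate exposure."

def large_model(prompt: str) -> str:
    return "Revenue in Q3 was driven by fees; top risks include rates."

for name, model in [("small-8b", small_model), ("large-generalist", large_model)]:
    print(f"{name}: {evaluate(model):.2f}")
```

Running the same golden set against each candidate makes the "right-size the model" decision an empirical one instead of a debate about parameter counts.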

Fig 2: Crawl-Walk-Run approach
Choosing the right platform to serve a company's evolving goals is the foundation for success in AI. The platform should let enterprises progress from simple use cases to more complex ones as adoption matures. Essentially, enterprises need a flexible platform that supports:
- Having both structured and unstructured data in one place for unified data engineering, data science, and machine learning workflows
- Applying security and governance, such as access controls, to data, models, and functions so that only authorized users and applications can access the right assets in cross-domain collaboration
- Evaluating agentic systems using both online and offline evaluation frameworks to improve quality over time
- Offering multiple GenAI patterns: Prompt Engineering, RAG, Finetuning, and Pre-Training
- Options for best-in-class open and proprietary models with enterprise-grade security guarantees, plus an AI Gateway to manage all models in a single place with the right AI guardrails
- Explainability with End-to-end Lineage of all data assets and models
- Future scaling, from a simple Retrieval-Augmented Generation (RAG) app to building a domain-specific model from the ground up with domain-context data, while lowering the evaluation and procurement cost of multiple point solutions
By partnering with the right Data & AI platform provider, such as Databricks, whose unified Data Intelligence Platform supports most, if not all, of the features listed above, enterprises can unlock GenAI's full potential and achieve greater efficiency and productivity while reducing the number of point solutions.
Establishing a Governance Model
As AI solutions become increasingly adopted, it's essential to establish a governance framework that ensures accountability and transparency. This involves:
- Establishing a GenAI Center of Excellence (CoE) to provide guidance and support to other projects within the organization by enforcing bare minimum enterprise controls while giving autonomy of execution to business divisions
- Implementing a comprehensive security framework, such as those outlined in Databricks AI Security Framework (DASF), to mitigate risks in AI systems
By establishing a governance model similar to the one below, enterprises can improve collaboration across business, IT, data, AI, and security teams and achieve faster time to market without compromising enterprise security controls for AI applications.

Fig 3: Enterprise AI Governance Model
Secure and Safe use of AI
Securing AI applications, ensuring safe and ethical use of AI, and explainability are of paramount importance for highly regulated industries. This can be achieved through zero-trust principles, including:
- Enforcing a least-privilege model through fine-grained access control to data, models and all assets related to AI applications
- Securing access through MFA and conditional access, including location-based awareness control
- Capturing an end-to-end audit trail and logging the payload to models for effective monitoring and alerting on anomalous access patterns
- Encrypting data at rest, in transit, and in use with customer-managed encryption keys and advanced encryption using stronger cipher suites
- Centralized governance for automated data classification and custom guardrails, like content filtration and PII detection in prompts and responses
- Applying rate limits for protecting against abuse or Distributed Denial of Service (DDoS) attacks
- Advanced data exfiltration prevention using end-to-end private link connectivity, IP ACLs, and a secure egress gateway
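To make the custom-guardrails idea concrete, here is a minimal sketch of a regex-based PII screen applied to prompts before they reach a model. The pattern names, regexes, and function names are illustrative assumptions, not a production guardrail; managed classification and guardrail services are far more robust than hand-rolled patterns.

```python
# A minimal sketch of a prompt-level PII guardrail (assumption: simple regex
# patterns for SSNs, emails, and card numbers -- illustrative only).
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def detect_pii(text: str) -> list[str]:
    """Return the names of PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def guarded_prompt(text: str) -> str:
    """Block prompts containing PII; otherwise pass them through unchanged."""
    findings = detect_pii(text)
    if findings:
        raise ValueError(f"Prompt blocked: possible PII detected ({', '.join(findings)})")
    return text

print(detect_pii("Contact john@example.com about SSN 123-45-6789"))
# -> ['ssn', 'email']
```

A screen like this sits naturally at an AI Gateway, where it can be enforced uniformly across every model behind it.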
In addition, educating employees on the responsible use of AI is as important as protecting the AI Infrastructure.
Controlling Costs and Ensuring Reusability
Finally, it's essential to control costs and ensure reusability so that business teams can achieve greater ROI and scalability with their AI solutions. This involves:
- Implementing cost-control measures to keep development and deployment within budget. Databricks system tables allow tech teams to easily monitor costs and prevent overruns.
- Developing domain-specific fine-tuned models and an AI-ready dataset that can be democratized across the organization, reducing duplicate efforts. Databricks Data Intelligence Platform offers an end-to-end model lifecycle management for collaboration across teams and workspaces via access controls.
- Establishing a framework for reusing AI components and solutions to reduce duplication and improve efficiency
In summary, a strategic approach to GenAI starts with identifying high-value use cases that deliver measurable business impact and ROI. Securing stakeholder buy-in, assembling a cross-functional team, and adopting a phased "crawl-walk-run" methodology are essential for successful implementation. Educating employees on ethical considerations and responsible use of AI, including understanding bias, data privacy, legal implications, and compliance with organizational AI policies, is as important as securing the AI platform. Choosing a unified, secure platform like Databricks enables robust data management, governance, and responsible, scalable AI development. By establishing strong governance, security, and cost controls, enterprises can maximize efficiency and unlock the full potential of GenAI for growth and innovation.