Charting the Course for GenAI: From Experimentation to Enterprise Adoption

By Mayank Baid

Generative AI (GenAI) was the technology story of 2023, creating a tidal wave of innovation that is reshaping how businesses and users harness technology to drive productivity. According to a report by McKinsey, one-quarter of C-suite executives are using next-gen AI tools, and one-quarter of AI-using companies have GenAI on their boards’ agendas. Additionally, 40% of organisations plan to increase AI investment owing to GenAI advancements. Major enterprises, spurred by early GenAI successes, are rethinking how they use technology to boost productivity. However, what we’ve seen so far is just the beginning. The true power of GenAI will only become clear once organisations take it out of the experimental stage and begin to use it more widely in production.

However, in order to ride the wave rather than get caught up in it, organisations must overcome some key challenges around cost and trust. Doing so will require a robust data roadmap that leverages the cloud.

Cost and trust are the biggest barriers

When it comes to GenAI, the old computing maxim of “garbage in, garbage out” applies: you can’t expect to generate useful results if the model is trained on untrustworthy data. Data governance and security are still at a nascent stage in many organisations, with crucial information often locked away in silos, making it effectively unusable without costly integration. In practice, this means that AI training data may be of poor quality and lack crucial business context, which can lead to irrelevant responses and compromised decision-making. Either way, it adds no value for the business.

Another pain point is the high cost of in-house GenAI projects. While outsourcing carries its own security, compliance and risk concerns, doing everything internally may not be cost-effective either: it means hiring new talent, or investing in training existing staff, to manage data effectively and close gaps in the system.

Taking GenAI from the lab to production

Cloud providers have the GPU resources to empower customers to scale their GenAI projects and pay only for what they use. This enables organisations to experiment with GenAI and turn off the model once they’ve finished tinkering, rather than having to provision GPUs in on-premises environments. That saves on expenses and provides the flexibility organisations need to take operations back in-house in the future if required.

Once organisations have decided to adopt the cloud, the focus must shift to getting GenAI projects out of the lab and delivering value in production environments. To internalise the process, they can adopt the BRIESO model: Build, Refine, Identify, Experiment, Scale and Optimise.

Build: First, create a modern data architecture and universal enterprise data mesh. Whether on-premises or in the cloud, this will enable the organisation to gain visibility and control of its data. It will also help establish a unified ontology for mapping, securing and achieving compliance across all data silos. Look for tools that not only meet current demand but have the scalability to accommodate future growth. Open source solutions often offer the greatest flexibility.
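
To make the idea of a unified ontology concrete, here is a minimal, purely illustrative sketch in Python; the dataset names, tags and helper function are hypothetical rather than any particular catalogue product’s API. The point is that every silo registers its data against the same classification terms, so a governance rule can be applied consistently wherever the data lives.

# Hypothetical sketch: a shared classification map applied across data silos.
from dataclasses import dataclass

@dataclass
class DatasetEntry:
    name: str            # logical dataset name
    location: str        # e.g. "on_prem_hadoop", "aws_s3", "azure_adls"
    classification: str  # shared ontology term, e.g. "pii", "financial", "public"
    owner: str           # accountable data owner

CATALOG = [
    DatasetEntry("customer_profiles", "on_prem_hadoop", "pii", "crm-team"),
    DatasetEntry("quarterly_revenue", "aws_s3", "financial", "finance-team"),
    DatasetEntry("product_docs", "azure_adls", "public", "docs-team"),
]

def datasets_requiring_masking(catalog):
    # One governance rule, applied the same way regardless of where data sits.
    return [d.name for d in catalog if d.classification == "pii"]

print(datasets_requiring_masking(CATALOG))  # ['customer_profiles']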

Refine: Next, it’s time to refine and optimise data according to existing business requirements. It’s important at this stage to anticipate future requirements as accurately as possible. This will reduce the chances of migrating unnecessary data, which adds no value but can significantly increase the cost of the project.

Identify: Spot opportunities to utilise the cloud for specific workloads. A workload analysis will help determine where the most value can be derived. It’s about connecting data across locations – whether on-premises or in multiple clouds – to optimise the project. Now is also a good time to consider potential use cases for development.

Experiment: Try pre-built, third-party GenAI platforms to find the one that best aligns with business requirements. There are plenty to choose from, including Amazon Bedrock, Azure OpenAI Service and Google Cloud’s Vertex AI. It’s important not to rush the decision. The chosen model must integrate closely with existing enterprise data for the project to stand any chance of success.
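
As an illustration of how lightweight this experimentation can be, the sketch below shows one possible way to trial a hosted foundation model through Amazon Bedrock’s runtime API using the boto3 SDK. The region, model ID, request schema and response fields are assumptions based on publicly documented examples and should be checked against the provider’s current documentation.

import json
import boto3

# Connect to the Bedrock runtime service (region below is an assumption).
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request payload in the Anthropic messages format used on Bedrock
# (schema and model ID are illustrative assumptions).
request_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {
            "role": "user",
            "content": "Summarise the key themes in last quarter's support tickets.",
        }
    ],
}

# Invoke the hosted model: you pay per call rather than for idle GPUs.
response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(request_body),
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])

Because the model is hosted, there is nothing to decommission once the comparison is done; the only cost is the calls themselves, which keeps side-by-side evaluation of providers relatively cheap.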

Scale and Optimise: Once a suitable platform is chosen, pick one or two use cases to scale into production. Continuously optimise the process, keeping an eye on GPU-related costs in case they start to spiral. As the organisation’s GenAI capabilities grow, look for further ways to optimise their use. A flexible AI platform is crucial to long-term success.

The future is here

Indian IT and business leaders are excited about the transformative potential of AI-powered applications, with AI expected to add nearly $500 billion to India’s GDP by 2025. From enhanced customer service to seamless supply chain management and supercharged DevOps, AI foundation models hold the potential to revolutionise business strategy within a few years.

However, before indulging in unfettered enthusiasm, there is substantial groundwork to cover. A modern data architecture must be the starting point for any successful AI project. Then it’s time to refine, identify, experiment, scale and optimise. The future awaits.

(The author is Mayank Baid, Regional Vice President – India, Cloudera, and the views expressed in this article are his own.)