Capturing the Full Value of Generative AI in the Enterprise

With today’s large language models (LLMs), a natural language like English or Mandarin is now, in effect, a programming language. The prompts we give these models are essentially the code they use to compute an answer. It’s as close as we’ve ever come to a true democratization of programming.
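
To make that idea concrete, here is a minimal sketch of “programming in English”: a plain-language instruction sent to a model through the OpenAI Python client. The prompt, model name, and placeholder API key are illustrative assumptions, not part of the original argument.

```python
# A plain-English instruction is the "program"; the model interprets it.
# Minimal sketch using the OpenAI Python client (pip install openai).
# The API key, model name, and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder key

prompt = (
    "You are a billing assistant. Total the line items below "
    "and flag any charge over $500.\n"
    "Items: hosting $120, consulting $850, licenses $300"
)

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

No compiler, no syntax: the English instruction itself does the work a small program once did.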

Put it all together and it’s clear we are in the midst of a once-in-a-generation breakthrough, opening up opportunities to transform major business functions like software development, customer support, sales and marketing. As this next wave of AI innovation accelerates, it will have a profound impact throughout the global economy. With generative AI we have an opportunity to reinvent education by addressing variability in learning [1], assist doctors in providing clinical diagnoses [2], help consumers with their investment decisions [3], and so much more. While these are just a few examples, consider this projection: a recent McKinsey report suggests generative AI could add up to $7.9 trillion in global economic value annually [4].

Three Critical Challenges We Must Address

As often happens in the early stages of such a large-scale breakthrough, we’re confronting some critical barriers to broader adoption. To capture the full value and potential of generative AI in the enterprise, there are three core challenges we must collectively address.

From Expensive to Affordable

Training and managing today’s generative AI models is complex and costly. It requires massive amounts of specialized compute power, high-speed networking, and memory. There is effectively a 1:1 relationship right now between AI-model performance and compute infrastructure, a dynamic that’s neither scalable nor sustainable. Andreessen Horowitz recently described training a model like ChatGPT as “one of the more computationally intensive tasks humankind has undertaken so far” [5]. The price tag for a single training run now ranges anywhere from $500,000 to $4.6 million [6], and training will remain an ongoing expense as models are updated.
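
For a sense of where numbers like these come from, here is a rough back-of-envelope estimate using the widely cited approximation of ~6 × parameters × tokens FLOPs for one training pass. Every input below (model size, token count, GPU throughput, hourly price) is an illustrative assumption, not a figure from the sources cited above.

```python
# Back-of-envelope training cost via the common ~6 * params * tokens
# FLOPs approximation. All numbers are illustrative assumptions.
params = 70e9                 # model size: 70B parameters (assumed)
tokens = 1.4e12               # training data: 1.4T tokens (assumed)
total_flops = 6 * params * tokens

sustained_flops = 300e12      # per-GPU sustained FLOP/s (assumed)
dollars_per_gpu_hour = 2.50   # cloud GPU price (assumed)

gpu_hours = total_flops / sustained_flops / 3600
print(f"GPU-hours: {gpu_hours:,.0f}")                               # ~544,000
print(f"Estimated cost: ${gpu_hours * dollars_per_gpu_hour:,.0f}")  # ~$1.4M
```

Under these assumptions a single run lands squarely inside the $500,000 to $4.6 million range quoted above – before any reruns, experiments, or model updates.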

Looking at these eye-popping costs, many have jumped to the conclusion that our world is going to be limited to a very small number of “mega LLMs” like ChatGPT.

But there’s another path forward. I see a future where the typical enterprise is empowered to build and run their own customized AI models at an affordable price. It comes down to flexibility and choice: most CIOs I talk to plan to use mega LLMs for a variety of use cases, but they also want to build numerous, smaller AI models that they can optimize for specific tasks. These models are often based on open-source software. Indeed, the sheer volume of innovation in open-source AI models right now is astonishing. It’s not a stretch to predict that many businesses will embrace these open models as their go-to option for many use cases, with less reliance on the massive, proprietary LLMs that dominate today. 

These open, purpose-built models will leverage an organization’s domain-specific data – their unique intellectual property. We have an opportunity to run these more compact AI systems cost-efficiently on dedicated infrastructure, including cheaper GPUs (graphics processing units) and, someday, low-cost CPUs modified to deliver the performance and throughput that AI workloads require. By driving down costs and building solutions that offer flexibility and choice, we can make AI innovation accessible to mainstream enterprises.
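
As one hedged illustration of what tuning a smaller, purpose-built open model on domain data can look like, the sketch below uses the open-source Hugging Face transformers and peft libraries to attach LoRA adapters to a small open model, so fine-tuning touches only a tiny fraction of the weights. The base model and hyperparameters are placeholders, not recommendations.

```python
# Sketch: parameter-efficient fine-tuning of a small open model with LoRA,
# via Hugging Face transformers + peft (pip install transformers peft).
# Base model and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"  # any small open causal LM works here
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all weights,
# which is what keeps domain-specific tuning affordable.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights
```

The adapters can then be trained on the organization’s own documents with a standard training loop while the base model stays frozen.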

From Specialized “AI Wizardry” to Democratized AI Expertise

Today, the talent needed to build, fine-tune and run AI models is highly specialized and in short supply. This comes up in nearly every conversation I have with CEOs and CIOs, who consistently rank it among their top challenges. They’re acutely aware that the open-source AI space is moving fast. They want the ability to quickly and easily pivot to new innovations as they emerge, without being locked into any single platform or vendor. That level of adaptability is difficult to achieve when only a relatively small percentage of tech professionals fully understands the “wizardry” behind today’s AI models.

To address this skills gap, we need to radically simplify both the process and the tools we use to build and train AI models. This is where reference architectures come into play, providing a blueprint and an actionable pathway for the majority of organizations that lack in-house expertise to build AI solutions from scratch.
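
Tooling is already moving in this direction. As a small, hedged illustration of how far simplification can go, the sketch below uses the open-source Hugging Face pipeline API to stand up a working text generator in a few lines; the model name is an illustrative placeholder.

```python
# Sketch: simplified tooling hides most of the "wizardry".
# Uses the Hugging Face transformers pipeline API
# (pip install transformers); the model name is a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("Our refund policy allows customers to", max_new_tokens=40)
print(result[0]["generated_text"])
```

Reference architectures aim to do the same thing one level up: package the infrastructure, model, and data-pipeline choices into a known-good design that teams without deep AI expertise can start from.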

From Risk to Trust

Lastly, and perhaps most importantly, we need to move from risk to trust. Current AI models create significant risks, including privacy issues, legal and regulatory threats, and intellectual-property leakage. These risks have the potential to damage a company’s reputation, harm customers and employees, and negatively impact revenue. Many organizations have established policies restricting employee use of generative AI tools after employees accidentally leaked sensitive, internal data into tools such as ChatGPT. At the same time, today’s generative AI systems suffer from a fundamental lack of trust because they frequently “hallucinate” – creating new content that is nonsensical, irrelevant, or inaccurate.
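
One common, concrete mitigation for the leakage risk is to scrub prompts before they leave the company network. The sketch below is a deliberately simple, hypothetical guardrail – the patterns and the redact() helper are illustrative only, and a real deployment would rely on dedicated data-loss-prevention tooling.

```python
import re

# Hypothetical guardrail: redact obvious sensitive patterns from a prompt
# before it is sent to an external generative AI service.
# The patterns here are illustrative, not a complete DLP policy.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
}

def redact(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Refund card 4111 1111 1111 1111 for jane.doe@example.com"))
# -> Refund card [CARD REDACTED] for [EMAIL REDACTED]
```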

As an industry, we need to develop a robust set of ethical principles to ensure and reinforce fairness, privacy, accountability, respect for the intellectual property of others, and transparency about training data. A large and growing ecosystem of organizations is on a quest to address the core issues of AI explainability [7], data integrity [8], and data privacy [9]. The open-source community is innovating at the center of this movement, working to help businesses train and deploy their AI models in a safe and controlled manner.

The Next Wave of Tech Innovation

Just as the mobile-app revolution transformed business and our relationship with technology over the past 15 years, a new wave of AI-enabled apps is poised to dramatically increase worker productivity and accelerate economic development globally. We’re in the early stages of a new innovation supercycle. Our collective challenge is to make this powerful and nascent technology more affordable, more accessible and more trustworthy. 

In my conversations with AI decision-makers around the world, there is general agreement that we need to strike a strategic balance: We must move cautiously where there are unknowns, especially regarding confidentiality, privacy, and misuse of proprietary information. At the same time, it’s critical that we equip enterprises to rapidly embrace new AI models – so they can participate in this next wave of innovation in a responsible, ethical way.

At VMware, our teams are working hard to address these challenges. We will discuss these innovations and much more at VMware Explore 2023 in Las Vegas next month. I look forward to continuing the conversation with all of you there!
