Introducing generative AI into your organization is a multi-step process that, if implemented correctly, can have a significant impact on efficiency and the bottom line. Monica Livingston leads the AI Center of Excellence at Intel. In this video, she outlines the initial steps required to assess opportunity, gather resources, and deploy infrastructure when building a generative AI strategy.
Transcript
Monica Livingston:
Some of the things one should consider when evaluating an AI strategy: first is the cost versus the return on investment. What are you actually hoping to achieve with AI? There has to be a business need. There has to be a business outcome. It's not just that we're doing something better; there are brand-new types of applications that we've never been able to build before.
I'm Monica Livingston and I lead the AI Center of Excellence at Intel.
So the big question in AI is: do you build from the ground up? Do you have sufficient skill set and data to build an application from the ground up, or do you purchase off the shelf? Or you could even customize an off-the-shelf application. Either way, the cost of that model or application needs to be such that you have a return on investment.
The other part of getting started with AI is understanding your data. What data do you have? In order to train AI models, you need to have your own data sets, have access to data sets, or license data sets. One way or another, you have to get data that is usable.
When you get to the point where you understand your workload, whether it's built in house or developed externally, and you know what you want to run, then you start looking at infrastructure.
What do I run this on? For smaller models running inference, we recommend 4th Generation Intel Xeon Scalable processors. Most data centers already have Xeon processors in their install base, so they're already there, as are our Core processors, which are for client devices like notebooks and desktops.
So our role in making AI accessible is to add AI functionality to these product lines. Being able to run your AI applications on general-purpose infrastructure is incredibly important, because then your cost for additional infrastructure is reduced.
And then you look at responsible AI, which is a huge and growing area. You should have a vendor checklist for responsible AI, specifically so you can vet that your vendor is using AI responsibly and has processes in place to correct any issues that might come up.
So those are a couple of things someone should consider when they're building their AI models and really infusing AI into their applications, rather than just saying, "I'm building AI and I'm building it from the ground up."
Generally, you have a business outcome, you have an application, and the question is: should I start using some form of AI in that application?
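To make the infrastructure point concrete, here is a minimal, illustrative Python sketch of running inference for a small model on a general-purpose CPU, the kind of deployment Livingston describes. The Hugging Face transformers library and the specific model name are assumptions chosen for the example, not part of her remarks.

```python
# Illustrative sketch: small-model inference on a general-purpose CPU.
# Assumes the Hugging Face "transformers" library is installed; the model name
# below is only an example of a small, CPU-friendly model.
from transformers import pipeline

# device=-1 tells the pipeline to run on CPU rather than a dedicated accelerator.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device=-1,
)

# Run inference on a sample input and print the predicted label and score.
print(classifier(["The new workflow cut our processing time in half."]))
```

The point of the sketch is that inference for a model of this size needs no specialized hardware; it runs on the CPUs most organizations already have in their data centers or on client devices.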