• July 18, 2024
  • roman



Instead of simply telling the AI to recommend the best places to launch stores, Levine suggests that the retailer would be better served by encoding extensive, highly specific lists of how it currently evaluates new locations. That way, the software can follow those instructions, and the chances of it making errors are somewhat reduced.
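As a rough illustration of that advice, the checklist the business already uses can be encoded directly into the prompt rather than left implicit. The criteria, thresholds, and location names below are invented for the sketch; the point is the structure, not the numbers.

```python
# Hypothetical sketch: turning a retailer's existing site-evaluation rules
# into explicit instructions, instead of an open-ended "pick the best spots."
# All criteria, thresholds, and locations here are invented for illustration.

SITE_CRITERIA = [
    "Daytime foot traffic of at least 5,000 passersby",
    "No competitor store within a 2-mile radius",
    "Median household income above the regional average",
    "Lease cost under 8% of projected annual revenue",
]

def build_site_prompt(candidate_locations: list[str]) -> str:
    """Build a constrained prompt that tells the model exactly how the
    business already evaluates locations, rather than leaving it open."""
    criteria = "\n".join(f"{i}. {c}" for i, c in enumerate(SITE_CRITERIA, 1))
    locations = "\n".join(f"- {loc}" for loc in candidate_locations)
    return (
        "Evaluate each candidate location strictly against these criteria, "
        "scoring each criterion pass/fail and citing the criterion number:\n"
        f"{criteria}\n\nCandidate locations:\n{locations}"
    )

prompt = build_site_prompt(["Springfield, Main St", "Riverton, Oak Ave"])
print(prompt)
```

The model is then graded against the same rubric the company would use to grade a human analyst, which also makes its answers easier to audit.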

Would an enterprise ever tell a new employee, “Figure out where our next 50 stores should be. Bye!”? Unlikely. The business would spend days training that employee on what to look for and where to look, and the employee would be shown lots of examples of how it had been done before. If a manager wouldn’t expect a new employee to figure out how to answer the question without extensive training, why would that manager expect genAI to fare any better?

Given that ROI simply means value delivered minus cost, the best way to improve value is to increase the accuracy and usability of the answers provided. Sometimes, that means not giving genAI broad requests and seeing what it chooses to do. That might work in machine learning, but genAI is a different animal.

To be fair, there absolutely are situations where it makes sense to set genAI loose and see where it chooses to go. But for the overwhelming majority of situations, IT will see far better results if it takes the time to train genAI appropriately.

Reining in genAI projects

Now that the initial hype over genAI has died down, it’s important for IT leaders to protect their organizations by focusing on deployments that will bring true value to the company, say AI strategists.

One suggestion for better controlling generative AI efforts is for enterprises to create AI committees consisting of specialists in various AI disciplines, Snowflake's Shah said. That way, every generative AI proposal originating anywhere in the enterprise would have to be run past this committee, which could veto or approve any idea.

“With security and legal, there are so many things that can go wrong with a generative AI effort. This would make executives go in front of the committee and explain exactly what they wanted to do and why,” he said.

Shah sees these AI approval committees as short-term placeholders. “As we mature our understanding, the need for those committees will go away,” he said.

Another suggestion comes from NILG.AI’s Fernandes. Instead of flashy, large-scale genAI projects, enterprises should focus on smaller, more controllable objectives such as “analyzing a vehicle’s damage report and estimating costs, or auditing a sales call and identifying if the person follows the script, or recommending products in e-commerce based on the content/description of those products instead of just the interactions/clicks.”
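The last of those examples, recommending products from their content rather than from clicks, can be sketched with nothing more than bag-of-words similarity over descriptions. The catalog below is invented for illustration, and a real system would use richer text features, but the shape of the approach is the same.

```python
# Minimal sketch of content-based recommendation: rank products by the
# textual similarity of their descriptions, not by interaction history.
# Product IDs and descriptions are invented for illustration.
import math
from collections import Counter

PRODUCTS = {
    "trail-shoe": "lightweight waterproof trail running shoe with grip sole",
    "road-shoe": "lightweight cushioned road running shoe for daily training",
    "rain-jacket": "waterproof breathable rain jacket with hood",
}

def _vec(text: str) -> Counter:
    """Bag-of-words term counts for a description."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(product_id: str, k: int = 2) -> list[str]:
    """Return the k products whose descriptions are most similar."""
    query = _vec(PRODUCTS[product_id])
    scores = {pid: _cosine(query, _vec(desc))
              for pid, desc in PRODUCTS.items() if pid != product_id}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("trail-shoe"))  # the other shoe ranks above the jacket
```

Because the logic is a fixed similarity computation rather than open-ended generation, its behavior is easy to inspect and test, which is exactly the kind of controllability Fernandes is arguing for.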

And instead of implicitly trusting genAI models, “we shouldn’t use LLMs on any critical task without a fallback option. We shouldn’t use them as a source of truth for our decision-making but as an educated guess, just like you would deal with another person’s opinion.”
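That "educated guess with a fallback" stance can be made concrete in code: validate the model's answer, and if it fails a sanity check, fall back to a deterministic rule or route to a human. The `call_llm` function below is a stand-in stub, and the bounds and flat rate are invented for illustration.

```python
# Sketch of the "fallback, not source of truth" pattern: treat the LLM's
# answer as an educated guess, validate it, and fall back to a deterministic
# rule when validation fails. `call_llm` is a stand-in stub, not a real API.

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; may return malformed output.
    return "maybe around 1200 dollars"

def estimate_repair_cost(damage_report: str) -> tuple[int, str]:
    """Try the LLM first; fall back to a flat-rate heuristic when the
    answer cannot be parsed into a plausible number."""
    answer = call_llm(f"Estimate repair cost in USD: {damage_report}")
    digits = "".join(ch for ch in answer if ch.isdigit())
    if digits and 0 < int(digits) < 100_000:  # invented sanity bounds
        return int(digits), "llm-estimate (flagged for human review)"
    return 500, "fallback flat-rate estimate"

cost, source = estimate_repair_cost("dented rear door, scratched bumper")
print(cost, source)
```

The critical decision never rests on the model alone: every path through the function produces either a validated number or an explicit fallback, mirroring how one would weigh another person's opinion.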

