Using generative AI to create new possibilities

After consulting with retail industry leaders in 2010, Rama Ramakrishnan came away with two insights. First, although there was plenty of buzz about retail systems that gave customers tailored recommendations, the return on investment (ROI) was generally small. Second, many companies saw their customers only once or twice a year, so they learned very little about them.

“But by being very diligent about noting down the interactions a customer has with a retailer or an e-commerce site, we can create a very nice and detailed composite picture of what that person does and what they care about,” says Ramakrishnan, a professor of the practice at the MIT Sloan School of Management. “Once you have that, you can use tried and true machine learning methods.”

Because of these insights, Ramakrishnan launched the company CQuotient, whose technology forms the basis of Salesforce’s popular artificial intelligence (AI) e-commerce platform. He estimates that on Black Friday alone, CQuotient technology interacts with more than a billion shoppers.

In 2019, after a highly successful entrepreneurial career, Ramakrishnan returned to MIT Sloan, where he had earned master’s and doctoral degrees in operations research in the 1990s. Rather than simply teaching students “how these amazing technologies work,” he shows them “how you take these technologies and actually put them to use pragmatically in the real world.”

Ramakrishnan finds great value in MIT’s executive education programmes as well. “This is a great opportunity for me to convey the things that I have learned, but also as importantly, to learn what is on the minds of these senior executives, and to guide them and nudge them in the right direction,” he adds.

For instance, executives are right to worry about the amount of data needed to train machine learning systems. He can now point them to a wide variety of pre-trained models that are ready to use. Ramakrishnan calls the ability to take these pre-trained AI models and quickly adapt them to a specific business problem “an incredible advance.”
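
To make that concrete, here is a minimal, hypothetical sketch of adapting a pre-trained model to a specific business problem. The Hugging Face transformers library, the DistilBERT checkpoint, and the ticket-classification data are all illustrative assumptions; the article does not name any particular tools.

```python
# Hypothetical sketch: fine-tune a pre-trained text model to classify
# support tickets as "urgent" vs "routine".  Library, checkpoint, and
# data are illustrative assumptions only.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"   # already trained on huge text corpora
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# A handful of labelled in-house examples -- far less data than training from scratch.
texts = ["Site is down, customers cannot check out",
         "Please update my billing address"]
labels = [1, 0]                          # 1 = urgent, 0 = routine
encodings = tokenizer(texts, truncation=True, padding=True)

class TicketDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ticket-model", num_train_epochs=1),
    train_dataset=TicketDataset(encodings, labels),
)
trainer.train()   # adjusts the pre-trained weights to our specific task
```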

Rama Ramakrishnan – Artificial Intelligence (AI) in Practical Settings for Enhanced Productivity
Video: MIT Industrial Liaison Program: Classifying AI

The goal of artificial intelligence, he explains, is to give computers the same kind of intelligence that people have. To make full use of the technologies available today, it helps to understand the context in which they were developed.

The old-school approach to AI, which relied on if/then rules gleaned from human experts, worked well only in a limited number of situations. One reason is that, as Ramakrishnan puts it, “we can do lots of things effortlessly, but if asked to explain how we do them, we can’t actually articulate how we do them.” Such programs can also be flummoxed by novel situations that don’t fit the predefined rules.

In contrast, machine learning relies heavily on examples to teach software how to do a task. “You give it lots of examples of inputs and outputs, questions and answers, tasks and responses, and get the computer to automatically learn how to go from the input to the output,” he explains. Machine learning has already proven its worth in a wide variety of contexts, including credit scoring, loan decisions, disease prediction, and demand forecasting.
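
As a minimal sketch of that examples-in, predictions-out idea, here is a toy credit-scoring model built with scikit-learn (an assumed library choice, with made-up numbers):

```python
# Minimal sketch: learn an input -> output mapping from labelled examples.
# All features and figures below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Inputs: [annual_income_k, existing_debt_k, years_at_job]
X = [[60, 10, 4], [35, 25, 1], [90, 5, 10], [28, 30, 0.5]]
# Outputs: 1 = repaid the loan, 0 = defaulted
y = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                      # the computer learns the mapping from examples

print(model.predict([[50, 12, 3]]))  # predicted label for a new applicant
```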

However, machine learning worked well only with structured data, such as the kind found in a spreadsheet. If the incoming data was unstructured, for example photos, video, audio, ECGs, or X-rays, the system “wasn’t very good at going from that to a predicted output,” as Ramakrishnan puts it. That meant humans had to structure the unstructured training data by hand before the system could use it.

According to him, deep learning began cracking that barrier in 2010 by processing unstructured input data directly. Built on a longstanding AI approach known as neural networks, deep learning became practical thanks to the global flood of data, the availability of extraordinarily powerful parallel-processing hardware called graphics processing units (originally invented for video games), and advances in algorithms and mathematics.
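
A minimal sketch of the same idea in code, assuming PyTorch as the framework: a small neural network takes raw pixel values directly as input and runs on a GPU when one is available.

```python
# Minimal sketch of a deep-learning model processing "unstructured" input
# (raw pixels) directly, on a GPU if present.  PyTorch is an assumed
# framework choice; the article does not name one.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(            # stacked layers of learned transformations
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),           # e.g. scores for 10 image classes
).to(device)

fake_images = torch.rand(8, 1, 28, 28, device=device)  # a toy batch of images
logits = model(fake_images)
print(logits.shape)               # torch.Size([8, 10])
```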

Finally, within deep learning, last year saw the emergence of generative AI: software that can produce unstructured outputs such as human-sounding text, images of dogs, and 3D models. ChatGPT, an example of a large language model (LLM), takes in text and produces more text, while DALL-E, an example of a text-to-image model, can quickly generate natural-looking pictures.

Taking Note of Slight Information to Strengthen Customer Service, by Rama Ramakrishnan
Video: MIT Industrial Liaison Program
What can and can’t be done with generative AI

Based on its training on the unfathomably large corpus of online material, an LLM’s “fundamental capability is to predict the next most likely, most plausible word,” as described by Ramakrishnan. “Then it attaches the word to the original sentence, predicts the next word again, and keeps on doing it.”
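
That loop can be sketched in a few lines of Python. The next_word_probabilities() function below is a hypothetical stand-in for the trained model itself; everything else is just the attach-and-repeat cycle Ramakrishnan describes.

```python
# Minimal sketch of next-word prediction repeated in a loop.
# next_word_probabilities() is a hypothetical placeholder for a real LLM.
from typing import Dict

def next_word_probabilities(context: str) -> Dict[str, float]:
    """Placeholder: a trained model returns a probability for every candidate word."""
    raise NotImplementedError

def generate(prompt: str, max_words: int = 50) -> str:
    text = prompt
    for _ in range(max_words):
        probs = next_word_probabilities(text)
        next_word = max(probs, key=probs.get)   # pick the most plausible next word
        if next_word == "<end>":
            break
        text += " " + next_word                 # attach it and predict again
    return text
```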

He says that many people, researchers included, are surprised to learn that “an LLM can do some very complicated things.” It can write Seinfeld episodes, solve some kinds of logic problems, and compose beautiful, coherent poetry. That next-word prediction can produce such impressive capabilities is quite surprising.

“But you have to keep in mind that what it is doing is not so much finding the correct answer to your question as finding a plausible answer to your question,” Ramakrishnan stresses. Its output could be wrong, useless, harmful, biased, or even downright insulting.

This places accountability for accurate, relevant, and helpful results squarely on the shoulders of the users. You need a mechanism to verify that the output is error-free before sending it out, he advises.

Ramakrishnan adds that a great deal of research is going into methods to remedy these shortcomings, and that he expects many new tools for doing so.

Finding the right business roles for LLMs

Given the incredible development of LLMs, how should businesses plan to implement this technology for jobs like content creation?

Ramakrishnan suggests first weighing the costs: “Is it much less expensive to have a draft that you correct than to create the whole thing yourself?” And can you handle the fallout if the LLM makes a mistake and the wrong information gets out into the world?
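
Those two questions can be framed as a rough back-of-the-envelope calculation. Every number below is invented purely for illustration:

```python
# Hypothetical back-of-the-envelope version of the two questions above.
# All figures are made up for illustration only.
minutes_to_write_from_scratch = 45
minutes_to_edit_llm_draft = 15           # generating the draft itself is nearly free
error_rate = 0.05                        # chance a mistake slips past the editor
cost_of_published_error = 200            # e.g. correction and support cost, in dollars

minutes_saved_per_item = minutes_to_write_from_scratch - minutes_to_edit_llm_draft
expected_error_cost_per_item = error_rate * cost_of_published_error

print(f"Minutes saved per item: {minutes_saved_per_item}")
print(f"Expected error cost per item: ${expected_error_cost_per_item:.2f}")
```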

“If you have an application which satisfies both considerations, then it’s good to do a pilot project to see whether these technologies can actually help you with that particular task,” Ramakrishnan says. He emphasizes the need to treat the pilot not as a routine IT project but as an experiment.

Among business LLM applications, software development is currently the most mature. Because a software program is ultimately just text, “ChatGPT and other LLMs are text-in, text-out,” he explains. “You can go from English text-in to Python text-out, just as you can go from English-to-English or English-to-German.” Several tools are now available to help generate code with these systems.
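
As a concrete sketch of English text-in, Python text-out, here is roughly how such a request looks with the OpenAI Python SDK. The model name and prompt are assumptions, and any LLM service with a similar chat interface would work the same way.

```python
# Minimal sketch of English-in, Python-out using the OpenAI Python SDK.
# The model name is a placeholder assumption; OPENAI_API_KEY must be set
# in the environment for the client to authenticate.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    messages=[
        {"role": "system", "content": "Reply with Python code only."},
        {"role": "user",
         "content": "Write a function that returns the top 5 best-selling "
                    "products from a list of (product, units_sold) tuples."},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)   # review and test before using it anywhere
```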

Of course, programmers must make sure the result does the job properly. Fortunately, software development already has well-established infrastructure for testing and validating code. This is a “sweet spot,” he says, where it is considerably cheaper to have the technology produce code for you because “you can very quickly check and verify it.”
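
In practice, that verification can be as simple as wrapping the generated code in ordinary unit tests. In the sketch below, slugify() is a hypothetical stand-in for a function an LLM produced, and the tests encode the behaviour we actually require:

```python
# Minimal sketch of verifying LLM-generated code before using it.
# slugify() stands in for hypothetical generated code; the tests capture
# the behaviour we require regardless of who wrote the function.

def slugify(title: str) -> str:
    """Hypothetical LLM-generated helper: turn a title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Generative AI at Work") == "generative-ai-at-work"

def test_slugify_extra_spaces():
    assert slugify("  Hello   World ") == "hello-world"

# Run with: pytest this_file.py
```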

Content creation, such as copy for advertising or online retail, is another common LLM application. Ramakrishnan suggests saving money by editing ChatGPT’s rough draft rather than writing from scratch. However, businesses need safeguards to make sure a human can always step in and review the output.

LLMs are also spreading quickly as in-house tools for searching business information. Unlike a traditional search engine, an LLM chatbot can offer a conversational search experience by keeping track of your questions and its answers. “But again, it will occasionally make things up,” he adds. “The risk of saying something wrong to the customer means that we are still in the early days of chatbots for external customers.”
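
Keeping track of questions and answers is mostly bookkeeping: the running message history is sent back to the model on every turn. A minimal sketch, with ask_llm() as a hypothetical stand-in for whatever internal LLM service is used:

```python
# Minimal sketch of conversational memory in a chatbot: every turn is
# appended to a history list that the model sees on the next call.
# ask_llm() is a hypothetical placeholder for a real LLM service call.
from typing import Dict, List

def ask_llm(messages: List[Dict[str, str]]) -> str:
    """Placeholder for a call to an internal LLM service."""
    raise NotImplementedError

history: List[Dict[str, str]] = [
    {"role": "system", "content": "Answer questions about our product catalog."}
]

def chat_turn(user_question: str) -> str:
    history.append({"role": "user", "content": user_question})
    answer = ask_llm(history)          # the model sees every earlier turn
    history.append({"role": "assistant", "content": answer})
    return answer
```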

Weighing the possibilities and the dangers of artificial intelligence, Ramakrishnan says we are living in a remarkable time. He helps businesses “figure out how to take these very transformative technologies and put them to work,” to make their products and services more intelligent, their employees more productive, and their processes more efficient.
