Making Good Business Decisions About AI

When Manish Raghavan (Drew Houston (2005) Career Development Professor; Assistant Professor, Information Technology) taught 15.563 Artificial Intelligence for Business last spring, he was not trying to teach MBA students about the technical rigors of computer engineering or machine learning.

On the contrary, Raghavan wanted to teach his students how to make good business decisions about deploying (or not deploying) AI-based products and services, weighing the practical, social, and ethical challenges this entails. As the official description noted, the goal was a “functional (as opposed to mechanistic) understanding” of AI.

“That requires a very different type of knowledge and expertise,” Raghavan told moderator Jackie Selby, EMBA ’21, and the attendees of the MIT Sloan Alumni Online event in September.

[Photo: Manish Raghavan, Drew Houston (2005) Career Development Professor. Credit: Caitlin Cunningham]

“Most machine learning classes in computer science departments teach you how to solve a particular problem,” said Raghavan, whose faculty appointment is shared with the MIT Schwarzman College of Computing. “The course I wanted to teach was, if you could solve that problem, what would you do with it? What would you do in the real world with this magical tool?”

Success with skepticism

Both in his class and in conversation with Selby, Raghavan emphasized the importance of skepticism, particularly in high-stakes contexts, when determining whether a given use of AI will succeed.

Just because a program is making seemingly accurate, rapid predictions does not mean those predictions are free of error. And if those errors occur in a setting with potentially life-altering consequences, skepticism is warranted, said Raghavan.

“What that translates into is trying to prevent these systems from being deployed in ways where they’re too independent,” he added. “You really want more oversight, more fault tolerance.”
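As a concrete illustration of what that oversight can look like in practice, here is a minimal sketch in Python of a confidence-gated deployment: the model’s answer is used automatically only when it clears a confidence threshold, and anything below that is deferred to a human reviewer. Every name here (ToyModel, the 0.95 cutoff, the review queue) is a hypothetical stand-in, not something drawn from Raghavan’s course.

    # Minimal sketch of confidence-gated oversight for a deployed classifier.
    # All names (ToyModel, the 0.95 cutoff) are hypothetical illustrations.

    CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff; set according to the stakes

    class ToyModel:
        """Stand-in for a real classifier that reports a confidence score."""

        def predict_with_confidence(self, text):
            # A real model would return (label, probability); this stub
            # just treats short inputs as low-confidence for demonstration.
            confidence = 0.99 if len(text) > 20 else 0.60
            return "approve", confidence

    def classify_with_oversight(model, example, review_queue):
        """Use the model's answer only when it is confident; otherwise defer."""
        label, confidence = model.predict_with_confidence(example)
        if confidence >= CONFIDENCE_THRESHOLD:
            return label                  # automated path
        review_queue.append(example)      # human-in-the-loop path
        return None                       # decision deferred to a person

    queue = []
    model = ToyModel()
    print(classify_with_oversight(model, "a long, detailed application", queue))
    print(classify_with_oversight(model, "short input", queue))  # deferred
    print("awaiting human review:", queue)

The design choice worth noting is the fallback path: the system tolerates the model’s uncertainty by deferring rather than guessing, which is one way to buy the fault tolerance Raghavan describes.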

Raghavan cited continued advances in autonomous vehicle technology over the past decade alongside more recent developments in generative AI applications that create art.

When it comes to system efficacy, independence, and safety, autonomous vehicles are highly regulated because the consequences of a driverless car making mistakes are potentially catastrophic. However, if a generative AI program producing art makes a comparable mistake, the possible side effects may be annoying or disconcerting, but they will not be life-threatening.

“[We should be] figuring out how to deploy AI in ways that it’s not really about improving the system itself, but about improving what you do around the system to make it more robust,” said Raghavan. Mistakes from these systems are inevitable, he added, but the context around them is where any gains will be found.

These contexts include the computer scientists who develop AI systems, the businesspeople who find applications for them, and the consumers who use them.

“People are not machines. They think in different ways that can sometimes be beneficial, but they also make different mistakes. Figuring out how to leverage people’s strengths and weaknesses is an important step in deploying robust AI systems,” said Raghavan.

When to (not) use AI

In addition to recommending a more skeptical and contextual approach to developing business applications with AI, Raghavan posed an important question: When is (or isn’t) AI a good solution to a particular problem?

“People often mistake predictive problems for causal problems, and that actually leads to a lot of issues,” he said. Consider the use of randomized drug trials by pharmaceutical companies.

When trying to understand the efficacy of a new drug, researchers do not simply give the drug to a group of patients and watch what happens. Instead, they conduct randomized trials, giving the test drug to one group and a placebo to a control group, then comparing the outcomes of both to understand the causal (as opposed to predictive) effect on people’s health, as the sketch below illustrates.
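To make the distinction concrete, here is a minimal sketch in Python of the causal side: estimating the drug’s average treatment effect as the difference in mean outcomes between the randomized groups. The numbers are invented for illustration and do not come from any real trial.

    # Minimal sketch of a causal estimate from a randomized trial.
    # The average treatment effect (ATE) is the difference in mean outcomes
    # between the treated group and the placebo control group.
    # All numbers are invented for illustration.

    treated_outcomes = [0.82, 0.91, 0.75, 0.88, 0.79]  # e.g., recovery scores
    control_outcomes = [0.61, 0.70, 0.58, 0.66, 0.64]  # placebo group

    ate = (sum(treated_outcomes) / len(treated_outcomes)
           - sum(control_outcomes) / len(control_outcomes))

    print(f"estimated average treatment effect: {ate:.3f}")  # 0.192

    # Randomization is what licenses the causal reading: because assignment
    # was random, the two groups differ (in expectation) only in the treatment.
    # A predictive model fit to observational data cannot, by itself,
    # support that conclusion.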

This is an example of a causal problem, and solving it requires an entirely different infrastructure from the one used to study and address predictive problems, Raghavan explained. Yet in addition to determining the nature of the problem, those interested in deploying AI solutions should also consider the intricacies of the tools they want to use.

“You’d be surprised at how many people just buy off-the-shelf machine learning products, try to deploy them, and don’t really have any feedback mechanism for improving them,” said Raghavan.
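One hypothetical sketch of what such a feedback mechanism might look like: log each prediction next to the outcome that eventually materializes, monitor the live error rate, and flag the model for retraining when it drifts past an agreed budget. The FeedbackLog class and the 10% threshold below are assumptions for illustration, not a reference to any particular product.

    # Hypothetical sketch of a feedback mechanism for a deployed model:
    # pair each prediction with the outcome observed later, track the
    # live error rate, and flag the model for retraining when it drifts.

    class FeedbackLog:
        def __init__(self, error_budget=0.10):  # assumed tolerance
            self.records = []
            self.error_budget = error_budget

        def record(self, prediction, actual_outcome):
            """Store a (prediction, outcome) pair once the outcome is known."""
            self.records.append((prediction, actual_outcome))

        def error_rate(self):
            if not self.records:
                return 0.0
            wrong = sum(1 for pred, actual in self.records if pred != actual)
            return wrong / len(self.records)

        def needs_retraining(self):
            """Signal when observed errors exceed the agreed budget."""
            return self.error_rate() > self.error_budget

    log = FeedbackLog()
    log.record("approve", "approve")   # prediction matched reality
    log.record("approve", "deny")      # the model got this one wrong
    print(log.error_rate(), log.needs_retraining())  # 0.5 True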

Based on his experience teaching 15.563, however, Raghavan is hopeful about the role managers (and engineers) will play in expertly evaluating and deploying AI and machine learning systems for good business.

“The students I see coming out of MIT are asking these kinds of questions more and more,” he said. “Difficult questions like ‘Should I even deploy this system?’ and ‘Can I invest in the infrastructure around a system to make it better, even if the model is what it is right now?’”

Register to attend next month’s MIT Sloan Alumni Online event for a discussion of opportunity and empowerment through art with Liz Powers, co-founder and CEO of ArtLifting, and Aimee Hofmann, abstract artist.

MIT Sloan Alumni Online: Manish Raghavan

For more info: Andrew Husband, Sr. Associate Director, Content Strategy, OER, (617) 715-5933