Generative AI research from MIT Sloan

By Sara Brown

In the year since OpenAI introduced the ChatGPT chatbot, generative artificial intelligence has burst into the public consciousness and jumped to the top of most corporate agendas.

Most companies and business leaders are still finding their way with the new technology: understanding how generative AI works, how it will affect businesses and workers, and how it will be regulated. There is also growing emphasis on making sure it is used responsibly.

Researchers at MIT Sloan have been examining generative AI and the best ways to use it in the enterprise. Here’s what they’ve found.

AI and workers

Inexperienced workers stand to benefit the most from generative AI, according to research by MIT Sloan associate professor Danielle Li, MIT Sloan PhD candidate Lindsey Raymond, and Stanford University professor Erik Brynjolfsson, PhD ’91.

The researchers found that contact center agents with access to a conversational assistant saw a 14% boost in productivity, with the largest gains going to new or low-skilled workers.

“Generative AI seems to be able to decrease inequality in productivity, helping lower-skilled workers significantly but with little effect on high-skilled workers,” Li said. “Without access to an AI tool, less-experienced workers would slowly get better at their jobs. Now they can get better faster.”

Generative AI can boost highly skilled workers’ productivity too, according to a research paper co-authored by MIT Sloan professor Kate Kellogg — though it has to be introduced the right way.

It is not always obvious to highly skilled knowledge workers which of their everyday tasks could easily be performed by AI, the researchers found. To introduce generative AI to highly skilled workers and boost productivity, organizations should establish a culture of accountability, reward peer training, and encourage role reconfiguration.

And MIT Sloan professor John J. Horton notes that several factors have to be in place for a human-AI interaction to be worthwhile. He recommends that leaders consider four points before swapping in AI for human labor: how much time the task will take without assistance, how much the employee performing a task is paid, whether AI is capable of performing the task correctly, and how easy it is for humans to determine whether the AI output is accurate. 
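
To make those four considerations concrete, here is a minimal back-of-the-envelope sketch in Python. The function, the numbers, and the decision rule are illustrative assumptions, not a model taken from Horton's research.

```python
# A back-of-the-envelope sketch of the four considerations above.
# Every name, number, and the decision rule itself are illustrative
# assumptions, not a model from the research.

def ai_delegation_worthwhile(
    task_minutes_unassisted: float,  # time the task takes a human alone
    hourly_wage: float,              # what the employee doing the task is paid
    ai_success_rate: float,          # probability the AI performs the task correctly
    review_minutes: float,           # time a human needs to verify the AI output
) -> bool:
    """Return True if handing the task to AI plausibly saves labor cost."""
    human_cost = (task_minutes_unassisted / 60) * hourly_wage
    # Verifying the AI's output still costs human time, and failed
    # attempts fall back to doing the task manually.
    ai_cost = (review_minutes / 60) * hourly_wage + (1 - ai_success_rate) * human_cost
    return ai_cost < human_cost

# A 30-minute task for a $40-an-hour employee, with an AI that is right
# 90% of the time and whose output takes 5 minutes to check:
print(ai_delegation_worthwhile(30, 40.0, 0.9, 5))  # True under these assumptions
```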

Generative AI could also help people get hired. Job applicants who were randomly assigned algorithmic assistance with their resumes — such as suggestions to improve spelling and grammar — were 8% more likely to be hired, according to an experiment conducted by Horton, MIT Sloan PhD student Emma van Inwegen, and MIT Sloan PhD student Zanele Munyikwa.

“If you take two identical workers with the same skills and background, the one with the better-written resume is more likely to get hired,” van Inwegen said. “The takeaway is that employers actually care about the writing in the resume — it’s not just a correlation.” That means that AI assistance can be a useful tool for those hoping to get hired, she said.

Using AI to the best advantage

It’s time for everyone in your organization to understand generative AI, according to MIT Sloan senior lecturer George Westerman. In a webinar, he outlined early use cases such as summarizing documents, creating personalized shopping experiences, and writing code. Generative AI is the latest in a line of advanced analytics tools, he noted; these tools vary in how much data and domain expertise they require, whether their results are repeatable, and how easy it is to understand how they generate results.

For businesses, using work generated by AI will depend in part on how consumers perceive that work. With this in mind, MIT Sloan senior lecturer and research scientist Renee Richardson Gosline and Yunhao Zhang SM ’20, PhD ’23, a postdoctoral fellow at the Psychology of Technology Institute, studied how people perceive work created by generative AI, humans, or some combination of the two.

They found that when people knew how a piece of content was created, they expressed a positive bias toward content created by humans. Yet contrary to the traditional idea of “algorithm aversion,” they did not express a negative bias toward AI-generated content when its origin was disclosed. In fact, when respondents were not told how content was created, they preferred AI-generated content.

Users can make the most of generative AI by using it in concert with external tools to answer complex questions and execute actions. MIT Sloan professor of the practice Rama Ramakrishnan looked at how to use ChatGPT as an agent to do things like search the web, order groceries, purchase plane tickets, or send emails.
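
For illustration, here is a minimal sketch of that agent pattern in Python. The model call is stubbed out, and the tool names and JSON protocol are hypothetical assumptions, not code from Ramakrishnan's work or any particular product's API.

```python
# A minimal sketch of the agent pattern described above: the model picks a
# tool, the program executes it, and the result could be fed back for
# another round. The model call is stubbed, and the tool names and JSON
# protocol are illustrative assumptions, not a specific product's API.
import json

def search_web(query: str) -> str:
    return f"Top result for {query!r} (stub)"

def send_email(to: str, body: str) -> str:
    return f"Email sent to {to} (stub)"

TOOLS = {"search_web": search_web, "send_email": send_email}

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; here it always chooses web search.
    return json.dumps({"tool": "search_web", "args": {"query": prompt}})

def run_agent(user_request: str) -> str:
    # One turn of the loop: ask the model which tool to use, then run it.
    decision = json.loads(fake_llm(user_request))
    return TOOLS[decision["tool"]](**decision["args"])

print(run_agent("cheapest flights to Boston next weekend"))
```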

And businesses that find success with generative AI will also harness human-centric capabilities, such as creativity, curiosity, and compassion, according to MIT Sloan senior lecturer Paul McDonagh-Smith. The key is figuring out how humans and machines can best work together, resulting in humans’ abilities being multiplied, rather than divided, by machines’ capabilities, McDonagh-Smith said during a webinar.

AI policy

It’s time to talk about how to rechart the course of technology so it complements human capabilities, according to MIT economists Daron Acemoglu and Simon Johnson. In their new book, “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,” they decry the economic and social damage caused by the concentrated power of business and show how the tremendous computing advances of the past half century can become empowering and democratizing tools.

“Society and its powerful gatekeepers need to stop being mesmerized by tech billionaires and their agenda,” they write in an excerpt from the book. “Debates on new technology ought to center not just on the brilliance of new products and algorithms but also on whether they are working for the people or against the people.”

In a policy memo co-authored with MIT professor David Autor, Acemoglu and Johnson suggested five policies that could steer AI implementation in a direction that complements humans and augments their skills. These include equalizing tax rates on employing workers and owning equipment or algorithms, updating Occupational Safety and Health Administration rules to create safeguards against worker surveillance, and creating an AI center of expertise within government.

When President Joe Biden issued an executive order on AI safety and security in October, one provision addressed using labels to identify content generated by artificial intelligence. A new working paper co-authored by MIT Sloan professor David Rand examined which terms those labels should use. The researchers found that people associated terms such as “AI generated” and “AI manipulated” most closely with content created using AI. Conversely, the labels “deepfake” and “manipulated” were most associated with misleading content, whether or not AI created it.
