What happens when artificial intelligence confronts the human dilemma of uncertainty?
Q&A: In a world increasingly shaped by AI, understanding how machines make decisions under uncertainty matters more than ever. USC’s Willie Neiswanger shares his insights.
How do we weigh competing values when outcomes are uncertain? What makes a choice rational when complete information isn’t available? These questions, once confined to theoretical philosophy, have become critically relevant as we hand increasingly complex decisions to AI.
A new large language model framework developed by Willie Neiswanger, assistant professor of computer science at the USC Viterbi School of Engineering and the USC School of Advanced Computing, together with students in USC Viterbi’s Thomas Lord Department of Computer Science, draws on classical decision theory and utility theory to markedly improve AI’s ability to handle uncertainty and navigate those complex choices.
Neiswanger’s research was presented at this year’s International Conference on Learning Representations. He spoke with USC News about how AI handles uncertainty.
How do you see the distinction between artificial and human intelligence?
Neiswanger: Right now, human intelligence holds various advantages over machine intelligence. But machine intelligence has strengths of its own. Large language models, or LLMs (AI systems trained on vast amounts of text that can understand and produce humanlike responses), can, for example, rapidly ingest and synthesize large volumes of information from reports or other data sources, and can generate at scale, simulating many possible futures or proposing a broad range of expected outcomes. In our research, we aim to harness the strengths of LLMs while complementing them with human judgment and capabilities.
Why do today’s large language models struggle with uncertainty?
Neiswanger: Uncertainty is a fundamental obstacle in practical decision-making. Today’s AI systems struggle to weigh evidence, reason about the probabilities of different outcomes, and account for user preferences when key factors are unknown.
Unlike human experts, who can express varying degrees of confidence and recognize the limits of their knowledge, LLMs generally respond with apparent certainty, whether they are drawing on well-established patterns or making shaky predictions that reach beyond the available data.
How does your research relate to uncertainty?
Neiswanger: My work centers on developing machine learning methods for decision-making under uncertainty, with a focus on sequential decision-making (settings where you make a series of choices over time, each choice shaping what is possible next) in environments where data is costly to obtain. Applications include black-box optimization (finding the best solution when a system’s inner workings are hidden), experimental design (planning studies or experiments to gather the most informative data), and decision-making tasks in science and engineering, such as materials or drug discovery and computer systems optimization.
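To make the black-box setting concrete, here is a minimal Python sketch of the simplest possible approach, a random-search loop under a fixed evaluation budget. The function `expensive_system` and the search bounds are hypothetical stand-ins, and this baseline is not Neiswanger’s method; practical approaches such as Bayesian optimization use a surrogate model to choose queries more efficiently.

```python
import random

def expensive_system(x):
    # Hypothetical black box: we can evaluate it but not inspect its internals.
    # In practice this might be a lab experiment or a physics simulation.
    return -(x - 0.3) ** 2 + 1.0

def black_box_optimize(f, lower, upper, budget=50, seed=0):
    """Baseline random-search loop: query the black box within a fixed
    evaluation budget and keep the best input seen so far. Surrogate-based
    methods instead direct queries toward promising, uncertain regions."""
    rng = random.Random(seed)
    best_x, best_y = None, float("-inf")
    for _ in range(budget):
        x = rng.uniform(lower, upper)
        y = f(x)  # each evaluation is assumed to be costly
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

x_star, y_star = black_box_optimize(expensive_system, lower=-1.0, upper=1.0)
print(f"best input {x_star:.3f} -> value {y_star:.3f}")
```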
I’m also interested in how large foundation models (AI systems trained on broad datasets that serve as a base for many applications), and large language models in particular, can both support and benefit from these decision-making methods: on one hand, helping humans make better choices in uncertain settings; on the other, using mathematical techniques for optimal selection to achieve better results with less training data and to improve the training and fine-tuning of LLMs.
How did your research tackle the problem of uncertainty in AI?
Neiswanger: We focused on improving a machine’s ability to quantify uncertainty, essentially teaching it to assess and express how confident it should be about different predictions. Specifically, we developed an uncertainty quantification method that lets large language models make decisions with incomplete information, produce predictions with measurable confidence levels that can be validated, and select actions that deliver the greatest benefit in line with human preferences.
The approach starts by identifying the key uncertain variables that matter for the decision. The language model then assigns language-based probability ratings to different possibilities (a crop yield, a stock price, the date of an uncertain event, projected warehouse shipment volumes and so on), grounded in reports, historical data and other context. Those ratings are then converted into numerical probabilities.
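As a rough illustration of that last conversion step, here is a short Python sketch that maps verbal confidence ratings to numbers and normalizes them into a probability distribution. The phrase-to-probability table and the shipment example are illustrative assumptions, not the calibration used in the paper.

```python
# Hypothetical mapping from verbal confidence ratings to numeric probabilities.
# A real system would calibrate these values against observed outcomes.
VERBAL_TO_PROB = {
    "very unlikely": 0.05,
    "unlikely": 0.20,
    "about even": 0.50,
    "likely": 0.75,
    "very likely": 0.93,
}

def numeric_probabilities(ratings):
    """Convert an LLM's verbal ratings over mutually exclusive outcomes
    into a normalized probability distribution."""
    raw = {outcome: VERBAL_TO_PROB[label] for outcome, label in ratings.items()}
    total = sum(raw.values())
    return {outcome: p / total for outcome, p in raw.items()}

# Illustrative example: ratings for next quarter's warehouse shipment volume.
ratings = {"low": "unlikely", "medium": "likely", "high": "about even"}
print(numeric_probabilities(ratings))
```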
Are there immediate uses?
Neiswanger: In business settings, it could strengthen strategic planning by providing more accurate assessments of market uncertainties and competitive dynamics. In healthcare, it could support diagnosis or treatment planning by helping physicians better account for uncertainty in symptoms and test results. And in personal decision-making, it could help users get more informed, relevant advice from language models about everyday choices.
The system’s ability to align with human preferences has proven especially valuable in situations where letting a computer find the mathematically “best” solution could miss important human values or constraints. By explicitly modeling stakeholder preferences and folding them into mathematical evaluations of how much different outcomes are worth to people, the framework produces recommendations that are not only technically optimal but also practically acceptable to the people who carry them out.
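In classical decision-theoretic terms, this corresponds to maximizing expected utility, where the utility function encodes stakeholder preferences. Here is a minimal Python sketch under made-up numbers; the actions, probabilities and utilities below are illustrative assumptions, not outputs of the framework itself.

```python
# Expected-utility decision rule: pick the action whose probability-weighted
# utility is highest. Utilities encode stakeholder preferences, so an action
# with a high raw payoff but low acceptability can still lose.
def expected_utility(outcome_probs, utilities):
    return sum(p * utilities[outcome] for outcome, p in outcome_probs.items())

def best_action(actions):
    """actions maps each action name to a (outcome_probs, utilities) pair."""
    return max(actions, key=lambda a: expected_utility(*actions[a]))

# Illustrative numbers only: two candidate plans under demand uncertainty.
actions = {
    "aggressive_expansion": (
        {"demand_up": 0.4, "demand_flat": 0.6},
        {"demand_up": 100, "demand_flat": -40},  # big payoff, risky downside
    ),
    "cautious_growth": (
        {"demand_up": 0.4, "demand_flat": 0.6},
        {"demand_up": 45, "demand_flat": 20},  # stakeholders value stability
    ),
}
print(best_action(actions))  # -> "cautious_growth" under these preferences
```

Here the aggressive plan has the higher best-case payoff, but once stakeholder preferences penalize the downside, the cautious plan wins on expected utility, which is exactly the kind of trade-off the quoted passage describes.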
What’s next for your research?
Neiswanger: We’re now exploring how the framework can be applied to a broader range of real-world decision-making tasks under uncertainty, including applications in operations research (using mathematical methods to tackle complex business problems), logistics and healthcare. A major focus going forward is human auditability: building interfaces that give users clearer insight into why an LLM made a particular decision, and why that decision is a good one.