LAPPI: Interactive Optimization With LLM-Assisted Preference-Based Problem Instantiation

IEEE Access 2026
¹Sakana AI, Japan  ²University of California, Davis, USA  ³OMRON SINIC X Corporation, Japan  ⁴The University of Tokyo, Japan
*So Kuroki and Manami Nakagawa conducted this work while interns at OMRON SINIC X.

TL;DR LAPPI uses LLMs to transform vague natural-language preferences into solver-ready optimization problems, enabling an interactive loop for tasks such as trip planning.

Overview

Many real-world tasks, such as trip planning or meal planning, can be formulated as combinatorial optimization problems. However, using optimization solvers is difficult for end users because it requires problem instantiation: defining candidate items, assigning preference scores, and specifying constraints. We introduce LAPPI (LLM-Assisted Preference-based Problem Instantiation), an interactive approach that uses large language models (LLMs) to support users in this instantiation process. Through natural language conversations, the system helps users transform vague preferences into well-defined optimization problems. These instantiated problems are then passed to existing optimization solvers to generate solutions. In a user study on trip planning, our method successfully captured user preferences and generated feasible plans that outperformed both conventional and prompt-engineering approaches. We further demonstrate LAPPI's versatility by adapting it to an additional use case.


LAPPI as an Extension of Interactive Optimization

This figure positions LAPPI against conventional interactive optimization. Standard approaches usually assume that the optimization instance has already been defined and focus on adjusting parameters, validating outcomes, and iterating on feedback.

LAPPI moves the interaction boundary upstream into problem instantiation itself. Instead of asking users to enumerate candidates and assign values manually, it lets them express preferences in natural language and uses the LLM to construct the optimization instance that the solver will later handle.

This shift is the paper's main conceptual contribution: optimization becomes accessible before the user has formalized the problem. In other words, LAPPI lowers the barrier to optimization by treating vague preferences as the starting point of the workflow rather than expecting users to prepare a complete mathematical instance in advance.

Trip Planning UI

The trip-planning system is easiest to read as a cycle between two stages.

Stage 1: Spot exploration. The user describes what kind of trip they want, and LAPPI proposes candidate points of interest that match those preferences. The user then reviews the suggested places and decides which ones should become part of the optimization problem.

Stage 2: Plan optimization. Once the candidate set is fixed, LAPPI assigns the numerical values needed for optimization, combines them with the user's time constraint, and sends the instantiated problem to the solver. The solver then returns a route and itinerary that the user can inspect.

The loop comes from moving back and forth between these stages. After seeing a route, the user can revise the candidate set, change their preferences, or tighten the constraint, and LAPPI reformulates the problem accordingly. In the paper, this system combines GPT-4 for enumeration and value assignment, the Google Maps API for map data and travel times, and Gurobi for route optimization.
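The solve step of Stage 2 can be sketched as a small prize-collecting routing problem: each spot carries a reward (from preferences) and a visit time, and the solver picks an order of spots that maximizes reward within the time budget. The sketch below is an illustration only, not the paper's implementation: it replaces Gurobi with brute-force search over a tiny candidate set, uses a flat travel time between any two spots, and all spot names, rewards, and durations are made up.

```python
from itertools import permutations

# Hypothetical instantiated problem. In LAPPI, the rewards would come from
# LLM-assigned preference scores and the times from the Google Maps API.
spots = {"Colosseum": (9, 1.5), "Pantheon": (7, 1.0), "Trevi": (6, 0.5),
         "Vatican": (10, 2.5)}   # name -> (reward, visit hours)
travel = 0.5                     # flat travel time between any two spots (a simplification)
budget = 8.0                     # the user's time constraint, in hours

def best_plan(spots, travel, budget):
    """Brute-force orienteering: maximize total reward within the time budget."""
    best = (0, ())
    names = list(spots)
    for r in range(1, len(names) + 1):
        for order in permutations(names, r):
            visit = sum(spots[s][1] for s in order)
            total = visit + travel * (len(order) - 1)
            reward = sum(spots[s][0] for s in order)
            if total <= budget and reward > best[0]:
                best = (reward, order)
    return best

reward, route = best_plan(spots, travel, budget)
```

A production solver like Gurobi handles far larger candidate sets with exact MILP formulations; the point here is only that once the LLM has produced rewards, durations, and a budget, the remaining step is ordinary combinatorial optimization.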

User Study Takeaways

This plot summarizes the user study with 12 participants comparing baseline tools, single-LLM LAPPI, and multi-LLM LAPPI. Statistically significant differences were observed in the following five metrics.

Efficiently Planned. Participants reported that LAPPI made it faster to build a trip plan. This is consistent with the system design: destination discovery, value assignment, and route generation are connected in one workflow instead of being split across multiple tools.

Easily Planned. Users also found the planning process easier. The system reduces the amount of manual bookkeeping required to compare spots, estimate visit times, and rebuild routes after each change.

Reflects Preferences. During destination exploration, LAPPI was rated better at surfacing spots that matched what users actually wanted. That result matters because the benefit of optimization depends on whether the candidate set already reflects the user's intent.

Confident in Plan. Participants expressed higher confidence in the final itinerary produced by LAPPI. The interface does not just list suggestions; it returns a complete plan that users can inspect and refine.

Reasonable Route. LAPPI also scored higher on how reasonable the route felt, particularly in respecting the required start and end points. This suggests that the optimization backend improved not only objective performance but also users' sense that the plan was coherent.

Why Optimization Matters

This comparison makes the technical point visually clear. Under the same 8-hour trip-planning constraint in Rome, LAPPI generates a route that visits more POIs and avoids the kind of path retracing seen in the direct GPT-4 route.

This visual difference matches the quantitative evaluation:

  • Measured as deviation from the target time budget, LAPPI averaged 0.55 hours while GPT-4 with prompt engineering averaged 1.8 hours. Since smaller deviation is better, this indicates that LAPPI stayed much closer to the intended trip duration.
  • LAPPI achieved 100% success on satisfying the time constraint. GPT-4 reached 96%, but often by returning overly conservative plans that left a large time buffer rather than using the available budget effectively.
  • LAPPI also obtained higher total reward and visited more POIs.
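For concreteness, the deviation metric in the first bullet is simply the absolute gap between a plan's total duration and the target budget. The values below are illustrative, not the paper's data, and working in minutes avoids floating-point surprises:

```python
def time_deviation(plan_minutes, budget_minutes):
    """Absolute deviation of a plan's duration from the time budget, in minutes."""
    return abs(plan_minutes - budget_minutes)

# Hypothetical plans against an 8-hour (480-minute) budget:
dev_tight = time_deviation(450, 480)   # uses most of the budget
dev_loose = time_deviation(372, 480)   # leaves a large unused buffer
```

Both a plan that overruns and one that leaves the budget unused score poorly, which is why GPT-4's conservative plans fare worse on this metric despite high constraint satisfaction.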

The paper's broader lesson is that LLMs are effective at capturing preferences, but dedicated solvers are still necessary when feasibility and optimization quality matter.

Beyond Trip Planning

Beyond trip planning, the paper also adapts LAPPI to meal planning with calorie constraints. That second use case supports the general claim of the paper: LAPPI is a reusable framework for conversationally instantiating optimization problems, not a single-purpose travel planner.
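The meal-planning adaptation fits the same pattern: the LLM assigns each candidate dish a preference score, and the calorie limit becomes the constraint. A minimal sketch of the resulting solver step as a 0/1 knapsack follows; the dishes, scores, and calorie values are hypothetical, and this dynamic program stands in for whatever solver the paper actually uses.

```python
# Hypothetical instantiated meal-planning problem: pick dishes to maximize
# total preference score without exceeding the calorie budget (0/1 knapsack).
dishes = {"salad": (5, 150), "pasta": (8, 600), "soup": (6, 250),
          "steak": (9, 700), "fruit": (4, 100)}   # name -> (score, kcal)
budget = 1000

def best_meal(dishes, budget):
    """Dynamic program over calories used; returns (score, chosen dishes)."""
    best = {0: (0, ())}   # kcal used -> (best score, dishes achieving it)
    for name, (score, kcal) in dishes.items():
        # Snapshot current states so each dish is added at most once per plan.
        for used, (s, chosen) in list(best.items()):
            new_used = used + kcal
            if new_used <= budget and (new_used not in best
                                       or best[new_used][0] < s + score):
                best[new_used] = (s + score, chosen + (name,))
    return max(best.values())

score, meal = best_meal(dishes, budget)
```

The structure mirrors trip planning exactly: LLM-derived scores as the objective, a single numeric budget as the constraint, and a standard solver in between, which is what makes the framework reusable across domains.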

The paper closes by identifying future work around richer constraint types, stronger explainability for excluded options, broader domain validation, and better memory use across longer interactions.

Citation

@article{kuroki2026lappi,
  title={LAPPI: Interactive Optimization With LLM-Assisted Preference-Based Problem Instantiation},
  author={Kuroki, So and Nakagawa, Manami and Yoshida, Shigeo and Koyama, Yuki and Kozuno, Tadashi},
  journal={IEEE Access},
  volume={14},
  pages={49207--49224},
  year={2026},
  doi={10.1109/ACCESS.2026.3679052}
}