How We Build Frontends: jQuery in 2026
In 2026, we still ship jQuery. This isn’t technical debt or legacy code we haven’t gotten around to modernizing. It’s an intentional choice.
SolverForge quickstarts are educational demos for constraint optimization. Their purpose is to show developers how to model scheduling, routing, and resource allocation problems—not to demonstrate frontend engineering. Every architectural decision in these applications prioritizes transparency.
The Stack We Chose (and Why)
Our frontend stack:
- jQuery 3.7 — DOM manipulation, AJAX
- Bootstrap 5.3 — Responsive layout, components
- No React/Vue/Angular — Intentional
The reasoning: when a developer opens the browser devtools, we want them to see exactly what’s happening. No virtual DOM diffing. No state management abstractions. No build step artifacts. Just JavaScript that reads like what it does.
Extending jQuery for REST
HTTP semantics matter in REST APIs. jQuery ships shorthand helpers for GET and POST but none for PUT or DELETE, so we add our own:
$.put = function (url, data) {
  return $.ajax({
    url: url,
    type: 'PUT',
    data: JSON.stringify(data),
    contentType: 'application/json',
  });
};

$.delete = function (url) {
  return $.ajax({
    url: url,
    type: 'DELETE',
  });
};
A few lines per method. A React developer would reach for axios. A Vue developer might use the Fetch API with async/await. Both are fine choices for production applications. But for a demo where someone is learning constraint modeling, this explicitness matters.
Visualization Strategy
Different optimization problems need different visualizations. The principle: match the tool to the domain.
Problem Types and Their Natural Representations
Scheduling problems need timeline views. Employees as rows, shifts as blocks, time as the x-axis. Gantt-style charts make resource allocation over time immediately comprehensible. Whether you use vis-timeline, a commercial scheduler component, or build something custom, the representation matters more than the library.
Routing problems need maps. A table of coordinates tells you nothing; lines on a map tell you everything. The solver’s output is geographic—the visualization should be too.
Warehouse and spatial problems need 2D or isometric views. When physical layout affects the optimization (picking paths, equipment placement), abstract representations lose critical information.
Financial problems have established conventions. Risk-return scatter plots, allocation pie charts, time series. Analysts expect these formats; deviating creates cognitive friction.
Choosing Components
We don’t standardize on one visualization library. Each quickstart uses whatever component best serves its domain—sometimes lightweight open-source libraries, sometimes more capable commercial components when the complexity warrants it.
The trade-off is maintenance overhead across different APIs. The benefit is using tools designed specifically for each visualization type rather than forcing everything through a general-purpose abstraction.
For educational demos, this trade-off works. The quickstarts aren’t a unified product; they’re independent examples. Consistency between them matters less than clarity within each one.
The Shared Component System
All quickstarts share utilities through a webjars structure:
/webjars/solverforge/
├── js/
│   └── solverforge-webui.js
└── css/
    └── solverforge.css
solverforge-webui.js Utilities
This file contains cross-cutting concerns:
// Error handling
function showError(title, xhr) {
  const message = xhr.responseJSON?.message || xhr.statusText || 'Unknown error';
  const alert = $('<div class="alert alert-danger alert-dismissible fade show">')
    .append($('<strong>').text(title + ': '))
    .append(document.createTextNode(message))
    .append($('<button type="button" class="btn-close" data-bs-dismiss="alert">'));
  $('#alerts').append(alert);
}
// Tango color palette for consistency across visualizations
const TANGO_COLORS = [
  '#3465a4', // Blue
  '#73d216', // Green
  '#f57900', // Orange
  '#cc0000', // Red
  '#75507b', // Purple
  '#c17d11', // Brown
  '#edd400', // Yellow
  '#555753'  // Grey
];

function getTangoColor(index) {
  return TANGO_COLORS[index % TANGO_COLORS.length];
}
Why Tango Colors?
The Tango Desktop Project defined a color palette in the mid-2000s for consistent icon design across Linux desktops. These colors were specifically chosen for:
- Distinguishability at small sizes
- Accessibility across different displays
- Aesthetic harmony when used together
Perfect for distinguishing multiple routes, employees, or resources in visualization.
State Management Without a Framework
React has useState. Vue has reactive refs. Angular has services. We have global variables:
var scheduleId = null;
var schedule = null;
var solving = false;
var timeline = null;
var employeeGroup = null;
var locationGroup = null;
var autoRefreshEnabled = true;
Seven variables. That’s the entire state of the application.
Data Flow
The data flow is explicit and traceable:
Backend (FastAPI)
↓ JSON
Frontend receives response
↓ JavaScript objects
Rendering functions
↓ jQuery DOM manipulation
User sees updated UI
No state synchronization. No computed properties. No watchers. When data changes, we explicitly call the function that updates the relevant UI section.
Card-Based Rendering
Most quickstarts render entities as Bootstrap cards:
function createTaskCard(task) {
  const card = $('<div class="card task-card">')
    .addClass(getStateClass(task));
  const header = $('<div class="card-header">')
    .append($('<span class="task-name">').text(task.name));
  if (task.violations && task.violations.length > 0) {
    header.append(
      $('<span class="badge bg-danger ms-2">')
        .text(task.violations.length + ' violations')
    );
  }
  const body = $('<div class="card-body">')
    .append($('<p>').text('Assigned to: ' + (task.employee?.name || 'Unassigned')))
    .append($('<p>').text('Duration: ' + formatDuration(task.duration)));
  return card.append(header).append(body);
}

function getStateClass(task) {
  if (task.violations?.some(v => v.type === 'HARD')) return 'border-danger';
  if (task.violations?.some(v => v.type === 'SOFT')) return 'border-warning';
  return 'border-success';
}
Violation detection happens at render time. Cards show red borders for hard constraint violations, yellow for soft, green for satisfied. The visual feedback is immediate and requires no explanation.
The Code-Link System
Every UI element in a quickstart can be clicked to reveal its source code.
How It Works
function attachCodeLinks() {
  $('[data-code-ref]').each(function () {
    const $el = $(this);
    const ref = $el.data('code-ref');
    $el.addClass('code-linked');
    $el.on('click', function (e) {
      if (e.ctrlKey || e.metaKey) {
        e.preventDefault();
        showCodePanel(ref);
      }
    });
  });
}

function showCodePanel(ref) {
  const [file, pattern] = ref.split(':');
  $.get('/api/source/' + encodeURIComponent(file))
    .done(function (source) {
      const lineNumber = findPatternLine(source, pattern);
      highlightCode(source, lineNumber, file);
    });
}

function findPatternLine(source, pattern) {
  const lines = source.split('\n');
  for (let i = 0; i < lines.length; i++) {
    if (lines[i].includes(pattern)) {
      return i + 1;
    }
  }
  return 1;
}
Runtime Line Detection
Notice that we don’t hardcode line numbers. The data-code-ref attribute contains a file path and a search pattern:
<button data-code-ref="constraints.py:def required_skill">
Required Skill Constraint
</button>
When clicked, we fetch the source file and find the line containing def required_skill. This means:
- Line numbers stay correct as code changes
- Refactoring doesn’t break links
- The same pattern works across different quickstarts
Syntax Highlighting
We use Prism.js for syntax highlighting:
function highlightCode(source, lineNumber, filename) {
  const language = getLanguage(filename);
  const highlighted = Prism.highlight(source, Prism.languages[language], language);
  const $panel = $('#code-panel');
  $panel.find('.code-content').html(highlighted);
  $panel.find('.code-filename').text(filename);
  // Scroll to the relevant line
  const $line = $panel.find('.line-' + lineNumber);
  if ($line.length) {
    $line.addClass('highlighted');
    $line[0].scrollIntoView({ block: 'center' });
  }
  $panel.show();
}
Bidirectional Navigation
The documentation site has a corresponding “See in Demo” feature. When reading about a constraint in the docs, you can click through to the running application with that constraint highlighted. Education flows both ways: from UI to code, and from code to UI.
CSS Architecture
Our CSS follows a clear pattern for optimization-specific styling:
Constraint State Colors
/* Hard constraint violations - must fix */
.constraint-hard-violated {
  background-color: var(--sf-error-bg);
  border-left: 4px solid var(--sf-error);
}

/* Soft constraint violations - should improve */
.constraint-soft-violated {
  background-color: var(--sf-warning-bg);
  border-left: 4px solid var(--sf-warning);
}

/* Constraint satisfied */
.constraint-satisfied {
  background-color: var(--sf-success-bg);
  border-left: 4px solid var(--sf-success);
}
Capacity Bars
Resource utilization appears throughout optimization UIs:
.capacity-bar {
  height: 20px;
  background-color: var(--sf-neutral-bg);
  border-radius: 4px;
  overflow: hidden;
}

.capacity-fill {
  height: 100%;
  transition: width 0.3s ease;
}

.capacity-fill.under { background-color: var(--sf-success); }
.capacity-fill.warning { background-color: var(--sf-warning); }
.capacity-fill.over { background-color: var(--sf-error); }
Code Link Hover Effects
The code-link system needs visual affordance:
.code-linked {
  cursor: pointer;
  position: relative;
}

.code-linked::after {
  content: '</>';
  position: absolute;
  top: -8px;
  right: -8px;
  font-size: 10px;
  color: var(--sf-muted);
  opacity: 0;
  transition: opacity 0.2s;
}

.code-linked:hover::after {
  opacity: 1;
}

.code-linked:hover {
  outline: 2px dashed var(--sf-primary);
  outline-offset: 2px;
}
A dashed outline and a </> indicator tell users this element reveals source code.
API Design for Frontend Consumption
The backend API is designed for straightforward frontend consumption:
Endpoint Structure
GET /demo-data → List available datasets
GET /demo-data/{id} → Generate sample problem
POST /schedules → Submit problem, start solving
GET /schedules/{id} → Get current solution
GET /schedules/{id}/status → Lightweight status check
PUT /schedules/{id}/analyze → Score breakdown
DELETE /schedules/{id} → Stop solving
Score Format
Scores come back as strings: 0hard/-45soft
The frontend parses these:
function parseScore(scoreString) {
  if (!scoreString) return null;
  const match = scoreString.match(/(-?\d+)hard\/(-?\d+)soft/);
  if (!match) return null;
  return {
    hard: parseInt(match[1], 10),
    soft: parseInt(match[2], 10),
    isFeasible: parseInt(match[1], 10) >= 0
  };
}
Why string format? It’s human-readable in logs and network traces. When debugging why a solution looks wrong, seeing 0hard/-45soft immediately tells you: feasible (hard = 0), 45 units of soft penalty remaining.
Constraint Weight Sliders
Some quickstarts let users adjust constraint weights:
$('#weight-balance').on('input', function () {
  const weight = $(this).val();
  $('#weight-balance-value').text(weight);
  constraintWeights.balance = parseInt(weight, 10);
});

function submitWithWeights() {
  const payload = {
    ...currentSchedule,
    constraintWeights: constraintWeights
  };
  $.post('/schedules', JSON.stringify(payload), function (id) {
    scheduleId = id;
    solving = true;
    refreshSolvingButtons();
  });
}
The weights integrate naturally with the REST payload structure. No separate configuration endpoint needed.
Trade-offs We Accepted
This architecture has real costs:
No component reuse — Every quickstart duplicates similar UI patterns. When we improve card rendering in one, we manually copy to others.
No type safety — JavaScript string manipulation means typos in property names fail silently. We rely on manual testing.
No hot module replacement — Changes require a full page refresh. Development is slower than modern frameworks.
No state persistence — Refresh the page and state is lost. We could add localStorage, but haven’t needed it.
Limited testing — UI logic isn’t unit tested. We test the backend constraint logic thoroughly; the frontend gets manual testing.
When This Approach Works
This architecture excels when:
- Primary goal is education — Readers need to understand the code, not just use the product
- Scope is bounded — Each quickstart is a single-page application under 1000 lines of JavaScript
- Longevity isn’t critical — Quickstarts are reference implementations, not production systems
- Team is small — One or two developers maintaining all quickstarts
It would be wrong for:
- Production applications with multiple developers
- Complex state management requirements
- Long-term maintenance expectations
- Performance-critical real-time updates
Conclusion
The SolverForge quickstart frontend architecture optimizes for a specific goal: helping developers understand constraint optimization by example. Every decision—jQuery over React, global variables over state management, domain-specific libraries over unified abstractions—serves that goal.
Modern frameworks solve real problems. Build tools enable powerful abstractions. Type systems catch bugs. None of that is wrong.
But when your audience is learning, when every abstraction layer is one more thing to understand before getting to the actual concept you’re teaching, simplicity has value.
For educational demos, the goal is a frontend architecture that stays out of the way of the concepts being taught.
Repository: SolverForge Quickstarts
Why Java Interop is Difficult in SolverForge Core
SolverForge Core is written in Rust. The constraint solving engine runs in Java (Timefold). Getting these two to talk to each other has been one of the more humbling engineering challenges we’ve faced.
This post is a retrospective on what we’ve tried, what worked, what didn’t, and what we’ve learned about the fundamental tensions in cross-language constraint solving architectures.
The Fundamental Tension
Constraint solving is computationally intensive. A typical solving run evaluates millions of moves, each triggering:
- Constraint evaluation: Checking if a candidate solution violates rules
- Score calculation: Computing solution quality
- Shadow variable updates: Cascading changes through dependent values
- Move generation: Creating new candidate solutions
The solver’s inner loop is tight and fast. Any overhead in that loop compounds millions of times. This is where language boundaries become painful.
JNI: The Road Not Taken
Java Native Interface (JNI) is the standard way to call Java from native code. We ruled it out early, but it’s worth understanding why.
Memory management complexity: JNI requires explicit management of local and global references. Missing a DeleteLocalRef causes memory leaks. Keeping references across JNI calls requires NewGlobalRef. The garbage collector can move objects, invalidating pointers. Getting this wrong crashes the JVM—often silently, hours into a long solve.
Type marshalling overhead: Every call requires converting types between Rust and Java representations. Strings must be converted to/from modified UTF-8. Arrays require copying. Objects need reflection-based access. In a hot loop, this adds up.
Thread safety constraints: JNI has strict rules about which threads can call which methods. Attaching native threads to the JVM has overhead. Detaching must happen before thread termination. Get the threading wrong and you get deadlocks or segfaults with no stack trace.
Error handling across boundaries: Java exceptions don’t automatically propagate through native code. Every JNI call must check for pending exceptions. When something goes wrong deep in constraint evaluation, the error context is often lost by the time it surfaces.
We looked at Rust libraries that wrap JNI (j4rs, jni-rs, robusta_jni). They reduce boilerplate but can’t eliminate the fundamental overhead of crossing the boundary millions of times per solve.
The JPype Lesson
Before SolverForge, we maintained Python bindings to Timefold using JPype. JPype bridges Python and Java by creating proxy objects—Python method calls translate to Java method calls transparently.
This transparency has a cost. Our order picking quickstart made this viscerally clear: constraint evaluation calls cross the Python-Java boundary millions of times. Each crossing involves type conversion, reference management, and GIL coordination.
@constraint_provider
def define_constraints(factory: ConstraintFactory):
    return [
        minimize_travel_distance(factory),  # Called for every move
        minimize_overloaded_trolleys(factory),
    ]
The constraint provider looks like Python. It runs as Java. Every evaluation triggers JPype conversions. Even with dataclass optimizations, we couldn’t eliminate the FFI cost.
This experience shaped our thinking: FFI bridges that cross the language boundary in constraint hot paths will always have performance problems at scale. The only way to win is to keep the hot path on one side of the boundary.
The HTTP/WASM Approach
Our current architecture tries to solve this by moving all solving to Java:
- Serialize the problem to JSON in Rust
- Send via HTTP to an embedded Java service
- Execute constraints as WASM inside the JVM (via Chicory)
- Return the solution as JSON
The idea is clean: boundary crossing happens exactly twice (problem in, solution out). The hot path runs entirely in the JVM. No FFI overhead during solving.
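Stripped to its essentials, that flow can be sketched in Python. This is a toy stand-in for the architecture, not the real thing: the actual boundary is Rust talking to an embedded JVM over HTTP, and `toy_solver` below is an invented placeholder, not Timefold.

```python
import json

def solve_remote(problem: dict, solve) -> dict:
    """Simulate the two boundary crossings: problem out, solution in."""
    # Crossing 1: serialize the whole problem once (Rust -> HTTP -> JVM).
    wire_out = json.dumps(problem)
    # The hot path runs entirely on the far side of the boundary.
    solution = solve(json.loads(wire_out))
    # Crossing 2: serialize the solution once on the way back.
    wire_in = json.dumps(solution)
    return json.loads(wire_in)

# A stand-in "solver" that assigns each task to a worker round-robin.
def toy_solver(problem):
    workers = problem["workers"]
    return {t: workers[i % len(workers)]
            for i, t in enumerate(problem["tasks"])}

result = solve_remote({"tasks": ["t1", "t2", "t3"], "workers": ["a", "b"]},
                      toy_solver)
```

However many millions of moves `solve` evaluates internally, the serialization cost is paid exactly twice.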
In practice, it’s more complicated.
The WASM Complexity Tax
Compiling constraint predicates to WebAssembly sounds elegant. In practice, it introduces its own category of problems.
Memory layout alignment: WASM memory layout must exactly match what the Rust code expects. We compute field offsets using Rust’s LayoutCalculator approach, and the Java-side dynamic class generation must produce compatible layouts. When this drifts—and it has—values read from wrong offsets, data corrupts silently, and constraints evaluate incorrectly.
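The offset computation itself is simple; the fragility comes from two independent implementations having to agree on it. Here is an illustrative sketch of C-style aligned field offsets (not SolverForge's actual LayoutCalculator; field specs are invented):

```python
def field_offsets(fields):
    """Compute aligned offsets for (name, size, align) field specs,
    rounding each offset up to the field's alignment."""
    offsets, offset = {}, 0
    for name, size, align in fields:
        offset = (offset + align - 1) // align * align  # round up to alignment
        offsets[name] = offset
        offset += size
    return offsets

# An 8-byte field must be 8-byte aligned, so a 4-byte field before it
# leaves 4 bytes of padding that both sides must account for.
layout = field_offsets([("id", 4, 4), ("duration", 8, 8), ("flag", 1, 1)])
```

If the Java side pads differently by even one byte, every subsequent field is read from the wrong offset.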
Limited expressiveness: WASM predicates can only use a defined set of host functions. Complex logic that would be trivial in native code requires creative workarounds. We’ve added host functions for string comparison, list membership, and various list operations. Each new host function is a maintenance burden and a potential source of bugs.
Debugging opacity: When a WASM predicate behaves unexpectedly, debugging is painful. You’re looking at compiled WebAssembly, not your original constraint logic. Stack traces don’t map cleanly to source. Print debugging requires host function calls.
Dynamic class generation: The Java service generates domain classes at runtime from JSON specifications. This is powerful but fragile. Schema mismatches between Rust and Java manifest as runtime errors, often in ways that are hard to trace back to the root cause.
Score Corruption: The Silent Killer
The most insidious problems we’ve encountered involve score corruption. Timefold’s incremental score calculation is highly optimized—it doesn’t recalculate everything from scratch on each move. Instead, it tracks deltas and applies corrections.
This works beautifully when the constraint implementation is correct. When there’s a bug in the WASM layer, the memory layout, or the host function implementation, scores drift. The solver thinks it’s improving the solution when it’s actually making it worse. Or it rejects good moves because corrupted scores make them look bad.
Score corruption is hard to detect because the solver still runs. It still produces solutions. The solutions are just subtly wrong. We’ve added FULL_ASSERT environment mode for testing, which recalculates scores from scratch and compares them to incremental results. But you can’t run production workloads in full assert mode—the performance hit is too severe.
We’ve caught and fixed several score corruption bugs. Each time, we’ve wondered how many edge cases remain.
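The FULL_ASSERT idea can be illustrated with a toy incremental score. Everything here is invented for the sketch (Timefold's incremental machinery is far more involved), but it shows why recompute-and-compare catches drift that normal runs never see:

```python
def full_recompute(assignments, penalty):
    # Ground truth: score the whole solution from scratch.
    return -sum(penalty(a) for a in assignments)

class IncrementalScore:
    """Maintain a score by applying per-move deltas."""
    def __init__(self, assignments, penalty):
        self.penalty = penalty
        self.assignments = list(assignments)
        self.score = full_recompute(self.assignments, penalty)

    def apply_move(self, index, new_value, full_assert=False):
        old = self.assignments[index]
        # Incremental update: only the delta for the changed assignment.
        self.score += self.penalty(old) - self.penalty(new_value)
        self.assignments[index] = new_value
        if full_assert:
            # FULL_ASSERT-style check: recompute from scratch and compare.
            # Any layout or host-function bug shows up here immediately.
            assert self.score == full_recompute(self.assignments, self.penalty)

inc = IncrementalScore([3, 1, 4], penalty=lambda v: v * v)
inc.apply_move(0, 2, full_assert=True)
```

A buggy delta would leave `score` drifting further from ground truth with every move, which is exactly the failure mode described above.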
The Serialization Boundary
Moving problems and solutions across HTTP as JSON has its own costs.
Large problem overhead: Serializing a problem with thousands of entities and millions of constraint-relevant relationships is non-trivial. We’ve optimized our serializers, but there’s a floor to how fast JSON can go.
No intermediate visibility: Once the problem is sent, Rust is blind until the solution comes back. You can’t inspect intermediate solutions. You can’t adjust parameters mid-solve based on progress. Everything must be pre-computed before serialization.
State synchronization: The Rust and Java representations of the problem must stay synchronized. Domain model changes require updating both sides. This is a source of bugs we’ve learned to test carefully.
Service Lifecycle Complexity
The Java service must be started, monitored, and stopped. We handle this in solverforge-service with automatic JAR downloading, JVM process management, and port allocation.
This works, but it adds operational complexity:
- JVM startup time adds latency to first solve
- JAR caching and versioning requires careful management
- Port conflicts require detection and retry logic
- Process health monitoring adds code and failure modes
- Java 24+ requirement narrows deployment options
For users who just want to solve a constraint problem, requiring a JVM feels like a lot of machinery.
What We’ve Learned
Building this architecture has taught us a lot about what makes cross-language constraint solving hard:
- The hot path must be pure: Any boundary crossing in the inner loop is fatal to performance
- Memory layout bugs are silent: They don’t crash, they corrupt
- Incremental score calculation amplifies bugs: Small errors compound into wrong solutions
- Operational complexity compounds: Each moving part adds failure modes
These lessons have shaped how we think about constraint solver architecture. The HTTP/WASM approach was a reasonable bet — it solves the FFI overhead problem by eliminating FFI from the hot path. But the complexity tax is real: the WASM layer introduces subtle bugs, score corruption remains an ever-present concern, and the operational overhead of managing an embedded JVM service is non-trivial.
We’ve spent the past months wrestling with these challenges, and it’s given us a deep appreciation for what a constraint solver actually needs to be reliable and fast.
Looking Ahead
We’re on track for our January release. The work we’ve done understanding these interop challenges, debugging score corruption edge cases, and learning the internals of constraint solving has been invaluable — even when frustrating.
Stay tuned. We think you’ll like what we’ve been building.
Order Picking Quickstart: JPype Bridge Overhead in Constraint Solving
Our current constraint solving quickstarts in Python are based on our stable, legacy fork of Timefold for Python, which uses JPype to bridge to Timefold’s Java solver engine. The latest example is Order Picking—a warehouse optimization problem with real-time isometric visualization showing trolleys routing through shelves to pick orders.
The implementation works and demonstrates the architectural patterns we’ve developed. It also exposes the inherent overhead of FFI (Foreign Function Interface) bridges in constraint-heavy workloads.
The Problem Domain
Order picking is the warehouse operation where workers (or trolleys) collect items from shelves to fulfill customer orders. The optimization challenge combines:
- Capacity constraints: trolleys have buckets with volume limits, products have different sizes
- Routing constraints: minimize travel distance, efficient sequencing
- Assignment constraints: each item picked exactly once, balance load across trolleys
This maps to vehicle routing with bin packing characteristics—a constraint-intensive problem domain.
Real-Time Visualization
The UI renders an isometric warehouse with trolleys navigating between shelving units. Routes update live as the solver reassigns items, color-coded to show which trolley picks which items.
Beyond solving itself, merely getting real-time updates working required tackling JPype-specific challenges. The solver runs in a Java thread and modifies domain objects that Python needs to read. To avoid crossing the Python-Java boundary on every poll, solutions are cached in solver callbacks:
@app.get("/schedules/{problem_id}")
async def get_solution(problem_id: str) -> Dict[str, Any]:
    solver_status = solver_manager.get_solver_status(problem_id)
    with cache_lock:
        cached = cached_solutions.get(problem_id)
        if not cached:
            raise HTTPException(status_code=404)
        result = dict(cached)
    result["solverStatus"] = solver_status.name
    return result
This pattern—caching solver state in callbacks, serving from cache—avoids some JPype overhead in the hot path of UI polling.
Performance Characteristics
Despite this caching workaround, the JPype bridge still introduces significant overhead in constraint-heavy problems like order picking. The cost scales with the number of boundary crossings, which rises steeply as problems grow.
The solver’s work happens primarily in:
- Constraint evaluation: Checking capacity limits, routing constraints, assignment rules
- Move generation: Creating candidate solutions (reassigning items, reordering routes)
- Score calculation: Computing solution quality after each move
- Shadow variable updates: Cascading capacity calculations through trolley routes
For order picking specifically, the overhead compounds from:
- List variable manipulation (PlanningListVariable): Frequent reordering of trolley pick lists
- Shadow variable cascading: Capacity changes ripple through entire routes
- Equality checks: Object comparison during move validation
Each of these operations crosses the Python-Java boundary through JPype, and these crossings happen millions of times during solving.
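A back-of-the-envelope model shows why this compounds. The per-crossing cost below is hypothetical, not a measurement; the point is that any fixed boundary cost multiplies across every evaluation:

```python
def solve_cost(moves, evals_per_move, work_ns, crossing_ns):
    """Model total solve time when every constraint evaluation pays a
    fixed boundary-crossing cost on top of its real work."""
    per_eval = work_ns + crossing_ns
    return moves * evals_per_move * per_eval  # total nanoseconds

# Same workload, with and without an illustrative 500 ns crossing cost.
native = solve_cost(1_000_000, 10, work_ns=50, crossing_ns=0)
bridged = solve_cost(1_000_000, 10, work_ns=50, crossing_ns=500)
slowdown = bridged / native
```

Even when each crossing is cheap in isolation, ten million of them dominate the actual constraint work.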
Why JPype Specifically
JPype bridges Python and Java by converting Python objects to Java proxies, calling Java methods, and converting results back. Each crossing has overhead. In constraint solving, we cross this boundary millions of times:
@constraint_provider
def define_constraints(factory: ConstraintFactory):
    return [
        minimize_travel_distance(factory),  # Called for every move
        minimize_overloaded_trolleys(factory),
    ]
Every constraint evaluation triggers JPype conversions. Even with dataclass optimization (avoiding Pydantic overhead in hot paths), we can't eliminate the FFI cost.
The operations most affected by bridge overhead:
- List operations: PlanningListVariable for trolley steps requires frequent list manipulation
- Shadow variables: capacity calculations cascade through step lists
- Equality checks: object comparison during move validation
Mitigation strategies that help:
- Callback-based caching: Store serialized solutions to avoid repeated boundary crossings
- Simplified domain models: Fewer fields means fewer conversions
- Dataclass over Pydantic: Skip validation overhead in solver hot paths (see architecture comparison)
Why This Validates Rust
This quickstart doesn’t just expose a performance problem—it validates our architectural direction.
We’re building a constraint solver framework in Rust with WASM + HTTP architecture:
- Solver compiles to WebAssembly
- Runs natively in browser or server
- No FFI boundary—just function calls
- Zero serialization overhead for in-memory solving
- No JPype conversions, no GIL contention, direct memory access
With Rust/WASM, the order picking implementation would eliminate all JPype overhead and run constraint evaluation at native speed while keeping the same domain model structure. The architecture stays the same. The performance gap disappears.
Source Code
Repository: SolverForge Quickstarts
Run locally:
git clone https://github.com/SolverForge/solverforge-quickstarts.git
cd solverforge-quickstarts/legacy/order-picking-fast
python -m venv .venv
source .venv/bin/activate
pip install -e .
run-app
Architecture: All quickstarts follow the pattern documented in dataclasses vs Pydantic.
Rust framework development:
The Rust/WASM framework is in early development. Follow progress at github.com/SolverForge.
Dataclasses vs Pydantic in Constraint Solvers
When building constraint solvers in Python, one architectural decision shapes everything else: should domain models use Pydantic (convenient for APIs) or dataclasses (minimal overhead)?
Both tools are excellent at what they’re designed for. The question is which fits the specific demands of constraint solving—where the same objects get evaluated millions of times per solve.
We ran benchmarks across meeting scheduling and vehicle routing problems to understand the performance characteristics of each approach.
Note: These benchmarks were run on small problems (50 meetings, 25-77 customers) using JPype to bridge Python and Java. The findings about relative performance between dataclasses and Pydantic hold regardless of scale, though absolute timings will vary with problem size and infrastructure.
Two Architectural Approaches
Unified Models (Pydantic Throughout)
class Person(BaseModel):
    id: str
    full_name: str

# Single model for API and constraint solving
class MeetingAssignment(BaseModel):
    id: str
    meeting: Meeting
    starting_time_grain: TimeGrain | None = None
    room: Room | None = None
One model structure handles everything: JSON parsing, validation, API docs, and constraint evaluation. This is appealing for its simplicity.
Separated Models (Dataclasses for Solving)
# Domain model (constraint solving)
@dataclass
class Person:
    id: Annotated[str, PlanningId]
    full_name: str

# API model (serialization)
class PersonModel(BaseModel):
    id: str
    full_name: str
Domain models are simple dataclasses. Pydantic handles API boundaries. Converters translate between them.
Benchmark Setup
We tested three configurations across two problem domains, for 60 scenarios in total (10 iterations × 3 configurations × 2 problems):
- Pydantic domain models: Unified approach with Pydantic throughout
- Dataclass domain models: Separated approach with dataclasses for solving
- Java reference: Timefold v1.24.0
Each solve ran for 30 seconds on identical problem instances.
Test problems:
- Meeting scheduling (50 meetings, 18 rooms, 20 people)
- Vehicle routing (25 customers, 6 vehicles)
Results: Meeting Scheduling
| Configuration | Iterations Completed | Consistency |
|---|---|---|
| Dataclass models | 60/60 | High |
| Java reference | 60/60 | High |
| Pydantic models | 46-58/60 | Variable |
What We Observed
Iteration throughput: The dataclass configuration completed all optimization iterations within the time limit, matching the Java reference. The Pydantic configuration sometimes hit the time limit before finishing.
Object equality behavior: We noticed some unexpected constraint evaluation differences when using Pydantic models with Python-generated test data. The same constraint logic produced different results depending on how Person objects were compared during conflict detection.
Results: Vehicle Routing
| Configuration | Iterations Completed | Consistency |
|---|---|---|
| Dataclass models | 60/60 | High |
| Java reference | 60/60 | High |
| Pydantic models | 57-59/60 | Variable |
The pattern was consistent across problem domains.
Understanding the Difference
Object Equality in Hot Paths
Constraint evaluation happens millions of times during solving. Meeting scheduling detects conflicts by comparing Person objects to find double-bookings.
Dataclass equality:
@dataclass
class Person:
    id: str
    full_name: str
    # __eq__ generated from field values
    # Simple, predictable, fast
Python generates straightforward comparison based on fields.
Pydantic equality:
class Person(BaseModel):
    id: str
    full_name: str
    # __eq__ involves model internals
    # Designed for API validation, not hot-path comparison
Pydantic wasn’t designed for millions of equality checks per second—it’s optimized for API validation, where this overhead is negligible.
The Right Tool for Each Job
Pydantic excels at API boundaries: parsing JSON, validating input, generating OpenAPI schemas. These operations happen once per request.
Dataclasses excel at internal computation: simple field access, predictable equality, minimal overhead. These characteristics matter when operations repeat millions of times.
Practical Examples
The quickstart guides demonstrate this pattern in action:
Employee Scheduling
Employee Scheduling Guide shows:
- Hard/soft constraint separation with `HardSoftDecimalScore`
- Load balancing constraints using dataclass aggregation
- Date-based filtering with simple set membership
Key pattern: the domain uses `set[date]` for `unavailable_dates`, giving fast membership testing during constraint evaluation.
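A sketch of that pattern; the field and function names here are illustrative, not the quickstart's exact API:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Employee:
    name: str
    unavailable_dates: set[date] = field(default_factory=set)

def is_available(employee: Employee, shift_date: date) -> bool:
    # O(1) set membership check, cheap enough for the constraint hot path
    return shift_date not in employee.unavailable_dates

ada = Employee("Ada", {date(2026, 3, 14)})
print(is_available(ada, date(2026, 3, 14)))  # False
print(is_available(ada, date(2026, 3, 15)))  # True
```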
Meeting Scheduling
Meeting Scheduling Guide demonstrates:
- Multi-variable planning entities (time + room)
- Three-tier scoring (`HardMediumSoftScore`)
- Complex joining patterns across attendance records
Key pattern: separate `Person`, `RequiredAttendance`, and `PreferredAttendance` dataclasses keep joiner operations simple.
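A simplified sketch of how those separate dataclasses keep a join easy to express; the fields are assumptions, not the guide's exact schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Person:
    id: str

@dataclass(frozen=True)
class RequiredAttendance:
    meeting_id: str
    person: Person

def attends_both(a: RequiredAttendance, b: RequiredAttendance) -> bool:
    # Joiner-style check: the same person required at two meetings.
    # With dataclasses this is a plain field-by-field equality test.
    return a.person == b.person

r1 = RequiredAttendance("m1", Person("p1"))
r2 = RequiredAttendance("m2", Person("p1"))
print(attends_both(r1, r2))  # True
```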
Vehicle Routing
Vehicle Routing Guide illustrates:
- Shadow variable chains (`PreviousElementShadowVariable`, `NextElementShadowVariable`)
- Cascading updates for arrival time calculations
- List variables with `PlanningListVariable`
Key pattern: the `arrival_time` shadow variable cascades through the route chain. Dataclass field assignments keep these updates lightweight.
The Recommended Architecture
Based on our experience, we recommend separating concerns:
src/meeting_scheduling/
├── domain.py # @dataclass models for solver
├── rest_api.py # Pydantic models for API
└── converters.py # Boundary translation
Domain Layer
@planning_entity
@dataclass
class MeetingAssignment:
    id: Annotated[str, PlanningId]
    meeting: Meeting
    starting_time_grain: Annotated[TimeGrain | None, PlanningVariable] = None
    room: Annotated[Room | None, PlanningVariable] = None
Simple structures optimized for solver manipulation.
API Layer
class MeetingAssignmentModel(BaseModel):
    id: str
    meeting: MeetingModel
    starting_time_grain: TimeGrainModel | None = None
    room: RoomModel | None = None
Pydantic handles what it’s designed for: request validation, JSON serialization, OpenAPI documentation.
Boundary Conversion
def assignment_to_model(a: MeetingAssignment) -> MeetingAssignmentModel:
    return MeetingAssignmentModel(
        id=a.id,
        meeting=meeting_to_model(a.meeting),
        starting_time_grain=timegrain_to_model(a.starting_time_grain),
        room=room_to_model(a.room),
    )
Translation happens exactly twice per solve: on ingestion and serialization.
Additional Benefits
Optional Validation Mode
# Production: fast dataclass domain
solver.solve(problem)
# Development: validate before solving
validated = ProblemModel.model_validate(problem_dict)
solver.solve(validated.to_domain())
Get validation during testing. Run at full speed in production.
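One way to wire that toggle, sketched with plain callables instead of a concrete Pydantic model; the environment-flag name is an assumption, not something from the quickstarts:

```python
import os
from dataclasses import dataclass

@dataclass
class Problem:
    meeting_count: int

def build_validated(raw: dict) -> Problem:
    # Development path: check the payload before building the domain object.
    if not isinstance(raw.get("meeting_count"), int):
        raise TypeError("meeting_count must be an int")
    return Problem(meeting_count=raw["meeting_count"])

def build_fast(raw: dict) -> Problem:
    # Production path: trust the caller and skip the checks.
    return Problem(meeting_count=raw["meeting_count"])

def load_problem(raw: dict) -> Problem:
    # Hypothetical flag: opt in to validation during tests and CI.
    if os.getenv("SOLVER_VALIDATE") == "1":
        return build_validated(raw)
    return build_fast(raw)

print(load_problem({"meeting_count": 50}).meeting_count)  # 50
```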
Clear Debugging Boundaries
The separation makes debugging easier—you know exactly what objects the solver sees versus what the API exposes.
Guidelines
When to Use Pydantic
- API request/response validation
- Configuration file parsing
- Data serialization for storage
- OpenAPI schema generation
- Development-time validation
When to Use Dataclasses
- Solver domain models
- Objects compared in tight loops
- Entities with frequent equality checks
- Performance-critical data structures
- Internal solver state
The Hybrid Pattern
@app.post("/schedules")
def create_schedule(request: ScheduleRequest) -> ScheduleResponse:
    # Validate once at API boundary
    problem = request.to_domain()
    # Solve with fast dataclasses
    solution = solver.solve(problem)
    # Serialize once for response
    return ScheduleResponse.from_domain(solution)
Validation where it matters. Performance where it counts.
Trade-offs
More Code
Separated models mean additional files and conversion logic. For simple APIs or prototypes, unified Pydantic might be fine to start with.
Performance at Scale
The overhead difference grows with problem size. Small problems might not show much difference; larger problems will.
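A rough way to feel the scale yourself, measuring only the dataclass side. The absolute numbers are machine-dependent and are not the article's benchmark:

```python
import timeit
from dataclasses import dataclass

@dataclass
class Person:
    id: str
    full_name: str

a = Person("p1", "Ada Lovelace")
b = Person("p1", "Ada Lovelace")

# Constraint evaluation performs equality checks millions of times per
# solve, so even tens of nanoseconds per comparison add up.
n = 1_000_000
seconds = timeit.timeit(lambda: a == b, number=n)
print(f"{n:,} comparisons took {seconds:.2f}s")
```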
Summary
Both Pydantic and dataclasses are excellent tools. The key insight is matching each to its strengths:
- Dataclasses for solver internals—simple, predictable, optimized for repeated operations
- Pydantic for API boundaries—rich validation, serialization, documentation generation
This separation lets each tool do what it does best.
Full benchmark code and results: SolverForge Quickstarts Benchmarks