Releases
- solverforge-cli 1.1.2: Stabilized Scaffolding for the Converged Runtime
- SolverForge 0.8.2: CLI and Runtime Convergence
- SolverForge 0.6.0: Scaffolding and Codegen
- Planner123 1.0: Your Week, Optimized
- solverforge-maps 1.0: Routing Infrastructure for VRP Solvers
- SolverForge 0.5.0: Zero-Erasure Constraint Solving
solverforge-cli 1.1.2: Stabilized Scaffolding for the Converged Runtime
solverforge-cli 1.1.2 is now available. This is a stabilization release following the CLI/runtime convergence work in SolverForge 0.8.x.
Why this release matters
SolverForge 0.8.2 established a single coherent pipeline from scaffolding through to the retained runtime. The CLI was always the entry point; the runtime was always the destination. What changed in 0.8.x was that they now speak the same vocabulary—jobs, snapshots, checkpoints—using the same configuration format and type system all the way through.
CLI 1.1.2 ensures that the scaffolds you generate today target that converged runtime correctly:
- Scaffolded projects now depend on SolverForge 0.8.5, solverforge-ui 0.4.3, and solverforge-maps 2.1.3
- Generated code uses the retained SolverManager lifecycle introduced in 0.8.0
- The same solver.toml configuration drives both scaffolded servers and custom extensions
What changed in 1.1.2
Aligned scaffold targets
The 1.1.2 release fixes target alignment in the scaffold templates and adds proper tag publishing to the release workflow. When you run:
cargo install solverforge-cli
solverforge new my-scheduler
The generated Cargo.toml now correctly pins the 0.8.5 runtime line, ensuring
that new projects start from the converged API surface rather than intermediate
versions.
Reliable test executable resolution
Integration tests now resolve generated-app executables from cargo metadata
rather than assuming hardcoded paths. This makes the test suite more resilient
to workspace layout variations and cross-platform differences.
The broader context: from convergence to stability
The 0.8.x releases were about bringing the pieces together:
- 0.7.0 introduced CLI-first onboarding with solverforge new
- 0.8.0 through 0.8.5 solidified the retained runtime with exact pause semantics, snapshot-bound analysis, the job/snapshot/checkpoint vocabulary, and tracked existence streams for incremental constraint scoring
CLI 1.1.x has been tracking that stabilization:
- 1.1.0 aligned scaffolds with the retained lifecycle
- 1.1.1 hardened the scaffold templates and added demo data generation
- 1.1.2 (this release) polishes target alignment and test reliability
The result is that scaffolding now produces code that fits naturally into the
converged runtime. The generated SolverManager setup, the event stream
handling, and the configuration overlay pattern all match what the 0.8.5 runtime
expects.
Upgrade notes
- New installs: cargo install solverforge-cli gets you 1.1.2
- Existing installs: cargo install solverforge-cli --force
- Verify targets: Run solverforge --version to see scaffold targets
Projects scaffolded with earlier CLI versions continue to work. The runtime APIs are stable within the 0.8.x line. New projects benefit from the corrected target versions and the refined template structure.
What’s next
The CLI is now a reliable entry point for the converged SolverForge toolchain. Planned work includes:
- Expanded generator commands for common constraint patterns
- Deeper integration with the score analysis APIs introduced in 0.8.2
- Refined scaffold extension workflows for custom phases and selectors
solverforge-cli 1.1.2 is available on crates.io.
SolverForge 0.8.2: CLI and Runtime Convergence
SolverForge 0.8.2 is a cumulative update spanning the 0.7.x and 0.8.x lines.
If you last checked in at 0.6.0, the main change is that solverforge-cli and
solverforge now form one coherent developer experience.
Why this release matters
Building a solver application previously required piecing together scaffolding, generated code, manual solver loops, and lifecycle management. Starting with 0.7.0 and solidifying through 0.8.2, that boundary has collapsed into one pipeline:
- Scaffold a project with solverforge new
- Model your domain with derive macros
- Configure behavior via solver.toml and per-solution overlays
- Run with SolverManager handling job lifecycle, pause/resume, and event streaming
- Operate with exact checkpoint semantics and snapshot-bound analysis
The same types flow from generated code through to the retained runtime. The same configuration drives both the scaffolded server and your custom extensions. The same event stream powers both console output and production telemetry.
CLI-first onboarding
solverforge-cli is now the primary entry point for new projects:
cargo install solverforge-cli
solverforge new my-scheduler --standard
cd my-scheduler
solverforge server
The CLI scaffolds complete applications—domain model, constraints, solver configuration, and a working web interface. Templates cover standard-variable and list-heavy planning models, and the generated code targets the same unified runtime you extend.
Use solverforge generate to add entities, facts, and constraints.
Cleaner generated APIs
The #[planning_solution] macro now generates a {Name}ConstraintStreams
trait with typed accessors for each collection field. Instead of manual
extractors like factory.for_each(|s| &s.shifts), you write factory.shifts():
#[planning_solution]
pub struct Schedule {
#[problem_fact_collection]
pub employees: Vec<Employee>,
#[planning_entity_collection]
pub shifts: Vec<Shift>,
#[planning_score]
pub score: Option<HardSoftScore>,
}
// Generated trait enables:
let constraints = ConstraintFactory::<Schedule, HardSoftScore>::new()
.shifts() // No manual extractor
.join(equal(|s| s.employee))
.filter(|a, b| /* ... */)
.penalize_hard()
.named("No overlap");
Entity types with Option planning variables get a generated {Entity}Unassigned
filter. The .named("...") method is now the sole constraint finalizer, replacing
the older as_constraint naming.
Config-driven runtime
Solver behavior is now controlled through solver.toml:
[termination]
seconds_spent_limit = 30
unimproved_seconds_spent_limit = 5
step_count_limit = 10000
The runtime loads this automatically. For per-solution overrides—useful when
different problem instances need different budgets—use the config attribute:
#[planning_solution(
constraints = "define_constraints",
config = "solver_config_for_solution"
)]
pub struct Schedule {
// ...
}
fn solver_config_for_solution(
solution: &Schedule,
config: SolverConfig
) -> SolverConfig {
config.with_termination_seconds(solution.time_limit_secs)
}
The callback receives the loaded solver.toml configuration and should decorate
it, not replace it. This keeps environment-specific settings (hardware limits,
deployment profiles) separate from instance-specific adjustments (customer
SLAs, dynamic deadlines).
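As an illustration of that layering, here is a minimal sketch with hypothetical, simplified stand-ins for SolverConfig and the solution type (the real types carry more fields than shown):

```rust
// Hypothetical simplified stand-ins for illustration only.
#[derive(Debug, Clone, PartialEq)]
struct SolverConfig {
    termination_seconds: u64,
    step_count_limit: u64,
}

impl SolverConfig {
    // Builder-style override of a single budget field.
    fn with_termination_seconds(mut self, secs: u64) -> Self {
        self.termination_seconds = secs;
        self
    }
}

struct Schedule {
    time_limit_secs: u64,
}

// Decorates the config loaded from solver.toml instead of replacing it:
// only the instance-specific budget changes; everything else survives.
fn solver_config_for_solution(solution: &Schedule, config: SolverConfig) -> SolverConfig {
    config.with_termination_seconds(solution.time_limit_secs)
}

fn main() {
    let loaded = SolverConfig { termination_seconds: 30, step_count_limit: 10_000 };
    let effective = solver_config_for_solution(&Schedule { time_limit_secs: 5 }, loaded);
    assert_eq!(effective.termination_seconds, 5);   // instance override applied
    assert_eq!(effective.step_count_limit, 10_000); // solver.toml setting preserved
}
```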
Retained lifecycle: jobs, snapshots, and checkpoints
SolverManager now owns the full retained lifecycle. When you solve, you get a
job ID and an event receiver:
static MANAGER: SolverManager<Schedule> = SolverManager::new();
let (job_id, mut receiver) = MANAGER.solve(schedule).unwrap();
while let Some(event) = receiver.blocking_recv() {
match event {
SolverEvent::Progress { metadata } => {
println!("step {} score {:?}",
metadata.telemetry.step_count,
metadata.telemetry.best_score);
}
SolverEvent::BestSolution { metadata, .. } => {
if let Some(rev) = metadata.snapshot_revision {
let analysis = MANAGER
.analyze_snapshot(job_id, Some(rev))
.unwrap();
// Snapshot-bound analysis
}
}
SolverEvent::Paused { metadata } => {
// Exact pause semantics: solver state is checkpointed
MANAGER.resume(job_id).unwrap();
}
SolverEvent::Completed { .. } => break,
_ => {}
}
}
The lifecycle speaks in neutral terms: jobs, snapshots, and
checkpoints. Every event carries job_id, monotonic event_sequence, and
snapshot_revision. Progress events include telemetry—step count, moves per
second, score calculation rate, acceptance rate—so your UI or monitoring stack
has structured data to work with.
Exact pause and resume
pause() requests settlement at a runtime-owned safe boundary. The runtime
transitions through PauseRequested to Paused only when the checkpoint is
exact and resumable. resume() continues from that in-process checkpoint,
not from a fresh solve seeded with the best solution.
Termination budgets (seconds_spent_limit, step_count_limit, and friends)
are preserved across pause/resume. Paused wall-clock time does not consume
active solve budgets.
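A toy sketch of that accounting (assumed mechanics, not the runtime's actual implementation): the budget accumulates elapsed time only while running, so paused intervals never count against it.

```rust
use std::time::{Duration, Instant};

struct SolveBudget {
    limit: Duration,
    consumed: Duration,             // active solve time already spent
    running_since: Option<Instant>, // None while paused
}

impl SolveBudget {
    fn new(limit: Duration) -> Self {
        Self { limit, consumed: Duration::ZERO, running_since: Some(Instant::now()) }
    }

    // Settle the currently running interval into `consumed`.
    fn pause(&mut self) {
        if let Some(since) = self.running_since.take() {
            self.consumed += since.elapsed();
        }
    }

    // Start a fresh interval; paused wall-clock time was never counted.
    fn resume(&mut self) {
        if self.running_since.is_none() {
            self.running_since = Some(Instant::now());
        }
    }

    fn remaining(&self) -> Duration {
        let live = self.running_since.map(|s| s.elapsed()).unwrap_or(Duration::ZERO);
        self.limit.saturating_sub(self.consumed + live)
    }
}

fn main() {
    let mut budget = SolveBudget::new(Duration::from_secs(30));
    budget.pause();
    std::thread::sleep(Duration::from_millis(20));
    // The sleep happened while paused, so the budget is still (almost) full.
    assert!(budget.remaining() >= Duration::from_secs(29));
    budget.resume();
}
```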
Lifecycle-complete events
The retained runtime emits a complete event vocabulary:
- Progress — periodic telemetry during solving
- BestSolution — new best solution with snapshot revision
- PauseRequested — pause is settling
- Paused — checkpoint is ready, resumable
- Resumed — continued from checkpoint
- Completed — normal termination
- Cancelled — explicit cancellation
- Failed — unrecoverable error
Each event carries authoritative lifecycle state. Your application does not infer completion from transport behavior or analysis availability; it responds to explicit terminal reasons.
Snapshot-bound analysis
Analysis is always revision-specific. You analyze a retained snapshot_revision,
never the live mutable job directly. This means analysis is available while
solving, while paused, and after completion—but availability does not imply
terminal state. Your UI can render constraint breakdowns without accidentally
collapsing a live job into an idle state.
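The shape of that contract can be sketched with a toy revision store (illustrative types only; the real runtime retains solver state, not strings):

```rust
use std::collections::HashMap;

// Toy revision store: analysis always reads a retained, immutable
// revision, never the live mutable job.
struct SnapshotStore {
    revisions: HashMap<u64, String>, // revision -> retained snapshot (stand-in)
    latest: u64,
}

impl SnapshotStore {
    fn new() -> Self {
        Self { revisions: HashMap::new(), latest: 0 }
    }

    // Publishing a new best solution mints the next revision.
    fn publish(&mut self, snapshot: String) -> u64 {
        self.latest += 1;
        self.revisions.insert(self.latest, snapshot);
        self.latest
    }

    // `None` means "latest retained revision"; either way the result is
    // bound to one revision, so it stays stable while solving continues.
    fn analyze(&self, revision: Option<u64>) -> Option<&str> {
        self.revisions.get(&revision.unwrap_or(self.latest)).map(String::as_str)
    }
}

fn main() {
    let mut store = SnapshotStore::new();
    let first = store.publish("best@rev1".to_string());
    store.publish("best@rev2".to_string());
    assert_eq!(store.analyze(Some(first)), Some("best@rev1")); // older revision still readable
    assert_eq!(store.analyze(None), Some("best@rev2"));        // latest by default
}
```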
Responsive operational control
Built-in search phases now poll retained-runtime control during large
neighborhood generation and evaluation. This means pause(), cancel(), and
config-driven termination unwind promptly without application-side watchdogs.
Interruptible retained phases and serialized pause lifecycle publication ensure
that PauseRequested remains authoritative before later pause-state events. If
construction is interrupted by a pause, placements are retried correctly after
resume.
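The polling pattern itself is simple; a hedged sketch follows (the runtime's actual control plumbing is richer than a single flag):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Long-running move generation checks a runtime-owned flag between
// candidates, so pause/cancel requests unwind promptly mid-batch.
fn generate_moves(candidates: usize, stop: &AtomicBool) -> usize {
    let mut generated = 0;
    for _ in 0..candidates {
        if stop.load(Ordering::Relaxed) {
            break; // unwind at a safe boundary instead of finishing the batch
        }
        generated += 1; // stand-in for producing one neighborhood move
    }
    generated
}

fn main() {
    let stop = AtomicBool::new(false);
    assert_eq!(generate_moves(1_000, &stop), 1_000); // runs to completion
    stop.store(true, Ordering::Relaxed);
    assert_eq!(generate_moves(1_000, &stop), 0);     // unwinds immediately
}
```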
List-variable improvements
List-heavy planning models (vehicle routing, task sequences) receive ongoing
attention. The #[planning_list_variable] macro supports a solution_trait
attribute when routing helpers or distance meters need extra solution-side
contracts:
#[planning_list_variable(solution_trait = "routing::VrpSolution")]
pub routes: Vec<Vec<Visit>>,
This keeps generated code compatible with custom domain extensions without requiring local macro forks.
Console and runtime polish
The console output—enabled with features = ["console"]—displays an emerald
truecolor banner matching the build tooling presentation.
Telemetry includes step count, moves per second, score calculations per second,
acceptance rate, phase timing, and score trajectory. The verbose-logging
feature adds DEBUG-level updates approximately once per second during local
search.
Upgrade notes
- Rust version: The current crate line targets Rust 1.92+.
- Breaking in 0.8.0: Solvable::solve now takes SolverRuntime<Self> instead of manual terminate/sender plumbing. SolverManager::solve returns Result<(job_id, receiver), SolverManagerError>. Manual retained-runtime implementations need to update their entrypoints.
- Generated accessors: Prefer factory.shifts() over manual for_each extractors in new code.
- Config decoration: Use #[planning_solution(config = "...")] to layer per-solution adjustments on top of solver.toml, not to replace it.
- Neutral terminology: Update any code or docs using schedule-specific lifecycle terms to the job/snapshot/checkpoint vocabulary.
What’s next
Planned work includes:
- Expanded documentation for retained lifecycle orchestration in service and UI contexts
- More list-heavy planning examples and routing domain helpers
- Refined scaffold extension workflows for custom phases and selectors
SolverForge 0.6.0: Scaffolding and Codegen
We’re excited to announce SolverForge 0.6.0.
This release focuses on onboarding and developer ergonomics:
- CLI scaffolding + code generation for new projects
- Generated collection accessors (for example, factory.shifts()) to reduce extractor boilerplate
- Constraint naming standardization with .named(...)
Why this release matters
SolverForge 0.6.0 makes it easier to go from a blank project to a working solver model while keeping constraint code concise and type-safe.
Upgrade notes
- Prefer generated accessors such as factory.shifts() over manual extractors.
- Use .named("...") for finalizing constraints.
What’s next
We’re continuing to improve project setup and docs so new users can get started faster with fewer moving parts.
Planner123 1.0: Your Week, Optimized
We’re releasing Planner123 1.0, a desktop app that schedules your work week using constraint optimization. You give it tasks with durations, priorities, and dependencies. You connect Google Calendar so your existing meetings stay pinned. The solver finds the best arrangement of everything else – or tells you it can’t.
No AI hallucinations. No guessing. Math.
What It Does
Planner123 models your work week as a constraint satisfaction problem. It takes your tasks and finds the best way to arrange them across your schedule – or tells you it can’t.
Time is divided into 15-minute slots:
- Working hours: 9:00 – 18:00
- Lunch break: 13:00 – 14:00 (excluded)
- Weekends: excluded
- Slots per day: 32 (16 morning + 16 afternoon)
Tasks are organized into projects and linked by DAG dependencies – explicit predecessor/successor edges with built-in cycle detection. High-priority tasks get morning preference. Everything respects your existing calendar.
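Those numbers pin down the slot arithmetic; here is a quick sketch of the implied mapping (our reconstruction from the figures above, not Planner123's actual code):

```rust
// 32 slots/day: slots 0-15 span 9:00-13:00, slots 16-31 span 14:00-18:00;
// the lunch hour is simply absent from the index space.
fn slot_to_time(slot: u32) -> (u32, u32) {
    assert!(slot < 32, "32 slots per day");
    let minutes = if slot < 16 {
        9 * 60 + slot * 15         // morning block starts at 9:00
    } else {
        14 * 60 + (slot - 16) * 15 // afternoon block starts at 14:00
    };
    (minutes / 60, minutes % 60)
}

fn main() {
    assert_eq!(slot_to_time(0), (9, 0));    // first morning slot
    assert_eq!(slot_to_time(15), (12, 45)); // last slot before lunch
    assert_eq!(slot_to_time(16), (14, 0));  // first afternoon slot
    assert_eq!(slot_to_time(31), (17, 45)); // last slot of the day
}
```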
8 Constraints
| # | Constraint | Type | What it enforces |
|---|---|---|---|
| 1 | No overlap | Hard | Two tasks cannot occupy the same time slot |
| 2 | Within bounds | Hard | Tasks start at or after slot 0 |
| 3 | Fits schedule | Hard | Tasks end within the scheduling horizon |
| 4 | No lunch spanning | Hard | Tasks cannot cross the lunch break boundary |
| 5 | Sequence order | Hard | Tasks within a project respect sequence |
| 6 | Dependency order | Hard | DAG predecessors finish before successors start |
| 7 | Morning preference | Soft | High-priority tasks prefer morning slots |
| 8 | Earlier is better | Soft | All tasks prefer earlier placement |
All constraints are individually toggleable at runtime. The solver reports violations in plain English – not codes, not stack traces, not silence.
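Constraint 6 presupposes the cycle detection mentioned above. A minimal check in the spirit of Kahn's algorithm (illustrative representation; Planner123's task types are not public):

```rust
use std::collections::HashMap;

// Cycle check over predecessor/successor edges: repeatedly peel off nodes
// with no remaining predecessors; anything left over sits on a cycle.
fn has_cycle(n: usize, edges: &[(usize, usize)]) -> bool {
    let mut indegree = vec![0usize; n];
    let mut adj: HashMap<usize, Vec<usize>> = HashMap::new();
    for &(pred, succ) in edges {
        indegree[succ] += 1;
        adj.entry(pred).or_default().push(succ);
    }
    let mut queue: Vec<usize> = (0..n).filter(|&i| indegree[i] == 0).collect();
    let mut visited = 0;
    while let Some(node) = queue.pop() {
        visited += 1;
        for &next in adj.get(&node).into_iter().flatten() {
            indegree[next] -= 1;
            if indegree[next] == 0 {
                queue.push(next);
            }
        }
    }
    visited < n // some node never reached indegree 0, so a cycle exists
}

fn main() {
    assert!(!has_cycle(3, &[(0, 1), (1, 2)]));        // chain: fine
    assert!(has_cycle(3, &[(0, 1), (1, 2), (2, 0)])); // loop: rejected
}
```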
Your Calendar Is Confidential
Think about what’s in your calendar. Every meeting, every client name, every project codename, every doctor’s appointment, every 1:1 with the person you’re about to promote or let go. Your calendar is a complete map of your priorities, your relationships, and your time.
Every cloud-based scheduling tool asks you to hand that over. They sync it to their servers. They process it through their APIs. They store it in their databases. Some of them feed it to AI models. Most of them don’t tell you exactly what happens to it. All of them require you to trust a third party with the most detailed record of your professional life.
Planner123 runs on your machine. Your data never leaves your laptop. The solver runs locally in native Rust – not in a browser tab making API calls, not in a sandboxed VM phoning home, not “on-device” with an asterisk. Locally. The binary doesn’t even make network requests unless you explicitly connect Google Calendar, and even then the OAuth flow talks directly to Google – we never see your tokens or your events.
Faster, too. 350,000+ moves per second, zero network latency, zero waiting for a server to spin up.
The Philosophy
The engine underneath Planner123 – SolverForge – is open source. Truly open source. You can read every line, fork it, build on it. That part is free and always will be.
Planner123 itself is closed source. It’s a commercial product. We can’t give it away – rent isn’t a soft constraint.
So we did the next best thing: shareware.
Take the binary. Use it. It’s yours – not “yours until you stop paying,” not “yours on up to 3 devices,” not “yours according to section 14.2(b).” Yours. If it makes your week better, come back and pay what it’s worth. If it makes your business better, buy the source and build on it.
No subscriptions. No seat licenses. No enterprise negotiations. No tricks, no gates, no guilt. Just the oldest deal in software: a developer and a user.
Pricing
| Tier | Price | What you get |
|---|---|---|
| Shareware | EUR 0 | Full binary, 7-day scheduling horizon, Google Calendar sync, lifelong updates |
| Shareware++ | EUR 29.90 (one-time) | Unlimited horizon, Markdown/CSV data sources, AI agent integration, REST API |
| Free | EUR 4,360 (one-time) | Full source code, web application (Rails + Rust), team support, skills and fairness constraints |
The most expensive tier is called “Free” on the landing page. That’s not a typo. It’s the only tier where the software is truly free – you get the full repo, you own the code, you can scale it, ship it in your own product. Freedom has a price. We put ours on the label.
How It Works
You add your tasks. You set durations, priorities, and dependencies. You connect Google Calendar. You hit solve.
Planner123 runs a constraint solver powered by SolverForge 0.5.2 that evaluates hundreds of thousands of possible arrangements per second, searching for the one that satisfies all your hard constraints while optimizing the soft ones. The solver runs in the background – you watch your schedule assemble itself in real time.
Performance:
| Metric | Value |
|---|---|
| Move throughput | 350,000+ moves/second |
| GC pauses | 0 (native Rust, no runtime) |
| Default solve time | 300 seconds (configurable) |
| Slot resolution | 15 minutes |
Three Views
Planner123 provides three synchronized views of your schedule:
Plan – An outliner-style editor for managing tasks. Inline editing, drag-and-drop reorder, dependency indicators, and a right-click context menu for pinning, priority changes, and bulk actions. This is where you define the problem.
Gantt – A Frappe Gantt chart showing your solved schedule on a timeline. Dependency arrows render across projects. Google Calendar events appear as pinned blocks the solver works around.
Calendar – A FullCalendar month view. Drag a task to a new slot and the solver reflows everything else. Your score updates in real time.
All three views update live as the solver runs. The score panel shows hard and soft constraint satisfaction, and any violations are explained in human-readable detail.
Architecture
Planner123 is a Tauri v2 application. The backend is Rust. The frontend is vanilla JavaScript served through the system’s native WebView – no Electron, no bundled Chromium, no 200 MB of overhead.
The constraint solver runs natively via SolverForge 0.5.2. The UI is built with Shoelace web components, Frappe Gantt, FullCalendar, and vis-timeline – all vendored offline. There are no CDN requests, no build step, and no Node.js dependency. The app runs fully airgapped after installation.
Supported platforms:
- Linux: DEB, RPM, AppImage
- Windows: MSI, NSIS installer
- macOS: DMG (experimental)
What’s Next
Planner123 1.0 ships the core scheduling engine. The roadmap includes:
- Unlimited scheduling horizon – currently limited to 7 days in the shareware tier
- Data source integrations – import tasks from Markdown, CSV, and other formats
- AI agent integration – call Planner123 from your AI agent or automation pipeline
- REST API – expose the solver as a local HTTP service for programmatic access
- Web application – a Rails + Rust version for teams
- Skills and fairness constraints – for team scheduling scenarios
Try It
Planner123 is available now. Download the binary, add your tasks, connect your calendar, and let the solver figure out the rest.
Landing page: solverforge.org/planner123
Get Planner123: Download
Powered by: SolverForge 0.5.2
solverforge-maps 1.0: Routing Infrastructure for VRP Solvers
We’re releasing solverforge-maps 1.0, our Rust library for road network routing in vehicle routing problems. This library handles the map-related infrastructure that VRP solvers need: fetching road networks, computing travel time matrices, and generating route geometries.
What It Does
solverforge-maps provides a simple workflow for VRP applications:
use solverforge_maps::{BoundingBox, Coord, RoadNetwork};
let locations = vec![
Coord::new(39.95, -75.16),
Coord::new(39.96, -75.17),
Coord::new(39.94, -75.15),
];
let bbox = BoundingBox::from_coords(&locations).expand_for_routing(&locations);
let network = RoadNetwork::load_or_fetch(&bbox, &Default::default(), None).await?;
let matrix = network.compute_matrix(&locations, None).await;
That’s it. Load a road network for your delivery locations, compute the travel time matrix, feed it to your solver.
Key Features
Zero-Erasure Architecture: Following the SolverForge design philosophy, solverforge-maps uses no Arc, no Box<dyn>, and no trait objects in hot paths. The NetworkRef type provides zero-cost access to cached networks via Deref.
R-Tree Spatial Indexing: Coordinate snapping to the road network runs in O(log n) via R-tree, making it practical to route thousands of delivery points.
3-Tier Caching: Network data flows through in-memory cache, file cache, and Overpass API. Repeated requests for the same region are instant. Cache statistics are exposed for monitoring:
let stats = RoadNetwork::cache_stats().await;
println!("Hits: {}, Misses: {}", stats.hits, stats.misses);
Dynamic Speed Profiles: Travel times respect OSM maxspeed tags when available, falling back to sensible defaults by road type (motorway: 100 km/h, residential: 30 km/h, etc.).
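A sketch of that fallback logic (the crate's actual default table and maxspeed parsing are more complete; values beyond the two quoted above are assumptions):

```rust
// Fall back to a per-road-type default when no usable maxspeed tag exists.
fn default_speed_kmh(highway: &str) -> f64 {
    match highway {
        "motorway" => 100.0,   // quoted default
        "residential" => 30.0, // quoted default
        _ => 50.0,             // assumed generic fallback
    }
}

// Prefer a numeric OSM maxspeed tag; otherwise use the road-type default.
fn speed_kmh(maxspeed: Option<&str>, highway: &str) -> f64 {
    maxspeed
        .and_then(|tag| tag.trim().parse::<f64>().ok())
        .unwrap_or_else(|| default_speed_kmh(highway))
}

fn main() {
    assert_eq!(speed_kmh(Some("70"), "residential"), 70.0);   // tag wins
    assert_eq!(speed_kmh(None, "motorway"), 100.0);           // fallback by road type
    assert_eq!(speed_kmh(Some("none"), "residential"), 30.0); // unparseable tag
}
```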
Route Geometries: Full road-following geometries for visualization, with Douglas-Peucker simplification and Google Polyline encoding for efficient transmission to frontends.
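Polyline encoding in particular is a well-specified format; here is a from-scratch sketch of the standard algorithm (not the crate's internal code):

```rust
// Google Polyline encoding at 1e5 precision: each coordinate is stored as a
// delta from the previous one, then packed into printable 5-bit chunks.
fn encode_polyline(coords: &[(f64, f64)]) -> String {
    let mut out = String::new();
    let (mut prev_lat, mut prev_lng) = (0i64, 0i64);
    for &(lat, lng) in coords {
        let lat_e5 = (lat * 1e5).round() as i64;
        let lng_e5 = (lng * 1e5).round() as i64;
        encode_value(lat_e5 - prev_lat, &mut out);
        encode_value(lng_e5 - prev_lng, &mut out);
        prev_lat = lat_e5;
        prev_lng = lng_e5;
    }
    out
}

fn encode_value(value: i64, out: &mut String) {
    // Left-shift and invert negatives so the sign lives in the low bit.
    let mut v = value << 1;
    if value < 0 {
        v = !v;
    }
    // Emit 5-bit chunks, low bits first, with a continuation flag (0x20).
    loop {
        let chunk = (v & 0x1f) as u8;
        v >>= 5;
        if v != 0 {
            out.push(((chunk | 0x20) + 63) as char);
        } else {
            out.push((chunk + 63) as char);
            break;
        }
    }
}

fn main() {
    // Canonical example from the polyline format documentation.
    let route = [(38.5, -120.2), (40.7, -120.95), (43.252, -126.453)];
    assert_eq!(encode_polyline(&route), "_p~iF~ps|U_ulLnnqC_mqNvxq`@");
}
```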
Graph Connectivity Analysis: Debug routing failures with strongly connected component analysis:
let components = network.strongly_connected_components();
let largest_fraction = network.largest_component_fraction();
Input Validation: Coord and BoundingBox validate on construction with typed errors. No silent NaN propagation or out-of-range coordinates.
API Surface
The public API consists of:
| Type | Purpose |
|---|---|
| Coord | Validated geographic coordinate |
| BoundingBox | Validated rectangular region |
| RoadNetwork | Core routing graph |
| NetworkRef | Zero-cost cached network reference |
| TravelTimeMatrix | N x N travel times with statistics |
| RouteResult | Single route with geometry |
| RoutingProgress | Progress updates for long operations |
Error handling is explicit via RoutingError variants that distinguish snap failures, unreachable pairs, network errors, and invalid input.
Installation
[dependencies]
solverforge-maps = "1.0"
tokio = { version = "1", features = ["full"] }
Production Use
We run solverforge-maps in production for the Vehicle Routing Quickstart. It handles routing for real delivery optimization scenarios and has proven reliable for our use cases.
The 1.0 version represents API stability. We don’t anticipate breaking changes to the public interface.
SolverForge 0.5.0: Zero-Erasure Constraint Solving
We’re excited to announce SolverForge 0.5.0, a complete rewrite of SolverForge as a native Rust constraint solver. This isn’t a wrapper around an existing solver or a bridge between languages, but a ground-up implementation built on a new architecture powered by the SERIO (Scoring Engine for Real-time Incremental Optimization) engine, our zero-erasure implementation inspired by Timefold’s BAVET engine.
After exploring FFI complexity, performance bottlenecks in Python-Java bridges, and the architectural constraints of cross-language constraint solving, we made a fundamental choice: build something different. The result is a general-purpose constraint solver in Rust, and it is blazing fast.
While this release is labeled beta as the API continues to mature, SolverForge 0.5.0 is production-capable and represents a major architectural milestone in the project’s evolution.
What is SolverForge?
SolverForge is a constraint solver for planning and scheduling problems. It tackles complex optimization challenges like employee scheduling, vehicle routing, resource allocation, and task assignment—problems where you need to satisfy hard constraints while optimizing for quality metrics.
Inspired by Timefold (formerly OptaPlanner), SolverForge takes a fundamentally different architectural approach centered on zero-erasure design. Rather than relying on dynamic dispatch and runtime polymorphism, SolverForge preserves concrete types throughout the solver pipeline, enabling aggressive compiler optimizations and predictable performance characteristics.
At its core is the SERIO engine—Scoring Engine for Real-time Incremental Optimization—which efficiently propagates constraint changes through the solution space as the solver explores candidate moves.
Zero-Erasure Architecture
The zero-erasure philosophy shapes every layer of SolverForge. Here’s what it means in practice:
- No trait objects: No Box<dyn Trait> or Arc<dyn Trait> in hot paths
- No runtime dispatch: All generics resolved at compile time via monomorphization
- No hidden allocations: Moves, scores, and constraints are stack-allocated
- Predictable performance: No garbage collection pauses, no vtable lookups
Traditional constraint solvers often use polymorphism to handle different problem types dynamically. This flexibility comes at a cost: heap allocations, pointer indirection, and unpredictable cache behavior. In constraint solving, where the inner loop evaluates millions of moves per second, these costs compound quickly.
SolverForge’s zero-erasure design means the compiler knows the concrete types of your entities, variables, scores, and constraints at compile time. It can inline aggressively, eliminate dead code, and generate cache-friendly machine code tailored to your specific problem structure.
// Zero-erasure move evaluation - fully monomorphized
fn evaluate_move<M: Move<Solution>>(
move_: &M,
director: &mut TypedScoreDirector<Solution, Score>
) -> Score {
// No dynamic dispatch, no allocations, no boxing
director.do_and_process_move(move_)
}
This isn’t just a performance optimization—it fundamentally changes how you reason about solver behavior. Costs are visible in the type system. There are no surprise heap allocations or dynamic dispatch overhead hiding in framework abstractions.
The SERIO Engine
SERIO—Scoring Engine for Real-time Incremental Optimization—is SolverForge’s constraint evaluation engine. It powers the ConstraintStream API, which lets you define constraints declaratively using fluent builders:
use solverforge::stream::{ConstraintFactory, joiner};
fn define_constraints() -> impl ConstraintSet<Schedule, HardSoftScore> {
let factory = ConstraintFactory::<Schedule, HardSoftScore>::new();
let required_skill = factory
.clone()
.for_each(|s: &Schedule| s.shifts.as_slice())
.join(
|s: &Schedule| s.employees.as_slice(),
joiner::equal_bi(
|shift: &Shift| shift.employee_id,
|emp: &Employee| Some(emp.id),
),
)
.filter(|shift: &Shift, emp: &Employee| {
!emp.skills.contains(&shift.required_skill)
})
.penalize(HardSoftScore::ONE_HARD)
.as_constraint("Required skill");
let no_overlap = factory
.for_each_unique_pair(
|s: &Schedule| s.shifts.as_slice(),
joiner::equal(|shift: &Shift| shift.employee_id),
)
.filter(|a: &Shift, b: &Shift| {
a.employee_id.is_some() && a.start < b.end && b.start < a.end
})
.penalize(HardSoftScore::ONE_HARD)
.as_constraint("No overlap");
(required_skill, no_overlap)
}
The key to SERIO’s efficiency is incremental scoring. When the solver considers a move (like reassigning a shift to a different employee), SERIO doesn’t re-evaluate every constraint from scratch. Instead, it tracks which constraint matches are affected by the change and recalculates only those.
Under the zero-erasure design, these incremental updates happen without heap allocations or dynamic dispatch. The constraint evaluation pipeline is fully monomorphized—each constraint stream compiles to specialized code for your exact entity types and filter predicates.
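The retract-and-insert idea behind incremental scoring can be shown with a toy delta computation (simplified types; SERIO's real node network is far more general):

```rust
#[derive(Clone, Copy)]
struct Shift { employee: u32, start: i64, end: i64 }

// Two shifts clash when the same employee holds both and the intervals overlap.
fn overlaps(a: Shift, b: Shift) -> bool {
    a.employee == b.employee && a.start < b.end && b.start < a.end
}

// Score delta for moving shifts[idx] to `updated`: only pairs involving the
// moved shift are re-evaluated, not the full O(n^2) pair matrix.
// Positive delta = penalties retracted, i.e. the move improves the score.
fn delta_for_move(shifts: &[Shift], idx: usize, updated: Shift) -> i64 {
    let mut delta = 0i64;
    for (j, &other) in shifts.iter().enumerate() {
        if j == idx {
            continue;
        }
        if overlaps(shifts[idx], other) {
            delta += 1; // retract the old constraint match
        }
        if overlaps(updated, other) {
            delta -= 1; // insert the new constraint match
        }
    }
    delta
}

fn main() {
    let s0 = Shift { employee: 1, start: 0, end: 10 };
    let s1 = Shift { employee: 1, start: 5, end: 15 };
    // Moving s1 to another employee removes the single overlap penalty.
    assert_eq!(delta_for_move(&[s0, s1], 1, Shift { employee: 2, start: 5, end: 15 }), 1);
}
```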
Developer Experience in 0.5.0
Version 0.5.0 brings significant improvements to the developer experience, making it easier to define problems and monitor solver progress.
Fluent API & Macros
Domain models are defined using derive macros that generate the boilerplate:
use solverforge::prelude::*;
#[problem_fact]
pub struct Employee {
pub id: i64,
pub name: String,
pub skills: Vec<String>,
}
#[planning_entity]
pub struct Shift {
#[planning_id]
pub id: i64,
pub required_skill: String,
#[planning_variable]
pub employee_id: Option<i64>,
}
#[planning_solution]
pub struct Schedule {
#[problem_fact_collection]
pub employees: Vec<Employee>,
#[planning_entity_collection]
pub shifts: Vec<Shift>,
#[planning_score]
pub score: Option<HardSoftScore>,
}
The #[planning_solution] macro now generates helper methods for basic variable problems, including:
- Entity count accessors (shift_count(), employee_count())
- List operation methods for manipulating planning entities
- A solve() method that sets up the solver with sensible defaults
This reduces boilerplate and makes simple problems trivial to solve while still allowing full customization for complex scenarios.
Console Output
With the console feature enabled, SolverForge displays beautiful real-time progress:
____ _ _____
/ ___| ___ | |_ _____ _ __ | ___|__ _ __ __ _ ___
\___ \ / _ \| \ \ / / _ \ '__|| |_ / _ \| '__/ _` |/ _ \
___) | (_) | |\ V / __/ | | _| (_) | | | (_| | __/
|____/ \___/|_| \_/ \___|_| |_| \___/|_| \__, |\___|
|___/
v0.5.0 - Zero-Erasure Constraint Solver
0.000s ▶ Solving │ 14 entities │ 5 values │ scale 9.799 x 10^0
0.001s ▶ Construction Heuristic started
0.002s ◀ Construction Heuristic ended │ 1ms │ 14 steps │ 14,000/s │ 0hard/-50soft
0.002s ▶ Late Acceptance started
1.002s ⚡ 12,456 steps │ 445,000/s │ -2hard/8soft
2.003s ⚡ 24,891 steps │ 448,000/s │ 0hard/12soft
30.001s ◀ Late Acceptance ended │ 30.00s │ 104,864 steps │ 456,000/s │ 0hard/15soft
30.001s ■ Solving complete │ 0hard/15soft │ FEASIBLE
The verbose-logging feature adds DEBUG-level progress updates (approximately once per second during local search), giving insight into solver behavior without overwhelming the terminal.
Shadow Variables
Shadow variables are derived values that depend on genuine planning variables. For example, in vehicle routing, a vehicle’s arrival time at a location depends on which locations come before it in the route.
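The derivation itself is ordinary code; for intuition, here is a toy arrival-time computation along one route (illustrative, with a uniform hypothetical service time):

```rust
// Each arrival depends on everything earlier in the route: the previous
// departure plus the travel time into the stop.
fn arrival_times(depart: i64, service_min: i64, travel_min: &[i64]) -> Vec<i64> {
    let mut arrivals = Vec::with_capacity(travel_min.len());
    let mut clock = depart;
    for &leg in travel_min {
        clock += leg;         // travel into the stop
        arrivals.push(clock); // shadow value for this stop
        clock += service_min; // serve, then depart for the next leg
    }
    arrivals
}

fn main() {
    // Depart at 9:00 (minute 540), 10-minute service, legs of 30 and 20 minutes.
    assert_eq!(arrival_times(540, 10, &[30, 20]), vec![570, 600]);
}
```

Reassigning a visit changes every arrival downstream of it, which is exactly why a score director must track and cascade these dependencies rather than recompute whole routes.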
Version 0.5.0 adds first-class support for shadow variables:
#[planning_entity]
pub struct Visit {
#[planning_variable]
pub vehicle_id: Option<i64>,
#[shadow_variable]
pub arrival_time: Option<i64>, // Computed based on route position
}
The new ShadowAwareScoreDirector tracks shadow variable dependencies and updates them automatically when genuine variables change. The filter_with_solution() method on uni-streams allows constraints to access shadow variables during evaluation:
factory
.for_each(|s: &Schedule| s.visits.as_slice())
.filter_with_solution(|solution: &Schedule, visit: &Visit| {
// Access shadow variable through solution
visit.arrival_time.unwrap() > solution.time_window_end
})
.penalize(HardSoftScore::ONE_HARD)
.as_constraint("Late arrival")
Event-Based Solving
The new solve_with_events() API provides real-time feedback during solving:
use solverforge::{SolverManager, SolverEvent};
let (job_id, receiver) = SolverManager::global().solve_with_events(schedule);
for event in receiver {
match event {
SolverEvent::BestSolutionChanged { solution, score } => {
println!("New best: {}", score);
update_dashboard(&solution);
}
SolverEvent::PhaseStarted { phase_name } => {
println!("Starting {}", phase_name);
}
SolverEvent::SolvingEnded { final_solution, .. } => {
println!("Done!");
break;
}
}
}
This enables building interactive UIs, progress bars, and real-time solution dashboards that update as the solver finds better solutions.
Phase Builders
SolverForge 0.5.0 introduces fluent builders for configuring solver phases:
use solverforge::prelude::*;

let solver = SolverManager::builder()
    .with_phase_factory(|_config| {
        vec![
            Box::new(BasicConstructionPhaseBuilder::new()),
            Box::new(BasicLocalSearchPhaseBuilder::new()
                .with_late_acceptance(400)),
        ]
    })
    .build()?;
Available phase builders include:
- BasicConstructionPhaseBuilder: First Fit construction for basic variables
- BasicLocalSearchPhaseBuilder: Hill climbing, simulated annealing, tabu search, late acceptance
- ListConstructionPhaseBuilder: Construction heuristics for list variables
- KOptPhaseBuilder: K-opt local search for tour optimization (TSP, VRP)
Each phase builder integrates with the new stats system (PhaseStats, SolverStats), providing structured access to solve metrics like step count, score calculation speed, and time spent per phase.
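For context, the `400` passed to `with_late_acceptance(400)` above is the length of a score history: late acceptance admits a move when its score is no worse than the current score, or no worse than the score recorded that many steps earlier. The following is a textbook formulation of that rule on a toy maximization problem, not SolverForge's internal code:

```rust
/// Late-acceptance rule: accept a candidate that is no worse than the
/// current score, or no worse than the score from `history_len` steps ago.
fn lahc_accept(candidate: i64, current: i64, late: i64) -> bool {
    candidate >= current || candidate >= late
}

fn main() {
    let history_len = 5; // plays the role of the 400 above
    let mut history = vec![0i64; history_len]; // circular score buffer
    let mut current = 0i64;

    // A fixed sequence of candidate scores stands in for generated moves.
    for (step, candidate) in [3i64, 1, 4, 2, 6, 5].into_iter().enumerate() {
        let slot = step % history_len;
        if lahc_accept(candidate, current, history[slot]) {
            current = candidate;
        }
        history[slot] = current; // record the score for step + history_len
    }
    println!("final score: {current}");
}
```

The history lets the search temporarily accept worsening moves, which is what allows it to escape local optima that plain hill climbing would get stuck in.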
Breaking Changes
Version 0.5.0 includes one breaking change to enable shadow variable support:
Solution-aware filter traits: Uni-stream filters can now optionally access the solution using filter_with_solution(). This enables constraints to reference shadow variables and other solution-level computed state.
// Before: Filter receives only the entity
.filter(|shift: &Shift| shift.employee_id.is_some())

// After: Same syntax still works
.filter(|shift: &Shift| shift.employee_id.is_some())

// New: Can also access the solution for shadow variables
.filter_with_solution(|solution: &Schedule, shift: &Shift| {
    // Access shadow variables through solution context
    shift.arrival_time.unwrap() < solution.deadline
})
The standard filter() method remains unchanged for simple predicates. Bi/Tri/Quad/Penta stream filters (after joins) continue to receive only the entity tuples without the solution reference.
If you’re upgrading from 0.4.0 and only using entity-level filters, no changes are required.
What’s Still Beta
The component status table in the README tracks what’s complete:
| Component | Status |
|---|---|
| Score types | Complete |
| Domain model macros | Complete |
| ConstraintStream API | Complete |
| SERIO incremental scoring | Complete |
| Construction heuristics | Complete |
| Local search | Complete |
| Exhaustive search | Complete |
| Partitioned search | Complete |
| VND | Complete |
| Move system | Complete |
| Termination | Complete |
| SolverManager | Complete |
| SolutionManager | Complete |
| Console output | Complete |
| Benchmarking | Complete |
Core solver functionality is complete and well-tested. The beta label reflects that we’re still gathering real-world feedback on ergonomics and API design.
Getting Started
Add SolverForge to your Cargo.toml:
[dependencies]
solverforge = { version = "0.5", features = ["console"] }
Try the Employee Scheduling Quickstart, which demonstrates a complete employee scheduling problem with shifts, skills, and availability constraints. It’s the fastest way to see SolverForge in action and understand the workflow for defining problems, constraints, and solving.
The quickstarts repository will continue to grow with more examples covering different problem types and solver features.
Python Bindings Coming Soon
While SolverForge is now a native Rust solver, we remain committed to multi-language accessibility. Python bindings are under active development at github.com/solverforge/solverforge-py and will be released later this month (late January 2026).
The architectural shift to native Rust was a major undertaking, and we chose to focus on getting the core solver right before building language bridges. The Python bindings will provide idiomatic Python APIs backed by SolverForge’s zero-erasure engine, giving Python developers native constraint solving performance with familiar syntax.
This gives us the best of both worlds: predictable, high-performance solving in Rust, with accessible bindings for the broader Python ecosystem.
What’s Next
Beyond Python bindings, the quickstart roadmap includes:
- Employee Scheduling: ✓ Available now
- Vehicle Routing: Next in pipeline
- More domain-specific examples as the ecosystem grows
We’re also working on:
- Expanded documentation and tutorials
- Additional constraint stream operations
- Performance benchmarks comparing different solver configurations
- Community-contributed problem templates
Looking Ahead
Version 0.5.0 represents a turning point for SolverForge. The zero-erasure architecture and SERIO engine provide a foundation for building a high-performance, accessible constraint solver that works across languages while maintaining Rust’s performance and safety guarantees.
We invite you to try SolverForge 0.5.0, explore the quickstarts, and share your feedback. Whether you’re scheduling employees, routing vehicles, or optimizing resource allocation, SolverForge provides the tools to model and solve your constraints efficiently.
The journey from FFI experiments to native Rust solver has been challenging, but the result is a constraint solver built on solid architectural foundations. We’re excited to see what you build with it.
Further reading: