Constraints
Define business rules using constraint streams, joiners, collectors, and score types.
Constraints are the business rules that define what makes a good solution. SolverForge uses a constraint streams API — a declarative, composable way to express rules that reads like a pipeline of filters and transformations.
How Constraints Work
- Select entities or facts from your solution using for_each
- Filter, join, or group to narrow down the matches
- Penalize or reward to affect the score
- Name the constraint with .named()
let factory = ConstraintFactory::<Schedule, HardSoftScore>::new();
factory.for_each(|s: &Schedule| s.shifts.as_slice()) // Select all shifts
.filter(|s| s.employee_idx.is_none()) // Keep unassigned ones
.penalize(HardSoftScore::ONE_HARD) // Penalize each
.named("Unassigned shift") // Finalize
Constraints are returned as a tuple implementing ConstraintSet<S, Sc>, which the solver evaluates incrementally as it explores moves.
Sections
1 - Constraint Streams
Declarative constraint definition using the stream API.
Constraint streams are the primary way to define constraints in SolverForge. They provide a pipeline-style API where you select entities, transform the stream, and terminate with a scoring impact.
Defining Constraints
Constraints are defined as a function that returns a tuple of constraint objects. The #[planning_solution] macro wires this up automatically.
use solverforge::prelude::*;
fn define_constraints() -> impl ConstraintSet<Schedule, HardSoftScore> {
let factory = ConstraintFactory::<Schedule, HardSoftScore>::new();
(
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.filter(|shift| shift.employee_idx.is_none())
.penalize(HardSoftScore::ONE_HARD)
.named("Unassigned shift"),
)
}
Each constraint builder chain produces an IncrementalUniConstraint (or similar) via .named(). Return them as a tuple — SolverForge implements ConstraintSet for tuples of up to 16 constraints.
Source Operations
for_each
Selects all items from a collection in the solution, using a closure extractor.
factory.for_each(|s: &Schedule| s.shifts.as_slice())
filter
Keeps only elements that match a predicate.
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.filter(|shift| shift.employee_idx.is_none())
join
Combines elements from the same or different collections. The join target determines the behavior:
Self-join — pairs from the same collection, using an equal joiner:
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.join(equal(|shift: &Shift| shift.employee_idx))
Cross-join — pairs from two different collections, using an equal_bi joiner:
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.join((
|s: &Schedule| s.unavailability.as_slice(),
equal_bi(|shift: &Shift| shift.employee_idx, |u: &Unavailability| u.employee_idx),
))
See Joiners for all available joiner types.
flatten_last
Flattens a collection in the last element into individual elements. Takes three arguments: a slice extractor, a key function for the flattened items, and a lookup function for matching.
factory.for_each(|s: &Schedule| s.employees.as_slice())
.join((
|s: &Schedule| s.shifts.as_slice(),
equal_bi(|e: &Employee| e.id, |s: &Shift| s.employee_idx),
))
.flatten_last(
|e: &Employee| e.available_days.as_slice(), // slice extractor
|d| *d, // key for flattened item
|s: &Shift| s.date(), // lookup from A
)
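Conceptually, this expansion is a flat_map with a key match: each joined pair produces one tuple per flattened item whose key equals the lookup value. A library-free sketch of that semantics, mirroring the example above (the struct shapes are illustrative, not SolverForge internals):

```rust
// Illustrative stand-ins for the doc's types -- not SolverForge internals.
struct Employee { available_days: Vec<u32> }
struct Shift { date: u32 }

/// Expand each (Employee, Shift) pair into one tuple per flattened day
/// whose key matches the lookup value from the shift.
fn flatten_pairs<'a>(pairs: &[(&'a Employee, &'a Shift)]) -> Vec<(u32, u32)> {
    pairs
        .iter()
        .flat_map(|&(employee, shift)| {
            employee
                .available_days
                .iter()
                .map(|d| *d)                           // key for flattened item
                .filter(move |day| *day == shift.date) // lookup from the other element
                .map(move |day| (shift.date, day))
        })
        .collect()
}
```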
group_by
Groups elements and applies a collector to aggregate.
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.group_by(
|shift: &Shift| shift.employee_idx, // grouping key
count::<Shift>(), // collector
)
balance
Calculates load imbalance across a grouping key. The key function returns Option<K> — None values are skipped (useful for unassigned entities).
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.balance(|shift: &Shift| shift.employee_idx)
if_exists_filtered / if_not_exists_filtered
Filters based on the existence (or absence) of matching entities in another collection.
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.if_exists_filtered(
|s: &Schedule| s.unavailability.clone(),
equal_bi(|shift: &Shift| shift.employee_idx, |u: &Unavailability| u.employee_idx),
)
Terminal Operations
penalize / reward
Apply a fixed score impact per match, then finalize with .named().
.penalize(HardSoftScore::ONE_HARD)
.named("Constraint name")
.reward(HardSoftScore::ONE_SOFT)
.named("Preference bonus")
penalize_hard / penalize_soft / reward_hard / reward_soft
Convenience methods that use the score type’s unit hard or soft value.
.penalize_hard()
.named("Hard violation")
.penalize_soft()
.named("Soft preference")
penalize_hard_with / penalize_soft_with / reward_hard_with
Apply a dynamic score impact based on the matched element.
.penalize_hard_with(|shift: &Shift| HardSoftScore::of(1, shift.overtime_hours() as i64))
.named("Overtime")
penalize_with / reward_with
Apply a fully custom score impact.
.penalize_with(|shift: &Shift| HardSoftScore::of_soft(shift.preference_penalty()))
.named("Preference")
Full Example
use solverforge::prelude::*;
fn define_constraints() -> impl ConstraintSet<Schedule, HardSoftScore> {
let factory = ConstraintFactory::<Schedule, HardSoftScore>::new();
(
// Hard: every shift must be assigned
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.filter(|shift| shift.employee_idx.is_none())
.penalize(HardSoftScore::ONE_HARD)
.named("Unassigned shift"),
// Hard: no employee works two overlapping shifts
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.join(equal(|shift: &Shift| shift.employee_idx))
.filter(|(a, b)| a.overlaps(b))
.penalize(HardSoftScore::ONE_HARD)
.named("Overlap"),
// Soft: prefer assigning employees to their preferred shifts
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.filter(|shift| shift.is_preferred_by_employee())
.reward(HardSoftScore::ONE_SOFT)
.named("Preference"),
)
}
2 - Joiners
Control which tuples are created when joining constraint streams.
Joiners define the matching criteria when two streams are joined. Without joiners, a join produces every possible pair — joiners filter this down to only relevant combinations.
Available Joiners
equal
For self-joins (pairing items from the same collection), takes a single key extractor:
equal(|shift: &Shift| shift.employee_idx)
equal_bi
For cross-joins (pairing items from two different collections), takes two key extractors:
equal_bi(|shift: &Shift| shift.employee_idx, |u: &Unavailability| u.employee_idx)
less_than
Matches when the left value is less than the right. Takes two extractors.
less_than(|a: &Shift| a.id, |b: &Shift| b.id)
greater_than
Matches when the left value is greater than the right. Takes two extractors.
greater_than(|a: &Shift| a.priority, |b: &Shift| b.priority)
overlapping
Matches when two ranges overlap. Takes four extractors: start and end for each side.
overlapping(
|a: &Shift| a.start_time, |a: &Shift| a.end_time,
|b: &Shift| b.start_time, |b: &Shift| b.end_time,
)
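The underlying test is the standard interval condition: two ranges overlap when each one starts before the other ends. A quick sketch (half-open intervals are assumed here; check the joiner's boundary behavior if touching endpoints matter for your problem):

```rust
/// Two ranges overlap when each one starts before the other ends.
/// Half-open intervals assumed: back-to-back ranges do not overlap.
fn overlaps(a_start: i64, a_end: i64, b_start: i64, b_end: i64) -> bool {
    a_start < b_end && b_start < a_end
}
```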
filtering
A general-purpose joiner that uses a predicate over both elements.
filtering(|a: &Shift, b: &Shift| a.location.distance_to(&b.location) < 50.0)
Using Joiners with join
Self-join — pass a single equal joiner directly:
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.join(equal(|shift: &Shift| shift.employee_idx))
Cross-join — pass a tuple of (extractor, joiner):
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.join((
|s: &Schedule| s.unavailability.as_slice(),
equal_bi(|shift: &Shift| shift.date, |u: &Unavailability| u.date),
))
Indexed joiners (equal, equal_bi, less_than, greater_than, overlapping) are much faster than filtering because they use index lookups instead of iterating all pairs. Prefer indexed joiners where possible and only use filtering for conditions that can’t be expressed with indexed joiners.
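The performance gap comes down to index lookups versus scanning every pair. A rough sketch of the two strategies using plain Rust collections (not SolverForge's internals): both find shifts sharing an employee, but the indexed version buckets by key once instead of testing all n² pairs.

```rust
use std::collections::HashMap;

struct Shift { id: u32, employee_idx: Option<usize> }

/// filtering-style join: test every pair -- O(n^2) comparisons.
fn pairs_by_scan(shifts: &[Shift]) -> Vec<(u32, u32)> {
    let mut out = Vec::new();
    for a in shifts {
        for b in shifts {
            if a.id < b.id && a.employee_idx.is_some() && a.employee_idx == b.employee_idx {
                out.push((a.id, b.id));
            }
        }
    }
    out.sort();
    out
}

/// equal-style join: bucket by key once, then pair only within buckets.
fn pairs_by_index(shifts: &[Shift]) -> Vec<(u32, u32)> {
    let mut index: HashMap<usize, Vec<u32>> = HashMap::new();
    for s in shifts {
        if let Some(e) = s.employee_idx {
            index.entry(e).or_default().push(s.id);
        }
    }
    let mut out = Vec::new();
    for ids in index.values() {
        for (i, &a) in ids.iter().enumerate() {
            for &b in &ids[i + 1..] {
                out.push((a.min(b), a.max(b)));
            }
        }
    }
    out.sort();
    out
}
```

Both functions return the same pairs; only the work done to find them differs.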
3 - Collectors
Aggregation functions for group_by operations in constraint streams.
Collectors aggregate values within groups created by group_by. They transform a stream of individual matches into grouped summaries.
Using Collectors
Pass a collector as the second argument to group_by:
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.group_by(
|shift: &Shift| shift.employee_idx, // grouping key
count::<Shift>(), // collector
)
// Result: grouped stream of (key, usize)
Available Collectors
count::<A>()
Counts the number of matches in each group. Returns usize.
.group_by(|s: &Shift| s.employee_idx, count::<Shift>())
// → (key, usize)
sum(mapper)
Sums numeric values in each group. The mapper extracts the value to sum.
.group_by(|s: &Shift| s.employee_idx, sum(|s: &Shift| s.hours))
// → (key, i64)
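Conceptually, group_by with a collector is a bucket-then-fold: the key function picks the bucket and the collector folds each bucket into a summary. A plain-Rust sketch of count and sum side by side (illustrative only; None keys are skipped here for brevity):

```rust
use std::collections::HashMap;

struct Shift { employee_idx: Option<usize>, hours: i64 }

/// Bucket shifts by assigned employee, folding each bucket into
/// (count, total hours) -- the count and sum collectors side by side.
fn count_and_sum(shifts: &[Shift]) -> HashMap<usize, (usize, i64)> {
    let mut groups: HashMap<usize, (usize, i64)> = HashMap::new();
    for s in shifts {
        if let Some(e) = s.employee_idx {
            let entry = groups.entry(e).or_insert((0, 0));
            entry.0 += 1;        // count::<Shift>()
            entry.1 += s.hours;  // sum(|s| s.hours)
        }
    }
    groups
}
```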
load_balance(key_fn, metric_fn)
Measures load imbalance across a grouping key. Returns a LoadBalance<K> with unfairness metric.
.group_by(
|s: &Shift| s.department_idx,
load_balance(|s: &Shift| s.employee_idx, |s: &Shift| 1i64),
)
Balance Stream Operation
For simple load balancing without group_by, use the balance stream operation directly:
factory.for_each(|s: &Schedule| s.shifts.as_slice())
.balance(|shift: &Shift| shift.employee_idx)
.penalize_soft()
.named("Fair distribution")
The key function returns Option<K> — None values (unassigned entities) are skipped.
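What counts as "imbalance" is defined by the library; as an illustration only (SolverForge's actual unfairness metric may differ), here is a sketch that counts the load per key and measures spread as the sum of absolute deviations from the mean, skipping None keys:

```rust
use std::collections::HashMap;

/// Count shifts per assigned employee (None keys skipped), then measure
/// spread as the sum of absolute deviations from the mean load.
/// Illustrative metric only -- the library's unfairness measure may differ.
fn imbalance(assignments: &[Option<usize>]) -> f64 {
    let mut loads: HashMap<usize, usize> = HashMap::new();
    for a in assignments {
        if let Some(e) = a {
            *loads.entry(*e).or_insert(0) += 1;
        }
    }
    if loads.is_empty() {
        return 0.0;
    }
    let mean = loads.values().sum::<usize>() as f64 / loads.len() as f64;
    loads.values().map(|&c| (c as f64 - mean).abs()).sum()
}
```

A perfectly even distribution yields zero; penalizing the metric as a soft constraint pushes the solver toward fairness.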
4 - Score Types
Choose the right score type for your optimization problem.
The score represents the quality of a solution. SolverForge provides several score types with increasing granularity. Choose the simplest one that captures your constraint hierarchy.
Available Score Types
| Score Type | Levels | When to Use |
|---|---|---|
| SoftScore | 1 (soft) | All constraints are preferences, no hard rules |
| HardSoftScore | 2 (hard, soft) | Most common — hard constraints must be satisfied, soft are optimized |
| HardMediumSoftScore | 3 (hard, medium, soft) | Three priority tiers (e.g., must / should / nice-to-have) |
| HardSoftDecimalScore | 2 (hard, soft) | Same as HardSoftScore but with decimal precision (i64 auto-scaled by 100,000) |
| BendableScore<H, S> | N | Custom number of hard and soft levels (const generics) |
HardSoftScore (Most Common)
use solverforge::prelude::*;
// Constants for common impacts
HardSoftScore::ZERO
HardSoftScore::ONE_HARD // 1 hard, 0 soft
HardSoftScore::ONE_SOFT // 0 hard, 1 soft
// Custom values
HardSoftScore::of_hard(-5)
HardSoftScore::of_soft(-10)
HardSoftScore::of(-2, -15) // -2 hard, -15 soft
Hard constraints are rules that must not be broken (e.g., “no employee works two shifts at the same time”). A solution with any hard penalty is infeasible.
Soft constraints are preferences to optimize (e.g., “prefer assigning employees to their preferred shifts”). The solver minimizes soft penalties after satisfying all hard constraints.
let score = HardSoftScore::of(-1, -50);
score.is_feasible() // false — has hard violations
SoftScore
For problems with only preferences and no hard rules:
SoftScore::ZERO
SoftScore::ONE
SoftScore::of(-5)
HardMediumSoftScore
Three-level priority:
HardMediumSoftScore::of(-1, 0, -10)
// hard: must not violate
// medium: strongly prefer to satisfy
// soft: nice to have
HardSoftDecimalScore
Like HardSoftScore but with fixed-point decimal precision. Values are i64 internally, auto-scaled by 100,000:
HardSoftDecimalScore::ZERO
HardSoftDecimalScore::ONE_HARD // scaled to 100,000
HardSoftDecimalScore::ONE_SOFT
HardSoftDecimalScore::of(-1, -3) // auto-scaled: -100000, -300000
HardSoftDecimalScore::of_scaled(-150000, -370000) // raw scaled values
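The decimal variant is plain fixed-point arithmetic: user-facing units are multiplied by 100,000 on the way in and stored as i64. A minimal sketch of the scaling, separate from the library (the fractional helper is a hypothetical illustration, not a SolverForge constructor):

```rust
/// Fixed-point backing for a decimal score level: user-facing units
/// are multiplied by 100,000 and stored as i64.
const SCALE: i64 = 100_000;

fn to_scaled(units: i64) -> i64 {
    units * SCALE
}

/// Hypothetical helper (not a SolverForge constructor): round a
/// fractional value into the scaled representation.
fn fractional_to_scaled(units: f64) -> i64 {
    (units * SCALE as f64).round() as i64
}
```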
BendableScore
Configurable number of hard and soft levels using const generics:
// 2 hard levels, 3 soft levels
let score = BendableScore::<2, 3>::of([-1, 0], [-5, -3, -1]);
let zero = BendableScore::<2, 3>::zero();
Use when you need more than three priority tiers.
Score Arithmetic
All score types support standard operations:
let a = HardSoftScore::of(-1, -5);
let b = HardSoftScore::of(0, -3);
let sum = a + b; // (-1, -8)
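Multi-level scores also compare lexicographically: any hard difference dominates every soft difference, which is what makes hard constraints inviolable. A plain-Rust sketch of that ordering and the addition above (a derived Ord on a struct already compares fields in declaration order, hard first):

```rust
/// (hard, soft) pair; #[derive(Ord)] compares fields in declaration
/// order, so any hard difference dominates every soft difference.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct HardSoft { hard: i64, soft: i64 }

impl std::ops::Add for HardSoft {
    type Output = HardSoft;
    fn add(self, rhs: HardSoft) -> HardSoft {
        HardSoft { hard: self.hard + rhs.hard, soft: self.soft + rhs.soft }
    }
}
```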
Choosing a Score Type
- Start with
HardSoftScore — it covers most problems - If you need decimal precision, use
HardSoftDecimalScore - If you have three clear priority tiers, use
HardMediumSoftScore - If you need more tiers, use
BendableScore - If you have no hard constraints at all, use
SoftScore
5 - Score Analysis
Understand why a solution received its score with ScoreAnalysis.
Score analysis answers the question: “Why does my solution have this score?” It breaks down the total score by constraint, helping you debug constraints and explain results to users.
ScoreAnalysis
Types that derive #[planning_solution] with a constraints path automatically implement the Analyzable trait, which provides the analyze method.
use solverforge::prelude::*;
let analysis = analyze(&solution);
println!("Total score: {:?}", analysis.score);
for constraint in &analysis.constraints {
println!(
"{}: {:?} (count: {})",
constraint.constraint_ref.name,
constraint.score,
constraint.match_count,
);
}
ConstraintAnalysis
Each ConstraintAnalysis contains:
| Field | Description |
|---|---|
| constraint_ref | The ConstraintRef with package and name (from .named()) |
| score | Total score impact of this constraint |
| match_count | Number of times the constraint matched |
| matches | Individual DetailedConstraintMatch entries with justifications |
DetailedConstraintMatch
Each match includes a ConstraintJustification with the entities involved:
for constraint in &analysis.constraints {
for m in &constraint.matches {
println!(
" {} -> {:?} (entities: {:?})",
m.constraint_ref.name,
m.score,
m.justification.entities.iter()
.map(|e| e.short_type_name())
.collect::<Vec<_>>(),
);
}
}
Use Cases
- Debugging constraints: Find which constraints fire and why
- User-facing explanations: Show users why a schedule looks the way it does
- Constraint tuning: Identify which constraints dominate the score
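The per-constraint breakdown is essentially a fold over individual matches. If you need a similar summary from your own match data, it is a small aggregation (the input shape here is illustrative, not the library's match type):

```rust
use std::collections::HashMap;

/// Fold (constraint name, score impact) match records into per-constraint
/// totals and match counts -- the same shape as a per-constraint breakdown.
fn summarize(matches: &[(&str, i64)]) -> HashMap<String, (i64, usize)> {
    let mut out: HashMap<String, (i64, usize)> = HashMap::new();
    for &(name, impact) in matches {
        let entry = out.entry(name.to_string()).or_insert((0, 0));
        entry.0 += impact; // total score impact
        entry.1 += 1;      // match count
    }
    out
}
```

Sorting the resulting entries by total impact is a quick way to see which constraints dominate the score.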