A framework for evaluating and back-propagating through linear temporal logic traces, in PyTorch.
Telos is a PyTorch library that aims to make LTL easier to use in downstream ML tasks (e.g., soft trace verification or logic-based loss conditioning). Its differentiator from related contemporary frameworks is a strict and precise separation of concerns. Instead of a monolithic back-end, Telos opts for an explicit distinction between syntax, semantics, and their interface. The result is a clean contract that ensures plug-and-play support for arbitrary algebras, cross-algebra comparison, and dynamic switching. With Telos, one can use a differentiable algebra during training and a boolean algebra during validation with little ceremony.
telos.syntax defines the shapes of LTL formulas, following standard textbook conventions for primitive operations and
using Python class inheritance to emulate algebraic data types (ADTs), which the language does not support natively.
The grammar is defined as:

```
Φ := A          -- atomic variable
   | ⊤          -- top
   | ⊥          -- bottom
   | Φ₁ → Φ₂    -- material implication
   | Φ₁ ∨ Φ₂    -- disjunction
   | Φ₁ ∧ Φ₂    -- conjunction
   | ¬Φ         -- negation
   | X(Φ)       -- temporal next
   | U(Φ₁, Φ₂)  -- temporal until
   | ◇Φ         -- eventually
   | □Φ         -- always
   | Φ₁ ↔ Φ₂    -- if and only if
```
The operators below are treated as primitives:

- `A` -- `Variable(A)`, where `A` is any valid Python identifier
- `⊤` -- `AbstractTop()`
- `⊥` -- `AbstractBottom()`
- `Φ₁ → Φ₂` -- `Implies(l, r)`, shorthand `l > r`
- `Φ₁ ∨ Φ₂` -- `Disjunction(l, r)`, shorthand `l | r`
- `Φ₁ ∧ Φ₂` -- `Conjunction(l, r)`, shorthand `l & r`
- `¬Φ` -- `Negation(x)`, shorthand `~x`
- `X(Φ)` -- `Next(x)`
- `U(Φ₁, Φ₂)` -- `Until(l, r)`
The rest are defined as composite functions:

- `◇Φ` -- `eventually(x)`, defined as `U(⊤, Φ)`
- `□Φ` -- `always(x)`, defined as `¬◇¬Φ`
- `Φ₁ ↔ Φ₂` -- `iff(l, r)`, defined as `(Φ₁ → Φ₂) ∧ (Φ₂ → Φ₁)`
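As an illustration of the inheritance encoding and the operator shorthands listed above, here is a minimal, self-contained sketch. The class bodies are stand-ins, not the actual `telos.syntax` implementation:

```python
# Illustrative stand-ins for the telos.syntax classes, showing how Python
# operator overloading provides the documented shorthands (>, |, &, ~).
class Formula:
    def __gt__(self, other):  return Implies(self, other)      # l > r  ≙  Φ₁ → Φ₂
    def __or__(self, other):  return Disjunction(self, other)  # l | r  ≙  Φ₁ ∨ Φ₂
    def __and__(self, other): return Conjunction(self, other)  # l & r  ≙  Φ₁ ∧ Φ₂
    def __invert__(self):     return Negation(self)            # ~x     ≙  ¬Φ

class Variable(Formula):
    def __init__(self, name): self.name = name

class Implies(Formula):
    def __init__(self, l, r): self.l, self.r = l, r

class Disjunction(Formula):
    def __init__(self, l, r): self.l, self.r = l, r

class Conjunction(Formula):
    def __init__(self, l, r): self.l, self.r = l, r

class Negation(Formula):
    def __init__(self, x): self.x = x

p, q = Variable('p'), Variable('q')
phi = ~p | (p & q)  # ¬p ∨ (p ∧ q), built purely from operator shorthands
print(type(phi).__name__)  # Disjunction
```

Each shorthand simply constructs the corresponding node, so formulas are plain object trees that later evaluation can recurse over.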
telos.algebras defines the interpretations under which formulas are evaluated. An algebra fixes how each
propositional connective is computed on values from a chosen carrier; the temporal operators are built on top of these
via batch-vectorized sequence reductions. The module ships several standard algebras and exposes an abstract interface
for adding more.
An Algebra is a torch.nn.Module that fixes the semantics on a carrier set. It declares two designated elements
(top and bottom, registered as buffers) and four pointwise primitives:
| Connective | Method |
|---|---|
| Φ₁ ∧ Φ₂ | `meet(x, y)` |
| Φ₁ ∨ Φ₂ | `join(x, y)` |
| Φ₁ → Φ₂ | `implies(x, y)` |
| ¬Φ | `neg(x)` |
Sequence reductions (exists, forall, running_meet, running_join, span_meet, span_join) are derived from the
primitives via functional iterators (scan, fold, and a triangular-mask span combinator). Each can be overridden
with a vectorized closed form when one exists.
Since Algebra extends Module, its parameters (if any) are first-class, and can optionally be trained end-to-end
just like a standard PyTorch module.
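To make the derivation of sequence reductions concrete, here is a simplified, torch-free sketch using the Gödel primitives. Scalar folds and scans stand in for the library's batched tensor iterators; the names mirror the text but the bodies are illustrative:

```python
from functools import reduce
from itertools import accumulate

# Torch-free sketch: sequence reductions derived from an algebra's
# pointwise primitives via fold (reduce) and scan (accumulate).
# The real library operates on batched tensors along the time axis.
def meet(x, y):  # Gödel t-norm: min
    return min(x, y)

def join(x, y):  # Gödel t-conorm: max
    return max(x, y)

def forall(xs):  # fold of meet -- "holds at every step"
    return reduce(meet, xs)

def exists(xs):  # fold of join -- "holds at some step"
    return reduce(join, xs)

def running_meet(xs):  # scan of meet -- valuation of every prefix
    return list(accumulate(xs, meet))

vals = [0.9, 0.8, 0.7]
print(forall(vals))        # 0.7
print(exists(vals))        # 0.9
print(running_meet(vals))  # [0.9, 0.8, 0.7]
```

For the Gödel algebra the fold of `meet` has the vectorized closed form `min(xs)`, which is exactly the kind of override the text describes.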
A convenience subclass FuzzyBase handles the common case of a [0,1] carrier with neg(x) = 1 - x.
| Algebra | Carrier | Notes |
|---|---|---|
| `Boolean` | bool | exact satisfaction |
| `Goedel` | [0,1] | min/max t-norm; idempotent |
| `Lukasiewicz` | [0,1] | bounded-sum t-norm; nilpotent |
| `Product` | [0,1] | algebraic-product t-norm |
| `Robustness` | ℝ ∪ {±∞} | STL signed margin |
| `Frank` | [0,1] | parametric; `lam` interpolates Goedel / Product / Lukasiewicz; optionally trainable |
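The three [0,1] algebras differ only in their t-norms. A quick torch-free comparison with scalar stand-ins (not the library's tensor implementations):

```python
# Scalar versions of the three [0,1] t-norms from the table above.
def goedel_meet(x, y):       # min t-norm: idempotent, meet(x, x) == x
    return min(x, y)

def lukasiewicz_meet(x, y):  # bounded sum: nilpotent, can reach exactly 0
    return max(0.0, x + y - 1.0)

def product_meet(x, y):      # algebraic product: strictly shrinking on (0, 1)
    return x * y

x, y = 0.6, 0.5
print(goedel_meet(x, y))       # 0.5
print(lukasiewicz_meet(x, y))  # ≈ 0.1
print(product_meet(x, y))      # ≈ 0.3
print(lukasiewicz_meet(0.3, 0.4))  # 0.0 -- nilpotency in action
```

The choice matters for gradients: the Gödel meet propagates a gradient only through the smaller argument, while the product meet spreads it across both.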
Subclass Algebra (or FuzzyBase) and implement or inherit the top and bottom elements and the four
pointwise primitives. Sequence reductions inherit defaults; override them where a vectorized closed form exists.
See telos.algebras.goedel for a minimal reference and telos.algebras.frank for an example with a trainable parameter.
telos.deduction connects syntax and semantics. The three objects of interest are:
- `Trace(values, names)` -- a glorified tensor of shape `(..., vars, time)`, naming its coordinates along `dim=-2`. Most cleanly built via the helper `mkTrace(**vars)`.
- `Judgement(trace, conclusion)` -- a pairing of a trace and a formula whose leaf variables must occur in the trace.
- `Model(algebra)` -- a `torch.nn.Module` that lifts an algebra to a judgement evaluation engine.
Evaluation is structural recursion: pointwise constructors call the algebra's primitives.
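To make the recursion concrete, here is a torch-free sketch of a structural evaluator over a toy tuple-encoded AST. The node encoding and the dict-based algebra are illustrative assumptions, not the library's internals:

```python
# Structural recursion: each constructor dispatches to the matching
# algebra primitive; variables are looked up in an environment.
def evaluate(formula, env, alg):
    tag = formula[0]
    if tag == 'var':
        return env[formula[1]]
    if tag == 'neg':
        return alg['neg'](evaluate(formula[1], env, alg))
    if tag == 'and':
        return alg['meet'](evaluate(formula[1], env, alg),
                           evaluate(formula[2], env, alg))
    if tag == 'or':
        return alg['join'](evaluate(formula[1], env, alg),
                           evaluate(formula[2], env, alg))
    raise ValueError(f'unknown constructor: {tag}')

goedel = {'neg': lambda x: 1 - x, 'meet': min, 'join': max}

# ¬p ∨ (p ∧ q) evaluated under the Gödel algebra
phi = ('or', ('neg', ('var', 'p')), ('and', ('var', 'p'), ('var', 'q')))
print(evaluate(phi, {'p': 0.8, 'q': 0.6}, goedel))  # max(1-0.8, min(0.8, 0.6)) = 0.6
```

Swapping the `goedel` dict for another primitive set changes the semantics without touching the formula, which is the separation of concerns the library is built around.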
Here's a minimal example demonstrating an end-to-end pipeline.

```python
import torch
from telos import mkTrace, Variable, eventually, always, Model, Lukasiewicz, Product, Boolean

p, q = Variable('p'), Variable('q')  # p and q are abstract symbols
phi = always(p > eventually(q))      # φ := □(p → ◇q)

trace_f = mkTrace(
    p=(tp := torch.rand(4, requires_grad=True)),  # the fp32 progress of p (e.g., sensor measurements, classifier outputs, etc.)
    q=(tq := torch.rand(4, requires_grad=True)),  # ditto for q
)                                    # the two variables packed up in a single trace
trace_b = trace_f.bool()             # the discretization of the above trace

judgement_f = trace_f >> phi         # the judgement τ ⊢ φ, in the fp32 domain
judgement_b = trace_b >> phi         # the same judgement in the boolean domain

score_l = Model(Lukasiewicz())(judgement_f)  # apply the Lukasiewicz algebra on the judgement
score_p = Model(Product())(judgement_f)      # ditto, now with the Product algebra
score_b = Model(Boolean())(judgement_b)      # ditto, now with the Boolean algebra -- note the domain change

traj_l = Model(Lukasiewicz())(
    judgement_f,
    return_trajectory=True
)                                    # return the valuation of φ at each step

goal_f = torch.rand(())              # some goal valuation to train on
loss = torch.nn.functional.l1_loss(
    input=score_l,
    target=goal_f
)
loss.backward()                      # backprop through the algebra, populating grads for tp and tq
```

Same formula, multiple algebras and evaluations across two trace dtypes.
- Telos evaluates LTL over finite, fixed-duration traces and cannot work with infinite streams.
- `X(Φ)` is padded with `algebra.bottom` past the last time step, biasing trace-edge readings toward dissatisfaction.
- Every algebra is associated with its own domain; you're responsible for using the right dtype.