Does anyone have ideas for how to speed up this tactic? The core is basically a loop doing
let res := match! term with
           | context[?v]
             => lazy_match! v with
                | @type.try_transport ?base_type ?try_make_transport_base_type_cps ?p ?t ?t
                  => Some (v, p, t)
                end
           | _ => None
           end in
match res with
| Some v
  => let (v, p, t) := v in
     let some_pt := debug_Constr_check (fun () => mkApp some [ mkApp p [t] ]) in
     let term := lazy_match! (eval pattern v in term) with
                 | ?term _ => term
                 end in
     let (term, args) := aux term in
     (term, some_pt :: args)
| None => (term, [])
end
which I think is the main source of the slowness. I have a single call to this tactic that takes 236s in https://github.com/mit-plv/fiat-crypto/pull/1778/files (https://github.com/mit-plv/fiat-crypto/blob/f0e23a867bba8f8d1b0efeeb505d6ac615ced7e6/src/Rewriter/Passes/NBE.v#L20).
In fact, I have an instance where a single invocation of the match! term with ... takes 72.848 seconds to succeed.
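For isolating where that time goes, one option is to wrap just the search step in a timed thunk. Below is a minimal sketch, assuming Control.time : string option -> (unit -> 'a) -> 'a is available in the Ltac2 standard library of the Coq version in use (if not, Ltac1's time tactical reached through an ltac1:(...) quotation could play the same role); timed_find is a hypothetical name and term stands for the term being searched.

(* Hedged sketch (assumed names): time only the search step.  Control.time
   runs the thunk and prints its wall-clock time under the given label. *)
Ltac2 timed_find (term : constr) :=
  Control.time (Some "try_transport search") (fun () =>
    match! term with
    | context[?v]
      => lazy_match! v with
         | @type.try_transport ?base_type ?try_make_transport_base_type_cps ?p ?t ?t
           => Some (v, p, t)
         end
    | _ => None
    end).

Comparing the time printed for this wrapper against a variant with the inner lazy_match! removed (returning Some v directly) would at least show whether the context search or the inner head match dominates.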
fix the indentation
indentation fixed
Is there a good way to tell whether the issue is the non-linear ?t ?t pattern, the nested match, or just that my term is too big or something? Would it be faster to avoid the nested match and instead use mkApp to reconstruct the term being matched? (I suppose I can test this, but I'd just be flailing blindly here.)