Hellooooo colleagues! coq/coq#523 has already produced some discussion
it is cool how the link system works though
It is cool as changes to Markdown files can be previewed as rendered Markdown in GitHub ^^
Hi Theo!
I have always resisted using this stuff myself
but yeah
I guess I am hipster now
:D
Hello Emilio! As for comparison to Mattermost and Slack, there is actually one here in the FAQ http://blog.gitter.im/2017/03/15/gitter-gitlab-acquisition/
indeed it could be useful for the reasons we mentioned
Yeah I think I fully trust you on the choice of particular chat room implementation
I have no preference other than that it should be easy for people to participate
Well Gitter certainly has this advantage over the others ^^
ok cool so let's stick to it for now
yeah in fact I think Maxime mentioned that
I just wanted to "announce" the room, I guess it was not the proper way xD
Well it kind of was a possible way, but I would have added a little text of explanation with the PR.
short on time these days xD
but yeah, I guess the idea is to share some more conversations,
these days everything happens
one to one
and it is very redundant
if Travis breaks
etc..
and we were abusing the comment section of github a bit
doing PRCHAT
so this may help
it may not XD
(I am always surprised when people post code without explanation, it kind of gives the message "this was discussed elsewhere already")
Anyways, I have been in a few Slacks before and the risk is having too much traffic for people to really follow.
the PR was my own thing XD
In which case, it may lead back to one-to-one conversations...
we have been indeed discussing for some months about the 1-1 chat problem
ok let's see how this works
I am not a user myself of IRC
since 1997
or so
yeah I found it hard to follow
As for myself, I am only a very occasional user. I'm not really accustomed to it.
yeah so lets try to keep the chat efficient
I guess people are not really supposed to follow this
in the sense
that relevant issues
will always get a bug or a PR
but indeed a few things can be helpful
questions
this kinda overlaps with coqdev
I dunno
yes it does
this is why I always refused to use gitter
as much as SO overlaps with coq-club
but still I prefer SO to coq-club
but lets be open
yeah I know what you mean
subtle details in the interface
do make a big difference
hihi
this is maybe a good forum
to discuss about which PR
should be fast-tracked
obligatory bash.org quote: http://www.bash.org/?4281
I would advise finishing one's sentence before sending it though, because otherwise only a very short column of my screen is actually useful. Besides, line-wrapping exists if one has a smaller window. Yeah, I know I made this message very long on purpose.
dedicated to the release manager
sure sorry about that!
I'll stop the useless chat
^^
Do you see the coq/travis chat room I have created?
I had to click on "more coq rooms"
and join
OK, I figured it would make sense to have special topic rooms, what do you think?
Maybe for Travis it does, but IMVHO I'd keep it simple for now, we will struggle just to use one room I guess
+1
@Emilio Jesús Gallego Arias I already see a good use for this Gitter. Which is to ask you what you think about my draft CEP on the stdlib?
I wouldn't have sent an e-mail just to ask this. Nor a new comment on GitHub.
It looks cool
:smile:
I think I'll propose it for acceptance at the next WG, in the meantime it would be good if it could gather a few more comments to make it better.
I'm all for it, right now it is very hard to evolve libraries as you noted. I hope it can land in 8.7. In fact, in one of my first Coq working groups I proposed to split the stdlib, see https://coq.inria.fr/cocorico/CoqDevelopment/CoqWG20160216
However, it was met with some resistance. I believe @letouzey provided some good arguments against it, so I focused on trying to split coqide and coqchk, which is imho more feasible
So in fact, the first time I heard about splitting the libs was in the first coding sprint, but again most of it was random coffee chat
Yes, I know that he has been in favor of splitting just a part of the library (Reals mostly)
So summarizing, I love the idea, however the devil is in the details. Actually realizing the implementation may prove tricky
So there is also this question of splitting the lib in several parts, but I think it makes sense to postpone it and first concentrate on extracting the whole thing and testing the development model we can have with it.
Some of the tricky points are listed in "Unresolved questions". If you see more, let me know.
let me have a fresh look at it
for one, I can help with co-driving the CEP [random note, I find this CEP process a bit painful, maybe we could just use GH's wiki?]
Cool, thanks! I'll add you to the drivers.
I'm not convinced that wikis are good for discussion. They are good to create TODO lists though.
I am not a fan either, it is just that modifying some stuff on the PRs is a bit painful; maybe I have the rights to push commits?
So the initial idea looks good IMO
Actually I think that you have the rights to push some commits to this branch.
Because you are a maintainer of the repository (I think) and I checked the "Allow edits from maintainers" box.
In this quick read I am only missing one point: what about git versioning? For instance, you may need to specify constraints such as lib > 2.8.3+git-22-abcd283
Can you try e.g. pushing a commit to move your name in the drivers list?
I am not an admin on that repository
yeah it is missing from the team page, likely an oversight of the admins @Maxime Dénès
You're one of the only two "core developers" who is not an "owner" of the Coq organization so I suppose you don't have the permissions to add yourself as a maintainer on that repository.
Yeah, I think the problem is that the repository itself is not properly set up
Ah OK
the coq-ceps I mean
Back to the git versioning question, I suppose we could wait and see the use cases. But that's indeed an interesting question to keep in mind. Feel free to add it to the list.
I find myself depending on particular Coq versions as PRs are merged and APIs are improved etc... so it would make sense to have finer bounds. git describe
seems to work well, as discussed in coq/coq#483 (and also there has been some private discussion)
Yeah, at least here, if we chat, you don't need to pay attention, but later on, I can refer you to the archive of the discussion!
I am gonna test the GH integration
of course we can disable it afterwards, let's see what it does
Oops, it needs write authorization to coq/coq, so even if I can do it I'd rather get approval from the coq repos owner. A cool thing though is that we could add another chat room and add notifications for all coq-related commits, including plugins etc...
@Emilio Jesús Gallego Arias let me know if you need something
@Maxime Dénès are you ok with enabling the github hooks for gitter?
does it really require write access? I thought it was only for private repos
it requires adding a hook to the coq repos
that is
so write access to the hooks folder, I guess pretty harmless, but not something to do on my own; you can enable it very quickly if you want by clicking on the integration menu
by the way @Maxime Dénès I think the coq/coq-ceps repository is wrongly configured permission-wise on GitHub
does the write access need to stay?
or is it just for setting it up?
it is just for setting up I guess
it should be ok
no need to give write access in fact
the side panel is cool, as seen in coq/travis
btw there is a Travis integration as well
I could set that up
@Théo Zimmermann set it up in the travis room
I personally have no opinion on the room structure
I think the travis room makes sense
coq/travis?
yeah, I don't know how to reference a room
About the discoverability of rooms: world icon in the top-right corner
You know who will be my hero, folks? Whoever makes us move away from bugzilla xD
what would you use instead?
even debbugs would be better XD
I don't know, it is a hard question, I am becoming very lazy these days so even github issues could work, but certainly something that could integrate with source code such as trac, or phabricator, etc...
Oh we have the activity at the side! It looks very cool!
Well, I would use GitHub's issues. But as I said before, I'm not going to push for it. I would be interested though to have a precise list of the features of Bugzilla that are deemed useful and see how these can be replaced.
Interesting point; for me I don't see any advantage... I can't edit comments, properly format them, nor modify the issues. Théo, are you happy with Bugzilla?
spread the word: each time you use apply, a kitten dies in atrocious cruelty
you gotta be careful with application
that was highly controversial at OPLSS 2013
would you like to deprecate it? (in favor of refine ........)
how many things would break?
I remember the last argument I heard was that apply was doing some useful 2nd-order unification thing
you don't want to know :smile:
oh well it would be useful, IIRC for evarcons it was the case that due to the bug
the set of constraints was ordered differently
then we got different solutions, sometimes weaker, sometimes stronger, right?
what is the bug doing to unification.ml?
because the only thing that econstr can do is to expose critical bugs
as you know I'm really against the terminology "normal form" wrt evars
such a thing doesn't exist, there is only one form possible
@Emilio Jesús Gallego Arias No not happy at all with Bugzilla! I can't subscribe to the new bugs, I don't have the right set of permissions to change anything to a bug, including assigning myself, unless I was the OP.
While if we had bugs on GitHub we could have commit messages saying "This closes #3405850" and this would close the issue when the commit is merged. \o/
+1
@Emilio Jesús Gallego Arias I'm not going to answer you on the CEP PR because that's precisely what I wanted to avoid.
But I can answer you here.
So we could try to discuss again at the WG. I would support the move to GitHub bugs, but not very enthusiastically as bug data is crucial and I am not sure how open the system is.
Oh please
answer here
Just to finish on this bugs discussion first: yes, I'm not strongly advocating for the move because it would have much larger implications than just making GitHub the main repo, and we can see how even just this seems so complicated!
Paradoxically the repository issue is not so complicated in my view, in the sense that migrating a repository is a one-minute thing
however migrating a bug tracker is huge. It is just that bugzilla is painful
I know! And it's not complicated in my view either. It's even fully reversible.
maybe we should just poll support for Bugzilla first
But people are still arguing against the move. I can't understand why.
and then try to build another replacement
but oh man
this is not Coq
it is more like a Turtle
^^
I can't understand why.
political reasons
I can understand why
where to host the code has IMO important political significance
Yeah but political reasons are used in an inconsistent fashion
that's intrinsic to politics, isn't it ?
e.g. where we accept PRs is more important and has many more implications
it is, yes
a very experienced politician told me once "politics is very easy: here are your friends, here are your enemies"
XD
anyways
so what about the CEP?
Not really about the CEP but more on the SSR discussion, I wouldn't mind if SSR authors preferred to have a different namespace than just Coq, but keeping math-comp makes little sense to me because it is sending the message: this is still a little part of math-comp, it's just distributed with Coq.
well it is tightly coupled AFAICT
But if it's really just what we want, then no need of a PR for this.
in the same way ring and reals are
I would rather say
"this is a part of mathcomp, which has become a part of Coq"
many things were outside
like Program / Obligation
and now are "inside"
so imagine the case for Program
if includes were of the form
From Program Require XX
Why change it when it was merged?
I like namespaces, so this is a very personal opinion
Well now it is From Coq Require Program.XX
But again I think that the problem here is what is the definition of "being part of Coq"
so far it was "it is in the main repository" but that will change with your PR
anyways, having math-comp become a part of Coq has not been proposed
in fact I have an even wider view, in the sense that, for instance, in jscoq
every time the user types "From XXX Require UU"
it will download it
and all the packages will live in the same namespace
I find it absurd that we have to "install" things these days
oh I fully agree with you on that point
I just want things to work; I guess I am lazy, not as lazy as the Coq-Turtle but close
for me, "being part of Coq" has become a bit meaningless, in the same way as saying "being part of C++"
there are things that are officially supported
either by the Coq Developers
Well, you were not the one proposing it anyways
or by other persons
I just don't care
so I suppose @gares and @Maxime Dénès have other views.
I am a programmer, I just want all my libraries to be there, like OPAM
in opam I usually don't care about the provenance
I just say "i need this" and I get what I want
Yes, I know what you mean. But we're not getting very far in the discussion since you're obviously not sharing the views of the people who proposed the integration.
and we spend lots and lots of time on these discussions, which may be very interesting for some people, but I just don't care, and people using ssreflect just care about having it in the default package, that's it
Here I don't share any view, it is just that somehow my brain blocks on this sorry XD
I cannot get my brain to even form a view; it is like an irrelevant issue for me. I think of that and we have spent many many hours discussing these topics, whereas many other core parts of the system are truly in need of work. As I said, if I were the release manager I would have just merged math-comp in 2 minutes and moved on to other things. So yeah, sorry for that XD
Anyways, that's quite easy to have SSReflect be distributed in the same package as Coq. That wouldn't require changing the development model. Conversely, in the CEP I'm proposing changing the development model of the standard library, but not how it's distributed.
Well, anyways the point was that IMHO I like top-level namespaces
And IMHO you need them
Sure, well, listen, it was just a random example. To show @silene that whether moving things in or moving things out, the issues at stake should be exactly the same. No more, no less.
Yeah, just to conclude on that, IMO ssreflect.v
should be kept under the math-comp prefix. ssr2, indeed, could be put in a different one. I guess that is what got me confused.
they won't be compatible
Emilio Jesús Gallego Arias can I merge #525 now?
@Maxime Dénès sure I think so
oops sorry
wrong channel
the perils of "manual" serialization. It is funny that in all my printing cases I never found an empty thing
yeah I thought this could go to the main one right?
sure
I'll do a pass of merges later tonight
great
I have little on the queue
I'm launching a benchmark of EConstr, we finally solved the last known issues
that's great
I saw your latest push
PR wise I'd really like to get #441 merged, and then I am done for a while, but this needs Enrico's OK and I understand he may need some extra time
so that should be OK
#441 will likely introduce some minor annoyances
maybe the toplevel shows goals in different cases
but all of that should be minor
again it is hard to test interactive behavior
also #552 should be safe to merge, this was an omission on my part
wrt to CoqIDE however
the new goal display needs some help from people knowledgeable in GTK
if we don't like it, it is easy to go back
the main difference is that now
at every size change, coqide will repaint the goal window
previously, it only repainted when sending a command
so it is easy to go back if we don't like the new model. I like the new model, but I failed to implement some nice features, like having the repaint preserve the scroll status. I tried some hacks but they are ugly. Not to mention that my impression of GTK is extremely negative. Many years ago I had a look at the code (in particular the GTK notebook code) and I was extremely horrified, so many obvious bugs (like not checking null pointers) that I had to stop midway. I wouldn't pass a student writing that code... so it could well be the case that the cool stuff is actually hard to program without ugly hacks.
Anyways I'll try to improve the window display a bit more in the summer, after I ask a few questions on the gtk mailing list.
on the other hand, it is very likely that we have news soon for some new ide efforts
vscode is free software and runs reasonably well
and I heard some rumours of an atom IDE being prepared by crazy people XD
also #502 should be IMHO merged
So that is my queue:
that's all
my next PR will be the refactoring of the STM state; it will be a largish commit, however I don't expect it to affect any plugins or users. Also, I expect it will really help with many current problems; in particular, the way we traverse nodes now is very delicate, as every traversal will restore the cached state, and sometimes you don't want that. I lost two days tracking why parsing was failing in jscoq, until I realized that simply checking for the readiness of a state will set you back in time. In fact, I wonder whether bug#5423 could be related
seems exactly like that
by the way folks @Maxime Dénès @Théo Zimmermann , maybe you would like to announce the gitter to coqdev, maybe just as "an experiment" I guess not all people are as careful as zimmi and saw the PR. I am sure some people will also object, but on the other hand I am not sure a strong argument against this kind of tool can be made.
So you have tested vscode. It has been in my TODO list for a while.
I got the impression that Paul was saying that CJ was leaving the team (at MIT I suppose) and he feared that he wouldn't support it anymore, but it seems to have recent commits.
well maybe 24 days ago is not so recent
The thing is that the day when supporting vscode is easier than supporting CoqIDE may come soon. Also, IMHO the vscode binding is extremely complicated due to the stateful protocol, but much of its complexity could be eliminated by switching it to another protocol
ms-ide-protocol support for serapi is not far, also, the implementation of term rendering can be shared by jscoq, atom and vscode
so indeed
pity we didn't have the manpower to submit the GSOC again for 2017
oh really that deadline passed? :(
I couldn't have helped much anyways, but I fully support your efforts in this general direction.
do you guys know if there's any plan to get better debugging tools in the OCaml ecosystem?
That would be a good topic for the ocaml room. I have no clue; I'm sure Jane Street ought to be cooking something given the size of their codebase. I will just say that a few times I've found it easier to transpile Coq to Javascript and use the JS debugger there than to try to debug ocaml directly XD
@Théo Zimmermann I've submitted https://coq.inria.fr/bugs/show_bug.cgi?id=5426 about bugzilla permissions, you may want to weigh in
I have a feeling that the maintenance strategy for CoqIDE is something we will have to discuss in a not-so-distant future
Well, what are the options? I would certainly split it from Coq. Thanks to work started in Nice 2015 it is totally self-contained, see ejgallego/coqide-exp
https://github.com/ejgallego/coqide-exp
so at least we are good here
in fact, toplevel wise Coq is totally modular now and ocamlbuild is already building it as modules (there's a trivial circular dependency but I'll get to submit a PR one of these days)
this is IMO nice as all the clients build modularly now
both the jscoq and serapi toplevel build this way
and they workaround many problems the current build system has, including difficulty to build with external dependencies
and you can just build against the kernel if you do package(coq.kernel)
etc...
I do think that splitting CoqIDE has many benefits
yeah I had to include ppx_deriving
as a dependency for Paul, it was a nightmare because you need to remove all the manual references to unix.cmxa
that are there today
yeah that is not worth it
I have somewhere in my hard disk a patch that builds coqtop
using the serapi
build system, have a look how easy it is: https://github.com/ejgallego/coq-serapi/blob/master/_tags
Well it was painful, but it took me like 30 minutes. The diagnosis in unification.ml for EConstr took me 2 days and 1 night...
but about CoqIDE
well, try to link more advanced stuff and you will see
anyways, about CoqIDe
does it build the VM?
it would greatly benefit from independent releases
and of having easy branching
branches
well, that is for building against a normally built Coq
but my ocamlbuild tree indeed does build the VM
let me see the status of that
well it is in a mess now, a mix of the old approach of Pierre plus hacks https://github.com/ejgallego/coq/blob/ocamlbuild/myocamlbuild.ml
but it does work
just needs polishing, no special difficulty
anyways ocamlbuild is slower for Coq, but it is interesting as a build option
as you can use it to build the upper layers
coqide, etc..
which don't really benefit from parallelism
and build the lower layers the usual way
the cma in the META files
the discussion on #485 is too long for me. Can you summarize? I.e. is it ready for merge?
IMO it needs the OK of Clément and @matafou
that is to say
the summary of that discussion is
Show Match
is used only by Proof General, thus before documenting it, please confirm you agree with the spec
so both @Clément Pit-Claudel and @matafou should IMO say "we agree with the current specification of Show Match
, the docs can go ahead
or "we think Show Match
should be modified in this particular way"
and then we follow suit
ok thanks !
Hey @/all , what would be the best way to group bugs for a particular component ? [lets say the STM] Should I create a new component? Tags?
For instance, there are many bugs now reported in the IDE component that are actually problems in the STM
@Emilio Jesús Gallego Arias I added an STM component
cool thanks
@Emilio Jesús Gallego Arias you recently mentioned the ocaml room. Is there one?
Nope. BTW, it seems that gforge was down for a bit, so you'll see some travis problem related to that, rescheduling jobs.
By the way, how about informally announcing this room ? Do the rest of developers know about it?
Not really. I think my fear is that I was already spammed quite a lot by Gitter. Some devs won't like it. I didn't even find a way to turn off all notifications!
Well, I don't see the problem, as this is opt-in. Announcing is different from requiring to join IMHO. Secondly, yeah it is like github, you can't turn notifications off if you are directly mentioned. I dunno if leaving the room disables that though.
Again if you don't announce it, it is hard to get some feedback. I don't understand what we are afraid of here...
We seem to be too afraid sometimes I think....
oh, I wasn't saying we shouldn't announce it
I thought you'd do it :)
Apparently @Pierre-Marie Pédrot found out about it too.
Let's just have people join one by one and once we are satisfied enough we can make an announcement.
@Théo Zimmermann I've been forcefully dragged into gitter because it is where the cool guys are nowadays... Sigh.
Are the three of us the cool guys? Nice ;-)
Is the wiki activated on the CEP repo or not? I can see the tab but when I click on it, I stay on the main page.
@Pierre-Marie Pédrot You were dragged only because you wanted to, deep inside :D
I'd like to use this wiki for listing feature requests before one can make a real CEP.
I'll take care of it @Maxime Dénès , isn't this chat wonderful, now we can say politically incorrect things live, publicly, and without the possibility of correcting them later
@Maxime Dénès @Pierre-Marie Pédrot puns always taste better live
Well there is an edit function
@Emilio Jesús Gallego Arias Yay! That's exactly what I wanted to do.
I mean, right now.
@Emilio Jesús Gallego Arias indeed, I usually say politically incorrect things on videos whose sound nobody can hear, this is way better!
We oughta admit it, our dream has come true!
:D
@Pierre-Marie Pédrot Matej and I improved the bench script, and EConstr is being benched
EConstr is almost ready to go, right?
I hope so...
fantastic
@Emilio Jesús Gallego Arias Do you have maintainer rights on the CEP repo now?
My main center of attention today is not EConstr and the craziness of unification it unveiled. I am instead wallowing in algorithmic and architectural despair over Nsatz.
@Maxime Dénès Do I have the right to waste the day rewriting this... code?
Uhhh
@Pierre-Marie Pédrot No :)
anything ending in "z" cannot be any good
Come on. Little algorithmic complexity quiz:
let lc = List.nth lci k in
for i=0 to List.length lc - 1 do
if not (zero (List.nth lc i))
then utiles (i+k+1);
done
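For the record, a hypothetical sketch (function names borrowed from the snippet above, everything else mine, not the actual Nsatz code) of why the quiz answer is quadratic: `List.nth lc i` walks `i` cons cells, so the loop does 0 + 1 + ... + (n-1) = O(n²) steps, while a single `List.iteri` pass is linear.

```ocaml
(* Quadratic: each List.nth lc i re-walks the list from the head. *)
let quadratic zero utiles lc k =
  for i = 0 to List.length lc - 1 do
    if not (zero (List.nth lc i)) then utiles (i + k + 1)
  done

(* Same behavior in one linear traversal with List.iteri. *)
let linear zero utiles lc k =
  List.iteri (fun i x -> if not (zero x) then utiles (i + k + 1)) lc
```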
@Pierre-Marie Pédrot I mean, we need to merge EConstr. The early results seem to indicate some slowdown (not huge, but significant) in most devs. How about pushing the unrolling patch for EConstr.kind
?
I'm waiting for detailed slowness report to see where things go wrong.
Isn't this unrolling easy to implement? We could bench with it and see the numbers, don't you think?
@mdenes Did you use a special Travis overlay for your branch BTW? It doesn't seem to compile on my own branch.
@Théo Zimmermann no, coq/ceps is not added to group permissions
@Maxime Dénès Lemme write the unrolling and push it to my branch then.
@Pierre-Marie Pédrot For mathcomp, yes. But I pushed it on my branch, didn't you merge from there?
Yes I did. But something does not seem to work because it still downloads the default repo.
@Pierre-Marie Pédrot You deserve better than turning lists into arrays :)
yeah if that is for polynomials you may want some more complex stuff
And using arguments to actually pass something to a function rather than global variables?
arguments are overrated
That's because you don't know about purely imperative programming
the STM manages to do everything without them
I mean even by keeping the same algorithmic core, there are crazy stuff going in there.
A very common paradigm in baroque programming.
I'm not even able to reason on those functions. And perfing indicates really fishy hotspots.
val define :
?safe_id:Stateid.t ->
?redefine:bool -> ?cache:Summary.marshallable ->
?feedback_processed:bool -> (unit -> unit) -> Stateid.t -> unit
once you remove the unnecessary optional arguments, it will be perfect
val define : (unit -> unit) -> unit
@Pierre-Marie Pédrot I'm fine with you fixing it, but after we get the work done on EConstr and Ltac 2. Those have a bus factor of 1. My son can fix your Nsatz example.
This still takes a function. Where is your val foo : ref (unit -> unit)
?
That is just for convenience when doing CPS
Okay okay. I'm writing the EConstr stuff right now.
we must retain some elegance
purely imperative CPS
No more slacking around on this chat!
ok
yeah
right!
> @Pierre-Marie Pédrot I'm fine with you fixing it, but after we get the work done on EConstr and Ltac 2. Those have a bus factor of 1. My son can fix your Nsatz example.
How old is your son?
Not old enough to consume alcohol so as not to drown into depression.
Less than two years old, but he can reach the keyboard. That's the only required skill.
If this chat manages to keep PMP focused on work that matters for Coq, I'll be truly impressed :D
I'm trying to look at the generated assembly to see what we can do.
He was born during an INRIA audition, so he got some advantage
> If this chat manages to keep PMP focused on work that matters for Coq, I'll be truly impressed
I am searching for the "send coin" plugin in the style of the finest old internet chats, so that won't be cheap for you
@Maxime Dénès is your notification problem when the tab in the browser is open? or have you installed the mobile app? or are you receiving e-mails?
@Théo Zimmermann It's not a problem for me, I thought it could be for other developers, and I meant e-mails.
OK, because I received once an email telling me about unread messages, I clicked the "unsubscribe from all emails" button and have not received any more email since
but maybe it's just because people don't @ mention me enough
I think you are right, there are actually two links. I tried only the first (notification settings).
and I thought that programmers were good at understanding those things :D
To answer partially a question I asked here yesterday, there is ongoing work for gdb support in OCaml: http://mshinwell.github.io/libmonda/
Great!
@Théo Zimmermann My IQ was getting close to 0 after spending nights and days with @Pierre-Marie Pédrot and @Matthieu Sozeau understanding that apply
relies on heuristics that mutate user flags under conditions that are typically the ones you don't want, but fortunately through a testing of the conditions that is completely bogus, until EConstr
fixes it...
@Emilio Jesús Gallego Arias We seem to get a number of timeouts on fiat-crypto
these days. Should we compile only part of it?
@Maxime Dénès If you were willing to merge my optimizations, it should probably manage to compile... (Not to mention Nsatz.)
Ah ah! Well done :)
Let's see. Do you have in mind some that were already benchmarked?
At least the array mapping in the kernel. But when pendulum is free, I'll overrun it with benchmarks.
@Pierre-Marie Pédrot I made some local changes to the script to get EConstr
to compile, because @matejkosik hasn't implemented overlays yet. So I'll undo them once EConstr
is finished, before you can run other benchmarks.
@Pierre-Marie Pédrot Also, for now it goes way faster if you give the packages in topological order. @matejkosik is working on using OPAM to compute this order automatically.
I understood the first priority was to separate into "install phase", "clean phase", "build phase", right
why does it go faster? shouldn't .opam just be cached?
I'll let @matejkosik give more details :)
okk
@Maxime Dénès I've just pushed an unrolling of EConstr.kind on my econstr branch. You should rebase your branch on it.
Is there anything in my branch which is not in yours?
Nope.
I've rebased against yours before committing.
So I can just bench your branch, I guess :)
Probably.
I'll let the current benchmark finish, though, so that we can compare EConstr
alone / EConstr
with unrolling / trunk
I'd still like to know where things are slow by perfing and optimizing the hotspots.
How long do you think it will take for the current bench to end?
@Pierre-Marie Pédrot you will probably need more detailed information than what the bench currently gives, right? If that is the case, feel free to open an issue at: https://github.com/coq/coq-bench
Not long, it reached CompCert already
How many times are you benching each package?
once only
And it has not finished yet? I thought it took about 7h, and you started it like today at 2:00am.
Right, I didn't tell you about the EConstr
curse.
All national Inria services crashed last night, CI included...
-_-
Unification Pharaoh probably put a curse on this code. We've disturbed his sleep and we're going to pay for it.
But before restarting it, I requested some improvements to make it way faster, that @matejkosik implemented. We were recompiling dependencies a quadratic number of times...
So it should be faster now.
Were there a lot of dependencies? I thought everybody was still embedding what they needed (e.g. flocq)
Mathcomp is split in many packages...
If you are curious, a simple calculation shows that you will lose way less time by losing some time now to properly implement the cache and benchmark than by not doing so
Seriously, what kind of drug do you need to take to write this stuff:
let lci = ref lci in
List.iter (fun x -> lci := List.tl (!lci)) lp0;
!lci
would you blame that on the type of List.tl
?
XD
it is beautiful, amazing
Alternative explanation: Nsatz has been processed by a highly cunning obfuscator able to compile elegant functional constructions to imperative ML with higher-order functions. This has to be it.
maybe https://erowid.org/chemicals/pcp/ ?
Yeah, had they used the unit -> unit way it would have been way more elegant
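For the record, all the quoted snippet computes is the list lci with as many elements dropped as there are in lp0. A direct functional reading (the names lci/lp0 come from the quoted code; this is just an illustration, not a proposed patch, and unlike List.tl it returns [] instead of raising when the list runs out):

```ocaml
(* Functional equivalent of the quoted imperative snippet: drop
   [List.length lp0] elements from [lci]. Unlike List.tl, running past
   the end yields [] rather than an exception. *)
let rec drop n l =
  if n <= 0 then l
  else match l with
    | [] -> []
    | _ :: tl -> drop (n - 1) tl

let skip_prefix lp0 lci = drop (List.length lp0) lci

let () =
  assert (skip_prefix [ "a"; "b" ] [ 1; 2; 3; 4 ] = [ 3; 4 ]);
  assert (skip_prefix [ "a" ] [] = [])
```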
@Emilio Jesús Gallego Arias who were you talking to about the cache?
to @jenkins folks
whoever they are
@Pierre-Marie Pédrot it seems that it will take some work to convince people that optimizations may be necessary
well @silene has a point with the comment
maybe you should just let folks use flambda
and let non-flambda users pay the cost
Flambda cannot make wonders I'm afraid.
too many of these optimizations are becoming necessary, so something is off
Not quite. The kernel needs to be fine-tuned, as well as the tactic engine.
It has been a while since I tried, but well, it looks like in this case it would work, though it may have a hard time with the opaqueness here
well, the evar_map thing is fundamentally off, in the sense that evar maps are inherently turtle-slow
It's actually not that much of a hot-spot when you look at it. Obviously, it may become so after the merge of EConstr.
they are turtle-slow, as @gares pointed out usually one order of magnitude, so yeah, I wonder how to optimize an intrinsically slow approach; it may be a hard battle to win in the end
but yeah we'll have to live with stuff like that for now I'm afraid
I'd like to create a wiki page to gather feature requests about typeclasses eauto, in order to have material so that someone, someday, can make a CEP out of it. Do you agree it would make sense to do that on the GitHub wiki of the CEP repository? If yes, can someone like @Maxime Dénès activate the wiki pages of this repo?
Why not on the main coq page?
I think we could just have the wiki there
I think given the number of people it makes sense to have everything in the same place
Two reasons:
¡Hola @Jason Gross !
1) this is specifically linked to CEPs
Hi!
Hi
(I'm mostly just observing/reading now; I don't have anything to say at the moment)
2) If we need to have everything in the same place, there is already Cocorico, so I can just create a page there. But the problem is, I don't like Cocorico
Just the fact that you cannot double click on a page scares me away
2) I agree; however 1) I am not so sure
I can't see where the CEP process is going
In fact, if this is a successful experience, I would then add a link from the README to the wiki
Well, I like the CEP process a lot.
So far what are the results?
How many CEPs turned into something real?
I think it helps make decisions that are really thought through
such as?
Several, let me count
do you have an example
numeral notation has now a new PR, opam metadata I think it was implemented somehow, ltac2 is being implemented, your stuff on printing was merged (at least the first part of it) and econstr is going to get merged
Plus, I think the new CEP process (as advocated by @silene and @gares) is better
Well, for my stuff on printing, the CEP didn't get me a single comment; the PR, a few more. That is my point: given what is going on, we could just have wiki pages in coq/coq plus the PRs associated with each CEP
and then it is better, as we can migrate things more easily to documentation
etc...
How so?
Same repository
And?
vs ceps-wiki / coq-wiki
But doc is not in a wiki anyways
Oh some of it is
in Cocorico
that hopefully will be migrated to coq/coq
Yes, well it's mostly outdated doc
anyways a good proof of the overhead
is that actually we cannot get CEP merged
or touch anything in the coq-ceps
because some permission problem
so indeed, there is an overhead of having so many wikis
IMHO
if we have a wiki
we should have a single one
for Coq
that means
cocorico is out if we add another wiki
I don't like wikis. At least not for any purpose.
I like Wikipedia but it's much more than just a wiki
@Emilio Jesús Gallego Arias Could you announce the gitter experiment? If some users are aware of its existence, it would be better to tell developers ;)
@Maxime Dénès today I have a very complicated day, so I may not be able to do it
I want to test the GitHub wiki before arguing to move to the GitHub wiki. Do you see my point?
Anyways, given your reluctance to open a wiki on the CEP repo, I think I will just create a Cocorico page.
Oh well, I don't have rights myself to modify the CEP repo, so all I can offer is to open the wiki in the coq repository if others agree; you can also open the wiki in your own CEP clone. I am not reluctant about the wiki, but indeed I would appreciate having a single wiki for Coq; of course that is just my opinion.
I _really_ get confused with several wikis, etc...
I'm fine with anything, but we indeed need some coordination.
I would be in favor of not maintaining our own wiki platform (like Cocorico), as long as the platform we choose has APIs that can make a transfer and backups easy
GitHub wikis are just git repositories with markdown.
so that is not _too_ bad
@Pierre-Marie Pédrot the bench of your branch failed with an old error. Are you sure you merged all commits from my pr379 branch?
Yep, you can check on gitk that I'm just above your commit.
There is trickery in the travis overlay to check whether we're on your pr379 branch or not
That way the right version of compcert is downloaded. I've mentioned that somewhere before in this thread.
But jenkins doesn't use overlays AFAIK
Right, so how is compcert supposed to compile properly then?
What fix does CompCert need for EConstr?!
I mean mathcomp, sorry.
I hacked the script run by jenkins for that, but it doesn't rely on the branch name so I'm confused
I'll investigate
Wait, doesn't the error have nothing to do with that? It seems Coq does not even compile.
Indeed, and that's an error we used to have with old Coq versions on recent OCaml
@Pierre-Marie Pédrot I looked at it, but I don't understand what is going on in this script.
oh fuck I know why
it takes the trunk from your repo, @Pierre-Marie Pédrot
@Pierre-Marie Pédrot can you pull the latest trunk from the main repo and push it back to your repo? I'll relaunch the build
It is amazing how good we are at wasting each other's time in this community...
There you go...
@Pierre-Marie Pédrot Ok, this time the bench really started.
Cocorico is so frustrating! So many outdated pages. Immutable pages like the front page and this one https://coq.inria.fr/cocorico/CoqDevelopment/Public which I'd like to update.
If we ever move to a new wiki, I think we should not copy-paste the existing pages. Better to just mark the current wiki as deprecated as it is done here https://nixos.org/wiki/Main_Page and start fresh.
Well, it seems the loop unrolling barely had any effect (well within the statistical error margin)
So indeed, how can we help to see where time is spent? Would having a line-to-line time diff help here?
I think Jason has some tools for that, but we can cook our own
In particular for hott, right?
Was it make timing, @Jason Gross?
Yeah we need something more detailed.
Unfortunately HoTT stuff is geared towards tracking time regressions in the library itself, again it tries to guess commits and stuff like that
So hey folks, that's my updated tree for flambda https://github.com/ejgallego/coq/tree/flambda
It is WIP as it needs some tweaks, namely to coqmktop, so as not to require hardcoded flag recognition. Honestly, the whole of coqmktop should go away, but that's another story.
¡Hola @Matej Košík! Comment vas-tu ?
I didn't know romance languages were allowed here.
I love romance
@Pierre-Marie Pédrot what is universe normalization? Is it the same as minimization?
No, it is the fact that universes have a value depending on the current universe unification state (as in UState).
If you get a variable, it can be instantiated later on by some other expression.
ah! Like a local universe can become a global one?
Yep.
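A toy model of what "normalization" means here, to make the exchange concrete. All the types and names below are hypothetical simplifications invented for illustration, not Coq's actual Univ/UState API: a universe variable has no value by itself, and normalization chases the current unification state, where a local variable may later have been instantiated by another expression.

```ocaml
(* Toy sketch of universe normalization against a unification state.
   Hypothetical simplification, not Coq's real datatypes. *)
type univ =
  | Var of int        (* a universe variable, e.g. introduced locally *)
  | Global of string  (* a named global universe *)
  | Max of univ * univ

module IMap = Map.Make (Int)

(* The "current state": a partial map from variables to universes. *)
type ustate = univ IMap.t

(* Normalization chases the substitution: a Var may have been
   instantiated by some other expression since it was created. *)
let rec normalize (st : ustate) (u : univ) : univ =
  match u with
  | Var i ->
    (match IMap.find_opt i st with
     | Some u' -> normalize st u'   (* the local var became something else *)
     | None -> u)                   (* still unresolved *)
  | Global _ -> u
  | Max (a, b) -> Max (normalize st a, normalize st b)

let () =
  let st = IMap.add 0 (Global "Set") IMap.empty in
  assert (normalize st (Var 0) = Global "Set");
  assert (normalize st (Var 1) = Var 1)
```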
So we don't have the same strategy for universes and terms? I.e. evar normalization vs univ normalization?
Universe-operating functions ought to be insensitive as well (cough cough).
Oh my god.
Indeed.
Luckily, the attack surface is way smaller than with evars.
Although one should be afraid of Type u being unified to Set / Prop.
@Emilio Jesús Gallego Arias Perfectly fine! And you? What are you doing these days (when you're not opening issues)?
@Matej Košík If you have further questions like the git clone one, don't hesitate to ask here; in case I'm not available, you'll get answers faster
Can I spam pendulum with benchmarks btw?
Ok.
Well, I am not sure what the protocol is.
Do we want to run the benchmarks always via Jenkins?
Is that sufficient?
I don't know, I'm happy with the ci interface.
If yes, then one can see quite easily whether something is being benchmarked or not. No?
Yes we always want to run benchmarks via Jenkins, because it enforces to have only one running job.
But I think what PMP is really asking is if your script has stabilized
Yes, I meant that.
@Matej Košík I'm bogged down with the ANR
@Maxime Dénès @Pierre-Marie Pédrot Please try it. I am still testing it (with the most simple cases progressively to the more realistic ones) but I haven't finished. But so far I haven't found a problem.
Have you pulled the latest version?
The script should behave consistently with the open/closed status of the issues.
(on github)
For other people, you mean here: https://github.com/coq/coq-bench/issues
Universe rocket launched on pendulum. Let's see what happens.
@Maxime Dénès @Pierre-Marie Pédrot I know I am a bit annoying with that thing so I won't mention it again, but don't you think that the naming choices of "evar normalization" / "evar insensitive" are quite misleading? In particular, I think they seriously downplay the issue and kind of legitimize "evar sensitivity"
@Maxime Dénès I have pushed it but I haven't pulled it on pendulum. One inconvenience is that there are local modifications (a sort of overlay), which is something that is missing and needs to be officially supported as well.
@Pierre-Marie Pédrot +1
@Matej Košík for sure we will want to benchmark with flambda; this implies a) building coq configured with a particular set of flags depending on the benchmark, e.g. -flambda-flags "-O3 -inline"
etc... and b) using a particular OCaml switch
@Emilio Jesús Gallego Arias Well, what do you propose?
@Matej Košík Yeah but for now, there is none, right? Anyway you should always keep the repo and pendulum in sync, otherwise we get no chance of understanding what the script does.
@Emilio Jesús Gallego Arias I agree with you, so what should we call it? "evar fuck up"?
@Pierre-Marie Pédrot "dereference fault"?
"existential delirium"?
@Emilio Jesús Gallego Arias Can you please open the issue? (so that we don't forget). I'll check out what you mean.
"functions not properly dereferencing evars"?
@Matej Košík ok, I will do later on, just keep in mind that the scripts need to be able to configure coq with specific sets of flags
I like existential delirium.
Existential Delirium in the Evar Hell
_fucked existential delirium fault_
looks like a novel title
We can go the medical way otherwise
Existential Delirium in the Dereferencing Hell?
What about evarititis
I hope it's not the title of the novel you write when we finally solve the last EConstr bug, in 20 years...
evaritis is good
but Coq will still suffer from evarititis even after your PR
It should cure itself over time
@Emilio Jesús Gallego Arias Good luck!
@Matej Košík so it is better if we build them with that in mind from the start
Yeah, we need an evarititis cure group
@Matej Košík thanks
Did I already yell enough at the world about how fucked up Nsatz is?
I've been rewriting it for the whole day. And yesterday also.
Yes, but you should yell at the right person.
Not enough I'm afraid
That's just beyond words.
I really need a therapy.
https://coq.inria.fr/cocorico/CoqDevelopment/Maintainers
Laurent Théry volunteered to maintain it, so can you send him pointers on the things you find that need fixing?
Nah, it'll be faster to fix it myself
I'm already done actually
ok
but by the god of specification
If I had to grade this as a student work, it wouldn't pass
But we tend to proceed like this too much, and not get the maintainers involved.
At least make it a PR, so that he can see it.
How the hell did this get included in the repo?
Yes, I'm going to do a PR don't worry
Inclusion of such low-quality code is the first thing we fixed.
but the code is so fucked up that making the invariants explicit seems like a close encounter of the third kind
Basically we recently reduced the problem to a finite one...
@Pierre-Marie Pédrot While your benchmarking job does not waste too much time, can we:
That should make you feel good
By construction it has always been finite, come on. What you mean is that there is an eventual bound.
@Matej Košík OK
@Matej Košík Can you relaunch it when you're done?
Ok. I'll do that.
thx
@Pierre-Marie Pédrot broken code was integrated faster than cleanups, that's what I call infinite.
You could think it was countable though, but I have doubts :)
I don't know Loic Pottier personally, but I'm wondering from which programming background he came. @Maxime Dénès you probably have more info about this.
Oh by the way, what is the real answer to #526?
Deprecate Funind?
The argument of branch sharing does not make sense.
That's not what I meant. Imagine you had to write funind today. What is the proper datatype to work on?
You can always check whether a constr pattern-matching is indeed coming from a wildcard; Detyping does that
constr
Obviously constr. That's the only thing that has a canonical normal form.
And even trickier, how does a plugin author make an informed choice?
Meh.
@Pierre-Marie Pédrot do you mean that glob_constr is redundant?
No, but it is an internal structure that you should not manipulate.
(Apart from producing it, storing it and pretyping it obviously.)
Honestly, I would not be sure what datatype to use. So I don't see how people more external to the team would guess it.
Cool, can we please add that to glob_term.mli? And move it away from intf
Yeah... as the French saying goes, "we're pushing granny into the nettles"...
Now that I'm thinking about it, is gitter indexed by Google? So that people can find back the info...
In fact, I don't see a reason why we shouldn't just get rid of glob_constr
No.
We need glob_constr, because that is the stable representation of data for storing.
You should answer #526.
It would be a start, to communicate.
It has all names resolved, which is important.
the stable representation of data for storing
sorry, storing what?
Glob_constr
implicit arguments are better done in the syntax tree
notations likewise
in fact it is much better
For instance when you have notations or ltac containing terms
so that leaves us with name resolution
the natural type you store is glob_constr
But you can't complain or be surprised if people use Glob_constr, which is in the interface folder, instead of Constr, which is not.
@Pierre-Marie Pédrot Done.
Well that's not any people.
well you could store constrexpr
I mean they are somehow close to the dev team.
notations I'm pretty sure
@Emilio Jesús Gallego Arias but you need names to be resolved
things would work much better if you resolved them over constrexpr
WDYM? I'm pretty sure if I was new to Coq, I'd pick something in intf. Everything else would look internal to me.
and ditto for implicit arguments, the explicit application is there
IDEs would have a great time
But that's not for ide
and they could get the diff over the syntax tree, which is what is fed to the printer
glob_constr is like the stablest representation of kernel constr which is invariant by things like evars, universes and the like.
It is like a compiled form of syntax.
bytecode if you want
while constr is really the execution trace
and constr_expr is raw syntax
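To make the bytecode analogy concrete, here is a toy model of the three layers being discussed. These are drastic simplifications invented for illustration, not Coq's real datatypes (the real glob_constr also records holes, binders, casts, etc.):

```ocaml
(* Toy model of the three representations; not Coq's actual types. *)

(* constr_expr: raw surface syntax; names are unresolved strings and
   notations are still present. *)
type constr_expr =
  | CRef of string                          (* could be a var or a global *)
  | CNotation of string * constr_expr list
  | CApp of constr_expr * constr_expr list

(* glob_constr: the "compiled" form; names resolved to fully qualified
   globals, notations expanded, but still insensitive to evars and
   universe state. *)
type glob_constr =
  | GVar of string                          (* a (locally) bound variable *)
  | GRef of string list                     (* fully qualified global *)
  | GApp of glob_constr * glob_constr list

(* A toy globalization step resolving names against an environment. *)
let rec intern (globals : (string * string list) list) (e : constr_expr)
  : glob_constr =
  match e with
  | CRef s ->
    (match List.assoc_opt s globals with
     | Some path -> GRef path               (* resolved to a global *)
     | None -> GVar s)                      (* otherwise assume local *)
  | CNotation _ -> failwith "notation expansion elided in this sketch"
  | CApp (f, args) -> GApp (intern globals f, List.map (intern globals) args)

let () =
  let globals = [ ("plus", [ "Coq"; "Init"; "Nat"; "add" ]) ] in
  assert (intern globals (CApp (CRef "plus", [ CRef "x" ]))
          = GApp (GRef [ "Coq"; "Init"; "Nat"; "add" ], [ GVar "x" ]))
```

The third layer, constr, would be produced from glob_constr by pretyping in an environment, which is elided here.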
IMHO not worth the cost, apart from the fact that actually both implicits and notations are resolved at the wrong level
yeah I don't see the advantage of that
other than creating a bit of chaos in the codebase
Well if we had absolute names in constr_expr that could be used instead maybe
I think a good proof that this representation has a bad overhead/benefit ratio, is that we could remove it in a few days of work
and I'll bet it would actually clean up things a lot
Don't forget the mighty funind plugin!
I wish you luck to port that one
in fact if you have a look to the code
it is barely used
You mean Function is barely used?
I mean glob_constr
in fact PM is right, the main user is funind
so indeed, funind either wants to work
with constrexpr
or with constr
You still need to extend constrexpr a bit to replace glob_constr
not with glob_constr
right?
only for references?
the rest is there
IMHO the highest priority would be to give people a chance to write correct code
beware of the scope for tactic interpretation also
and probably for arguments also
so in the end you end up rewriting glob_constr, I think
At least documenting what datatype they are supposed to use, and have intf reflect that.
for tactic interpretation I have no clue, but for the rest I dunno. The evidence is there: it is barely used. In fact, it is not hard to come up with uses for notations or implicits where you would like the elaboration to happen in the pretyper (not that I am advocating such cases)
anyways just a comment
IMHO the issue is settled
as it is really an internal thing
and plugins should be forbidden to use glob_constr
which I fully support
so then if we want to optimize the internal notation handling, etc... we could see how we do
IMHO now we have a lot of overhead and I fail to see the advantages
in the sense
it would be just adding 3 cases to the pretyper
one for notations => call the notation compiler
one for references => call the reference resolver
I feel like I'm repeating myself, but before forbidding, we should at least document it, right?
one for impargs => call the impargs inserter
@Pierre-Marie Pédrot Coming back to EConstr, one thing I still don't fully understand is why EConstr changed the universe normalization strategy.
What do you mean?
It does what it should do.
nf_evar was doing that as well
you mean whd_evar?
yep, that's the same
Ah ok, I see what you mean now, I think. The proper fix would be that EConstr contains "non-normalized" universes. But of course, you go through a view that normalizes when actually needed, like for EConstr.
Exactly.
We could probably even be more intelligent on this one and use persistent data structures to save us from repeated normalization
There would not be such things as 'Unsafe.to_univ'
You mean it would be more helpful than for terms?
It would be helpful for terms as well actually.
We would probably save a lot of indirections.
The problem is that you can't do that as long as there is a function Unsafe.to_constr which needs to be fast.
Yeah, so let's go step by step.
What we need is to get all EConstr overhead below 2%, I'd say. Ideally without the loop unrolling.
(and then merge)
Right.
What are the developments I should be looking at?
I'd say we should start with the highest overheads, because fixing them may make the other ones fall into the acceptable margin.
HoTT is a special case, so we should not care too much at first.
So we should start with HoTT. Verify your conjecture on universe normalization.
\o/
Alternative facts FTW.
OK so what should I do?
Ok, maybe, but couldn't universe normalization impact developments that don't use polymorphism?
May I extend the Constr.view term to account for universes as well?
Well, it does change the behaviour of Type-casing.
The resulting view function is also going to be much more efficient once inlined I think.
There would be only two cases which are not identity : Evar and App.
Mhhh. Now that I'm thinking about it...
So maybe the best first step is to take the slowest file in HoTT and coq-mathcomp-character, and perf universe normalization.
There is a bug in EConstr.kind actually
What is App doing?
You need to be stable under the invariant that there are never two consecutive App nodes.
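A hedged sketch of that invariant as a smart constructor, on a toy term type (not the real Constr): application must flatten, so an App node never has another App directly as its head.

```ocaml
(* Toy term type illustrating the App invariant being discussed:
   no two directly nested App nodes. Not Coq's real Constr. *)
type term =
  | Rel of int
  | App of term * term list

let mkApp (f, args) =
  match args with
  | [] -> f                                   (* no empty applications *)
  | _ ->
    (match f with
     | App (g, args') -> App (g, args' @ args)  (* flatten nested App *)
     | _ -> App (f, args))

let () =
  (* App (App (f, [a]), [b]) collapses to App (f, [a; b]) *)
  assert (mkApp (mkApp (Rel 1, [ Rel 2 ]), [ Rel 3 ])
          = App (Rel 1, [ Rel 2; Rel 3 ]));
  (* an empty argument list produces no App node at all *)
  assert (mkApp (Rel 1, []) = Rel 1)
```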
I'm realizing I forgot another case of smart constructors : Cast.
Ah ah!
We ensure that there aren't two consecutive casts in Constr but not in EConstr.
I'm not sure how much this affects the semantics of Coq, seeing how casts are completely bugged anyway.
Is the bug introduced only by your last commit?
No, it was there before.
What does the Cast smart constructor do for two different kinds of Cast?
Eh...
Who knows?
Well, if we end up removing a vm_cast somewhere...
So how about the following plan:
1. the Cast bug
2. HoTT and the character theory

let mkCast (t1,k2,t2) =
  match t1 with
  | Cast (c,k1, _) when (k1 == VMcast || k1 == NATIVEcast) && k1 == k2 -> Cast (c,k1,t2)
  | _ -> Cast (t1,k2,t2)
Very strange invariant.
What the hell is that supposed to mean?
The treatment of Cast should be rewritten fully.
So? What do we ensure for EConstr casts?
Same as kernel?
Yep, at least that would not introduce a new question when fixing all this, and it will make our comparisons fair.
What do you think of my plan above?
Right, but we need to document that it is written that way to conform to what the kernel does.
Yes
LGTM.
You can even add a technical note that it is probably broken for REVERTcasts...
Do you need help somewhere in the plan?
I'll have a hard time compiling HoTT but I think I can survive. What is the status of HoTT wrt trunk?
you can easily use the ci target for that
make ci-hott
and thank @Emilio Jesús Gallego Arias ;)
Cool!
Still, I'm finishing Nsatz up. I'm almost done.
grrrrrrrrrr
If you guys stop distracting me by sending messages. The notification thingy is really disturbing.
@Pierre-Marie Pédrot disable it; I've done so indeed
So indeed @Pierre-Marie Pédrot, when you have a bit of free time, I would appreciate knowing more about why Ltac needs to store glob_constr; in fact, as you folks know, we have this little gem in intf:
(** In globalize tactics, we need to keep the initial [constr_expr] to recompute
in the environment by the effective calls to Intro, Inversion, etc
The [constr_expr] field is [None] in TacDef though *)
type glob_constr_and_expr = Glob_term.glob_constr * constr_expr option
thus something is going on; this seems to suggest that Ltac might do not so badly without glob_constr.
I'm implementing the delayed universe stuff and I think universe handling in tactics is almost as broken as evars are.
Luckily, almost nobody relies on this as of today, except people doing universe polymorphism.
O_O
Well, you are indeed lucky. I have discovered that exception printing (CErrors.iprint) performs _essential_ side effects, so if you don't "print" exceptions (relying instead on feedback) then something strange happens between Proof_global and the STM resilience stuff, and IDE messages are not emitted anymore.
How much fun!
It's Friday, Friday, Friday, gonna get fun fun fun...
Never had a Friday like this
of course, purely imperative programming couldn't be far from my fun
Chasing bugs on Friday night is a well-known practice!
in this particular case, we are talking about _purely imperative error handling_
@Pierre-Marie Pédrot my few remaining friends don't approve of it though
ok got it: purely imperative programming, case 2 vs expected case 0
Come on, the ECSTASY of finding that $%!§£ universe leak...
"ecstasy" falls short of describing what you can feel
Not close to it yet unluckily.
but yeah, I have a better idea: we should implement the proof-engine CPS by raising exceptions and attaching properties to them; we could even efficiently implement Ltac with that approach
and would likely be cleaner
well, I sadly need to go back to my .tex files
Good luck...
but I'll share my findings here: the behavior of the STM was essentially different depending on whether the toplevel was printing exceptions or not. That was a bit puzzling and was hindering my efforts on the way to a unified printing path. So what happens is that actually printing an exception in Coq raises other exceptions.
Thanks PM!
well, that is a bad side effect, but not so bad as to alter the behavior of the STM in such a bad way. Now let me introduce:
Yeah, I wrote Exninfo.add.
That's a horrible hack, but there is no easy way around in OCaml API AFAIK.
Now it is essential you put all these ingredients together, and the chain of events is like this:
in its purely imperative programming crusade, Stm.State.define will actually run its generic function (of type unit -> unit) and check for any produced exception. Then, it will look at what properties the resulting exception has, and act in four different, carefully random ways.
then error printing comes in the way. A typical pattern in Coq error printers is to raise a new exception inside the handler to delegate the printing to a lower level. That is fine; however, our purely imperative engine is watching, and our finely decorated original exception will be turned into an exception stripped of any attributes if we dare to call the printer.
and, due to its randomness, define only does the right thing when the incoming exception is indeed stripped of any attributes.
Here you have it.
Lots and lots of imperative fun
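A toy reconstruction of that failure mode (this is not Coq's real Exninfo API, just an illustration of the mechanism described): attributes attached on the side of an exception, keyed by physical identity, do not follow a fresh exception raised from inside a printer.

```ocaml
(* Toy model of losing exception attributes across a re-raising
   printer. Hypothetical sketch, not Coq's Exninfo. *)
exception Error of string

(* attributes stored on the side, keyed by physical identity *)
let attributes : (exn * string) list ref = ref []

let attach e attr = attributes := (e, attr) :: !attributes

let find_attr e =
  match List.find_opt (fun (x, _) -> x == e) !attributes with
  | Some (_, a) -> Some a
  | None -> None

(* A printer in the style described: it delegates by raising anew,
   so the exception that escapes is a *fresh* value. *)
let printer = function
  | Error msg -> raise (Error ("while printing: " ^ msg))
  | e -> raise e

let () =
  let e = Error "boom" in
  attach e "stm-property";
  assert (find_attr e = Some "stm-property");
  (* once the printer re-raises a fresh exception, the attribute that
     was attached to the original one is effectively lost *)
  (try printer e with e' -> assert (find_attr e' = None))
```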
Yes, we need to take advantage of the new exception uid to use maps instead of reraising exceptions.
You got me =>
Exninfo.add was written to support reraise; that is fine I think (even if in OCaml 4.03 we can use the native reraise, I think)
however Exninfo.add has been badly abused in the STM
Well, the original implementation, believe it or not, was pure;
Unluckily, it relied on OCaml < 4.01 representation of exceptions
and used unsafe machinery to mangle exceptions.
Yeah I had a look some weeks ago, I think the original implementation and purpose was fine, however you should have restricted Exninfo.add so people cannot abuse it
people^W the STM
in fact yeah it seems badly broken
now that we have PPX we may try to do that by using a wrapper
ensuring that all try / with are guarded
what about doing external reraise : exn -> 'a = "%reraise"?
oh well you mean for catchable ones, that would be fine
this needs to ensure that no exception was raised in the meantime
though it seems that people who forget to use the guard could also forget to use the ppx
Anyways, IMO attaching backtraces is fine, attaching locations is quite fragile but could be allowed (worst effect: you lose location info), and attaching properties the way the STM does is just broken now, unless we change a lot of things in Coq to ensure properties are preserved in exception "transformers"
I never got the whole stuff about exception wrapping in the STM
Yeah, I think we want to remove it; it is just impossible to make it reliable, due to the semantics of state plus the reliance on preservation of the attachments
so you say, PM, that %reraise wouldn't work even if we moved to 4.03?
The problem is that you may want to reraise an exception after having done something.
If ever that something raises an exception, you can say farewell to the attached backtrace.
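A sketch of that pitfall and one way around it in later OCaml (Printexc.raise_with_backtrace exists from 4.05 on; at the time of this discussion it did not): capture the raw backtrace before running anything that might itself raise, then reraise with the saved backtrace.

```ocaml
(* Hedged sketch: if the cleanup between catch and reraise internally
   raises (and catches), the global "last backtrace" now points at the
   cleanup, not at the original raise. Saving the raw backtrace first
   and reraising with it sidesteps the problem. Requires OCaml >= 4.05. *)
let with_cleanup ~(cleanup : unit -> unit) (f : unit -> 'a) : 'a =
  try f ()
  with e ->
    let bt = Printexc.get_raw_backtrace () in   (* capture first! *)
    (* cleanup may raise and catch internally, clobbering the global
       backtrace state; ours is already saved *)
    (try cleanup () with _ -> ());
    Printexc.raise_with_backtrace e bt

let () =
  Printexc.record_backtrace true;
  let cleaned = ref false in
  (try
     with_cleanup
       ~cleanup:(fun () -> cleaned := true; failwith "in cleanup")
       (fun () -> failwith "original")
   with Failure msg -> assert (msg = "original"));
  assert !cleaned
```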
I see
thx
So would you support a change towards type iexn = exn * Backtrace.t?
You may want other irrelevant information than Backtrace, e.g. locations.
What one should not do is to rely on the info parameter to take decisions
That is only for debug or printing.
Ok, we fully agree on that. Regarding locations, I'd weakly push for removing that in favor of a located object:
but yeah that is very low priority
the stm stuff is of higher priority thou
Do as you prefer, I can't say I'm fond of this code.
Aha! Got my bug.
Moral: how is it possible to write correct code without the help of a proof assistant?
:D seems a bit like a chicken and egg problem :D
It's a miracle that Coq is not yet bootstrapped.
50 years from now, people will look at us with a mix of compassion and contempt
In ye olde days, we were cowboys!
"Back in the day, I was a cowboy myself; I was living with Jacques, a good buddy."
@Pierre-Marie Pédrot Did you confirm that the inefficiency in HoTT and others comes from universe normalization?
Working on it.
I've already stopped normalizing universes, and on my way to instances.
but you do that by implementing the lazy universe normalization?
Yes.
This is always going to be more efficient.
I would have used a profiling tool.
Even for non-universe polymorphic code.
Yes, but here again you are going to find cases where code relies on the wrong behavior, right?
For now, this looks OK.
I'm pretty sure we'll wreak havoc in HoTT, but I'm a believer.
Ok. I agree that's something we need to fix anyway, but I'm a bit afraid to open a second front in the middle of the EConstr war.
Finger in the gear, they say...
Well, if that is the only way, then fine, but if using a profiler shows another source of inefficiency... we'd better fix that one and merge EConstr first.
I'm really sure this is because of that.
I mean this affects specifically HoTT, and it is clear from the code what happens.
Gotta sleep, but keep me posted please, I'll try to help you, even with a second front :)
I'm only a sidekick in this stuff, but I have no mercy.
And need no sleep, I see
Maxime stop interrupting PM, he's trying to work !
@Emilio Jesús Gallego Arias (@05:04) HoTT has a way of spitting out .timing files (maybe make timing; I don't recall; it just saves the output of coqc -time), but there's nothing that processes them to create diffs. It shouldn't be too hard to extend my timing python scripts to handle .timing files; you just need to teach them about the output format of coqc -time, in addition to the output of make TIMED=1.
@Pierre-Marie Pédrot (11:38) What's wrong with the invariant in casts? To me it says that consecutive VM/native casts should be coalesced (which can only ever make typechecking faster, because these casts fully normalize the types and so it doesn't matter what type you claim to be casting to/from), but other casts should not be coalesced (because if they are, you might blow up conversion, because you've lost the careful hints the user inserted to guide conversion to not taking forever)
Unrelatedly, I managed to trigger the assert in pretyping/constr_matching.ml in this function:
let rec extract_bound_aux k accu frels ctx = match ctx with
| [] -> accu
| (na1, na2, _) :: ctx ->
if Int.Set.mem k frels then
begin match na1 with
| Name id ->
let () = assert (match na2 with Anonymous -> false | Name _ -> true) in
let () = if Id.Set.mem id accu then raise PatternMatchingFailure in
extract_bound_aux (k + 1) (Id.Set.add id accu) frels ctx
| Anonymous -> raise PatternMatchingFailure
end
else extract_bound_aux (k + 1) accu frels ctx
The trigger condition seems to have something to do with match statements. I'm working on minimizing the bug, but was wondering if any of you have any thoughts on what this means.
We should get @herbelin here
I think he's the one that may know, maybe Matthieu too?
I guess @Théo Zimmermann can lure him in
Sorry I won't be back to the office until Wednesday, so I can't be of any physical help here.
@Matthieu Sozeau is supposed to be on this chat though, but he's never said anything so I'm not sure he'll see the notification
Do you guys know about git fetch origin pull/PRNUMBER/head? I just learned about it in the nixpkgs manual.
Actually I use some magic config line that allows me to see PR as github/pr/#NUM.
So far, to test a PR I always added a new remote corresponding to the submitter's fork :S
Magic line:
[remote "github"]
url = git@github.com:coq/coq.git
fetch = +refs/heads/*:refs/remotes/github/*
fetch = +refs/pull/*/head:refs/remotes/github/pr/*
@Pierre-Marie Pédrot I'd be interested in your config line.
cool thanks!
The important line is the third one.
yeah, I figured it out (after some comparison with the existing configuration)
I wonder if there is a way to see only opened PRs
@Théo Zimmermann I doubt it. The comments on https://gist.github.com/piscisaureus/3342247 suggest other people are looking for a way and haven't found one, and I don't think github has separate URLs for open vs closed pull requests (everything is https://github.com/coq/coq/pull/n
) so I don't think you can use any sort of wild-card filter to separate open ones from closed ones. It might be possible to wrangle some sort of hook that goes and separately looks at the github data and deletes the local refs for PRs that aren't open, but git doesn't seem to have a post-fetch hook, so that seems like a non-starter. Maybe you could write a bash script that fetches the PRs and then deletes the ones that are closed? ¯\_(ツ)_/¯
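As a sketch of that bash-script idea (purely hypothetical: the refs and the open-PR list below are invented, and querying the GitHub API for the real list of open PRs is not shown), pruning local github/pr/* refs could look like:

```shell
# Hypothetical pruning script: given a list of open PR numbers, delete
# every local refs/remotes/github/pr/* ref that is not in the list.
set -e
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" -c user.name=a -c user.email=a@b \
    commit -q --allow-empty -m x
# Pretend a previous fetch created remote-tracking refs for PRs 1, 2, 3:
for n in 1 2 3; do
  git -C "$repo" update-ref "refs/remotes/github/pr/$n" HEAD
done
open_prs=" 1 3 "   # pretend only PRs 1 and 3 are still open
for ref in $(git -C "$repo" for-each-ref --format='%(refname)' \
               'refs/remotes/github/pr/*'); do
  n=${ref##*/}
  case "$open_prs" in
    *" $n "*) ;;                                # still open: keep it
    *) git -C "$repo" update-ref -d "$ref" ;;   # closed: drop local ref
  esac
done
kept=$(git -C "$repo" for-each-ref --format='%(refname:short)' \
         'refs/remotes/github/pr/*' | xargs)
echo "$kept"
```

The only missing piece for real use is filling open_prs from the GitHub API instead of hard-coding it.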
Thanks @Jason Gross! That's quite useful information.
after weighing several solutions, I'm going to go with a git alias:
pull-request = "!f() { git fetch -fu ${2:-$(git remote | grep ^upstream || echo origin)} refs/pull/$1/head && git checkout --detach FETCH_HEAD; }; f"
this will keep me from fetching too many PRs
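To sanity-check the alias without touching a real remote, one can replay it against a throwaway local "upstream"; everything below (paths, PR number 7, commit message) is invented:

```shell
# Exercise the pull-request alias against a purely local repository.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/up"
git -C "$tmp/up" -c user.name=a -c user.email=a@b \
    commit -q --allow-empty -m "fix things"
# Fake PR number 7 under GitHub's refs/pull/<n>/head layout:
git -C "$tmp/up" update-ref refs/pull/7/head HEAD
git clone -q "$tmp/up" "$tmp/work"
# Install the alias from the chat message verbatim:
git -C "$tmp/work" config alias.pull-request \
  '!f() { git fetch -fu ${2:-$(git remote | grep ^upstream || echo origin)} refs/pull/$1/head && git checkout --detach FETCH_HEAD; }; f'
# Fetch PR 7 and detach onto it (fetch/checkout chatter goes to stderr):
git -C "$tmp/work" pull-request 7 2>/dev/null
subject=$(git -C "$tmp/work" log -1 --format=%s)
echo "$subject"
```

The ${2:-...} fallback means the alias prefers an "upstream" remote when one exists and falls back to "origin" otherwise.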
@Maxime Dénès I've pushed the delayed universe fix on my branch. I'll try to schedule a benchmark on pendulum.
AFAICT HoTT still compiles, which is a good thing...
@Pierre-Marie Pédrot Nice! Let me know how it goes.
pendulum is a bit crowded at the moment...
Is fiat-crypto supposed to compile with trunk? I keep getting a stack overflow on my machine.
in ./src/WeierstrassCurve/WeierstrassCurveTheorems.v
And strangely enough, Travis fails on my clean-nsatz branch while it works for me...
File "./src/Util/WordUtil.v", line 67, characters 4-17:
Error: Ltac call to "etransitivity" failed.
No matching clauses for match.
@Pierre-Marie Pédrot Let me compile the latest trunk and try it on the latest fiat-crypto. (Should definitely build on v8.6, as long as you have enough time and enough ram.)
I have a patch to make it work in any case. The stack overflow comes from a call to List.merge in ideal.ml.
@Pierre-Marie Pédrot Nice!
Also, I managed to track down the anomaly in constr_matching; seems like there's an invariant that's asserted there that the vm breaks: https://coq.inria.fr/bugs/show_bug.cgi?id=5434
@Pierre-Marie Pédrot @Maxime Dénès Pendulum is yearning for more benchmarking jobs ... You can queue in advance any number of jobs.
@Matej Košík Is HoTT supposed to compile with master? I launched a bench yesterday for EConstr but HoTT was not compiled. Looking at the list of default benched packages, it does not appear there...
@Pierre-Marie Pédrot Let me check ...
(I think I dropped it from the set of default OPAM packages at some point by mistake).
I think I already harassed some of you at POPL about this, to no avail, and I'm going crazy. In short, the 'perf' tool does not work anymore for me after going from a 4.8 kernel to a 4.9 kernel. This is an important tool for profiling Coq, and I'd rather not be stuck on an already old kernel. Does any of you know where I could find some help?
There is no Debian bug on perf, and I can't find anything related to my problems through the almighty Google.
@Pierre-Marie Pédrot @Matej Košík There's a branch of @Emilio Jesús Gallego Arias 's that has HoTT compiling with 8.6, and another one that has it compiling with master? I think we plan to integrate the 8.6 branch as soon as we can integrate a version of dpdgraph that works with 8.6.
I am not sure I understand your plans. From my (not very well informed) point of view, it would be great if we did not have to use our custom branches but could directly clone your repo and test it with Coq 8.6 & trunk.
@Pierre-Marie Pédrot I've checked, Emilio's branch of coq-hott still compiles with Coq trunk. I've added it (back) among the default OPAM packages for benchmarking.
OK, thanks, I'll retry a bench for EConstr.
The thing currently blocking me from integrating the patch that makes HoTT compile with 8.6 is the fact that the make install-coqlight target went away. @Emilio Jesús Gallego Arias suggested using
make states tools
touch bin/coqtop.byte bin/coqchk stm/{proof,tac,query}workertop.cma
sudo make install-binaries
sudo rsync -a plugins theories /usr/local/lib/coq/
but this fails to build/install grammar/compat5.cmo, which is needed for coq-dpdgraph. How should I build and install Coq without building all of the standard library, in a way that gets me compat5.cmo?
@Jason Gross does the install-library target work? I'll try to clean up the makefile situation one of these days so we don't need such hacks.
My current solution is make states tools coqlight plugins grammar/compat5.cmo grammar/grammar.cma and make install-binaries install-devfiles, which seems to work. I'm now trying to fix the issue that trusty OCaml doesn't support -safe-string (which coq-dpdgraph unconditionally passes).
it is the Coq makefile which actually passes the -safe-string flag, and indeed building Coq plugins with a different -safe-string setting is not supported. Note that Coq 8.7 requires OCaml 4.02.3, which supports safe strings. My advice is to use the standard OCaml repos for Travis, as used in Coq's travis.yml
@Emilio Jesús Gallego Arias What do you mean it is coq_makefile? https://github.com/Karmaki/coq-dpdgraph/blob/master/Makefile.in#L50
Alternatively, @Emilio Jesús Gallego Arias , do you want to take over finishing up the port to 8.6? I've pushed a bunch of commits on top of your branch at https://github.com/JasonGross/HoTT/tree/mz-8.6 (feel free to squash all of mine into one, or not, at your discretion), and the only thing left, I think, is actually getting coq-dpdgraph to build on travis.
Indeed, I meant that in trunk coq_makefile will force plugins to be built with -safe-string. You can build a plugin with -safe-string even if Coq is not; however, the inverse doesn't hold.
I've been very busy these days sorry, feel free to take over if you want.
Note also the mz-8.7 branch, which is the one used for trunk; we should eventually get to integrate that one, but it's not urgent IMO.
Indeed, I suggest you build the 8.6 branch with the mz-8.7 setup [for Travis]; that will solve the dpdgraph problem
@Emilio Jesús Gallego Arias I don't see anything special about the travis setup in https://github.com/ejgallego/HoTT/tree/mz-8.7 ...
Unrelatedly, I think I got burned by relying on behavior of the unifier, and finding it too unpredictable. When documenting my code as part of a PR for fiat-crypto, I added this paragraph to explain this line of code:
(** Now we rely on the behavior of Coq's unifier to transform
    the goal for us; we have a goal like [let f' : A := ?f_evar in
B], and we want a goal like [A /\ B]. So we refine with a
hole named [pf] which is proof of [A /\ B], and then assert
that the second projection of the proof (which has type
[B]) actually has type [let f' : A := proj1 pf in B]. If
    done naïvely, this would give a circular type, which Coq
    disallows. However, Coq is happy to zeta-reduce away the
    circularity; happily, this is done after Coq unifies [let
f' : A := proj1 pf in B] with [let f' : A := ?f_evar in B],
hence filling [?f_evar] with the first projection of the
proof. Since Coq instantiates the two existing evars
([?f_evar] and the current goal, which is represented by an
evar under the hood) with projections of the new evar
(which becomes the new goal)---and let us hope that Coq
devs never decide both to turn on judgmental η (currently
controlled by primitive projections) for [and], and to
prefer η-expansion of evars before dropping context
variables (we might also be in trouble if someone adds a
[Canonical Structure] for [and])---we get the desired
behavior--for now. *)
lazymatch goal with
| [ |- let f' := _ in ?B ]
=> refine (let pf := _ in (proj2 pf : let f' := proj1 pf in B))
end
Now I'm trying to fix an issue where Defined takes over 30 hours and over 10 GB of RAM, and it seems like this one refine is the issue...
@Jason Gross oh sorry I meant this branch: https://github.com/ejgallego/HoTT/tree/ocaml.4.02.3
@Jason Gross, related to unification: indeed there has been recent general chat about whether Coq users can really rely on unification. The impression is that with the current status of the heuristics it is not a good idea. I am not really qualified to talk about this given that many experts are in this chat, but coming from a Logic Programming background, indeed we rely on predictable unification; AFAICT the closest you can get in Coq is UniCoq, and even it may not be too stable yet. But one of the pitfalls of logic programming is the way unification and backtracking break modularity. In fact this is why I stopped writing Prolog and LProlog programs. If you try to update unification, it is common to have to update all your programs, and this is not what you want when writing proofs. I won't get started on backtracking....
@Emilio Jesús Gallego Arias Thanks for the pointer re HoTT
@Emilio Jesús Gallego Arias re unification: the behavior I'm relying on here is that if you try to unify let f := x in y with let f := ?e in y, where ?e is an evar (and the only one), then Coq will instantiate ?e with x rather than preemptively doing zeta-reduction and discovering that f does not occur in y and so it doesn't need to instantiate ?e.
@Jason Gross NP!! I indeed see no reason not to push the updates in https://github.com/ejgallego/HoTT/commit/b2255e1feca65b7eb01dab5484a712eac4dea33f
Why this makes Defined take forever, I don't know
(see https://coq.inria.fr/bugs/show_bug.cgi?id=5437 for more details)
@Jason Gross re unif: that looks pretty fragile to me in the presence of the other heuristics, [that could be triggered non-locally] but I'm sure experts will comment on that.
What's non-local? I rely on this only at top-level, in the case where I do refine (let pf := _ in ((_ : T2) : T3)) where T2 is of the form let f := x in y and T3 is of the form let f := ?e in y
@Emilio Jesús Gallego Arias but, yeah, I expected it to be fragile, which is why I wrote such a long comment in the source explaining what I was doing
Is there a way to add the VST git repository to the pendulum benchmark? I've no knowledge of writing opam packages...
@Pierre-Marie Pédrot The problem is that VST does not compile with Coq trunk
Why not? Last time I tried it worked...
Because we broke Flocq
We fixed Flocq, ported the fix to CompCert which embeds Flocq, but did not port the fix to VST which embeds CompCert...
I've submitted a PR to backport already : https://github.com/PrincetonUniversity/VST/pull/68
and it got merged.
oh!
So I don't know, @Matej Košík, why doesn't the bench compile VST?
AFAIK, there is no opam package.
@Pierre-Marie Pédrot what is the status of EConstr?
Waiting for a bench on HoTT
I'm expecting time gain for everybody (hopefully).
What build is it on Jenkins?
#102
101 is evaluating the removal of double hashconsing essentially, but I'd like to test it on VST as well, so it can probably be cancelled if you're in a hurry about EConstr.
Ok I'll interrupt it.
We should put only one iteration for benchmarks
All right, I thought 3 was coming from some rule of thumb.
Also, beware that when you use "benchmark-the-whole-branch", the bash script tries to guess the base branch it should compare against
Should we ensure master is merged before benching then?
No, precisely if you do that you'll run into trouble.
I advise you to double-check the base commit in all the benchmark results you got so far.
You may be comparing two unrelated branches...
Fine, I'm leaving the dirty work to you for now then.
@Maxime Dénès Should I perform a merge into EConstr anyway?
I did it recently, so it won't change much
@Pierre-Marie Pédrot Btw, why didn't we try to merge a version of EConstr that was doing the same normalization of universes as Constrs before? (not more) Was there any difficulty?
That's probably unsound now that we removed eager normalization.
That said, I would not be surprised if the code in master is deadly bugged.
It probably still is in EConstr, though a bit less.
If we don't remove nf_evar from here and there, but leave EConstr.kind there, doing only evar-related work (no univ work)
wouldn't that be sound?
Probably not anyway.
would it be less sound than without econstr?
Well, I don't think we can measure things like that.
I'd say on the same order of magnitude.
Fact is, nobody is using universe polymorphism.
I mean: more bugs, same bugs, different bugs
Except crazy people who don't care yet about another buggy unification
It will be unsound in a different way, which means breaking code to replace unsoundness by unsoundness...
+1
(in the few developments that actually use univ poly)
I still don't get where the change would come
from
Typically, if you unify some universe variable with Set / Prop, then you're going to be able to observe that.
I think I spotted an instance of this bug somewhere, or at least imagined I had seen it in the wild.
sorry, I did not make the context of my question clear
objective: merge the API change of econstr, without attempting to fix any bug in existing code, nor introducing new bugs
@Enrico Tassi I don't believe this is feasible.
well the question is: with how much code duplication?
in particular, if you don't want to duplicate all the primitives (reduction, occur functions, etc.)
Yeah obviously you can duplicate all folders starting from engine
We don't want to do that, do we?
as soon as you look for occurrences of variables in properly substituted terms, you change unification...
it is so broken!
Welcome to Coq Hell, Mr. Anderson.
« Bugs can take many forms! This is only the beginning, Mr. Vincent. »
@Pierre-Marie Pédrot please repeat what you said here in #526 about the proper datatype to use
it is an interesting and underdocumented issue
I already forgot. How can we find something in the history of the chat?
"Search Gitter" in the leftside panel
@Pierre-Marie Pédrot why is that impossible? I mean you don't port evarconv, for example (you change the API to take an econstr, then you immediately Unsafe.to_constr)
@Enrico Tassi I started to do that for real
... and...
I can show you the code, I gave up after two days of work
and like 10+ hours each day sitting at the desk
I got something that compiled and failed on the stdlib...
so which is the future of econstr?
it is this: https://ci.inria.fr/coq/view/benchmarking/job/benchmark-part-of-the-branch/21/console
@Enrico Tassi I'd favour merging and seeing if we broke things.
more precisely, we fixed the last known incompatibilities
now we'd like to fix the performance overhead
this branch includes the removal of some nf_evar, fixes to pretyping, ...
all that?
yes
It's very intricate, I agree.
well good luck
But that's what you get when you have to fix years of shitty code.
I only partially agree; IMO you did go looking for trouble ;-)
I think we really had to do this at some point.
@Maxime Dénès can probably write a whole book about the eldritch horrors of unification.ml
.
I'm not sure I'm willing to answer the discussion on funind actually...
Let them merge their code, and at some point kick funind out the repo.
@Enrico Tassi do you think it is the right strategy for funind, what PMP suggests?
If I can put a bit more argumentation on the table:
I've no opinion/knowledge on funind. To my eyes it is made of copy/paste, i.e. very bad code, but I never dug deep. IMO what @Pierre-Marie Pédrot proposes is reasonable.
I'm ok, but it means not communicating. We let people work on it while hoping that we get rid of the code...
Also, I'm not sure if this unification project @Théo Zimmermann mentions is really going to happen. The actors were @Matthieu Sozeau and Yves, and AFAIK they have very little time available.
Yes, I suppose the same fate awaits the unification of the unification algorithms.
Although, this could happen in a later release.
Yeah, for funind the situation is looking a bit better: @Matthieu Sozeau has hired an engineer to work on it. But it still requires time from some senior devs.
@Matej Košík WTF? Isn't HoTT in the default packages for the bench?
Already! Great news. Where did he find the money for it?
I've been wasting my time waiting for the bench for hours.
I don't know, I wasn't involved in the hiring process
OK, well anyways, that's always good to hear that more people are hired for working on Coq, until the Consortium starts.
Yeah, to tell the truth I feel it lacks coordination: my contract says that I should supervise the engineering effort on Coq, and I was aware of this hiring only after it took place.
But indeed it is always good to get more manpower, we badly need it.
@Maxime Dénès Too late, but if I were you I'd have waited for the completion and only checked HoTT in a separate run.
@Pierre-Marie Pédrot That's what we did. Intermediate results:
I've sent the intermediate results:
https://ci.inria.fr/coq/view/benchmarking/job/benchmark-part-of-the-branch/21/console
┌──────────────┬─────────────────────────┬─────────────────────────────────────┬───────────────────────────────────────┐
│ │ user time [s] │ CPU cycles │ CPU instructions │
│ │ │ │ │
│ package_name │ NEW OLD PDIFF │ NEW OLD PDIFF │ NEW OLD PDIFF │
├──────────────┼─────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────┤
│ coq-geocoq │ 2422.54 2421.81 +0.03 % │ 6741260473927 6743054069790 -0.03 % │ 11272153642487 11437966699056 -1.45 % │
├──────────────┼─────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────┤
│ coq-compcert │ 838.36 835.39 +0.36 % │ 2328009504172 2317808332890 +0.44 % │ 3494804886743 3550565787937 -1.57 % │
└──────────────┴─────────────────────────┴─────────────────────────────────────┴───────────────────────────────────────┘
Fixed the Jenkins set of default packages (added coq-hott at the beginning) and restarted Maxime's job with coq-hott and the rest of the packages.
Well, there were quite a few other packages in the run, weren't there?
Benchmarking was finished only for two packages:
coq-geocoq
coq-compcert
The restarted job, starting with coq-hott, will benchmark also the remaining ones: coq-hott coq-color coq-fiat-crypto coq-fiat-parsers coq-flocq coq-unimath coq-sf coq-mathcomp-ssreflect coq-iris coq-mathcomp-fingroup coq-mathcomp-finmap coq-coquelicot coq-mathcomp-algebra coq-mathcomp-solvable coq-mathcomp-field coq-mathcomp-character coq-mathcomp-odd_order
All right then. That's slow as hell for two packages...
-j1
:-(
And luckily intuitionistic-nuprl is not in there!
Both of them take a lot of time to finish. Also, each of them is compiled for two Coq commits :-(.
I know I'm nitpicking, but is it possible to cache the timing results for pairs of Coq hashes / package hashes? That way the bench would not have to recompile the packages twice...
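A minimal sketch of that caching idea, with entirely made-up names and a stand-in for the real benchmark step: key each result on the (Coq hash, package hash) pair and serve repeats from disk.

```shell
# Hypothetical sketch: cache benchmark results keyed on the pair
# (Coq commit hash, package hash), so identical pairs are never rebuilt.
set -e
cache=$(mktemp -d)
runs=0
bench() {  # stand-in for the real, hours-long compile-and-measure step
  local coq_hash=$1 pkg_hash=$2 key
  key="$cache/$coq_hash-$pkg_hash"
  if [ -f "$key" ]; then
    cat "$key"            # cache hit: reuse the stored timing
    return
  fi
  runs=$((runs + 1))      # cache miss: "measure" and store the result
  echo "timing($coq_hash,$pkg_hash)" | tee "$key"
}
bench abc123 hott42 > /dev/null   # first call: computes and stores
result=$(bench abc123 hott42)     # second call: served from the cache
echo "$result (recomputations: $runs)"
```

A real version would use the actual git commit hashes of Coq and of the benched package as the key, which is exactly what makes the "compile each package for two Coq commits" cost avoidable when one side is unchanged.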
I guess we all need a candy before going home...
Check forall x : nat, forall x : Prop, x -> x.
nat -> forall x0 : Prop, x0 -> x0
@Pierre-Marie Pédrot That makes sense.
;-)
... makes sense
LGTM. Where is the trick?
@Matej Košík Fine for me.
@Enrico Tassi I am afraid my mind was silently alpha-converting. You are right, it's lame.
... the second x is renamed into x0 (to avoid a collision with the first x) but also the first x is not displayed, because it does not occur...
@Maxime Dénès The bench did not use the right branch for mathcomp :/
@Enrico Tassi Yeah, we saw this one together some time ago. That's what I dub paranoid renaming.
@Pierre-Marie Pédrot Yeah, we're waiting for support for overlays from @Matej Košík :)
@Maxime Dénès Tomorrow I'll try to implement them.
Great! We can discuss it a bit, it requires a bit of design.
For today's job, I modified the script on pendulum to add the extra repository you have placed there. There are custom packages for coq-ssreflect.
oh ok, thanks! So that's what Jenkins's #26 is doing?
Yes.
Sounds very good, thanks.
@Pierre-Marie Pédrot What do you think of HoTT's timings?
They look OK.
The previous job worked its way as far as coq-mathcomp-ssreflect and then it crashed.
The intermediate results (from before the crash):
┌─────────────────┬─────────────────────────┬─────────────────────────────────────┬───────────────────────────────────────┐
│ │ user time [s] │ CPU cycles │ CPU instructions │
│ │ │ │ │
│ package_name │ NEW OLD PDIFF │ NEW OLD PDIFF │ NEW OLD PDIFF │
├─────────────────┼─────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────┤
│ coq-color │ 631.49 650.80 -2.97 % │ 1754004131981 1803179944513 -2.73 % │ 2205655960449 2339989943536 -5.74 % │
├─────────────────┼─────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────┤
│ coq-flocq │ 55.58 55.59 -0.02 % │ 152473845088 151365590486 +0.73 % │ 194772860066 195218732996 -0.23 % │
├─────────────────┼─────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────┤
│ coq-fiat-crypto │ 2393.69 2371.60 +0.93 % │ 6651126154562 6595511506108 +0.84 % │ 10973828256053 10998944800258 -0.23 % │
├─────────────────┼─────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────┤
│ coq-sf │ 45.96 45.47 +1.08 % │ 125554592434 124037099135 +1.22 % │ 164277938151 163715060784 +0.34 % │
├─────────────────┼─────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────┤
│ coq-unimath │ 1285.03 1258.59 +2.10 % │ 3563758045063 3492057761063 +2.05 % │ 5942466363507 5849766688064 +1.58 % │
├─────────────────┼─────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────┤
│ coq-hott │ 239.56 230.82 +3.79 % │ 646887985092 621646051095 +4.06 % │ 1095798838565 1050349361476 +4.33 % │
└─────────────────┴─────────────────────────┴─────────────────────────────────────┴───────────────────────────────────────┘
I think we're close to merging EConstr, aren't we?
I fixed the reasons why the script was crashing (it was a bug I introduced today :-( )
(but anyway, without those "overlays" it wouldn't provide more results)
I think we are getting close, yes. Still not ideal because we are above 2% overhead.
You'll need to convince everyone ;)
Come on, that's coq-hott.
You should draw a chart performance x correctness
We need an axis about program proof...
It's possible to have full performance at the expense of proving your code!
Proofs are overrated.
The good thing is that if we track all the unsafe casts, it'll become possible to use a dedicated datastructure for EConstr.t
WDYM?
For instance, some kind of imperative but persistent representation.
oh yes
didn't they use to have that in like Coq 4?
So that one does evar normalization literally only once.
No idea, I can have a look
Maybe it was not for evar or meta substitution
that'd be @Emilio Jesús Gallego Arias's sweet dream
but I've heard one day about an imperative data structure for Coq terms
don't know if it was a legend
Coq source code. A time of legend and heroes
That would make a great movie.
a horror movie most probably
@Maxime Dénès Are those special ssreflect OPAM packages needed for both commits?
nope, not for trunk
Hmmm. The job has failed again when trying to compile coq-mathcomp-ssreflect, as if it pulled the wrong version of it again....
oh no, that's a different one
We need the custom package for head but not for base. Is it what you did?
That's what I tried, but I could easily have made a mistake.
I will start tomorrow with compiling coq-mathcomp-ssreflect manually (with EConstrified Coq).
When that works, it will be obvious how to update the script.
Anyway, you can see my modification on pendulum...
Yeah you added the repo instead of replacing it...
The repo list looks all right:
50 [local] opam-coq-archive /home/jenkins/git/coq-bench/opam-coq-archive
40 [local] coq-bench /home/jenkins/git/coq-bench/opam
30 [http] coq-released https://coq.inria.fr/opam/released
20 [http] coq-extra-dev https://coq.inria.fr/opam/extra-dev
10 [local] custom-opam-repo /home/jenkins/workspace/benchmark-part-of-the-branch/26/custom_opam_repo
0 [http] default https://opam.ocaml.org
I'll start tomorrow by debugging why this did not work.
Btw, do I understand correctly that in opam-coq-archive you do not actually need all that stuff?
It doesn't work because it fetches mathcomp from coq-extra-dev.
The numbers on the left are priorities? The lower the number, the higher the priority?
I don't know, but why keep coq-extra-dev?
I adjusted it.
Can you launch a last one?
Maybe it will run over night this time.
Done. Let's see...
I saw you solved the issues, sounds very good.
Thanks. Tomorrow I can try to tackle overlays. That sounds useful. Then we may decide. Many things remain:
@Matej Košík git show 4f041384cb27f0d24fa14b272884b4b7f69cacbe kernel/nativecode.ml ?
and compile_rel env sigma univ auxdefs n =
- let decl = Context.Rel.lookup n env.env_rel_context in
- let n = Context.Rel.length env.env_rel_context - n in
- let open Context.Rel.Declaration in
- match decl with
+ let open Context.Rel in
+ let n = length env.env_rel_context - n in
+ let open Declaration in
+ match lookup n env.env_rel_context with
Shitstorm incoming!
That's what causes https://coq.inria.fr/bugs/show_bug.cgi?id=5435
So? Any player?
hint: the first let-in was there for a reason
let's make it a proof of false
I'm currently myself looking at the way we handle universe constraints in the kernel.
There are suspicious discrepancies.
I wouldn't be surprised...
Actually I'm hoping to give static invariants to it
by carefully chosing datatypes, and encoding info in GADTs
@Pierre-Marie Pédrot, what goes in env_rel_context? Do section let-ins go there?
nah
hmmm
You need to be inside a term for it to be non-empty.
I think.
yeah but fortunately I have native casts
\o/
Fortunately.
That's what you get for trusting native_compute.
Well, I'm surprised that the same kind of "refactoring" wasn't done on the VM ;)
What about my PR touching the kernel btw?
I'm pushing for the swift integration of the array mapping one!
Was there any complaint?
I'll read again
I think @silene complained about the ugliness.
Let me finish my proof of false. I put too many assertions...
So, is Coq 8.6 inconsistent?
Hmm I don't understand why my term doesn't type check
Maybe you can help me:
Definition foo (T := False) (U := T) (x : True) (X := True) := x <<: U.
Lemme see
How does the typechecker know I'm trying to make fun of him?
He's reached self-consciousness
try e.g. Definition foo (T := False) (U := T) (x : True) (X := True) := Eval native_compute in U.
In fact, when I use the cast, two type checks are triggered: the first one is nice and buggy, and succeeds, but the second compares True and False. So of course it fails.
I don't understand where this second check comes from.
backtraces to the rescue!
Yeah I looked at it already.
I get fancy assertion failures if I play a bit with the term.
We're walking on thin ice!
Yeah but those are "expected" given the bug.
But why are there two tests? And where does the typechecker see True and False without going through native_compute?
Interesting:
Definition foo (T := False) (U := T) (V := U) (x : True) (X := True) : V = False := (@eq_refl Prop U <<: @eq Prop V U).
In environment
T := False : Prop
x : True
X := True : Prop
The term "@eq_refl Prop T" has type "@eq Prop T T"
while it is expected to have type "@eq Prop T T".
I don't understand how definitions are type checked...
I also get a few stack overflows
did you make sense of the backtrace on my initial example?
Not quite, I'm launching a debugger to see what happens
the other errors you see are probably at reification time
but my example is cooked up to make the thing go through
All right.
how do we go from term_typing to fast_typeops?
I think I got it.
tell me :)
is native compilation enabled by default ?
I think not
it falls back to VM right?
Yes it is enabled by default.
sure ?
Sure. What is not enabled is the precompilation, except for the stdlib.
And tests are running, I saw the generated code.
because the native conversion fails in the cast
From what I saw, it is called twice.
The first time it succeeds, the second time it fails.
yep
same behaviour here
But I don't understand the second call.
Also, why don't we see something related to native_compute in the backtrace?
nah, that's because the trace is not propagated
What does that mean? I'm always confused when trying to debug.
Where is it not propagated?
one needs to add some CErrors.push to Typeops.check_cast
check_cast?!
yep
There is no check_cast in 8.6.
oh, I'm on trunk
but that should be the same
The hell are we doing in the stm
something is wrong
yeah I don't understand
it fails here
66 | Aind (ind1,u1), Aind (ind2,u2) ->
67 if eq_ind ind1 ind2 then convert_instances ~flex:false u1 u2 cu
68 else raise NotConvertible
the second time?
yeah of course
it's being asked to compare True
and False
obviously
but the question is who called it?
I may be blind, but I don't see what typing rule does that.
Except if somehow we said that the type of x <<: T was the type of x...
so the native compiler is asked to compare
t1: Term.constr =
let T := False in let U := T in fun _ : True => let X := True in True
t2: Term.constr =
let T := False in let U := T in fun _ : True => let X := True in U
Ah yeah
Where are the casts gone?!
no, we're checking the cast
Oh!
isn't that supposed to be the same because of your bug?
Well it depends. I thought what we were doing
Erm...
isn't that because of this implementation ?
was asking to compare True and U in the rel_context that corresponds to what you wrote.
let native_conv cv_pb sigma env t1 t2 =
if Coq_config.no_native_compiler then begin
warn_no_native_compiler ();
vm_conv cv_pb env t1 t2
end
else
let univs = Environ.universes env in
let b =
if cv_pb = CUMUL then Constr.leq_constr_univs univs t1 t2
else Constr.eq_constr_univs univs t1 t2
in
if not b then
let univs = (univs, checked_universes) in
let t1 = Term.it_mkLambda_or_LetIn t1 (Environ.rel_context env) in
let t2 = Term.it_mkLambda_or_LetIn t2 (Environ.rel_context env) in
let _ = native_conv_gen cv_pb sigma env univs t1 t2 in ()
hum
we abstract over the context
I'm too paranoid.
so, no soundness bug?
But then who makes the first call?
the environment is wrong that said
The first call produces wrong code, I checked.
Nah, in trunk there is only one call which is NativeCast
Yeah but probably there is one in the pretyper.
I got it now.
?
Hum not quite.
In 8.6, I can swear the compiler is called at least twice.
I see the generated code.
I can check if you want
The first time it is incorrect, the second time it is correct.
clearly the code has changed between trunk and 8.6
let me try in trunk
still two calls
probably with different environments
can't reproduce on trunk
if you do -debug
you can look for .native files in /tmp
I'm in the debugger, this does not work
nah, in 8.6 only one native cast as well
I'm using bytecode so I'm not sure what is supposed to happen
still, isn't it because you abstract over the term that it does not break loose?
probably
can you try in native?
so see what you get in /tmp
lemme try
I'd like to understand why there are two calls, because maybe we are inefficient
independently of the soundness
What should I observe?
I can observe the backtrace if you want
each conversion test leaves a xxxxx.native file in /tmp
my example creates at least two on my machine
you can open them, they are ml source files
what I see is that the first is not abstracted over the context, but the second is
I'll tell you right now
I only get one call to nativeconv
and you see only one file?
I get two
WTF did I mess up?
no wait
I get none actually
there is indeed a compiled cmxs / o whatever
if you don't put -debug, they don't stay
but no native
ah ok
Ok
in debug mode, I do get two calls to native compilation
and thus two files !
what?!?!
isn't that some stm story?
only in debug mode?
about backtrack blahblah
yep
but it's not even the same environment
who knows
something bad is going on
First backtrace:
======
Raised by primitive operation at file "kernel/nativeconv.ml", line 158, characters 11-41
Called from file "kernel/fast_typeops.ml", line 233, characters 6-61
Called from file "kernel/fast_typeops.ml", line 396, characters 14-40
Called from file "kernel/fast_typeops.ml", line 390, characters 16-31
Called from file "kernel/fast_typeops.ml", line 376, characters 16-31
Called from file "kernel/fast_typeops.ml", line 390, characters 16-31
Called from file "kernel/fast_typeops.ml", line 390, characters 16-31
Called from file "kernel/fast_typeops.ml", line 449, characters 10-28
Called from file "kernel/fast_typeops.ml" (inlined), line 456, characters 19-28
Called from file "kernel/term_typing.ml", line 250, characters 14-28
Called from file "kernel/term_typing.ml" (inlined), line 414, characters 4-50
Called from file "kernel/safe_typing.ml", line 525, characters 8-68
Called from file "library/global.ml", line 68, characters 16-31
Called from file "library/declare.ml", line 155, characters 26-65
Called from file "library/lib.ml", line 215, characters 2-26
Called from file "library/declare.ml", line 215, characters 23-36
Called from file "library/declare.ml", line 269, characters 11-41
Called from file "toplevel/command.ml", line 158, characters 11-77
Called from file "toplevel/command.ml", line 190, characters 4-54
Called from file "toplevel/command.ml", line 218, characters 12-145
Called from file "lib/flags.ml", line 11, characters 15-18
Called from file "lib/flags.ml" (inlined), line 150, characters 19-41
Called from file "toplevel/vernacentries.ml", line 2235, characters 17-69
Called from file "toplevel/vernacentries.ml", line 2232, characters 10-389
Called from file "stm/stm.ml", line 122, characters 15-71
Called from file "stm/stm.ml", line 2143, characters 10-28
Called from file "stm/stm.ml", line 885, characters 6-10
Called from file "stm/stm.ml", line 2258, characters 4-100
Called from file "stm/stm.ml", line 2291, characters 4-48
Called from file "stm/stm.ml", line 2301, characters 2-35
Called from file "stm/stm.ml", line 2831, characters 8-30
Called from file "toplevel/vernac.ml", line 203, characters 6-16
Called from file "lib/flags.ml", line 11, characters 15-18
Called from file "lib/flags.ml" (inlined), line 150, characters 19-41
Called from file "toplevel/vernac.ml", line 221, characters 6-87
Called from file "toplevel/vernac.ml", line 311, characters 14-48
Called from file "toplevel/vernac.ml", line 348, characters 2-13
Called from file "list.ml", line 77, characters 12-15
Called from file "toplevel/coqtop.ml", line 640, characters 6-22
Called from file "toplevel/coqtop.ml", line 664, characters 11-59
Called from file "/tmp/coqmain5bd0d9.ml", line 2, characters 17-31
======
argh no
that's the backtrace of the error sorry
but I do get two files
strace to the rescue
so maybe that is independent from the debug mode, right?
I have a feeling I screwed up something myself here ^^
Yeah, right, I get 2 cmxs even without debug
but I have no idea what
I can look at who opens a file
well
it is native_conv
so the only missing bit is to find the two calls
no, I'd see the other trace
at least it's not Nativeconv.native_conv
maybe native_conv_gen, let me check
native_conv_gen
?
yeah
it would explain why it's not abstracted...
this part is terrible, I'm ashamed
indeed
"used outside of the kernel"
says the comment :)
Here are the backtraces
Raised by primitive operation at file "kernel/nativeconv.ml", line 133, characters 11-41
Called from file "pretyping/reductionops.ml", line 1297, characters 3-73
Called from file "pretyping/pretyping.ml", line 1024, characters 21-83
Called from file "pretyping/pretyping.ml", line 1102, characters 8-72
Called from file "pretyping/pretyping.ml", line 1174, characters 17-77
Called from file "interp/constrintern.ml", line 2010, characters 4-52
Called from file "toplevel/command.ml", line 103, characters 16-64
Called from file "toplevel/command.ml", line 198, characters 4-56
Called from file "lib/flags.ml", line 11, characters 15-18
Called from file "lib/flags.ml" (inlined), line 150, characters 19-41
Called from file "toplevel/vernacentries.ml", line 2235, characters 17-69
Called from file "toplevel/vernacentries.ml", line 2232, characters 10-389
Called from file "stm/stm.ml", line 122, characters 15-71
Called from file "stm/stm.ml", line 2143, characters 10-28
Called from file "stm/stm.ml", line 885, characters 6-10
Called from file "stm/stm.ml", line 2258, characters 4-100
Called from file "stm/stm.ml", line 2291, characters 4-48
Called from file "stm/stm.ml", line 2301, characters 2-35
Called from file "stm/stm.ml", line 2831, characters 8-30
Called from file "toplevel/vernac.ml", line 203, characters 6-16
Called from file "lib/flags.ml", line 11, characters 15-18
Called from file "lib/flags.ml" (inlined), line 150, characters 19-41
Called from file "toplevel/vernac.ml", line 221, characters 6-87
Called from file "toplevel/vernac.ml", line 311, characters 14-48
Called from file "toplevel/vernac.ml", line 348, characters 2-13
Called from file "list.ml", line 77, characters 12-15
Called from file "toplevel/coqtop.ml", line 640, characters 6-22
Called from file "toplevel/coqtop.ml", line 664, characters 11-59
Called from file "/tmp/coqmain79e4c9.ml", line 2, characters 17-31
Then
Raised by primitive operation at file "kernel/nativeconv.ml", line 133, characters 11-41
Called from file "kernel/nativeconv.ml", line 175, characters 12-55
Called from file "kernel/fast_typeops.ml", line 233, characters 6-61
Called from file "kernel/fast_typeops.ml", line 396, characters 14-40
Called from file "kernel/fast_typeops.ml", line 390, characters 16-31
Called from file "kernel/fast_typeops.ml", line 376, characters 16-31
Called from file "kernel/fast_typeops.ml", line 390, characters 16-31
Called from file "kernel/fast_typeops.ml", line 390, characters 16-31
Called from file "kernel/fast_typeops.ml", line 449, characters 10-28
Called from file "kernel/fast_typeops.ml" (inlined), line 456, characters 19-28
Called from file "kernel/term_typing.ml", line 250, characters 14-28
Called from file "kernel/term_typing.ml" (inlined), line 414, characters 4-50
Called from file "kernel/safe_typing.ml", line 525, characters 8-68
Called from file "library/global.ml", line 68, characters 16-31
Called from file "library/declare.ml", line 155, characters 26-65
Called from file "library/lib.ml", line 215, characters 2-26
Called from file "library/declare.ml", line 215, characters 23-36
Called from file "library/declare.ml", line 269, characters 11-41
Called from file "toplevel/command.ml", line 158, characters 11-77
Called from file "toplevel/command.ml", line 190, characters 4-54
Called from file "toplevel/command.ml", line 218, characters 12-145
Called from file "lib/flags.ml", line 11, characters 15-18
Called from file "lib/flags.ml" (inlined), line 150, characters 19-41
Called from file "toplevel/vernacentries.ml", line 2235, characters 17-69
Called from file "toplevel/vernacentries.ml", line 2232, characters 10-389
Called from file "stm/stm.ml", line 122, characters 15-71
Called from file "stm/stm.ml", line 2143, characters 10-28
Called from file "stm/stm.ml", line 885, characters 6-10
Called from file "stm/stm.ml", line 2258, characters 4-100
Called from file "stm/stm.ml", line 2291, characters 4-48
Called from file "stm/stm.ml", line 2301, characters 2-35
Called from file "stm/stm.ml", line 2831, characters 8-30
Called from file "toplevel/vernac.ml", line 203, characters 6-16
Called from file "lib/flags.ml", line 11, characters 15-18
Called from file "lib/flags.ml" (inlined), line 150, characters 19-41
Called from file "toplevel/vernac.ml", line 221, characters 6-87
Called from file "toplevel/vernac.ml", line 311, characters 14-48
Called from file "toplevel/vernac.ml", line 348, characters 2-13
Called from file "list.ml", line 77, characters 12-15
Called from file "toplevel/coqtop.ml", line 640, characters 6-22
Called from file "toplevel/coqtop.ml", line 664, characters 11-59
Called from file "/tmp/coqmain79e4c9.ml", line 2, characters 17-31
yeah sorry, I understand now
so reductionops is calling it
there is an infer_conv
for native
to infer univ constraints and so on
ah!
I had forgotten about that
before universes there was only one conv function
and one for normalizing
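To illustrate the distinction being discussed, here is a toy model (made-up types and names, loosely modelled on the conv / infer_conv split, not Coq's actual signatures): the pre-universe world had one boolean conversion test, while the universe-polymorphic world also needs a variant that returns the constraints it had to assume.

```ocaml
(* Toy model, hypothetical names: plain conversion vs. constraint-
   inferring conversion. *)
type constr = Sort of int | App of constr * constr

(* the old entry point: a plain yes/no test *)
let rec conv a b = match a, b with
  | Sort i, Sort j -> i = j
  | App (f, x), App (g, y) -> conv f g && conv x y
  | _, _ -> false

(* the variant the pretyper needs: succeed whenever some universe
   constraints would make the terms convertible, and return them *)
let rec infer_conv a b = match a, b with
  | Sort i, Sort j -> Some (if i = j then [] else [(i, j)])
  | App (f, x), App (g, y) ->
    (match infer_conv f g, infer_conv x y with
     | Some c1, Some c2 -> Some (c1 @ c2)
     | _, _ -> None)
  | _, _ -> None

let () =
  assert (conv (Sort 0) (Sort 0));
  assert (not (conv (Sort 0) (Sort 1)));
  assert (infer_conv (Sort 0) (Sort 1) = Some [(0, 1)]);
  print_endline "ok"
```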
so we did learn one thing: the kernel function abstracts over the context, but the pretyper does not
yep !
once again, doubling everything saves the day
it is probably more efficient to do it
but I was scared to remove it from the kernel
well in that case the kernel was right
but just in case you never know
I mean more efficient not to do it
yeah
note that the passed environment is still wrong
really?
I don't think there is a way to work around that
yes, if you abstract over it
you should clear the rel context
oh yes
I don't think you can exploit that but still
but it's unused then
I agree it's not pretty
I'm always afraid about universe constraints
also we compile a lot of useless stuff
couldn't you get a leak by duplicating the relcontext?
I have no idea.
speaking about which, did you have a look at malfunction in the end?
not that I remember
did you talk to me about it? :)
I forget things these days
yep I did !
that's just an API over dlambda
probably better than spitting Obj-ified ML code
I see, that sounds nice.
since we're in OCaml 4.02...
But how good is it to depend on it?
well, that's a very small wrapper
we could probably integrate it ourselves, but the advantage is that we offload compatibility onto a package
I'm not sure about how stable dlambda is
Hum
"Malfunction is released under the same terms as OCaml."
the subset you use is small
well, we could recode it
oh but it's LGPL now?
yeah, how legally clear is native_compute BTW?
it is very clear!
much clearer than EConstr
!
LGPL 2.1
what?
The Great Proud University of Ljubljana
has now shares in Coq
BEHOLD
First, American first lady
now: Coq
now if you mean what happens if a plane crashes because of a wrong de Bruijn index in native_compute
I'm not sure.
« it's as usual: the government will deny having had any knowledge of it... »
LGPL, dude
that's like we're gangsta
no liability whatsoever
still, I think it would be worthwhile to plug directly into dlambda
I'll give Matej's name :)
what we should do is remove all this
and implement Coqonut!
?
well, you could probably remove coq altogether
and use LEAN
that's C++
safest of the safest
look ma, no garbage collection
they don't need to, they don't allocate arrays like crazy :D
I'm waiting for the implementation of Coqonut
Tomorrow by 2pm, is that possible?
OP delivered EConstr
Tomorrow is the day of insensitivity
if I didn't have to fix all this mess, Coqonut would be already shipped :)
do we have some mathcomp results?
Seems to be a failure
oh but of course
we didn't port it to EInstance
I mean you didn't do it!
Too busy finding bugs in universes
and rewriting the kernel
and universe unification
and nsatz
Ok, so let's do that tomorrow.
fine
I think I'll go to bed
Meanwhile, enjoy the night, and thanks for your help!
you're welcome
I'm always interested in finding soundness issues
did you know we got cited in SIGBOVIK last year for that?
lol really?
yep
http://sigbovik.org/2016/proceedings.pdf
UNPRL: NUPRL PROOF RE-EVALUATION LOGIC
yeah I read
not sure it is the best way to become famous :)
That's international fame come on
probably my best citation up to now
The ultimate SIGBOVIK 1 H-index
I should probably get some sleep
yeah the only other could be a medal of honor for heroic battles on the Coq field
and some cheese
in an indistinct order
nobody cares
you can die for their freedom
this is AMERICA
yep, see you tomorrow for more bugs and more fun
see you then
using opam to do benchmarking is like using NUPRL to make foie gras
Hey, @Emilio Jesús Gallego Arias, if I point you at a branch or two in a development making heavy use of universes and rewriting in type, would you be {interested in,willing to} add it to the bench?
@Matej Košík @Enrico Tassi I'm not sure how to do, and you probably know better so I'm asking directly: could any of you write an OPAM package for the git VST ? I'd like to bench it on pendulum as well. Thanks.
https://github.com/PrincetonUniversity/VST
Theoretically HEAD compiles both with 8.6 and trunk.
@Pierre-Marie Pédrot Ok. I am on it.
@Pierre-Marie Pédrot I've created a coq-vst package and added it to the set of benchmarked packages. It is safe to start a benchmarking job. Right now I am trying to confirm that coq-vst can be installed with coq.8.6.dev and coq.dev. That will take some time.
@Matej Košík Thanks a lot!
@Pierre-Marie Pédrot You are welcome.
@Pierre-Marie Pédrot I launched a new EConstr bench after merging your mathcomp PR.
https://ci.inria.fr/coq/view/benchmarking/job/benchmark-part-of-the-branch/34/console
@Jason Gross sure, let us know where the code is
@Pierre-Marie Pédrot @Matthieu Sozeau have you seen the message of Beta on coqdev? Is it expected to have univ inconsistencies with -type-in-type?
I think it is, but I'm not a specialist of this code, @Matthieu Sozeau is probably better informed.
@Maxime Dénès The job you started has failed because there was a bug in the benchmarking script. I've fixed the bug and restarted the script, but it will still fail because the coq-mathcomp-ssreflect compilation (for the _old_ commit) fails. For the _new_ commit all goes well.
Yeah I'm working on it. We need to merge trunk into econstr.
Can I play with pendulum or should we bench EConstr first?
@Pierre-Marie Pédrot I am not doing anything there, right now...
@Maxime Dénès ?
So, any clue folks on where the module system is spending 1GiB to compile https://github.com/coq/coq/blob/trunk/theories/Structures/OrdersEx.v (see https://coq.inria.fr/bugs/show_bug.cgi?id=5270)?
Well, that's not a huge amount of memory, is it?
Just wondering, as it seems to be the file that uses most memory of the whole stdlib
and it is relatively compact
Modules are known to be costly. Even the vo files are big.
Not mentioning the vio.
yeah I see
they have some non-linear growth factor it seems
You can use votour on the vio if you want to have an idea
yes, everything is duplicated when applying a module substitution
AFAIR, the problem stems from the field Declarations.mod_type.
We ought to keep it abstract, but we expand it unconditionally, and bam.
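A toy illustration of that blowup (made-up types, nothing from the actual Declarations module): sharing an abstract name costs one node per use, while unconditional expansion copies the whole signature each time.

```ocaml
(* Toy model: an expanded module type inlines a copy of the full
   signature at every use; an abstract reference shares one name.
   "size" just counts nodes. *)
type mod_type =
  | Named of string               (* abstract reference, shared *)
  | Sig of (string * int) list    (* fully expanded signature *)

let size = function
  | Named _ -> 1
  | Sig fields -> 1 + List.length fields

let big_sig = Sig (List.init 1000 (fun i -> (string_of_int i, i)))

(* n uses of the same module type, expanded vs. kept abstract *)
let expanded n = List.init n (fun _ -> big_sig)
let abstract n = List.init n (fun _ -> Named "BigSig")

let total l = List.fold_left (fun a m -> a + size m) 0 l

let () =
  assert (total (expanded 10) = 10 * 1001);
  assert (total (abstract 10) = 10);
  print_endline "ok"
```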
maybe it is time to increase the density parameter of Coq?
to stop it expanding I mean
I guess that making mod_type abstract is science fiction material, right?
That's worse, I'm not sure why it would make sense.
How would making it abstract solve the issue?
Oh no clue myself, you had mentioned it
ah yeah, I misunderstood your statement
Abstract as in Coq, not as in OCaml
Well, I have no idea of the feasibility
And the worst thing is that probably nobody has.
I see, thanks
Because you know, modules.
Yeah, modules are really opaque to me
the way the obj system is organized makes it hard to delve into
Obj system is not that bad.
You have to understand when the methods are called but that's about it.
Modules are really intricate, OTOH.
libobject is quite inconvenient to use in my scenarios, where I really want to inspect Coq's state live
like let's say, from SerAPI
That's the wonder of existential types.
You could add a marshalling method if you like.
That's not very hard actually.
all the subst_object stuff
yeah it is not hard
but inconvenient
I am not a fan of such "generic" stuff
Coq needs to be extensible, if you want to write e.g. plugins.
extensible != generic stuff
What do you call generic?
libobject contains very different objects, with very different APIs
You don't like OOP?
also the way classification works
totally hinders following the data flow scheme
it is a matter of responsibilities
the module system should take care of the disposal
explicitly
when closing modules etc
I agree that it would be better to have inversion of control
instead this is delegated to the object, so you have no clue what is going on
indeed
you could of course provide a proper extension API for the environment, etc...
If you start writing a purely functional interface to libobject, that should become possible
I'm actually not sure this is that hard in the end.
We can always write imperative wrappers in the transition phase.
yeah it was just a comment, but at least for me, the current control flow makes it one order of magnitude harder to understand the code path of closing a module
it also makes it very hard to make, say, a new system of notation namespaces interact properly with the current system
but anyways it was just a comment
how would such interface look like in your opinion PM?
I'd say that there is a blobby State.t where you can add your own stuff.
and all the unit types of libobject are turned into some State.t -> State.t, maybe even returning a new object.
or
more precisely
a type quantified over the parts of the state that can be accessed by the function.
we can probably do that with GADTs.
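A rough sketch of what such a blobby State.t could look like (all names invented; a real version would want GADTs or extensible variants rather than Obj, as suggested above):

```ocaml
(* Sketch: each component declares a typed field in a heterogeneous
   map, and libobject-style effects become pure State.t -> State.t
   transformers.  Obj is used here only to keep the toy short. *)
module State : sig
  type t
  type 'a field
  val empty : t
  val declare : 'a -> 'a field        (* register a field with a default *)
  val get : 'a field -> t -> 'a
  val set : 'a field -> 'a -> t -> t
end = struct
  type t = (int * Obj.t) list
  type 'a field = { key : int; default : 'a }
  let next = ref 0
  let empty = []
  let declare default = incr next; { key = !next; default }
  let get f st =
    match List.assoc_opt f.key st with
    | Some o -> Obj.obj o
    | None -> f.default
  let set f v st = (f.key, Obj.repr v) :: List.remove_assoc f.key st
end

(* a "cache" function is then just a pure transformer over the state *)
let counter = State.declare 0
let bump : State.t -> State.t =
  fun st -> State.set counter (State.get counter st + 1) st

let () =
  let st = bump (bump State.empty) in
  assert (State.get counter st = 2);
  print_endline "ok"
```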
should that be merged with summary?
I'd say so
Well there would not be a real need of summary afterwards
that's something worthwhile to pursue
yep
the key question is what the structure of State.t should be
indeed you have proof state, section state
somehow you would like to expose the section organization
Something we have problems handling in the stm indeed is sectioning and modules
There are fishy things in the kernel about both sections and modules anyway
Is @Maxime Dénès around? I'd like to have a few PRs merged.
That includes EConstr :bomb:
:shipit:
@Pierre-Marie Pédrot I'm not far
can you make the list?
#434 : no real complaints from people, @Enrico Tassi lamenting we need to do the compiler job
#400 : @Matthieu Sozeau happy but not @silene
#508 : seemingly uncontroversial
#519 : @Enrico Tassi complaining, but off-topic I think
and then a few more which need more testing and discussion.
ah and obviously :
#379 : the one PR to rule them all and in insensitivity bind them
oh, related to Pierre-Marie PRs, I've got my flambda patch ready, will submit tomorrow; how should we integrate it into benchmarking? @Matej Košík , are configure options implemented?
Also, is there a possibility that #434 / #400 may adversely affect flambda?
#434 is independent from Flambda AFAICT.
I don't think Flambda will affect adversely #400. At worst it would subsume the patch.
Thankfully we can bench!
well yeah maybe we should test
in these cases I'd love to have CPP{,O} support in OCaml/Coq
so we could define #OPTIM, at least for development releases, so your patch could be merged and I could still pass something like -D DISABLE_OPTIM to Coq to test how well flambda is going
We can do that with a PPX
and by the miracle of OCaml 4.02 we do have ppx around
ppx seems a bit overkill; I wish we didn't have to roll our own system but could reuse cpp/cppo
on the other hand, by Pierre-Marie's law, if it can be abused, it will be abused
so I fear adding #ifdefs to Coq
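For the record, the cppo version of the idea would look roughly like this (the OPTIM/DISABLE_OPTIM variables and smart_map are made-up names, purely for illustration):

```ocaml
(* hypothetical cppo-preprocessed source, built with e.g.
     cppo -D DISABLE_OPTIM foo.ml -o foo.pp.ml *)
#ifdef DISABLE_OPTIM
let smart_map = List.map            (* plain baseline, lets flambda work *)
#else
let smart_map f l =                 (* hand-optimized, preserves sharing *)
  let l' = List.map f l in
  if List.for_all2 (==) l l' then l else l'
#endif
```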
@Emilio Jesús Gallego Arias No.
@Matej Košík I don't get why the compilation of math-comp on pendulum fails : https://ci.inria.fr/coq/job/benchmark-part-of-the-branch/41/consoleFull.
It seems it uses a branch that has been ported to EConstr, but I don't know who decides that in the first place.
@Pierre-Marie Pédrot hum, what is happening again?
Where?
on pendulum
Ah! I don't know.
This machine seems a bit capricious if you ask me
The one I launched before leaving was: https://ci.inria.fr/coq/job/benchmark-part-of-the-branch/37/console
I don't understand why it failed
That one failed because math-comp was not ported to the change of GLetIn in master it seems
Wait this did not get merged yet?
I thought I had merged it precisely before launching this build.
Let me try with a more specific commit sha1.
Oh but I see!!
I didn't merge ssr's fix to my econstr branch. I'm stupid.
Still, in my case it is picking the math-comp:econstr branch, and this is not the right one.
I want math-comp:master, and didn't ask for anything
Yeah the script has been hijacked for that.
Grrr
Because overlays are being implemented by @Matej Košík.
Did you see #539?
Like I have time to read others' PRs
I'm writing them sir
Oh, that's just like 6 minutes ago
I was wondering why I hadn't seen it
Yeah, sorry, it's unrelated and I wasn't expecting you to have read it already.
But we surely need to merge EConstr
before this :)
I'm hoping for the removal of unification though
removal of unification = removal of apply
yeah, nice equation
I'm in, where do I have to sign?
so removal of unification means new tactics
at least for apply
now the question is, why reimplement one that is different from ssr's apply?
I don't care, just want unification.ml gone
do you happen to know how to time HoTT?
I'm using the timing-html target, but I get a too detailed result
I think you care but you don't know: shouldn't we take the easiest path toward removal of unification.ml?
sorry, I don't know
Any path is good
What about my PRs?
I'm going to harass you until I get at least the two least controversial ones merged
(this does not include EConstr)
ok, here is one thing I need to understand
is there a chance that they are subsumed by flambda?
I'd say only #400 potentially
the array mapping is untouched by Flambda AFAICT
and the others are not micro optimizations
they're real algorithmic reworking
ok, let me review briefly and merge #434 now
A few of the PRs can be considered bugfixes for 8.6
ah ah!
which one for instance?
#537
A part of #532
(which I can extract)
Oh, I missed that one because it was not in your list.
#519
Ok, I'll merge #434 in trunk first.
Probably not all of #519 because it changes the representation of summary
yay!
For #519, @Enrico Tassi still hasn't cleared his request for changes, I'll ping him tomorrow.
BTW, we can make a political coup if you're afraid of HoTT slowness in EConstr
WDYM concretely?
There is a huge portion of the time spent in hashconsing of universes
We can deactivate it
lol
and trick them into believing EConstr is faster
I'd rather see a fully independent bench with hashconsing of universes disabled to see if it is faster
@Jason Gross you didn't hear anything
That's a one-line change
I can submit a PR
speaking of benches, this one seems to be going through: https://ci.inria.fr/coq/view/benchmarking/job/benchmark-part-of-the-branch/43/console
So I'll leave it here for today and finish reviewing your optim PRs tomorrow, pinging people when necessary
Fine with me.
@Maxime Dénès @Pierre-Marie Pédrot Nothing runs on pendulum now so I am going to take advantage of that and test overlays.
@Matej Košík sounds good!
Ok. Overlays work. coq-vst benchmarking is being tested (but seems to work). Today I'll try to fix this.
@Maxime Dénès On pendulum, I've copied the original coq-bench (with your modified OPAM archive with your custom stuff) to coq-bench.bak. You'll find it there.
@Matej Košík ok, thanks!
So indeed, what is the plan to integrate coq-bench and travis, @Maxime Dénès @Matej Košík ?
@Pierre-Marie Pédrot I'm launching a line-by-line bench of integral_char
@Emilio Jesús Gallego Arias Maybe the best is to discuss it next week when you are here?
Oh sure, it'd be great to converge
@Maxime Dénès I've commented the EConstr PR, but I think I found a major source of slowness.
@Maxime Dénès Could you launch a pendulum bench with the patch I just pushed?
Sure!
Hey folks, how would a patch providing an option to disable VM reduction (in proofs/redexpr.ml) be received?
Isn't there already such a thing? It rings a bell.
only for native compute
AFAICT
my motivation is that in jscoq I have to disable it manually, otherwise many users can't use ring, etc... however I would really like it to be a flag as losing vm_compute is a pain when compiling the contribs that jscoq ships
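For reference, such a toggle would plausibly be a boolean flag consulted before dispatching to the VM normalizer. A sketch against Coq's 8.6-era Goptions API (the option name and the fallback wiring are guesses, not a tested patch; it only compiles inside Coq's source tree):

```ocaml
(* sketch only: names and signatures approximate Coq 8.6 internals *)
let vm_reduction = ref true

let _ =
  Goptions.declare_bool_option
    { Goptions.optsync = true;
      optdepr  = false;
      optname  = "virtual machine reduction";
      optkey   = ["VM"; "Reduction"];
      optread  = (fun () -> !vm_reduction);
      optwrite = (fun b -> vm_reduction := b) }

(* then, where CbvVm is interpreted, fall back when disabled *)
let cbv_vm env sigma c t =
  if !vm_reduction then Vnorm.cbv_vm env c t
  else Tacred.compute env sigma c
```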
Well, that looks useful.
yeah my original plan was to port