Hey @Pierre-Marie Pédrot, you said that coqbot acted on too many PRs today. I suppose you were the one to trigger it and that you triggered it only once. But did it actually mark more than ten PRs as stale? Or did it just say that it would close many, many PRs (you didn't put any limit on the latter)?
I did not trigger it myself in any case.
but I did see a lot of notifications on modified PRs
Mechanically, this couldn't have been more than 20 (10 stale labels + 10 closures), but I think it was more in the ~30 range, and most of them were not closures anyway.
Then the question is who triggered it (since it wasn't me and the PR to automatically trigger it every day hasn't been merged yet).
If someone triggers it several times, then there is nothing that prevents it from marking more than 10 PRs as stale.
I saw 50 PRs updated
Indeed, the logs show that someone triggered it twice in a row today.
OK, so the reason some PRs were not closed is that we hit a secondary rate limit: https://docs.github.com/en/rest/overview/resources-in-the-rest-api#secondary-rate-limits
The solution seems to be to implement a throttle for closing PRs as well, and to lower the limit on how many PRs coqbot can touch in a single triggering.
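A minimal sketch of such a throttle (Python for illustration only; coqbot is not written in Python, and the function name, cap, and delay are assumptions, not coqbot's actual values):

```python
import time

def run_throttled(actions, max_actions=10, delay_seconds=2.0):
    """Run at most max_actions callables, pausing between calls so that
    a burst of API requests stays under GitHub's secondary rate limit
    (the cap and delay values here are illustrative)."""
    results = []
    for action in actions[:max_actions]:
        results.append(action())
        time.sleep(delay_seconds)
    return results
```

With a cap like this applied to both stale-marking and closing, a single trigger can never touch more than `max_actions` PRs, no matter how many are eligible.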
And if needed, we could add a secret to be able to trigger the bot.
How do you trigger it? I have no clue; I hope it cannot be done by mistake.
To trigger it, you need to make a specific curl request (see the PR adding a GitHub Action to trigger it every day).
I'd say it is impossible to do by mistake.
Who's been triggering it on a daily basis these days? I've not. Is that you @Théo Zimmermann ?
Yes, but only on days when I was not on vacation, i.e. not in the last few days.
Maybe Google or some bot accessed the URL while crawling?
You can't trigger it with a simple GET request; you need a POST request with coq:coq as the body.
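For reference, the trigger described above would be exercised with curl; here is a minimal Python sketch of the same request (the endpoint URL is hypothetical, since the real one is not shown in this thread), building the request object without actually sending it:

```python
import urllib.request

# Hypothetical endpoint: the actual coqbot URL is not given in this thread.
URL = "https://coqbot.example.org/stale"

# Build a POST request with "coq:coq" as the raw body, as described above.
# A plain GET to the same URL would not trigger the bot.
req = urllib.request.Request(URL, data=b"coq:coq", method="POST")

print(req.get_method())   # POST, because we set the method explicitly
print(req.data.decode())  # coq:coq
```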
Right.
The plot thickens.
You obviously don't have the IP of the client in the logs either?
Maybe some three-letter agency that thought that Coq was critical for their plans and things were not going as fast as possible...
all your PRs are mine
Pierre-Marie Pédrot said:
You obviously don't have the IP of the client in the logs either?
Actually, I do. The request came from Mougins, Alpes-Maritimes. I guess that's Inria Sophia then.
Also, it started with a GET request (something that I had overlooked yesterday), which was therefore ill-formed.
It was followed by a request to /favicon, which is a 404.
And only then a well-formed POST request.
And contrary to what I said earlier, there was only one POST request, so this doesn't explain why it marked more than 10 PRs as stale.
So what really happened is that the bot was triggered only once (which is OK then), but this had the following consequence:
Fixed (limit is now at 6 new stale PRs and 4 closed PRs).
Théo Zimmermann has marked this topic as resolved.
Mougins could be me or Laurence. But are you saying this is a bug and the log entry you mentioned was something else?
I could have clicked on any GH button by mistake, but I did not do a curl request, for sure.
So, if there is a silly request from my IP address, there must be a link to it from GH. (I did re-run a few CI jobs, for example, using the bot links.)
Enrico Tassi said:
Mugins could be me or Laurence. But are you saying this is a bug and the log entry you mentioned was somehting else?
I can give you the exact IP address if you want to check if it's yours.
Who is Laurence?
Laurence Rideau, who is not running curl by hand either.
Last updated: May 28 2023 at 18:29 UTC