Originally we were dropping excess events inside the job queue, but that
meant all of the events had to be passed into the job queue first, which
can cause problems (for example, very large job payloads for pages with
many lint errors).
So drop them in the API module instead. The only other place we construct
RecordLintJob is when an article has been deleted, and those jobs carry
no errors since all of the page's errors are being deleted.
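
A minimal sketch of the idea, done in the API module before the job is
constructed (the cap, the 'type' field, and the variable names are
assumptions for illustration, not the extension's actual code):

    $counts = [];
    $kept = [];
    foreach ( $errors as $error ) {
        $cat = $error['type'];
        $counts[$cat] = ( $counts[$cat] ?? 0 ) + 1;
        // Assumed per-category cap; the real limit lives in the extension.
        if ( $counts[$cat] <= 20 ) {
            $kept[] = $error;
        }
    }
    // Only the surviving events are ever handed to RecordLintJob.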
Bug: T202179
Change-Id: I61940280e0dfb99398d9f047d0e66007d91a0241

Linter calls the deprecated ApiBase::dieUsage() function in
ApiRecordLint.php; replace it with dieWithError().
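
Roughly, the change looks like this (the message text and i18n key are
placeholders, not necessarily the ones Linter uses):

    // Before: dieUsage() is deprecated.
    $this->dieUsage( 'Invalid data', 'invalid-data' );

    // After: dieWithError() takes a message key (or Message spec) plus
    // an error code.
    $this->dieWithError( 'apierror-linter-invalid-data', 'invalid-data' );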
Bug: T181758
Change-Id: I0cbf784f591b86b206b032fccbc0e32564a3e9e8

Move the location to two separate columns in the database: linter_start
and linter_end. This allows the database to enforce the uniqueness of
those fields, instead of just relying on the PHP code to do so, which
could be bypassed since we have multiple servers and concurrent
processes.
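
For example, the write can then lean on a unique index instead of
PHP-side checks. This is a sketch under assumed names; linter_start and
linter_end are from this change, the other columns and the exact row
shape are illustrative:

    $dbw->insert(
        'linter',
        [
            'linter_page' => $pageId,
            'linter_cat' => $catId,
            'linter_start' => $start,
            'linter_end' => $end,
        ],
        __METHOD__,
        // With a unique index over these columns, duplicate inserts
        // become no-ops instead of creating duplicate rows.
        [ 'IGNORE' ]
    );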
Change-Id: I3e67ce1b7cb3c93866a388ec3248af4cff2a81e0

Since this module is internal it doesn't really matter, but this change
cleans up the output of api.php?modules=record-lint.
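
For reference, an API module declares itself internal by overriding
ApiBase::isInternal(), which makes the generated help flag it as
internal/unstable (whether this commit touches that flag or only the
help messages is not shown here):

    class ApiRecordLint extends ApiBase {
        /** Flag the module as internal in the generated help output. */
        public function isInternal() {
            return true;
        }
    }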
Bug: T151285
Change-Id: I859a2780d6ed1918cc81101e9e2c2cd348a2390a

Instead of having a complex auto-increment category manager with
additional caching, just hardcode the IDs (currently 6) for each
category, which allows us to simplify a lot of code.
If Parsoid sends a lint error in a category that we don't know about, it
is silently dropped.
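
A sketch of the hardcoded mapping inside the category manager class (the
category names are Linter's real categories, but the exact IDs and
lookup behavior shown here are assumptions):

    private const CATEGORY_IDS = [
        'fostered' => 1,
        'obsolete-tag' => 2,
        'bogus-image-options' => 3,
        'missing-end-tag' => 4,
        'stripped-tag' => 5,
        'self-closed-tag' => 6,
    ];

    public function getCategoryId( $name ) {
        // Unknown categories map to null; the caller drops the error.
        return self::CATEGORY_IDS[$name] ?? null;
    }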
Bug: T151287
Change-Id: Ice6edf1b7985390aa0c1c410d357bc565bb69108

The job queue will allow us to have better flood control and rate
limiting, instead of trying to do all the database writes as soon as
Parsoid contacts MediaWiki.
On the downside, this means it may take longer for changes to be
reflected in the database and visible to users, but we make no promises
about that anyway, so it seems okay.
Note that if you don't have a job queue runner set up, you'll need to
run the runJobs.php maintenance script manually to have the jobs
execute.
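
Queueing sketch (JobQueueGroup::singleton() was the conventional entry
point at the time; the job parameter shape is an assumption):

    $job = new RecordLintJob( $title, [
        // Assumed parameter shape; the real job defines its own params.
        'errors' => $errors,
    ] );
    JobQueueGroup::singleton()->push( $job );

Without a runner, process the queued jobs by hand:

    php maintenance/runJobs.php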
Change-Id: I25fd54734aca4dab09711e7f6aee027654931300