Because linter_cat is the leftmost column of the linter_cat_page_position
index, we don't need a separate index on it.
Pointed out by jynus in T148866#2846381.
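For illustration, a lookup that filters only on linter_cat can still be
served by the composite index via its leftmost column; the column list
and query below are a sketch, not the extension's actual code:

$res = $dbr->select(
    'linter',
    [ 'linter_page', 'linter_start', 'linter_end' ],
    // leftmost prefix of linter_cat_page_position, so no extra index needed
    [ 'linter_cat' => $categoryId ],
    __METHOD__
);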
Change-Id: I3b0e75b02762cc948baf8dbdfee67b611aa3b9c1
The most likely cause of duplicate key errors is that the exact same
lint error is inserted twice because of a race condition while
calculating which new errors need to be inserted, so just ignore the
duplicates.
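A minimal sketch of that behaviour, assuming the rows are written with
MediaWiki's IDatabase::insert() and its IGNORE option (the table and
variable names are illustrative):

$dbw->insert(
    'linter',
    $newRows,
    __METHOD__,
    // duplicate rows produced by the race are silently skipped
    [ 'IGNORE' ]
);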
Follows-up 419610bcdb.
Change-Id: I84749ab221bbd517b474be8875bb6a59e4f3258e
These tests insert variations of fake lint errors into the database and
then read them back out to check that they round-trip properly.
And while we're at it, improve the setForPage() return value.
These tests can be run with something like:
php tests/phpunit/phpunit.php extensions/Linter/tests/phpunit/
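A hypothetical shape for one such round-trip test; the LintError
constructor and the Database helper's method names are assumptions here,
not the extension's actual API:

public function testRoundTrip() {
    // category, [ start, end ] location, extra params -- all illustrative
    $error = new LintError( 'obsolete-tag', [ 0, 10 ], [ 'name' => 'tt' ] );
    $db = new Database( $this->pageId );
    $db->setForPage( [ $error ] );
    $this->assertEquals( [ $error ], array_values( $db->getForPage() ) );
}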
Change-Id: Ifdba8a8a104d218a822f909bc5d7b3512aca499d
Move the location to two separate columns in the database, linter_start
and linter_end. This allows the database to enforce the uniqueness of
those fields, instead of relying solely on the PHP code to do so, which
could be bypassed since we have multiple servers and concurrent
processes.
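Roughly, each error row then carries its offsets as plain integer
columns, so a unique index covering them can reject duplicates
server-side; the row shape below is a sketch, not the exact schema:

$row = [
    'linter_page'  => $pageId,
    'linter_cat'   => $catId,
    // DSR offsets stored as integers so the unique index covers them
    'linter_start' => (int)$location[0],
    'linter_end'   => (int)$location[1],
];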
Change-Id: I3e67ce1b7cb3c93866a388ec3248af4cff2a81e0
If no errors existed for the page, inserting new ones would fail with a
database error, since $errors was keyed by unique id and the database
wrapper interpreted those keys as field names. Using array_values()
drops the keys, fixing the issue.
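In terms of MediaWiki's IDatabase::insert(), the fix amounts to
something like this (the table name and row contents are illustrative):

// Without array_values(), the unique-id keys on $errors are taken as
// field names of a single row instead of as a list of rows.
$dbw->insert( 'linter', array_values( $errors ), __METHOD__ );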
Change-Id: I7645de4e5d3ac4462d7980374c8ef8be6280442b
It's possible to have duplicate, identical lint errors if the exact same
error is repeated in a template transclusion (e.g. {{1x|<b/> <b/>}}),
since the position reported via DSR is the same. In this case, just
de-duplicate the errors, since we can't differentiate them.
At the same time, trim excessive errors in the same category on the same
page. If a page has that many identical errors, the editor or bot will
most likely fix all of them at the same time, so we don't need to record
every one of them in the database. 20 is a fairly low cap, but we can
always increase it later as necessary.
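A rough sketch of the de-duplication and trimming, assuming each error
is a plain array with a 'category' key and that identical errors
serialize identically (all names here are illustrative):

$unique = [];
foreach ( $errors as $error ) {
    // identical errors (same category, offsets and params) collapse here
    $unique[ serialize( $error ) ] = $error;
}
$seen = [];
$kept = [];
foreach ( $unique as $error ) {
    $cat = $error['category'];
    $seen[$cat] = ( $seen[$cat] ?? 0 ) + 1;
    if ( $seen[$cat] <= 20 ) {
        // keep at most 20 errors per category for one page
        $kept[] = $error;
    }
}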
Change-Id: I9cded720169870d0eea574e1a930ce4e9b190ac0
* Treat it as an array in all cases
* Make the default value all categories
* Rename from category to categories
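A hedged sketch of what the parameter definition might look like after
these changes, using the standard ApiBase constants; how the list of
valid category names is obtained is an assumption:

public function getAllowedParams() {
    return [
        'categories' => [
            ApiBase::PARAM_ISMULTI => true,
            // default to every known category
            ApiBase::PARAM_DFLT => implode( '|', $this->getCategoryNames() ),
        ],
    ];
}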
Bug: T151288
Change-Id: I5c0c341112894c5a7ec3aaebb6ac9085353f55bd
Since this module is internal it doesn't matter much, but this change
cleans up the output of api.php?modules=record-lint.
Bug: T151285
Change-Id: I859a2780d6ed1918cc81101e9e2c2cd348a2390a
Test plan:
Type "Special:LintErrors/" into the search box and check that the
autocomplete dropdown includes the subpages for the individual error
categories.
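The usual way a special page supplies those suggestions is by overriding
prefixSearchSubpages(); a sketch, with the source of the category names
assumed:

protected function prefixSearchSubpages( $search, $limit, $offset ) {
    // where the lint category names come from is an assumption here
    $subpages = $this->categoryManager->getVisibleCategories();
    return self::prefixSearchArray( $search, $limit, $subpages, $offset );
}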
Bug: T151289
Change-Id: I919e0e51a3b956f275f9a372b2f2844002972ea7
Instead of having a complex auto-increment category manager with
additional caching, just hardcode the (currently 6) ids for each
category, which allows us to simplify a lot of code.
If Parsoid sends a lint error in a category that we don't know about, it
is silently dropped.
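Conceptually the category manager shrinks to a hardcoded map plus a
lookup that lets callers drop unknown categories; the names and ids
below are illustrative, not the real assignment:

// Illustrative hardcoded category => id map.
private static $categoryIds = [
    'fostered' => 1,
    'obsolete-tag' => 2,
    'bogus-image-options' => 3,
    'missing-end-tag' => 4,
    'stripped-tag' => 5,
    'self-closed-tag' => 6,
];

public static function getCategoryId( $name ) {
    // callers skip (silently drop) errors whose category is unknown
    return self::$categoryIds[$name] ?? null;
}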
Bug: T151287
Change-Id: Ice6edf1b7985390aa0c1c410d357bc565bb69108
The job queue will give us better flood control and rate limiting,
instead of trying to do all the database writes as soon as Parsoid
contacts MediaWiki.
On the downside, this means changes may take longer to be reflected in
the database and to users, but we make no promises about that anyway, so
it seems okay.
Note that if you don't have a job queue runner set up, you'll need to
run the runJobs.php script every time to have the jobs execute.
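A hedged sketch of handing the writes off to the queue; the job type
name and the parameter shape are assumptions, not the extension's actual
job definition:

JobQueueGroup::singleton()->push(
    new JobSpecification(
        'recordLintJob',
        [ 'errors' => $errors ],
        [],
        $title
    )
);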
Change-Id: I25fd54734aca4dab09711e7f6aee027654931300