Database::updateStats was moved from RecordLintJob to Database in
I2610b9b16d4032b0e18b3537cc9ed51bfdaff299 for reuse in Hooks, but it
seems better placed on TotalsLookup.
Change-Id: I600853e5cfc9e8abae9c6b07cee4c2adc37ef464
* Writes the namespace ID to the new "linter_namespace" field if the
  global "wgLinterWriteNamespaceColumnStage" is present
Bug: T299612
Change-Id: If908c4dc99c966cde2981f9a03be38a577406a4e
The article ID of the title is set to 0 when the page is deleted, so
although the lint job from the hook runs, it doesn't remove anything.
Reverts most of I06b821b65f65609ddac8ed4e7c662336082d8266
Bug: T298782
Bug: T170313
Change-Id: I2610b9b16d4032b0e18b3537cc9ed51bfdaff299
The global function wfWikiID() has been deprecated since 1.35, and its
usages should be replaced with WikiMap::getCurrentWikiId().
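A typical replacement looks like this (the variable name is just
illustrative):

  // Before:
  $wiki = wfWikiID();

  // After:
  $wiki = WikiMap::getCurrentWikiId();
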
Bug: T298059
Change-Id: I695c20bff266f869f740baf7f3e335b357546fb4
This eases deployment dependencies by allowing Parsoid to supply an
appropriate database category ID, so that new lint categories can be
stored correctly during the interval between adding a new lint category
to Parsoid and deploying the Extension:Linter patch that describes it.
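As a rough sketch of the idea (the field and method names below are
hypothetical, not the actual API contract): the lint data sent by
Parsoid would carry a numeric category ID alongside the category name,
and the receiving code would fall back to that ID when the category is
not yet known locally.

  // Hypothetical sketch only -- names are illustrative.
  if ( $categoryManager->isKnownCategory( $error['type'] ) ) {
      $catId = $categoryManager->getCategoryId( $error['type'] );
  } else {
      // Category not yet described by Extension:Linter; trust the ID
      // that Parsoid supplied with the error.
      $catId = $error['categoryId'] ?? null;
  }
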
Change-Id: Ib7b2342168fa53ca2abac7d5f54fe313be341eb7
Originally we were dropping excess events inside the job queue, but
that meant all of the events had to be passed into the job queue, which
can cause problems.
So drop them in the API module instead. The only other place we
construct RecordLintJob is when an article has been deleted, and those
jobs carry no errors since all of the page's errors are being deleted.
Bug: T202179
Change-Id: I61940280e0dfb99398d9f047d0e66007d91a0241
Page deletions were bypassing the logic in RecordLintJob that cleared
the right category totals cache and sent the statsd updates. Fix that
by just using RecordLintJob directly.
Bug: T170313
Change-Id: I06b821b65f65609ddac8ed4e7c662336082d8266
The query itself is too expensive to be run on large Wikimedia wikis,
so put it behind WAN cache and touch the check key for a category
whenever errors are added to or deleted from it.
If this happens to get out of sync, it will get fully refreshed
regularly when the totals are sent to statsd.
WANObjectCache's 'lockTSE' feature will help avoid cache stampedes that
made this query expensive in the past.
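A rough sketch of the caching pattern, using WANObjectCache (the key
names, callback body, and helper method are illustrative, not the
extension's exact code):

  // Assumes: use MediaWiki\MediaWikiServices; $categoryNames is the
  // list of known lint category names.
  $cache = MediaWikiServices::getInstance()->getMainWANObjectCache();
  $checkKeys = [];
  foreach ( $categoryNames as $name ) {
      $checkKeys[] = $cache->makeKey( 'linter', 'totals', $name );
  }
  $totals = $cache->getWithSetCallback(
      $cache->makeKey( 'linter', 'totals' ),
      WANObjectCache::TTL_DAY,
      function () {
          // The expensive per-category COUNT query would run here
          // (hypothetical helper name).
          return $this->fetchTotalsFromDatabase();
      },
      [
          // One check key per category; touching it invalidates the entry.
          'checkKeys' => $checkKeys,
          // Let a single process regenerate while others briefly serve
          // stale data, avoiding stampedes on the expensive query.
          'lockTSE' => 30,
      ]
  );

  // Whenever errors are added to or removed from a category:
  $cache->touchCheckKey( $cache->makeKey( 'linter', 'totals', $categoryName ) );
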
Change-Id: I3774103a29fa0f29d36283950f136259fa71bffe
This way we can track the progress of individual wikis in cleaning up
errors. The wiki name is at the end of the key so we can still use
e.g. "linter.category.$name.*" to see across all wikis at once.
Change-Id: I62463b9256e125d32d97396bd939334d71b46027
Move the error location into two separate database columns,
linter_start and linter_end. This allows us to have the database
enforce the uniqueness
of those fields, instead of just relying upon the PHP code to do so,
which could be bypassed since we have multiple servers and concurrent
processes.
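With a unique index covering the new position columns, a duplicate
write can simply be rejected at the database level; very roughly (the
column set and option usage are shown as an illustration of the idea,
not the actual patch):

  // Sketch: a unique index over (linter_cat, linter_page, linter_start,
  // linter_end) stops concurrent writers from inserting the same error twice.
  $dbw->insert(
      'linter',
      [
          'linter_page' => $pageId,
          'linter_cat' => $catId,
          'linter_start' => $start,
          'linter_end' => $end,
          'linter_params' => $paramsJson,
      ],
      __METHOD__,
      [ 'IGNORE' ]  // duplicates are silently skipped by the unique index
  );
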
Change-Id: I3e67ce1b7cb3c93866a388ec3248af4cff2a81e0
It's possible to have duplicate, identical lint errors if the exact
same error is repeated in a template transclusion (e.g.
{{1x|<b/> <b/>}}), since the position reported via DSR is the same. In
this case, just de-duplicate the errors, since we can't differentiate
them.
At the same time, trim excess errors on the same page in the same
category. It's most likely that if a page has that many of the same
errors, the editor or bot will just fix all of them at the same time, so
we don't need to include all of them in the database. 20 is kind of a
low value, but we can always increase it later on as necessary.
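A minimal sketch of the de-duplicate-and-trim step (the array shape of
an error is an assumption; only the limit of 20 comes from this change):

  // Sketch: collapse identical errors, then cap how many are kept per
  // category. The 'type', 'location' and 'params' keys are assumed here.
  $maxPerCategory = 20;
  $unique = [];
  foreach ( $errors as $error ) {
      $key = implode( ':', [
          $error['type'],
          $error['location'][0],
          $error['location'][1],
          json_encode( $error['params'] ),
      ] );
      $unique[$key] = $error;  // identical errors overwrite each other
  }
  $kept = [];
  $perCategory = [];
  foreach ( $unique as $error ) {
      $perCategory[$error['type']] = ( $perCategory[$error['type']] ?? 0 ) + 1;
      if ( $perCategory[$error['type']] <= $maxPerCategory ) {
          $kept[] = $error;
      }
  }
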
Change-Id: I9cded720169870d0eea574e1a930ce4e9b190ac0
The job queue will allow us to have better flood control and rate
limiting, instead of trying to do all the database writes as soon as
Parsoid contacts MediaWiki.
On the downside, this means it may take longer for changes to be
reflected in the database and shown to users, but we make no promises
about that anyway, so it seems okay.
Note that if you don't have a job queue runner set up, you'll need to
run the runJobs.php script every time to have the jobs execute.
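Roughly, instead of writing to the database synchronously, the API
module now just queues a job (the job parameters shown here are an
assumption):

  // Sketch: defer the database writes to the job queue.
  $job = new RecordLintJob( $title, [ 'errors' => $errors ] );
  JobQueueGroup::singleton()->push( $job );
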
Change-Id: I25fd54734aca4dab09711e7f6aee027654931300