Commit graph

5 commits

Kunal Mehta 419610bcdb Enforce category/page/position uniqueness constraint in the database
Move the location to two separate columns in the database: linter_start and
linter_end. This allows us to have the database enforce the uniqueness
of those fields, instead of just relying upon the PHP code to do so,
which could be bypassed since we have multiple servers and concurrent
processes.

Change-Id: I3e67ce1b7cb3c93866a388ec3248af4cff2a81e0
2016-11-30 18:55:19 -08:00
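
A minimal sketch of a write that leans on such a unique index. The
column names linter_start/linter_end come from the message above; the
table name linter, the linter_page/linter_cat columns, and the
surrounding variables are assumptions for illustration, not the
extension's actual code:

    $dbw = wfGetDB( DB_MASTER );
    $dbw->insert(
        'linter',
        [
            'linter_page'  => $pageId,
            'linter_cat'   => $catId,
            'linter_start' => $start,
            'linter_end'   => $end,
        ],
        __METHOD__,
        // With IGNORE, a duplicate (category, page, position) row is
        // a no-op at the database level, so concurrent processes on
        // multiple servers can't race past a PHP-side existence check.
        [ 'IGNORE' ]
    );
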
Kunal Mehta 2f39ab7fff API: Fix up action=record-lint's getAllowedParams()
Since this module is internal, it doesn't really matter, but it cleans
up the output on api.php?modules=record-lint.

Bug: T151285
Change-Id: I859a2780d6ed1918cc81101e9e2c2cd348a2390a
2016-11-22 18:31:21 -08:00
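
As a rough sketch, a tidied-up getAllowedParams() on an internal
MediaWiki API module can look like the following; the class name
ApiRecordLint and the 'data' parameter are illustrative assumptions,
not the module's actual definition:

    class ApiRecordLint extends ApiBase {
        // Flags the module as internal in the generated API docs.
        public function isInternal() {
            return true;
        }

        public function getAllowedParams() {
            return [
                // Hypothetical parameter carrying Parsoid's lint data.
                'data' => [
                    ApiBase::PARAM_TYPE => 'string',
                    ApiBase::PARAM_REQUIRED => true,
                ],
            ];
        }
    }
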
Kunal Mehta b41e74ce8b Hardcode category ids
Instead of a complex auto-increment category manager with additional
caching, just hardcode the (currently 6) ids for each category, which
allows us to simplify a lot of code.

If Parsoid sends a lint error in a category that we don't know about, it
is silently dropped.

Bug: T151287
Change-Id: Ice6edf1b7985390aa0c1c410d357bc565bb69108
2016-11-22 18:31:17 -08:00
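
A hedged sketch of what the hardcoding might look like; the category
names and ids below are placeholders, not the real six-entry mapping:

    class CategoryManager {
        // Hardcoded name => id map; replaces the auto-increment
        // manager and its caching.
        private static $categoryIds = [
            'missing-end-tag' => 1,
            'obsolete-tag' => 2,
            // (the remaining hardcoded categories would follow)
        ];

        public static function getCategoryId( $name ) {
            // Unknown category from Parsoid: return null so the
            // caller can silently drop the lint error.
            return isset( self::$categoryIds[$name] )
                ? self::$categoryIds[$name]
                : null;
        }
    }
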
Kunal Mehta 9ecf62ead3 Update lint errors via the job queue
The job queue allows for better flood control and rate limiting,
instead of trying to do all the database writes as soon as Parsoid
contacts MediaWiki.

On the downside, this means it may take longer for changes to be
reflected in the database and shown to users, but we already make no
promises about that, so it seems okay.

Note that if you don't have a job queue runner set up, you'll need to
run the runJobs.php maintenance script manually for the jobs to
execute.

Change-Id: I25fd54734aca4dab09711e7f6aee027654931300
2016-11-15 11:31:45 -08:00
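
A sketch of the queueing side, assuming a Job subclass named
RecordLintJob (the name and its parameters are illustrative): the API
module pushes a job instead of writing to the database while Parsoid
waits on the request:

    // Defer the database writes to the job queue.
    JobQueueGroup::singleton()->push(
        new RecordLintJob( $title, [ 'errors' => $errors ] )
    );

Without a job queue runner, the queued jobs can then be executed
manually with the maintenance script mentioned above:

    php maintenance/runJobs.php
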
Kunal Mehta bce5b31616 Initial commit
This sets up a MediaWiki extension to receive Parsoid's lint errors
and expose them to users.

Change-Id: Ie0776aecf145eb1c87c2a539ddf3ea8d35a899f5
2016-10-17 16:02:53 -07:00