Commit graph

6 commits

Author SHA1 Message Date
gerritbot a85ae32a5e Replace some moved Title class uses, now MediaWiki\Title\Title
Bug: T321681
Change-Id: I80f2f9cdd569d549de8b403226000bb5c88fcb67
2023-08-19 04:18:19 +00:00
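
For context, a minimal sketch of the mechanical change this kind of bot commit performs, assuming MediaWiki's deprecation alias for the old root-namespace class name still resolves:

```php
<?php
// Before: the legacy root-namespace class was referenced directly.
// use Title;

// After: Title lives in the MediaWiki\Title namespace (T321681).
use MediaWiki\Title\Title;

// Call sites are otherwise unchanged.
$title = Title::makeTitle( NS_MAIN, 'Example' );
```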
Umherirrender bc4b81c7bb Adjust InitImageDataJob
$params cannot be a bool here
Job::run has to return a bool

Change-Id: Ieed6675e8de0e3ed4c3376676d5b027a6ab9f4f2
2018-04-10 19:11:55 +00:00
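
A minimal sketch of the adjusted job class described in the commit above (simplified; the real InitImageDataJob does more work per page). The 'page_ids' parameter name is illustrative:

```php
<?php
class InitImageDataJob extends Job {
	public function __construct( Title $title, array $params ) {
		// Type-hinting array rules out the legacy bool default for $params.
		parent::__construct( 'InitImageDataJob', $title, $params );
	}

	public function run() {
		foreach ( $this->params['page_ids'] as $pageId ) {
			// ... initialize the image data for $pageId ...
		}
		// Job::run() must report success or failure as a bool.
		return true;
	}
}
```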
Pppery fe1aad4adf Re-enable "MediaWiki.Commenting.FunctionComment.MissingDocumentationPublic" sniff
Bug: T170583
Change-Id: I33b56a824d26feb208492e8623c3c654a1372c47
2017-12-06 17:48:13 -05:00
Umherirrender 88758f5884 Add phpcs and make pass
Change-Id: I129fd23a375b4f7de893d3b98f67fdd8de89b4bd
2017-05-30 21:49:44 +02:00
Erik Bernhardson 9b20854a59 Wrap waitForReplication in try/catch
Very few of these jobs seem to be finishing: some replicas are so
far behind that DBReplicationWaitError is thrown, which causes the
job to be restarted. Catch and log the error, but don't stop
processing because of it.

There is also a problem with jobs that blow out the memory limits,
but it is not yet clear what to do about that.

Change-Id: Idffcfad76936f5e62e9018c58f2cb57db35af4b8
2016-12-07 10:18:53 -08:00
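
A minimal sketch of the try/catch pattern the commit above describes, using the rdbms APIs of that era (DBReplicationWaitError has since been removed from MediaWiki core); the log channel name is illustrative:

```php
<?php
use MediaWiki\MediaWikiServices;
use Wikimedia\Rdbms\DBReplicationWaitError;

$lbFactory = MediaWikiServices::getInstance()->getDBLoadBalancerFactory();

try {
	$lbFactory->waitForReplication();
} catch ( DBReplicationWaitError $e ) {
	// Some replicas are too far behind; log and continue rather than
	// letting the exception kill (and re-queue) the whole job.
	wfDebugLog( 'PageImages', 'Not waiting for replicas: ' . $e->getMessage() );
}
```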
Erik Bernhardson 03e14d0c86 Add job queue option for initImageData maintenance script
Trying to run this script in the cluster fatals out somewhat
regularly due to memory problems. The --start option helps to
restart it where it fell down, but when running against hundreds of
wikis that is a one-off fix that makes it a pain to ensure
everything is actually visited.

To try to isolate errors, add an option to push the parsing into the
job queue. There is still a possibility of missing pages, but job
queue retries should cover most of them. The script attempts to keep
load on the databases down by ensuring that no more than a specified
number of jobs are queued or processing at any given time.

Bug: T152155
Change-Id: I3a4e3a415b2f03de0bb36ac0515241e950130fde
2016-12-06 10:56:52 -08:00
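
A hedged sketch of the throttling idea in the commit above, using the JobQueueGroup API as it existed at the time; $maxPressure, $pageIdBatches, and the 'page_ids' parameter are illustrative, not the script's actual names:

```php
<?php
// Queue a batch of jobs, then back off while the number of queued
// plus in-flight jobs exceeds a cap, so the databases are not
// overloaded. Job-queue retries cover pages that individual jobs miss.
$queue = JobQueueGroup::singleton()->get( 'InitImageDataJob' );
$maxPressure = 100;

foreach ( $pageIdBatches as $batch ) {
	JobQueueGroup::singleton()->push(
		new InitImageDataJob( Title::newMainPage(), [ 'page_ids' => $batch ] )
	);
	while ( $queue->getSize() + $queue->getAcquiredCount() > $maxPressure ) {
		sleep( 1 );
	}
}
```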