Changes to the use statements were done automatically via script.
A missing use statement was added manually.
Change-Id: I6b49b9837ed6d3dfdd8a4bc0420848c09fcb1540
As suggested in 26505b170adb24a6ae68945920db322c9382e470, for better
readability. The sniff also now knows about the maintenance script.
Change-Id: I5c6a4438fcd58b369e1f637899897d7667d56e34
The following sniffs were failing and have been disabled (see the
ruleset sketch after this list):
* MediaWiki.Commenting.FunctionComment.MissingParamComment
* MediaWiki.Commenting.FunctionComment.MissingParamTag
* MediaWiki.Commenting.FunctionComment.MissingReturn
* MediaWiki.Commenting.FunctionComment.ParamNameNoMatch
* MediaWiki.FunctionComment.Missing.Public
* MediaWiki.NamingConventions.LowerCamelFunctionsName.FunctionName
* MediaWiki.WhiteSpace.SpaceBeforeSingleLineComment.NewLineComment
Change-Id: I3554682b5c8686299dc8cf23a3ec8c59514ff008
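
For reference, a minimal sketch of how such exclusions typically look
in a MediaWiki phpcs.xml ruleset; the ruleset path and file layout are
assumptions here, not necessarily this repo's actual configuration:

    <?xml version="1.0"?>
    <ruleset>
        <!-- Pull in the shared MediaWiki ruleset, then exclude the
             sniffs that are currently failing. -->
        <rule ref="./vendor/mediawiki/mediawiki-codesniffer/MediaWiki">
            <exclude name="MediaWiki.Commenting.FunctionComment.MissingParamComment" />
            <exclude name="MediaWiki.Commenting.FunctionComment.MissingParamTag" />
            <exclude name="MediaWiki.Commenting.FunctionComment.MissingReturn" />
            <exclude name="MediaWiki.Commenting.FunctionComment.ParamNameNoMatch" />
            <exclude name="MediaWiki.FunctionComment.Missing.Public" />
            <exclude name="MediaWiki.NamingConventions.LowerCamelFunctionsName.FunctionName" />
            <exclude name="MediaWiki.WhiteSpace.SpaceBeforeSingleLineComment.NewLineComment" />
        </rule>
        <file>.</file>
    </ruleset>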
It looks like the reason these jobs aren't processing is that
abandoned jobs are being counted as part of the acquired jobs
count, so if a job gets abandoned it keeps taking up a slot in
our job pressure calculation. Abandoned jobs shouldn't count as
pressure because they are no longer running (see the sketch below).
Change-Id: I44fbce2b7dc47345ab0e3745d1653f418d75943d
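
A minimal sketch of the fix described above, using MediaWiki's
JobQueue methods getSize(), getAcquiredCount(), and
getAbandonedCount(); the helper name and the exact pressure formula
are illustrative assumptions, not this change's actual code:

    // Abandoned jobs are still counted as "acquired" by some queue
    // backends (e.g. redis), but they will never run again, so they
    // must be subtracted out of the pressure calculation.
    function getJobPressure( JobQueue $queue ) {
        return $queue->getSize()           // queued, waiting to run
            + $queue->getAcquiredCount()   // claimed by a job runner
            - $queue->getAbandonedCount(); // claimed, but given up on
    }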
Running this script in the cluster fatals out somewhat regularly due
to memory problems. The --start option helps to restart it where it
fell down, but when running against hundreds of wikis that is a
one-off solution that makes ensuring everything is actually visited
a pain.
To try to isolate errors, add an option to push the parsing into the
job queue. There is still the possibility of missing pages, but job
queue retries should take care of most cases. The script attempts to
keep load down on the databases by making sure no more than a
specified number of jobs are queued or processing at a given time
(a rough sketch follows below).
Bug: T152155
Change-Id: I3a4e3a415b2f03de0bb36ac0515241e950130fde
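
A rough sketch of that throttling loop, reusing the hypothetical
getJobPressure() helper from above; the job type, job class, and
$maxJobs variable are illustrative, not the script's actual
identifiers:

    $queue = JobQueueGroup::singleton()->get( 'parsePages' );
    foreach ( $pageIdBatches as $batch ) {
        // Back off while too many jobs are already queued or
        // processing, so the databases never see more than
        // $maxJobs outstanding jobs at once.
        while ( getJobPressure( $queue ) >= $maxJobs ) {
            sleep( 1 );
        }
        // MediaWiki jobs carry a Title plus a params array; the
        // real work here travels in the params.
        JobQueueGroup::singleton()->push(
            new ParsePagesJob( Title::newMainPage(), [ 'pageIds' => $batch ] )
        );
    }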