Use the '--json' flag to get Pygments to output its list of supported
lexers in a machine-readable format. Support for this flag was added (at
our request) to Pygments and included in the 2.11 release[1].
Tested by running updateLexerList.php and confirming empty diff.
[1]: https://github.com/pygments/pygments/issues/1437
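For illustration, a minimal sketch of consuming the machine-readable
output; the exact pygmentize arguments and the decoded JSON shape shown
here are assumptions, not copied from updateLexerList.php:

    // Hypothetical sketch: ask pygmentize for its lexer list as JSON
    // and decode it. The "-L lexer --json" spelling and the structure
    // of the result are assumptions; the real script may differ.
    $json = shell_exec( 'pygmentize -L lexer --json' );
    $data = json_decode( $json, true );
    foreach ( $data['lexers'] ?? [] as $name => $info ) {
        // Each entry is expected to carry aliases, filenames, mimetypes.
        echo $name . ': ' . implode( ', ', $info['aliases'] ?? [] ) . "\n";
    }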
Change-Id: I0f1d7fceca9034e6034bafa6a8dd312b99d379d1
When using a non-bundled Pygments (which is required on Windows, as the
bundled version is an ELF binary), we call into the Pygments executable
to generate the list of supported languages (lexers). This list seems to
occasionally include carriage returns, causing some languages not to be
processed correctly. Trim those CRs out so the language list is
accurate.
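A minimal sketch of the trimming, assuming the raw pygmentize output is
already in $output (the variable names here are illustrative):

    // Split the lexer list on newlines and strip stray carriage
    // returns so Windows-flavoured output parses the same as Unix.
    $lexers = [];
    foreach ( explode( "\n", $output ) as $line ) {
        $line = rtrim( $line, "\r" );
        if ( $line !== '' ) {
            $lexers[] = $line;
        }
    }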
Change-Id: If8b1f145dd10e2c4707d6d32927e85d1d2459f15
Replace the HIGHLIGHT_MAX_LINES and HIGHLIGHT_MAX_BYTES constants with
$wgSyntaxHighlightMaxLines and $wgSyntaxHighlightMaxBytes respectively,
so sysadmins can adjust the limits to their needs when performance is
not a concern.
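For example, in LocalSettings.php (the values below are illustrative,
not the shipped defaults):

    // Raise the limits on a wiki where highlighting cost is acceptable.
    $wgSyntaxHighlightMaxLines = 5000;
    $wgSyntaxHighlightMaxBytes = 512 * 1024;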
Bug: T322293
Bug: T104109
Change-Id: I80768d3cb45ac01c004fc812832878c83ca4ecdb
Python on Windows requires the SystemRoot environment variable in order
to initialize its internal RNG, so make sure that is passed along to the
subprocess.
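A rough sketch of the idea, assuming MediaWiki's Shell wrapper is used
to invoke pygmentize ($pygmentizePath is a placeholder):

    use MediaWiki\Shell\Shell;

    // Pass SystemRoot through on Windows so Python can seed its RNG.
    $env = [];
    if ( PHP_OS_FAMILY === 'Windows' && getenv( 'SystemRoot' ) !== false ) {
        $env['SystemRoot'] = getenv( 'SystemRoot' );
    }
    $result = Shell::command( $pygmentizePath, '-V' )
        ->environment( $env )
        ->execute();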
Bug: T300223
Change-Id: I170ce627a3f00c023f4b1f11613f4fe2cb17bd31
Skip the expensive check when no highlighting is wanted,
for example because there is no lexer.
All validation of the tag is now processed first, so
invalid tags are not counted.
Bug: T316858
Change-Id: Ifad9a9a14fae92463c345fb12defb41f14c2e1f3
The shell-out to get styled text is expensive.
Call Parser::incrementExpensiveFunctionCount to limit the number of
highlighted text snippets on a page and avoid reaching a timeout.
Each tag is counted: identical text snippets are not deduplicated to
count only once, and the count does not depend on whether pygmentize
actually has to be called or the result is already in the cache.
This does not affect Parsoid; it is unclear whether the concept of an
expensive parser function exists there.
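In rough terms (a sketch; the real call site, surrounding checks and
the Pygmentize::highlight() signature may differ in the extension):

    // Count each tag against the page's expensive function limit
    // before shelling out to pygmentize.
    if ( $parser && !$parser->incrementExpensiveFunctionCount() ) {
        // Limit exceeded: skip highlighting rather than risk a timeout.
        return htmlspecialchars( $text );
    }
    $html = Pygmentize::highlight( $lexer, $text, $options );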
Bug: T316858
Change-Id: I8afe61e9be4a34e5f0725a9b65ef43c345e1be5f
* Added Parsoid config, and refactored code slightly to
add native Parsoid handlers for parser tags exposed
by this extension.
* Enabled parsoid mode testing on the test file.
* Added html/parsoid sections on a few tests.
* Marked the rest of the tests as wt2html and wt2wt only, since
  html2wt and html2html will fail without an html/parsoid section
  and there is no real benefit to adding one to all tests.
* Added a couple tests to the known failures list:
- One is because of T299103.
- The other is because Parsoid always emits attributes in the
form <tag .. foo="bar"..> instead of just <tag ... foo ..>
Since Parsoid needs to accept this format, which is present on
wikis, I added an html/parsoid section for this test and
added the failures to the known failures list.
Bug: T272939
Change-Id: Ie30aa6b082d4fc43c73296ff2ed6cb8c3873f48f
Follow-up to ae07430. The method needs to be public so that
WANObjectCache can call it from a callback, but we don't expect any
external callers.
Follows-Up: I424926d071e1cfd454a0c2d45a83693f41bdea55
Change-Id: Ia96d3132782435c693d2eaa77fd551fe9590b113
* Add rationale for each cache key's strategy being in Memc vs APCU.
* Extend pygmentize-lexers from 1 day to 1 week. It rarely changes
  and the key already varies by version. Few entries survive a full
  day anyway, but there is no reason to explicitly expire it sooner.
* Add a layer of Memc to the pygments-version APCu cache, given that
  it has a short expiry and thus a relatively high miss rate.
  The main rationale is noise in mwdebug logs, since this is currently
  the only thing we log by default in Logstash at prod severity
  (exec INFO) during every pageview after a php-fpm restart (which
  clears APCu). By adding Memc here we lose less of the cache churn,
  reviving the entry via Memcached, and we keep the sense of there
  being nothing in the logs "by default" at prod severity after a
  restart, i.e. we avoid building up alert fatigue.
Unlike the other cache keys and hooks, getVersion is the only
thing that gets called widely regardless of whether syntaxhighlight
is in use on the given page.
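A hedged sketch of the layering for the version lookup; key names,
TTLs and the fetchVersion() helper are illustrative, and $srvCache
(an APCu BagOStuff) and $wanCache (a WANObjectCache) are assumed to be
available:

    // APCu first; on a miss (e.g. after a php-fpm restart), fall back
    // to Memcached before shelling out for the version again.
    $version = $srvCache->getWithSetCallback(
        $srvCache->makeGlobalKey( 'pygmentize-version' ),
        BagOStuff::TTL_HOUR,
        static function () use ( $wanCache ) {
            return $wanCache->getWithSetCallback(
                $wanCache->makeGlobalKey( 'pygmentize-version' ),
                WANObjectCache::TTL_HOUR,
                static function () {
                    // Hypothetical helper that runs `pygmentize -V`.
                    return Pygmentize::fetchVersion();
                }
            );
        }
    );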
Change-Id: I424926d071e1cfd454a0c2d45a83693f41bdea55
Extensions using Phan need to be updated simultaneously with core due
to T308443.
Bug: T308718
Depends-On: Id08a220e1d6085e2b33f3f6c9d0e3935a4204659
Change-Id: Ie1356c582baf9a66b868f7349cc71c26f8f1ead3
The order of style inclusion matters; some of our overrides were no
longer in effect.
Follow-up to: I2e82e5aa2a71604b87ffb4936204201d06678341
Bug: T292736
Change-Id: If202c26d2c29994cb3680eb76a86bb7efacc3ff9
All of the interactions with `pygmentize` have been refactored into a
new class, conveniently called Pygmentize. It is responsible for getting:
* pygments version (cached in APCu for 1 hour)
* generated CSS (cached in WAN by version for 1 week)
* lexer list (cached in APCu by version for 1 day)
and actually highlighting stuff! Most code paths differentiate whether
we're using a bundled version of pygments or one that has been
explicitly configured. If using the bundled one, we take shortcuts since
we already know the lexer list, have the CSS generated, etc.
ResourceLoaderPygmentsModule is added to switch between loading the
generated CSS from the bundled file and shelling out (via Shellbox) to
get it from pygments.
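From a caller's perspective the new class looks roughly like this; the
method names are paraphrased from the description above, not copied
from the final code:

    $css = Pygmentize::getGeneratedCSS();      // WAN-cached by version, 1 week
    $lexers = Pygmentize::getLexers();         // APCu-cached by version, 1 day
    $html = Pygmentize::highlight( 'php', $code, [] );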
Bug: T289227
Change-Id: I2e82e5aa2a71604b87ffb4936204201d06678341
With "ability-shell" set in extension.json's requirements as of
commit b5a904e2ec, this extension will refuse to load if shelling
out is disabled.
Change-Id: Ie8f446fbb33e585ffcc7d0adda1894a5497f2dad
ContentHandler::getContentText() is deprecated and should be
replaced with Content::getText() for TextContent instances.
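For example (a representative before/after; the variable names are
illustrative):

    // Before (deprecated):
    $text = ContentHandler::getContentText( $content );
    // After, for TextContent instances:
    $text = ( $content instanceof TextContent ) ? $content->getText() : null;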
Change-Id: I8767a925148c31b3a64761f1173a2a85bd28dfe0
The replacement, Parser::getStripState(), was added to MediaWiki in
1.34. This extension already requires MediaWiki >= 1.34.
Bug: T275160
Change-Id: I7806068e1cd6e4da66adfe7bb75095d4bfb5d6bc
Also fixes the Phan warning about Xml::encodeJsCall/FormatJson
needing a boolean where the int from inDebugMode() is passed.
Change-Id: Id8de16ab683948eae096b43462118ea837f53038
We already add the dir=ltr/rtl HTML attribute, so this
should be a no-op; it also makes this consistent with block
snippets.
Change-Id: I53e9204cc3bd54ba167f6f91e718a9d35b5bdfd0
This means all callers to #highlight get code wrapped
in the correct HTML.
This was done outside of #highlight before, as the transformation
depended on $parser; a $parser object can now optionally be passed in
if the contents are going to go through the parser.
Change-Id: Ic5d5c341687e965804cb33da07dda23913718ff5
* Render a solid gutter that can take 3-4 digit line numbers
* Position line numbers absolutely in the gutter
* Add padding to code so that it doesn't wrap into the gutter
Change-Id: I7abb87452ad61808dad32b41c1d2d86b8ababb28
Previously the full page output was missing the
-lang-<languagename> and mw-content-<dir> classes.
Change-Id: I54f4ed0a86e78a3a7ff1d670ebbdfdb6f05f86cc
When the `enclose` attribute is used, the `syntaxhighlight-enclose-category`
tracking category is added, and when a <source> tag is used, the
`syntaxhighlight-source-category` tracking category is added.
Parser tests verify the tracking category is added when appropriate.
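Roughly (a sketch only; $args, $tagName and $parser are assumed to be
in scope in the tag hook, and the actual condition checks may differ):

    // Track deprecated usage so wikis can find and migrate it.
    if ( isset( $args['enclose'] ) ) {
        $parser->addTrackingCategory( 'syntaxhighlight-enclose-category' );
    }
    if ( $tagName === 'source' ) {
        $parser->addTrackingCategory( 'syntaxhighlight-source-category' );
    }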
Bug: T241636
Bug: T237267
Change-Id: I7a21c635de426ab024703c04acdc6fa2184daedb
The following sniffs are failing and were disabled:
* MediaWiki.Commenting.FunctionComment.MissingReturn
Change-Id: I6576c262bf717aa9b3b0577caa27c05cff0cb44b