Instead of finding all timestamps in a separate tree walk, make
timestamp detection part of the getComments tree walk, and find
timestamps one at a time.
Change-Id: I47f466eaf228504faa189fd99e07493bc7f022cd
This is similar to what the JS version does.
The TreeWalker and NodeFilter classes are adapted from
https://github.com/Krinkle/dom-TreeWalker-polyfill
(MIT license).
This makes #getComments twice as fast on en-big-oldparser.html.
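Roughly, the combined walk looks like this (a sketch only, assuming the
ported TreeWalker/NodeFilter keep the standard DOM API; findTimestamp
and the surrounding handling are illustrative names, not the exact
code):

    $walker = new TreeWalker(
        $rootNode,
        NodeFilter::SHOW_ELEMENT | NodeFilter::SHOW_TEXT
    );
    while ( ( $node = $walker->nextNode() ) !== null ) {
        if ( $node->nodeType === XML_TEXT_NODE ) {
            // Match the timestamp regexp against this one text node as
            // we pass it, instead of collecting all timestamps in a
            // separate pass over the whole tree.
            $match = $this->findTimestamp( $node, $timestampRegexp );
            // ...build heading/comment items from $node and $match...
        }
    }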
Change-Id: I2441f33e6e7bad753ac830d277e6a2e81ee8c93d
* Move modifier#getFullyCoveredWrapper to utils
* Use that method to find the node where we start searching for
template wrappers, rather than using endContainer
Bug: T252058
Change-Id: I55de58102f3468fce01290bd413a7fdc96d322d6
When there is a wrapper element whose range matches the range of
a comment, any replies will now be added outside of that wrapper,
instead of directly after the comment (inside the wrapper).
Bug: T250126
Change-Id: I6b42c4db019ae998e91eebd324f9cbd2aa791b4f
It was useful when I was debugging those parts of the code, but now
it's usually annoying.
The warnings can still sometimes be useful for understanding how the
tool parses a discussion, though. To keep that functionality, display
the warnings for each comment in debug mode instead.
Change-Id: I2d218a8a394f179bcc0990ff988a0567c275ccf2
Follow-up to Ic1438d516e223db462cb227f6668e856672f538c.
Minor corrections and comment improvements in the PHP parser, and
"backporting" some changes I like to the JS parser.
Change-Id: I5e54121914ec6b323e556dd133bcb71b3aefbb61
This method shouldn't be required on the server. Leave comments
relating to it in addListItem so JS & PHP can be kept in sync.
Change-Id: I849fac660faf6e750272c20776f96b9250f96b1b
In JS, strings are internally encoded as UTF-16, and properties like
.length return values in UTF-16 code units.
In PHP, strings are internally encoded as UTF-8, and we have the
option of using methods that return bytes like strlen() or UTF-8 code
units like mb_strlen().
However, the values produced by preg_match( …, PREG_OFFSET_CAPTURE )
are in bytes, and there's nothing we can do about that. So let's use
bytes throughout; mixing the two types results in meaningless numbers.
Then in the test code, we have to calculate UTF-16 code unit offsets
based on the UTF-8 byte offsets.
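A minimal sketch of that conversion (byteOffsetToUtf16 is an
illustrative name for whatever helper the tests end up using):

    function byteOffsetToUtf16( string $text, int $byteOffset ): int {
        // Take the UTF-8 prefix up to the byte offset, re-encode it as
        // UTF-16, and count code units (2 bytes each; astral characters
        // become surrogate pairs, i.e. two units, matching JS .length).
        $prefix = substr( $text, 0, $byteOffset );
        $utf16 = mb_convert_encoding( $prefix, 'UTF-16LE', 'UTF-8' );
        return intdiv( strlen( $utf16 ), 2 );
    }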
We also have to copy the entire workaround for mw:Entity nodes… Maybe
the parser should be fixed to return the real nodes for ranges' ends
in this case.
Change-Id: I05804489d7de0d60be6e9f84e6a49a885e9fb870
It appears PHP's DOM library always uses CDATA nodes for the contents of
<style> tags, even if there is no such markup in the source HTML.
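So code that only expects text nodes has to accept CDATA sections as
well, roughly like this (sketch, not the exact change):

    // DOMDocument may parse "<style>a { color: red }</style>" into a
    // CDATA section, so treat it the same as a text node here.
    if (
        $node->nodeType === XML_TEXT_NODE ||
        $node->nodeType === XML_CDATA_SECTION_NODE
    ) {
        $text = $node->nodeValue;
    }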
Change-Id: Id04b27086c5e7a0b016a3a440b2b4895d6b13c93
Profiling reveals that >87% of the run time of our test suite is spent
in this tiny method. Apparently, DOMNodeList::item() is extremely slow
(possibly it's linear time instead of constant time?).
Profiled using XDebug and KCacheGrind:
https://phabricator.wikimedia.org/F31815264
We can calculate the child's index in its parent by counting its
preceding siblings instead, which turns out to be much faster.
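The replacement is essentially a sibling-counting loop, sketched here
(childIndexOf is an illustrative name):

    function childIndexOf( DOMNode $child ): int {
        // Count preceding siblings instead of scanning the parent's
        // childNodes list with the slow DOMNodeList::item().
        $i = 0;
        while ( ( $child = $child->previousSibling ) !== null ) {
            $i++;
        }
        return $i;
    }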
Before:
1. 275444ms to run DiscussionToolsCommentParserTest:testGetComments with data set #2
2. 12668ms to run DiscussionToolsCommentParserTest:testGetComments with data set #3
...
After:
1. 9545ms to run DiscussionToolsCommentParserTest:testGetComments with data set #2
2. 5549ms to run DiscussionToolsCommentParserTest:testGetComments with data set #3
...
That's still kind of slow, but now it's bearable to run the test suite.
Change-Id: I49155f7aa2e231a9a20bf282cf6aaa28fc902e0b
* Not to be confused with the Parsing Team's
"Great Parser JS to PHP port of 2019"
Gasp as OR hacks are changed to null coalescing operators.
Applaud as variable declarations are dropped.
Cheer as parameters and return values are type-hinted.
Shudder as DOMNodeLists have no indexOf method.
Moving discussion parsing to the server should allow
us to implement much cleaner APIs for commenting.
Bug: T252252
Co-authored-by: Ed Sanders <esanders@wikimedia.org>
Change-Id: Ic1438d516e223db462cb227f6668e856672f538c
When the user clicks a "Reply" link on a page that is affected by the
'fostered' lint error (indicating fostered content in the HTML
representation), display an error and refuse to edit it, as Parsoid's
transformations will damage the page content.
The error message includes a link to documentation about lint errors,
and a link to the editor that will highlight the error location.
Depends-On: I723ec766d1244d117f8d624440026fe5af0d3403
Bug: T246481
Change-Id: Ic60cb58f98d10dc9b113469e5d3bbfb2d2b0564f
By default, DiscussionTools loads on all talk pages when the extension
is installed. This can now be disabled by setting the configuration
option `$wgDiscussionToolsEnable=false`.
To test DiscussionTools, one can now use the query parameter
`?dtenable=1`, which allows it to be loaded on any wikitext page
(overriding the config option).
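For example, with a standard LocalSettings.php setup:

    // LocalSettings.php
    wfLoadExtension( 'DiscussionTools' );
    // Don't load the tools on talk pages by default:
    $wgDiscussionToolsEnable = false;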
Bug: T243621
Change-Id: I3d5a9cc9a4183fb6951f05c557b1d42735a9df7c
This sets up the tags:
* discussiontools
* discussiontools-reply
* discussiontools-edit (not yet implemented)
* discussiontools-newsection (not yet implemented)
The tags are flagged as user-addable, because otherwise they can't be
passed through to the VE API (at least, not without editing it so that
it explicitly knows about them, which seems like a strange
interdependency). It's assumed that letting users who know about the
tags add them to random changes via action=editchangetags would be
(a) the pettiest and most inconsequential vandalism possible, and
(b) unlikely to happen.
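A sketch of the kind of registration involved, using MediaWiki's
standard change-tag hooks (the exact wiring in this patch may differ):

    public static function onListDefinedTags( array &$tags ) {
        $tags = array_merge( $tags, [
            'discussiontools',
            'discussiontools-reply',
            'discussiontools-edit',
            'discussiontools-newsection',
        ] );
    }
    // The same list goes into ChangeTagsListActive, and into
    // ChangeTagsAllowedAdd to make the tags user-addable (see above).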
This relies upon I2c1d0f8d69bc03e5c1877c790247e165f160e966 in
VisualEditor to not also tag the edits with `visualeditor`.
Bug: T242184
Change-Id: I4e5e26afdd52279df242e1912f073b415b812c3b
Add .phpcs.xml to run PHP_CodeSniffer against the extension.
The entry in composer.json is already there.
Change-Id: If9de7e9b1b9439dc4d5fa2ba521883d13deaa739
Add the Moment Timezone library. Add a script for managing libraries,
like in MediaWiki core.
Depends-On: I9a59a6ad01850b30327e4215f2be61b8d1c41277
Change-Id: I64bc79e7d0ccdf42b006e5a225c8aa70ea5f4e15