This includes the dtrepliedto URL functionality from
I3f81e4d77faed367606e47678b8896051982359d.
Bug: T274831
Bug: T274832
Bug: T277329
Change-Id: I035d04f30c8312b0cb42902d3bf940df1482ffb3
Use `DOMCompat::getBody( ... )` as a nicer getter than
`->getElementsByTagName( 'body' )->item( 0 )`.
Remove overly defensive checks and redundant annotations on its
return value. Since we're dealing with HTML documents throughout,
the document body is guaranteed to exist.
We previously needed some of them to convince Phan when it thought
the body may be null, but this seems to no longer be needed.
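For illustration, the change is roughly:

    // Before: fetch the <body> element by tag name
    $body = $doc->getElementsByTagName( 'body' )->item( 0 );
    // After: Parsoid's helper does the same thing more readably
    $body = DOMCompat::getBody( $doc );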
Change-Id: If7aee7b6adbfa78269c7ba28b26a6eaa21fe935b
Use DOMCompat::querySelectorAll() instead.
CommentModifier::isHtmlSigned()
* Copied the CSS selector from the JS equivalent function.
CommentUtils::unwrapParsoidSections()
* Copied the CSS selector from the JS equivalent function (in VisualEditor).
CommentItem::getMentions()
* Trivial.
This causes Phan to report some more issues, which are also fixed.
Follow-up to 25272e7a4a.
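A rough sketch of the pattern (the selector below is illustrative, not
the one copied from JS):

    // Rough sketch; the selector is illustrative only
    $links = DOMCompat::querySelectorAll( DOMCompat::getBody( $doc ), 'a[href]' );
    foreach ( $links as $link ) {
        // $link is an element matching the selector
    }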
Change-Id: Iaf1222f7114916f2eca19942c3686168899486fd
We can just use insertBefore() normally. There was never a PHP bug,
but rather a Phan bug, and it no longer affects us.
https://gerrit.wikimedia.org/r/c/mediawiki/extensions/DiscussionTools/+/596813/70/includes/ImmutableRange.php#373
This reveals a bug in CommentParser where it sometimes produces
incorrect ranges (we were incorrectly treating `false` like `null`,
hiding the issue). I'll fix it in a separate commit.
Follow-up to 25272e7a4a.
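The straightforward call (a minimal illustration) is just:

    // Insert $newNode into $parent immediately before $refNode;
    // passing null as the reference node appends at the end.
    $parent->insertBefore( $newNode, $refNode );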
Change-Id: I4afba38f1d82ddbf8732bfe3e4d4f6ebe2f8de5d
In cases 4 and 6, no notifications are expected. In all other
cases we now get the expected notifications.
Bug: T285528
Change-Id: I9e813bb3a053bc1232783f9eae1ad75672b4fa7e
These changes ensure that DiscussionTools is independent of DOM
library choice, and will not break if/when Parsoid switches to an
alternate (more standards-compliant) DOM library.
We run `phan` against the Dodo standards-compliant DOM library,
so this ends up flagging uses of non-standard PHP extensions to
the DOM. These will be suppressed for now with a "Nonstandard DOM"
comment that can be grepped for, since they will eventually need
to be rewritten or worked around.
Most frequent issues:
* Node::nodeValue, Node::textContent and Element::getAttribute()
can return null in a spec-compliant implementation. Add `?? ''` to
make spec-compliant results consistent with what PHP returns
(see the sketch after this list).
* DOMXPath doesn't accept anything except DOMDocument. These uses
should be replaced with DOMCompat::querySelectorAll() or similar
(which end up using DOMXPath under the covers for DOMDocument anyway,
but are implemented more efficiently in a spec-compliant
implementation).
* A couple of times we have code like:
`while ($node->firstChild!==null) { $node = $node->firstChild; }`
and phan's analysis isn't strong enough to determine that $node is still
non-null after the loop. The same issue should appear with DOMDocument,
but phan doesn't complain for some reason.
One apparently legit issue:
* Node::insertBefore() is called once in a funny way which leans on
the fact that the second argument is optional in PHP. This seems to be
a workaround for an ancient PHP bug, and can probably be safely
removed.
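Sketch of the `?? ''` normalisation mentioned in the first bullet above
(attribute name illustrative):

    // Nonstandard DOM: PHP's dom extension returns '' for a missing
    // attribute, while a spec-compliant implementation returns null.
    $class = $element->getAttribute( 'class' ) ?? '';
    $text = $node->textContent ?? '';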
Bug: T287611
Bug: T217867
Change-Id: I3c4f41c3819770f85d68157c9f690d650b7266a3
For compatibility with Parsoid's document abstraction (Parsoid may
switch to an alternate DOM library in the future), don't explicitly
create a new document object using `new DOMDocument`; instead use
the Parsoid wrapper `DOMCompat::newDocument()`. This ensures that
the Document object created will be compatible with Parsoid.
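Roughly (the exact argument to the wrapper is an assumption here):

    // Before:
    $doc = new DOMDocument();
    // After: let Parsoid pick the concrete Document implementation
    // (boolean "HTML document" flag assumed)
    $doc = DOMCompat::newDocument( true );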
There are a number of other subtle dependencies on the PHP `dom`
extension in DiscussionTools, like explicit `instanceof` tests; those
will be tweaked in a follow-up patch
(I3c4f41c3819770f85d68157c9f690d650b7266a3) since they do not affect
correctness so long as Parsoid is aliasing Document to a subclass of
the built-in DOMDocument. Similarly, the Phan warnings we suppress
do not cause runtime errors (because of the fixes included in
c5265341afd9efde6b54ba56dc009aab88eff83c) but phan will be happier
once the follow-up patch lands and aligns all the DOM types.
Bug: T287611
Depends-On: If0671255779571a91d3472a9d90d0f2d69dd1f7d
Change-Id: Ib98bd5b76de7a0d32a29840d1ce04379c72ef486
The user interface only allows you to subscribe to level 2 headings.
But we would generate events for whatever heading was the closest.
If it was e.g. level 3, no one would receive that notification.
Now we generate events for the closest level 2 heading, or we don't
generate the event at all if there isn't one (if the only headings are
of level 3 and below, or level 1, or if the comment is added before
the first heading on the page).
Bug: T286736
Change-Id: Iae99853070e353ab81c9cc29ef1d53c877adfc66
We used internal API requests to fetch page content because it was
easy, but there's no way to guarantee that it returns data from the
primary database.
Use ParserOutputAccess::getParserOutput() to fetch from cache if
available. Also, use canonical output instead of user-specific,
not that it should matter.
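A rough sketch of the new fetch path (the service wiring and the
ParserOptions construction here are assumptions, not the exact code):

    $status = MediaWikiServices::getInstance()->getParserOutputAccess()->getParserOutput(
        $page,                          // the talk page we need content for
        ParserOptions::newFromAnon()    // canonical, not user-specific (assumed)
    );
    if ( $status->isOK() ) {
        $parserOutput = $status->getValue();
    }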
Bug: T285895
Change-Id: I7dcd9659be77746dc2a0c4eeae2319887936b555
…without making the topic subscriptions feature available in user preferences.
Follow-up to these commits, which added these checks in ad-hoc ways:
* 9420f22e9d
* f3422f40a6
* 23a490deca
* a555db7892
Bug: T284491
Change-Id: If2e3fb1e06d1cc489fbca14796ed77c83bb52991
If the revision from which we generated the notification has been
deleted, we shouldn't include the content snippet, nor the direct link
to the comment (because the fragment ID is generated from the content).
This matches how Echo handles mention notifications.
Change-Id: Ica939f3a4efd39d0c295511d58280d3f9d584129
As it happens, most of Echo does not actually parse this message,
but it is for some reason parsed in HTML email notifications.
Change-Id: I414cd242d9bcc4d8b5a1c2a2a71be9e5f00ea8be
We don't display [subscribe] buttons on your user talk page,
but the API still allows those subscriptions.
Use the same approach as for mentions to ensure this doesn't cause
duplicate notifications.
Remove some code in SubscribedNewCommentPresentationModel,
now guaranteed to be unused.
Change-Id: I99a276a48d8562552ed2c54cc0323e8e428845fd
Otherwise, the global context is used (RequestContext::getMain()),
which is undesirable when you're building a Rube Goldbergian
contraption and are already inside an internal action API request
with a fake context.
Change-Id: I01daf8dc70b5751bc1e157fe598988cd5d3219e5
Per Manuel Arostegui in T263817#7033384. The limit is 5000.
(I picked it arbitrarily, there's no real rationale for it.)
Also log a warning when any user reaches half of the limit,
so that we might make a decision about changing this mechanism
before it starts affecting users. Maybe at that time we'll
have data to show that it's safe to remove the limit.
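Hypothetical sketch of the warning half (names and message are
illustrative):

    $limit = 5000;
    if ( $count >= $limit / 2 ) {
        \MediaWiki\Logger\LoggerFactory::getInstance( 'DiscussionTools' )->warning(
            'User has {count} topic subscriptions, approaching the limit of {limit}',
            [ 'count' => $count, 'limit' => $limit ]
        );
    }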
Bug: T263817
Change-Id: I18a8ee0ad7383759229c5721d5253fb591457d4d
Using `updateCacheExpiry()` in this way (sketched after the list below)
appears to be an established practice, with other uses in WMF
production such as:
- CategoryTree extension:
custom cache expiry for pages with `<categorytree>`.
- RSS extension:
custom cache expiry for pages with `<rss>`.
- intersection extension:
custom cache expiry for pages with `<DynamicPageList>`.
- Math extension:
custom cache expiry if `<math>` failed.
- Wikibase extension, Flow extension:
no caching for certain namespaces or content types.
- Graph extension, Kartographer extension:
via onParserAfterParse hook, no caching if on preview.
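The call itself is a one-liner; for example (the expiry value is
illustrative):

    // Cap how long this render may stay in the parser cache (seconds);
    // updateCacheExpiry() only ever lowers the expiry, never raises it.
    $parserOutput->updateCacheExpiry( 60 * 60 * 24 );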
Bug: T280605
Change-Id: Iea41ab8599ffae4622c97d682258b1b64eaf9ba2
Previously we relied on the NamespaceInfo check below to reject
special pages, but after commit 07d885248bc54bdc0f12d9745916c794d45ec81c
in MediaWiki core, PageProps throws an exception when called with a
special page.
Bug: T281180
Depends-On: I32c94107fde96b9d6344c77b621be9b3b9b7faaf
Change-Id: I0fd893c63a6e92f6c84e7aa92270852e1137fcad
The issue occurred when replying to a comment consisting of multiple
list items, starting with a <dt> (instead of the expected <dd>), so
that the comment is considered to be unindented.
Modifier tried to add the reply directly inside the list (<dl>) rather
than inside the last list item (<dt>), which caused it to be confused
about indentation levels and try to un-indent more times than there
were indentations.
The simplest solution, given the existing code, is to add the reply
outside the list instead, in a new list. This results in a "list gap"
(<dl><dt>...</dt><dd>...</dd></dl><dl><dd>...</dd></dl>), but I think
it's acceptable for this rare case.
There are separate test cases for the old Parser and for Parsoid HTML,
because they parse the original wikitext differently (with the old
Parser producing HTML with a list gap too).
Bug: T279445
Change-Id: Ie0ee960e7090cf051ee547b480c980e9530eda51
The regexp needs to be non-greedy, otherwise it could swallow up a
large chunk of the page (up to the next comment).
I noticed this when adding tests for this code, in the 'unclosed-font'
test case (Ief9648b8805fadcc170c54b627eb669cc8b907b6).
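Illustration of the difference (the pattern is made up, not the actual
one):

    // Greedy: '.*' can run up to the last </small> far down the page
    preg_match( '/<small>(.*)<\/small>/s', $html, $m );
    // Non-greedy: '.*?' stops at the first closing tag
    preg_match( '/<small>(.*?)<\/small>/s', $html, $m );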
Change-Id: I5f67a9599b0cb07bdd53abeebac9ada221181b66
If curLevel or desiredLevel are calculated incorrectly, this loop
could never end.
In JS, something would throw an exception before going infinite, but
PHP is happy to trot along despite accessing properties of null and
attaching multiple children to a document node.
Bug: T279445
Change-Id: I1784f550ec3a23dcded4f2b1def97e51cb414b7b
We added it because the initial designs for the subscribe action were
much easier to implement like this, and topic "containers" (T269950)
would have required it.
However, the latest design of the subscribe action will not need it
(T279149), and topic containers are still very far away, so let's
remove it for now.
Bug: T280433
Change-Id: I21a23e9bea43f24d265750926fbd62b99038d3f1
The 'dtenable' parameter does not appear to take user-provided content
that requires language normalization. In general getRawVal() should be
used, or, if it's user input that needs normalization, getText(). Perhaps
one day we'll alias or deprecate getVal() (which is currently an odd
mid-way hybrid, closest to getText(); originally created for
EditPage.php textareas).
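Sketch, assuming the value is read off a WebRequest:

    // 'dtenable' is a plain flag, so no language normalisation is needed:
    $enabled = $request->getRawVal( 'dtenable' );
    // User-entered text that does need normalisation would use:
    $text = $request->getText( 'wpSummary' );  // parameter name illustrative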
Change-Id: I8364c84f8c4f700da6e208df2e87c29bf254d685
We can't allow it, because the required database tables may not exist
yet (T280082).
This is meant to be temporary until we complete DBA review and the
tables are created.
Bug: T280082
Change-Id: I8f947b779c6829763d3413931c6d354e6f7aee4d
As of 7ad6328223, we also use this data
to check whether comments exist on the page, not only whether they're
transcluded.
Follow-up to 42ce942c86.
Bug: T275821
Bug: T273413
Change-Id: I95eb85354e7b84cc10ab703d28315d0667696f4c
The existing comment IDs can't be used to find the same comment on
a different revision or page (when it's transcluded), because they
depend on the comment's parent and its position on the page.
Comment names depend only on the author and timestamp. The trade-off
is that they can't distinguish comments posted within the same minute,
or in the same edit, so we will still need the IDs sometimes.
Prefer using comment names when replying, if they're not ambiguous.
This fixes T273413 and T275821.
Heading names depend on the author and timestamp of the oldest comment.
This way we don't have to detect changes to the heading text, but we
can't distinguish headings without any comments.
Bug: T274685
Bug: T273413
Bug: T275821
Change-Id: Id85c50ba38d1e532cec106708c077b908a3fcd49
Longer, but it follows the style guide and is less likely to conflict.
We need to account for init classes in the cache being around for
a while.
Change-Id: I738bc93393850db320fdbda2b003ca8ac40556da
Now it detects signatures generated by en.wp's {{Undated}} template,
and signatures of people who do weird stuff to the timestamps.
Bug: T275938
Change-Id: I27b07f6786ca5433a3c02a5fe68e4716d41401bb
In some cases it would return the parent node, instead of the
siblings it was supposed to return.
It's a private method only called by getFullyCoveredSiblings(), and
that method had a bug that cancelled out this one, so everything
worked correctly. But I want to use it elsewhere now and ran into it.
Change-Id: Ic12f007d57a8502a1bea5f0af17b29e9d59093d6
The horrendous 11-line if() condition did not correctly handle
signatures wrapped in inline formatting markup, like <small>.
Instead, implement this logic in the code for skipping to the end
of a paragraph, which didn't exist yet when that condition was
added, but seems like a much better place to check this now.
Bug: T275934
Change-Id: I5cccff889b5e15b5f8fde0538bf4bccb22e762cf
We may need this for topic subscriptions, and it seems
generally useful to include in the API response anyway.
Change-Id: If9522dc0c79a9a9ffb3a80f83fb17bf3c9399d6d
This code expected $container->firstChild to be a
<div class="mw-parser-output">, but that element is not present
when we're running on HTML to be saved in parser cache.
We ended up inserting the marker inside whatever node was the
first on the page, and if it was a <style> element, both our
marker and the styles would be lost when serializing, like in
6c7a0ca9a2.
When we're running on final HTML, the marker will now be outside
of <div class="mw-parser-output">, but that seems to be fine. Only
early versions of I4e60fdbc098c1a74757d6e60fec6bcf8e5db37c1 had
problems with that (see comments on patchset 41), but it works now.
The added test case also covers the fix for T274709.
Bug: T275440
Change-Id: I38d45dd8686919be51e1d307ded12b0afe185eb5
* HookUtils::FEATURES lists all features
* CommentFormatter::USE_WITH_FEATURES lists all features
which require the comment formatter
Change-Id: Idbbe8bdd910b9c7b23c7fee76af7bb7ee13c2759
Going forward this will allow us to remove the parser
cache split, and toggle features just using CSS.
The CSS will be modified in a later commit to give the
anon caches time to clear.
Bug: T273072
Change-Id: I83c84b8bc63e1881e07b49acd8499b811adfccd4
I found this error in our logstash. I was not able to find an
existing Phabricator ticket.
Note how line #348 extracts the last element from the
$siblings array. It uses the function end() there, which
returns false in case the array is empty. $siblings[0] can't
do this, and instead raises an error.
An alternative is to use reset(), which can return false as
well. But that's not really better, and especially not more
readable, I would argue.
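For illustration (not the actual fix):

    $siblings = [];
    $last = end( $siblings );      // false on an empty array
    $first = $siblings[0] ?? null; // bare $siblings[0] would raise an
                                   // "undefined offset" notice instead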
Change-Id: Ic90cd2392ede15078ba0d5b4d67b8dc5d05f9bf7
Yes, this is still needed; removing it causes failures in tests
(and the old outputs look better).
Change-Id: I5bcedb0295a1f0ac4f6e51eaa9a9e072d8236f3c
Top-level comments that start or end with a list (inconsistent
indentation) would not have triggered the logic for detecting
wrappers.
Bug: T273692
Change-Id: Idcb4eed73e391f5f86eca2eb05cb3cea0d86f30a
* Ignore rendering-transparent nodes between discussion comments.
* Improve isRenderingTransparentNode() so that <link> nodes
representing TemplateStyles are not considered transparent,
otherwise this would undo ae920b831f.
Using a regexp from Parsoid.
Bug: T272746
Change-Id: I0b3c3251156ba6c4826abf5ba44ea93f80ebc01d
Splits the cache on the reply links feature being enabled
for a particular user and title.
An additional check is done after parsing in case the user
has the feature enabled via query string or cookie.
Bug: T267404
Depends-On: I883a37fd67108243e7a20683b1a5d59fd0f6e39f
Change-Id: I3bc06ca7d4aea7f0fe39eef0e77ad88d1f9c1043
Add yet another tree walking utility: CommentUtils::linearWalk().
Unlike TreeWalker, it allows handling the beginnings and ends of nodes
separately – kind of like parsing an XML token stream, or kind of like
VisualEditor's linear model.
(Add unit tests for this utility. The simple.html test case is copied
from [VisualEditor/VisualEditor]/demos/ve/pages/simple.html.)
Use this utility to stop skipping when we reach either a closing or
opening block node tag. Previously we'd skip over such tags inside
nested "transparent" nodes (like <a>, <del>, or apparently <font>).
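A minimal sketch of the idea (not the actual CommentUtils
implementation):

    function linearWalkSketch( DOMNode $root, callable $callback ): void {
        $node = $root->firstChild;
        while ( $node ) {
            if ( $callback( 'enter', $node ) ) {
                return; // a truthy return stops the walk
            }
            if ( $node->firstChild ) {
                $node = $node->firstChild;
                continue;
            }
            // Leave this node, then keep leaving ancestors that have no
            // next sibling (this is the "close tag" half of the stream).
            while ( $node !== $root && !$node->nextSibling ) {
                if ( $callback( 'leave', $node ) ) {
                    return;
                }
                $node = $node->parentNode;
            }
            if ( $node === $root ) {
                return;
            }
            if ( $callback( 'leave', $node ) ) {
                return;
            }
            $node = $node->nextSibling;
        }
    }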
Bug: T271385
Change-Id: I201a942eb3a56335e84d94e150ec2c33f8b4f4e0
The tool may go in and out of beta as new features
are released/graduated to opt-out. Users should only
need to opt in to the beta feature once to get all
future sub-features.
Bug: T272071
Change-Id: If6834b7fc07fc7e84757dc5fdcea814cd0d65936
Don't assume a feature is available because the code has
loaded and the user option is set. Export the logic from
Hooks.php to the client.
Change-Id: Ica0e58de7ed0d59e3b09645193eb2b691ae41c39
If DiscussionToolsABTest is enabled (set to `all` or a feature), logged
in users who have never used the tool before will be assigned to an a/b
test bucket. If they're in the test bucket, they get the feature
enabled.
If they manually set their beta feature preference, we don't override
that but do maintain their bucket for logging purposes.
Bug: T268191
Change-Id: I9c4d60e9f9aaef11afa7f8661b9c49130dde3ffa
1. Extend the JS modifier to allow adding top-level comments
(that is, replies to headings). PHP modifier doesn't do this
because we'll save the changes using paction=addtopic instead.
2. Subclass CommentController to allow adding a new heading and a
top-level comment underneath it at the same time.
3. A lot of ugly code in ReplyWidget to customize the interface
for this case. Much of it should probably be moved to
CommentController/NewTopicController.
Bug: T267595
Change-Id: I9c707bb7f7aae1b92c72fb4dee436490f8c8409b
Allows for multiple features in the near future.
Separate availability and enabled.
Separate User/Title/Output checks.
Change-Id: I454bd8407675749d93ff3d2b4c5d624b433204db
As a result of 0fc71f60cd, "empty" text
nodes (containing only whitespace) at the end of the comment may be
inside the comment's range, and trying to ignore them caused the
ranges not to match and the frame not to be detected.
Now the code works whether they're inside the comment's range or not.
Add a test case for wrapped discussion comments with HTML comments and
with whitespace.
Bug: T250126
Bug: T268407
Change-Id: I2217ff5a635fd1c9c9e803f46795b1bfb3d17535
Now we don't require the ParserOutput to be available.
As a result, we now check the flag on the latest revision of the page,
rather than the one being viewed.
Change-Id: Id77a332643cb8ad95afc5cec6713fa0a3636a5ce
While working on T270009, I noticed that <style> and <link> nodes
are treated differently, which seemed weird. Rewrite this again,
hopefully this is the last time.
The changed test cases also involve <area> and <input> nodes,
and the new results make more sense to me.
Bug: T264116
Change-Id: I3af90c84768a4b3dc53446927f4dba6f72175a2f
We've recently decided that we want to "extend" comments until
the end of the paragraph (e36dc8e78a,
d0ae6c4e44).
However, we still had this special case that did the opposite: it
ensured that if a comment ended in the middle of a text node, the
comment would not be extended to the end of the node. Remove it.
Note the change in the test file signatures-funny-formattedreply.html,
which actually covered this case specifically.
Change-Id: Id1384bb0c6e1a5f0c70f55efcb4caa240f230f07
The end marker is skipped forward until an open or close
block tag is reached. In tree traversal terms this means
moving either to the next sibling, or the parent (to skip
over close tags).
Bug: T256033
Change-Id: Iaa2c588698790d576ac4f9ecc126f58a082ef6b3
The general rule is that comments start after their preceding
thread item, but when that is a heading we should skip past
the entire <h[1-6]> node to avoid making section edit links
part of the first comment.
Bug: T267988
Change-Id: Ia7f1b27e0a69a9aab7c7da743bf8549479304096
As CommentFormatter no longer needs HTMLFormatter, remove
the inheritance and make addReplyLinks a static method.
Testing locally this is marginally slower, going from 2.55s
to 2.9s for the CommentFormatterTest case.
Bug: T266317
Bug: T267973
Change-Id: If69749cae678a1647a138d782a32032189f55cec
A TreeWalker ends up walking potentially every single subsequent
node in the document looking for a target node. Instead use upwards
traversal to find a common ancestor, then sibling traversal to
compare document order.
This makes calling cloneContents on every comment on a 300k talk page
significantly faster, going from >30s to 500ms locally.
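Roughly, the approach is (a sketch, not the exact CommentUtils code):

    // Does $a come before $b in document order?
    function comesBeforeSketch( DOMNode $a, DOMNode $b ): bool {
        // Build ancestor chains from the root down to each node.
        $chainA = [];
        for ( $n = $a; $n; $n = $n->parentNode ) {
            array_unshift( $chainA, $n );
        }
        $chainB = [];
        for ( $n = $b; $n; $n = $n->parentNode ) {
            array_unshift( $chainB, $n );
        }
        // Find where the chains diverge (the common ancestor is at $i - 1).
        $i = 0;
        while ( isset( $chainA[$i], $chainB[$i] ) && $chainA[$i] === $chainB[$i] ) {
            $i++;
        }
        if ( !isset( $chainA[$i] ) ) {
            // $a is $b itself or an ancestor of $b, so it comes first.
            return $a !== $b;
        }
        if ( !isset( $chainB[$i] ) ) {
            // $b is an ancestor of $a, so it comes first.
            return false;
        }
        // The two diverging nodes are siblings: walk nextSibling within
        // that one parent instead of walking the whole document.
        for ( $s = $chainA[$i]->nextSibling; $s; $s = $s->nextSibling ) {
            if ( $s === $chainB[$i] ) {
                return true;
            }
        }
        return false;
    }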
Change-Id: I28a2b8c11d4098d9bc44d19b98e19ccc02273098
Ideally the edit autosummary would be generated in the same
way as in the old wikitext editor: from the wikitext of the
heading. But on the JS side, we don't have access to the
wikitext, or to the PHP method that generates autosummaries.
This might seem crazy at first, but ultimately the point of
the autosummaries is to link to the section heading by its
'id' attribute, so it is perfectly reliable.
Doing it this way depends on $wgFragmentMode being set to
[ 'html5', 'legacy' ] or [ 'html5' ], otherwise the escaped IDs
are super garbled (particularly in non-Latin-alphabet languages)
and can't be unescaped reliably. Conveniently, we already
require that since 9ee0fd69f5.
Bug: T264561
Bug: T266725
Change-Id: I7d35098d672d0edb50d49e22de1686d5cc83b60e
The condition was wrong: it could return either an element child with
.mw-headline, or a non-element child.
Bug: T267284
Change-Id: I28cda22ee8c5fe4a3259621adddd647b31291703
Internal PHP errors (such as "Call to undefined method…") are not Exceptions.
Follow-up to e18a0f3dcd.
Bug: T267035
Change-Id: I3cbf2b6b0d1d8a97cdac9791ec4f7b2ec807c7e5
After recent changes allowing ThreadItems to have IDs, they can now
also have warnings about duplicate IDs.
Bug: T267035
Change-Id: If3edfe34e6e29741e29fac8946a3c88badc4ab7f
The following sniffs are failing and were disabled:
* MediaWiki.Commenting.PropertyDocumentation.MissingDocumentationPrivate
* MediaWiki.Commenting.PropertyDocumentation.MissingDocumentationProtected
* MediaWiki.Commenting.PropertyDocumentation.MissingDocumentationPublic
Additional changes:
* Dropped .inc files from .phpcs.xml (T200956).
Change-Id: I340d6b573e9ae2a99085fb19a705fcf567b03f92
Use the same logic for marking ranges in the document, and ensure
that the heading range does not include section edit links or
section numberings.
Change-Id: I782caafc34fee2a822b0a17b24dd6b9528202eca
If A follows B, then we can assume that B does not follow A.
Calling the function recursively computes that twice,
we can instead make some simple changes to "invert" the result.
Change-Id: I709aca7cb997dd2fe3980468a8c6bde6f366fb5b
It's an expensive method, and we previously called it for
every child of the common ancestor, completely unnecessarily.
These changes follow from two observations:
* If there is a $firstPartiallyContainedChild, then the
first fully contained child must follow it; similarly,
if there is a $lastPartiallyContainedChild, then the
last fully contained child must precede it.
* All nodes between the first and last fully contained
children are also fully contained.
Maybe it can be made cleverer still, but it's a lot better.
Change-Id: I4e596c62274c2c0be115f0ddec42629115b430a4
Skipping them could result in incorrect handling when RESTBase HTML is
outdated.
When a result for a given comment is not found, display an error
instead of assuming it is not transcluded.
Bug: T262065
Change-Id: I14a7a0a25d5181b5c49bd5677f0c002dce5a3cb9
To avoid old threads re-appearing on popular pages when someone
uses a vague title (e.g. dozens of threads titled "question" on
[[Wikipedia:Help desk]]: https://w.wiki/fbN), include the oldest
timestamp in the thread (i.e. date the thread was started) in the
heading ID.
Bug: T264478
Change-Id: If918bfd5e025248923d1939bc86916697ead95a0
Sequential numbers aren't great because they change when an earlier
comment is archived. Parent comment/heading IDs should change less
often.
This also makes much more sense for disambiguating subsections,
e.g. a dozen identical ===Votes=== sections for a dozen proposals.
Bug: T264478
Change-Id: I466454984fd919ebef35f2b37ddb5d86dc842996
Our threads now also contain all replies to their sub-threads.
This is similar to how sections work in MediaWiki, where the parent
section also contains the content of all the lower-level sections.
We're going to need this for notifications about replies in a thread.
Bug: T264478
Change-Id: I241fc58e2088a7555942824b0f184ed21e3a8b6f
Previously, only comments could have IDs, because we only needed IDs
for replying. But we might also use them for notifications soon.
Bug: T264478
Change-Id: I1bcad02bf17ab54bc5028a959543c10f0430836b
The output of CommentFormatter::addReplyLinks() and consequently
ThreadItem::jsonSerialize() can end up in the HTTP cache (Varnish) on
Wikimedia wikis. We need to consider that when changing that code.
Introduce a concept of legacy ID (generated by the older algorithm
after it changes), add some placeholder code that will generate them
in the future, and update some code to find comments by either normal
or legacy IDs.
Add dire comments in a bunch of places (as if that ever helps).
Bug: T264478
Change-Id: I4368f366800ab21b8b184b09378037614fdecd33
"This modifies the original objects…" – I feel like this is obvious
now, but maybe it wasn't so obvious when this code was structured
differently before a2431fe006. Also,
it refers to a variable that doesn't exist.
"FIXME this will clone the reply…" – No, actually, it will not.
It would if replies were associative arrays, but they are objects,
and have always been, ever since the PHP parser was merged in
7b7a2cd69c. Maybe they were arrays
once in Roan's mind before he pushed that for review.
Change-Id: I1348e111699fdbde99cd1f9ef45d8f465f7391b0
We can check whether a node is a child of another node directly,
without iterating over all its children.
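In other words (minimal illustration):

    // Instead of scanning $parent->childNodes for $node:
    $isChild = ( $node->parentNode === $parent );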
Change-Id: I3a26df89365bf765348d96b477c983ec9c4e43fe
* Add the preference
* Only display it when the reply tool is enabled
* Use it when opening the reply tool
* Save it when the menu is toggled from the reply tool interface
Bug: T261539
Change-Id: Icb8fa6b3f1e9a3644669f21b08f34ea8c175f2f9
This is not necessary, and never has been. This variable contains an
object and it's never assigned to.
Instead, the reference creates hard-to-debug bugs (I've just spent
an hour debugging one). When the variable name is reused later in
the same function as the loop variable of another foreach() loop
(such as in If918bfd5e0), the result is that the last entry in
$this->threadItems gets overwritten with the last entry from the other array.
I was questioning everything I know about variables until I noticed.
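The underlying PHP gotcha, illustrated (not the actual code):

    $items = [ 'a', 'b', 'c' ];
    foreach ( $items as &$item ) {
        // first loop, by reference
    }
    // $item is still a reference to $items[2] here, so reusing the name:
    $others = [ 'x', 'y', 'z' ];
    foreach ( $others as $item ) {
        // keeps overwriting $items[2] on every iteration
    }
    // $items is now [ 'a', 'b', 'z' ]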
Change-Id: Ibb57f915b39dd4d6d2e744903f9ecadd67b1f52d
Also, add tests covering this and the previous bug fixes in this code
(T259818, T261706).
Note that the test data added in tests/cases/ doesn't exactly match
the entire configuration of the wiki, only the parts we want to cover.
This is unlike the data in tests/data/, which was literally copied
from the relevant wikis, and which is used as input for other tests.
Bug: T265500
Change-Id: I29a59a5952f6dc9fb5910434bb6bcc9dcdaa01a9
When a timestamp directly followed a `<div>…</div>` tag (or perhaps
some other wrapper containing lots of content), we would detect the
username from the earliest links in the wrapper (furthest from the
timestamp), rather than the latest links (closest to the timestamp).
Bug: T262573
Change-Id: Id16449a86a731b13dc79846bb30ecf6554e26f1d
The wikitext parser outputs `<p><br></p>` for empty paragraphs, so we
need to ignore `<br>` tags when searching for an "interesting" node
that marks the beginning of a comment. Otherwise the empty paragraphs
mess up the detection of indentation levels.
Bug: T264116
Change-Id: I84a97ab577baa7336b78935ccdc48041ecfc231a
* Export parser data (date format, digits, timezone names, and
messages for weekday/month names) converted to language variants
* Update the parsers to try matching using every variant, in case
the page is displayed in non-default variant (and to avoid
problems with incomplete variant conversion)
Bug: T259818
Change-Id: I04d73992cd31ce06fa79f87df0c0a53d7efc3c58
Avoids using the deprecated $noSeparators parameter to Language::formatNum
in favor of Language::formatNumNoSeparators, which has been around since
MW 1.21.
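That is, roughly:

    // Before (second parameter deprecated):
    $lang->formatNum( $number, true );
    // After:
    $lang->formatNumNoSeparators( $number );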
Change-Id: I012434d5f6c749fec45a6c160e8d5d03686192e9
PHP was counting UTF-8 bytes, JS was counting UTF-16 code units.
Both should have been counting codepoints (although it doesn't
really matter as long as they both count the same things).
I noticed the issue after adding some tests using the Cyrillic
script, when one case had different results in PHP and JS:
Id25b537fecd789640c209ff7f30e777455a3aece.
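For example, on the PHP side:

    $str = 'Привет';
    strlen( $str );              // 12 – UTF-8 bytes
    mb_strlen( $str, 'UTF-8' );  // 6  – codepoints, what both sides should count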
Change-Id: Ic31240678f71ba48e6ec202126bf490cea12bb66
Move the code so that we check for "?title=" query parameter first,
because we don't handle this right in the other code path.
Use parse_url() instead of wfParseUrl() because the latter doesn't
accept relative URLs, and we don't care about the other differences.
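Roughly (the example URL is illustrative):

    // parse_url() accepts relative URLs, unlike wfParseUrl():
    $query = parse_url( '/w/index.php?title=Talk:Example&action=edit', PHP_URL_QUERY );
    parse_str( $query, $params );
    // $params['title'] === 'Talk:Example'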
Bug: T261711
Depends-On: I4da952876e1c3d1a41d06b51f7e26015ff5e34d7
Change-Id: I70fac2b41befd782b0a47a4f726ae748dc0f775d
The PHP code incorrectly assumed that the digits are single-byte in
UTF-8, which is never the case (except for 0-9).
The JS code worked correctly because it uses UTF-16 strings, so the
bug would only affect non-BMP digits there. This was noted in a TODO
comment, but we overlooked it when reimplementing in PHP.
Instead of a string of 10 characters, use an array of 10
single-character strings.
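For illustration, splitting into codepoints rather than indexing bytes:

    // Arabic-Indic digits are two bytes each in UTF-8, so a string
    // offset like $digits[3] would return half a character; as an
    // array of single-character strings it works:
    $digits = preg_split( '//u', '٠١٢٣٤٥٦٧٨٩', -1, PREG_SPLIT_NO_EMPTY );
    // $digits[3] === '٣'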
Bug: T261706
Change-Id: Ic5421382474c88f003424799c53ff473d99cce92
As we do in VE, extract the revid from the document.
Unlike in VE we don't need to throw an error if there is
a mis-match, as we will likely be able to make the edit anyway.
Just use the ID we got from the document.
Log a warning if there is ever an ID mis-match so we
can evaluate if this check is actually needed.
Change-Id: I94c37980524a9faabac49495903a5262387af562
When a comment ended before the end of a paragraph, the next
comment would begin right there in the middle of the paragraph.
This could result in the detected indentation level of that
comment being incorrect, and replies being inserted in wrong
places, as seen in the 'signatures-funny' test case.
The code moved to the parser was previously repeated twice in
addListItem() and addReplyLink(), which should have been a hint
that something isn't quite right.
Also, fix the code guarding against overlapping signatures,
now that signatures may not be at the end of a comment.
Bug: T260855
Change-Id: Ic26a87642f8a15d5de2f7073d4d8176b299c7f94
Causes page corruption, in a new way we haven't seen before.
* Revert "Move page updating logic to controller.js"
This reverts commit 54fdc6de06.
* Revert "ReplyWidget: Move clear methods from #teardown to #clear"
This reverts commit 9b811a94e0.
* Revert "ApiDiscussionToolsEdit: Do not pass 'basetimestamp'"
This reverts commit 7de5938a6f.
* Revert "Use DOMCompat::getOuterHTML instead of doc->saveHTML()"
This reverts commit 7b2448d2f0.
* Revert "CommentController: Remove remains of client-side edit conflict handling"
This reverts commit 2d038af705.
* Revert "Restore error message for when comment is deleted while replying"
This reverts commit 655c0526d6.
* Revert "Use transcluded from API to avoid ever fetching Parsoid DOM in client"
This reverts commit 9d0fc184fe.
* Revert "Create a 'transcludedfrom' API endpoint"
This reverts commit 5d8f3b9051.
* Revert "Edit API for replies"
This reverts commit 8829a1a412.
Bug: T259855
Change-Id: I6419408c6194ec0afa6b8ee604b12c1a24c6ac7b
Previously, the parser would output offsets that don't exist in their
containers, because we were pretending that entities are parts of
their neighboring text nodes.
Turns out it's much easier to do it right when going backwards.
Change-Id: I9bccca2d403f1a976ae517449989170cdd99721e
Something terrible has happened to this function… It seems that I have
brutalized it when rebasing 092cfd6075.
Change-Id: I12d75c69d15645112563a7bc345209b23b54cb3e
Only 'baserevid' should be required. That's what we used before commit
8829a1a412, since switching from
'basetimestamp' in commit 4e135c7f07 in
order to better handle edit conflicts with yourself. That fix seems to
have regressed, so let's try this and see if it helps.
Bug: T252558
Change-Id: Iff5911384f3320b6e7f97a1fa34e82ecd4b44fb3
The latter results in lots of extra HTML entity encoding.
The former is built by the Parsing team and appears to result
in no unexpected changes elsewhere in the document.
As Parsoid's selser relies on HTML fragments being byte-for-byte
equal, these changes were resulting in wikitext normalisations
in untouched parts of the document ("dirty diffs").
Bug: T259855
Change-Id: Ib3cb605911e690ec3e8c2f9df25fd1a2e2849d7e
This reverts commit 96953647c3.
* Re-apply "Edit API for replies"
This applies commit 8829a1a412.
* Re-apply "Create a 'transcludedfrom' API endpoint"
This applies commit 5d8f3b9051.
* Re-apply "Use transcluded from API to avoid ever fetching Parsoid DOM in client"
This applies commit 9d0fc184fe.
* Re-apply "Restore error message for when comment is deleted while replying"
This applies commit 655c0526d6.
Change-Id: Id20d21899f87464636022aa0683f8c03e0060117
Causes page corruption.
* Revert "Restore error message for when comment is deleted while replying"
This reverts commit 655c0526d6.
* Revert "Use transcluded from API to avoid ever fetching Parsoid DOM in client"
This reverts commit 9d0fc184fe.
* Revert "Create a 'transcludedfrom' API endpoint"
This reverts commit 5d8f3b9051.
* Revert "Edit API for replies"
This reverts commit 8829a1a412.
Bug: T259855
Change-Id: I98036f14dd900b51f20e98696e31b9b618eceee1
When adding a reply, we take a node at the end of the previous comment,
compare that comment's indentation level to the expected indentation level
of the reply, and add (or remove) that number of wrapper lists.
The existing code did not consider that comments may have lists within
them, and so the indentation of that node may not match the indentation
of the comment.
Bug: T252702
Change-Id: Icc5ff19783d2b213bff99f283cb0599a8b5c1ab4
Previously we preferred that, but used '*' (<ul><li>) when the parent
comment or the previous reply also used it.
Bug: T252708
Change-Id: I3abf606da6693905764f1be745fad999fdf57fbe
* Remove the existing approach for detecting signatures that only
worked in source mode; remove autoSignWikitext()
* Use the same approach for auto-signing in source mode as we have
already used in visual
* In both modes, detect whether the user has already typed a signature
at the end of their comment in the modifier, and if so, don't add a
signature
* Add test cases for the detection
Bug: T255738
Change-Id: I791d3035cb1ffc33ce3966d4617a25d08700c35b
* Pass rootNode to the constructor
* Rename getters to match CommentItem/HeadingItem/ThreadItem
value classes.
* Always build the thread tree so CommentItems always have
an ID and replies/parent.
Change-Id: I508be9534de59016ff806e3d84edcbb1c76cb0c6
Instead of doing a separate tree walk and finding all timestamps
separately, make it part of the getComments tree walk, and find
timestamps one at a time.
Change-Id: I47f466eaf228504faa189fd99e07493bc7f022cd
This is similar to what the JS version does.
The TreeWalker and NodeFilter classes are adapted from
https://github.com/Krinkle/dom-TreeWalker-polyfill
(MIT license).
This makes #getComments twice as fast on en-big-oldparser.html
Change-Id: I2441f33e6e7bad753ac830d277e6a2e81ee8c93d
* Move modifier#getFullyCoveredWrapper to utils
* Use that method to find the node where we start searching for
template wrappers, rather than using endContainer
Bug: T252058
Change-Id: I55de58102f3468fce01290bd413a7fdc96d322d6
When there is a wrapper element whose range matches the range of
a comment, any replies will now be added outside of that wrapper,
instead of directly after the comment (inside the wrapper).
Bug: T250126
Change-Id: I6b42c4db019ae998e91eebd324f9cbd2aa791b4f
It was useful when I was debugging those parts of the code, but now
it's usually annoying.
The warnings can still sometimes be useful for understanding how the
tool parses some discussion, though. To keep that functionality, add
displaying warnings for each comment in the debug mode.
Change-Id: I2d218a8a394f179bcc0990ff988a0567c275ccf2
Follow-up to Ic1438d516e223db462cb227f6668e856672f538c.
Minor corrections and comment improvements in PHP parser,
and "backporting" some changes to JS parser that I like.
Change-Id: I5e54121914ec6b323e556dd133bcb71b3aefbb61
This method shouldn't be required on the server. Leave comments
relating to it in addListItem so JS & PHP can be kept in sync.
Change-Id: I849fac660faf6e750272c20776f96b9250f96b1b
In JS, strings are internally encoded as UTF-16, and properties like
.length return values in UTF-16 code units.
In PHP, strings are internally encoded as UTF-8, and we have the
option of using methods that return bytes like strlen() or UTF-8 code
units like mb_strlen().
However, the values produced by preg_match( …, PREG_OFFSET_CAPTURE )
are in bytes, and there's nothing we can do about that. So let's use
bytes throughout; mixing the two types results in meaningless numbers.
Then in the test code, we have to calculate UTF-16 code unit offsets
based on the UTF-8 byte offsets.
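One way to do that conversion (a sketch, not necessarily the exact
test helper):

    function byteOffsetToUtf16Offset( string $text, int $byteOffset ): int {
        // Everything before the byte offset, re-encoded as UTF-16,
        // is two bytes per code unit.
        $prefix = substr( $text, 0, $byteOffset );
        return intdiv( strlen( mb_convert_encoding( $prefix, 'UTF-16LE', 'UTF-8' ) ), 2 );
    }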
We also have to copy the entire workaround for mw:Entity nodes… Maybe
the parser should be fixed to return the real nodes for ranges' ends
in this case.
Change-Id: I05804489d7de0d60be6e9f84e6a49a885e9fb870
It appears PHP's DOM library always uses CDATA nodes for the contents of
<style> tags, even if there is no such markup in the source HTML.
Change-Id: Id04b27086c5e7a0b016a3a440b2b4895d6b13c93
Profiling reveals that >87% of the run time of our test suite is spent
in this tiny method. Apparently, DOMNodeList::item() is extremely slow
(possibly it's linear time instead of constant time?).
Profiled using XDebug and KCacheGrind:
https://phabricator.wikimedia.org/F31815264
We can calculate the child's index in its parent by counting its
preceding siblings instead, which turns out to be much faster.
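Sketch of the faster index computation:

    function childIndexOf( DOMNode $node ): int {
        $i = 0;
        while ( ( $node = $node->previousSibling ) !== null ) {
            $i++;
        }
        return $i;
    }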
Before:
1. 275444ms to run DiscussionToolsCommentParserTest:testGetComments with data set #2
2. 12668ms to run DiscussionToolsCommentParserTest:testGetComments with data set #3
...
After:
1. 9545ms to run DiscussionToolsCommentParserTest:testGetComments with data set #2
2. 5549ms to run DiscussionToolsCommentParserTest:testGetComments with data set #3
...
That's still kind of slow but now it's bearable to run the test suite.
Change-Id: I49155f7aa2e231a9a20bf282cf6aaa28fc902e0b
* Not to be confused with the Parsing Team's
"Great Parser JS to PHP port of 2019"
Gasp as OR hacks are changed to null coalescing operators.
Applaud as variable declarations are dropped.
Cheer as parameters and return values are type-hinted.
Shudder as DomNodeLists have no indexOf method.
Moving discussion parsing to the server should allow
us to implement much cleaner APIs for commenting.
Bug: T252252
Co-authored-by: Ed Sanders <esanders@wikimedia.org>
Change-Id: Ic1438d516e223db462cb227f6668e856672f538c
When the user clicks a "Reply" link on a page that is affected by the
'fostered' lint error (indicating fostered content in the HTML
representation), display an error and refuse to edit it, as Parsoid's
transformations will damage the page content.
The error message includes a link to documentation about lint errors,
and a link to the editor that will highlight the error location.
Depends-On: I723ec766d1244d117f8d624440026fe5af0d3403
Bug: T246481
Change-Id: Ic60cb58f98d10dc9b113469e5d3bbfb2d2b0564f
By default, DiscussionTools loads on all talk pages when the extension
is installed. This can now be disabled by setting the configuration
option `$wgDiscussionToolsEnable=false`.
To test DiscussionTools, one can now use the query parameter
`?dtenable=1`, which allows it to be loaded on any wikitext page
(overriding the config option).
Bug: T243621
Change-Id: I3d5a9cc9a4183fb6951f05c557b1d42735a9df7c
This sets up the tags:
* discussiontools
* discussiontools-reply
* discussiontools-edit (not yet implemented)
* discussiontools-newsection (not yet implemented)
The tags are flagged as user-addable, because otherwise they can't be
passed through to the VE API (at least, not without editing it so that
it explicitly knows about them, which seems like a strange
interdependency). It's assumed that letting users who know about the
tags add them to random changes via action=editchangetags would be
(a) the pettiest and most inconsequential vandalism possible, and
(b) unlikely to happen.
This relies upon I2c1d0f8d69bc03e5c1877c790247e165f160e966 in
VisualEditor to not also tag the edits with `visualeditor`.
Bug: T242184
Change-Id: I4e5e26afdd52279df242e1912f073b415b812c3b
Add .phpcs.xml to run codesniffer against the extension.
The entry in composer.json is already there.
Change-Id: If9de7e9b1b9439dc4d5fa2ba521883d13deaa739
Add the Moment Timezone library. Add a script for managing libraries,
like in MediaWiki core.
Depends-On: I9a59a6ad01850b30327e4215f2be61b8d1c41277
Change-Id: I64bc79e7d0ccdf42b006e5a225c8aa70ea5f4e15