* Follow up to f7594328
* Fixes the regression found in rt:
node bin/roundtrip-test.js --domain ru.wikipedia.org "Феодосия"
* Simple test case (do we really not have one!?):
test <ref>haha{{#tag:ref|ok}}</ref>
Change-Id: Ie0a53a8e885a6d94769034cce4ef432773635842
* The contents of "mw:dom-fragment-token"s were being serialized
after processing to the DOM and stored on the token to be
shuttled through tree building, only to be reparsed in the
unpacking phase.
* Here we store a pointer to the contents in a fragment map.
* Doing less work yields a performance improvement, though only
a slight one, since the content still needs to be adopted
by the main document.
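A minimal model of the change (names are illustrative; the real map and
token plumbing live in Parsoid's pipeline code): keep the fragment's DOM
reachable through a map keyed by an id, instead of round-tripping it
through an HTML string.
  var domino = require('domino');
  // Hypothetical fragment map; Parsoid keeps something equivalent on the environment.
  var fragmentMap = new Map();
  var fragId = 'mwf0';
  // A sub-pipeline produced this fragment DOM; store a pointer to its nodes...
  var fragBody = domino.createDocument('<sup class="mw-ref">[1]</sup>').body;
  fragmentMap.set(fragId, Array.from(fragBody.childNodes));
  // ...and later, in the unpacking phase, adopt them into the main document
  // instead of reparsing a serialized string.
  var mainDoc = domino.createDocument('<p>text<span id="' + fragId + '"></span></p>');
  var marker = mainDoc.getElementById(fragId);
  fragmentMap.get(fragId).forEach(function(n) {
    marker.parentNode.insertBefore(mainDoc.adoptNode(n), marker);
  });
  marker.parentNode.removeChild(marker);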
Change-Id: Ia0aec7de469101a2a93342ea89daac0f0e73cf1a
* Replace ref markers instead of waiting for cleanup to remove them
since that doesn't happen on embedded html.
Change-Id: Ied746f025a0ac7f14d922aff6640fef3aa4b55b0
* Also, change the input parameter of buildDOMFragmentTokens
to uniformly accept a <body>, rather than a doc or string,
so that we can pass it nodes from our dummy document.
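Roughly, the conversion now happens before the helper is called; a
hypothetical sketch of that normalization (the function name is made up):
  var domino = require('domino');
  // Callers hand buildDOMFragmentTokens a <body>; strings and documents are
  // converted up front, so nodes from a dummy document can be passed directly.
  function toBody(input) {
    if (typeof input === 'string') {
      return domino.createDocument(input).body;
    }
    return input.body || input;  // a document's body, or an existing body node
  }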
Change-Id: I4bb44573fe7203277d51e804a4a6423100a34f03
These calls force us to allocate a backing array for childNodes,
deoptimizing a linked list representation. Use the standard
Node#hasChildNodes() method instead, which can be efficiently
implemented without allocating a backing array.
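The pattern being replaced, roughly (a sketch, not the actual call sites):
  function hasContent(node) {
    // Before: forces domino to materialize a childNodes backing array
    // return node.childNodes.length > 0;
    // After: answered directly from the linked-list representation
    return node.hasChildNodes();
  }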
Change-Id: I1706bfa15263564bc981d689947835a3d0d4a68f
* Ports commit 04c3ad01 in core's Cite extension.
* Until $wgCiteResponsiveReferences is exported, a number of wikis
which have enabled this won't be getting the right default.
https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/InitialiseSettings.php#L14945-L14970
However, it looks like enwiki's {{reflist}} explicitly sets the
parameter either way, so not a bad start.
* The "ext.cite.styles" css resource is added from core's Cite
extension 05cb5cc1, since that's where the responsive css lives.
* The blacklist changes for existing tests are because we now only
serialize the children of the div wrappers. Those tests probably
deserve Parsoid-specific sections.
Depends-On: I2404999ab11b5cf7b740ae43696c4676ab1b6d22
Change-Id: I8f9277b3ecb253e0bee7ee55eef7af4935821527
* Lots of overrides here that we should either fix or be explicit
about wanting to diverge on, and maybe talk about upstreaming the change.
Change-Id: I927dd325e49ef72ccbe5e8b926f7984b82dd0f2e
* .eslintrc was generated from .jshintrc and .jscsrc using polyjuice
and some manual tweaking.
* Forced to follow the convention in eslint #4174
* However, in a follow up, we'll use wikimedia/eslint-config-node-services
Change-Id: I2500a6520a5c9f41d5333e937c151228aec88be0
* Without this, refs like <ref name=":0"> won't generate the same
links that the PHP parser generates.
* Updated an existing test to add the ":0" key, which requires escapeId
to be encoded properly.
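For reference, a rough sketch of the legacy escapeId behaviour being
matched (illustrative only; the real logic is ported from MediaWiki's
Sanitizer, and encodeURIComponent stands in for PHP's urlencode here,
which differs on a few characters): spaces become underscores, the rest
is percent-encoded with '%' turned into '.', keeping ':' readable.
  function escapeIdSketch(id) {
    return encodeURIComponent(id.replace(/ /g, '_'))
      .replace(/%3A/g, ':')
      .replace(/%/g, '.');
  }
  escapeIdSketch(':0');                // ":0"
  escapeIdSketch("13? It Can't Be!");  // "13.3F_It_Can't_Be!"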
Change-Id: I69e5f16ccf64bd1c9cf05bdea7a379e679d36b1a
* With Parsoid's base href pointing to the wiki, plain #-fragment
links won't resolve properly. Add the page title to the href
for the links to start resolving properly again.
* Updated parser tests accordingly.
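A minimal sketch of the fix (the helper name is made up): prefix the
current page title so the fragment resolves against the page rather than
the base href.
  function refHref(pageTitle, fragmentId) {
    // Before: '#' + fragmentId resolved against the <base href>, i.e. off-page
    return './' + pageTitle + '#' + fragmentId;
  }
  refHref('Феодосия', 'cite_note-1');  // "./Феодосия#cite_note-1"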
Change-Id: I280c41a0382bd2acd82cc586212695aa3b920171
* To support the #tag parser function.
* As an aside, #tag is nutty:
{{#tag:nowiki|test<ref>haha</ref>}}
Change-Id: I0ab93f6ae959b30a887e967952c904ef0400b189
* Updated HISTORY file based on Parsoid deployment logs and
added only the most pertinent entries.
Change-Id: If07bc75fcb09507cae30fc82d7d89b8d87a4c69b
* Use nestedRefsHTML.length to determine when we need a dataMw.body;
the regexps there were unnecessary.
* Pulled out of https://gerrit.wikimedia.org/r/#/c/264026/
Change-Id: I4602594e468322c8b6b8653eee33047ef9af9ebc
* FIXME: stop skipping jsapi tests
* DU.serializeNode is renamed to DU.toXML, to avoid confusion with
wikitext serialization (and a name clash when grepping).
* DU.serializeToXML is renamed to DU.ppToXML, to distinguish it both in
name and use from DU.toXML. This is a special xml serializer that's
meant to be used in Parsoid's DOM post-processing phase. It's aware
of a node's .dataobject and automatically transfers it to JSON-stringified
attributes before serializing the node (a minimal model is sketched
after this list).
* DU.ppToDOM is added as a helper for loading attributes when parsing
html, as in DU.parseHTML. It's the converse of DU.ppToXML.
* Diff markers are no longer stored as json stringified attributes on
nodes while serializing. Only when dom dumping, and even then only
on clones.
* Once T100856 is fixed, we can look into discarding data-parsoid from
html that's stored in data-mw (i.e. captions, references, etc.)
* Filed T133334 to note that some of the html we're stuffing in data-mw
needs further processing, since we're storing ref markers.
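A minimal model of the ppToXML behaviour described above (illustrative;
the key names under .dataobject are assumptions, and the real code lives
in DOMUtils):
  function transferDataObjects(node) {
    if (node.nodeType === 1 /* ELEMENT_NODE */ && node.dataobject) {
      if (node.dataobject.parsoid !== undefined) {
        node.setAttribute('data-parsoid', JSON.stringify(node.dataobject.parsoid));
      }
      if (node.dataobject.mw !== undefined) {
        node.setAttribute('data-mw', JSON.stringify(node.dataobject.mw));
      }
    }
    for (var c = node.firstChild; c !== null; c = c.nextSibling) {
      transferDataObjects(c);
    }
  }
  function ppToXMLSketch(node) {
    transferDataObjects(node);  // load .dataobject onto JSON-stringified attributes
    return node.outerHTML;      // stand-in for the plain DU.toXML(node)
  }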
Bug: T91700
Change-Id: Ia1a60951e6292b5eb073eedca0b79938094809d2
This parallels the autoload mechanism in mediawiki core. In fact, small
wikis can use the same extensions directory for both core and Parsoid.
We also export a very basic "extension API" for external extensions
and convert our current "native extensions" to use this to show how it
is to be done.
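Purely as an illustration of the shape (the actual fields are whatever the
exported extension API defines; see the converted native extensions such
as Cite for the real interface), an external extension module dropped into
the shared extensions directory might look like:
  'use strict';
  // Hypothetical external extension; handler wiring omitted on purpose.
  function MyExtension() {
    this.config = {
      tags: [
        { name: 'myext' /* handlers wired up per the extension API */ },
      ],
    };
  }
  module.exports = MyExtension;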
Bug: T133320
Change-Id: I8e05d5bfdff873f28a58dead68aaca0e4823cf32
* And use it where appropriate. This has the effect of reducing the
dom size because of the use of smartQuote (see all the parserTests /
blacklist changes).
* The serializeChildren name clashes with the one in the
serializeState, causing confusion and making grepping harder.
* Bonus: removed some dsr info in parserTests and fixed an html2wt typo
that prevented a test from running.
Change-Id: I287f1efd4afe06b158844e0de9f16494d4a89f93
* An example of an href from eswiki/Kate_Gosselin?oldid=90347467
is <a href="#cite_note-13.3F_It_Can't_Be!-3">
which domino should do a better job validating, instead of throwing
"Object [object Object] has no method '''".
Change-Id: Ie4bd702d7eee23ad021f2ad31b47cfbada947cd9
VisualEditor requires this info, and presumably other clients might
as well; this is, in reality, semantic information about wikitext.
The html2html failure is because:
* even when the references html has been autogenerated,
in non-rt-testing html2wt mode, we always generate the
<references /> tag.
* we moved the flag from data-parsoid to data-mw but ignore changes
to data-parsoid while comparing test output.
* So, in html -> wt mode, <references /> is always generated
=> data-mw.autoGenerated flag doesn't show up in the wt -> html
phase of the (html -> wt -> html) test
=> the html2html test fails.
Change-Id: I8e79f2a436a1ca276b9351228a3d8f02d7ebd0c6
* First pass moving around files into different directories.
* Renamed files to remove unnecessary prefixes or align the name
closer to what the file contains.
* Added temporary soft links to bin/parse.js and bin/roundtrip-test.js
in the tests/ directory since jenkins jobs seem to have hardcoded
refs to those paths.
* Deleted:
- a couple of stale scripts in tests/ that are no longer relevant.
- a couple of stale scripts in api/ that didn't look relevant.
- swagger spec since it was incomplete, stale, and unmaintained.
Change-Id: I97c30467b190b417eec9e750238704330ae91137
Inline images don't display their captions and don't have a
DOM structure for them. They are hidden away in the data-mw
attribute.
So far, we weren't handling <ref>s embedded in these inline
image captions -- both causing rendering diffs in the references
section and crashing the serializer when these nodes were
encountered.
This patch fixes that and also adds parser tests.
The html2html failure added to the blacklist is because of
about-id mismatches in the embedded html. We need smarter
normalization in parserTests.js to remove these false failures.
There is also a long comment about generalizing how this kind
of scenario is handled. Worth tackling that in a separate patch.
In order to simplify and generalize the ad hoc nature of the
trace output that is currently in place, I had been pondering
creating a DOMProcessor class with state to do that, and it
looks like something like that would help with this as well.
For this patch, I am leaving behind the special case handling.
Adding state and generalizing how embedded HTML is processed
should be done separately since that requires more thinking
and experimentation.
enwiki/High_Laver?oldid=659441291 and hiwiki/मुक्केबाज़ी?oldid=2689792
now RT without crashers.
Change-Id: I39854c7b5b3e8d7cce84b1b4e05213185f8cccb0
* 4ea8dbd8 exposed a bug in the handling of nested <ref>s (added via
templates). It caused crashers in rt-testing of a few pages.
Ex: enwiki/António de Oliveira Salazar?oldid=676623209
* Reproducible with the following wikitext:
-------------------------------------
x <ref>y {{sfn|Kay|1970|pp=123}}</ref>
<references />
-------------------------------------
* This patch fixes the problem by making sure data-mw and
data-parsoid of the DOM for the nested ref are saved before
serialization to a string.
* Removed the saveHandler from dom.t.unpackDOMFragments.js and
reused the simpler dom-walk code added as part of this patch.
* Also updated comments in ext.Cite.js
* Hard to introduce nested <ref> in parser tests, so no
new tests for this scenario.
Change-Id: I2298bbe87ccddd87f307d206d77d78fcfb0d8a75
Output a default style of reference numbers and text for <ref>s and
<references>, use CSS content generation in supported browsers to render
per-language style.
Right now it provides pixel-perfect renditions of refs in enwiki and
eswiki, and initial support for fawiki (which can't be tested with current
phantomjs). The CSS is loaded from the new module 'ext.cite.style',
implemented in the Cite.php extension.
Further changes to Cite.php will be necessary to change it to use this
same CSS instead of system messages.
Test results (and two blacklisted results) changed due to new HTML
structure.
Related to bugs T51538, T45235 and T73803.
Bug: T86782
Change-Id: I21fbbd3247bf7801e5ef9bd5312f95777f4dd6ef
* Changes xmlserializer.serializeToString and DU.serializeNode to return
an object with the str and, if options.captureOffsets, the html offsets
of top-level nodes.
* Added mocha tests to capture expectations about offsets computation.
Bug: T96279
Change-Id: Id74988d3ef39078fbfea72359884b75290da041b
* Continues on 3311936a5e15c2080d8040dd605e58fa33407e60 and
05a8947fef30e6fc4519b822b0931d965d57efa8
* With some whitespace cleanup.
Change-Id: I32109c6fd35439914df4df06432376d6a968268b
Keep all state in the DOM post-processor in dom.processRefs.js, so
that there's no need to reset the extension and it can be safely
used for multiple documents.
Change-Id: Iacf7e3003929f74b76774d3e765c792991700e51
Once all the missing references have been added, ensure that
the reference index is reset to 0.
Also related to T93715.
Change-Id: I00a2b2da9116602d2eb00e782d4832cf580e4b59
Instead of outputting a <ref>'s HTML in both data-mw and in
<references>, output it only in the latter and point to it from
data-mw.body.id.
Also preserve data-parsoid for the <ref>'s text in <references>, as
now that's the only representation of it.
To correctly do html2wt when there are <ref>s inside <references>
we need access to the main document DOM when serializing, so also
ensure that env.page.dom is correctly set (it was only set in v2
before).
Updated tests results and blacklist (some tests now pass).
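A minimal sketch of the html2wt side of this (the id format is
illustrative): the <ref>'s data-mw no longer embeds the body HTML, it
only names the element inside <references> that holds it, so
serialization looks it up in the page DOM.
  function refBodyFor(refNode, pageDoc) {
    var dataMw = JSON.parse(refNode.getAttribute('data-mw') || '{}');
    if (dataMw.body && dataMw.body.id) {
      // e.g. body.id === "mw-reference-text-cite_note-1", an element that
      // lives inside the rendered <references> list of the same document
      return pageDoc.getElementById(dataMw.body.id);
    }
    return null;  // a <ref name=.../> reuse with no body of its own
  }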
Change-Id: I0fa7ad692585af19136909bfec39db9868b137c5
* Fixed parser test output of one test to add unique ids. Output for
other parser tests modified in f528f508 still needs fixing up.
Left for a separate patch.
Change-Id: I04546c2a590930121d960239a1954b26771e9c80
* <references /> was not appearing on its own line and was
instead getting tacked onto previous line of wikitext output.
* Change in blacklisted wt2wt test shows that the new output
is better.
Change-Id: Ie82401a3bc6082b733339e2456810b6b1c87529a
This patch emits a reflist for all ref groups that still have
<ref>s in them at the end of the document. Currently Cite.php only
does so for the default group. See also T88290.
On html2wt the missing <references> are added to the wikitext,
which makes the wikitext correct. Selser catches this if not part
of the edit.
Change tests to include an explicit <references /> tag, and add
one for explicitly testing that they do get added. This last one
has to be blacklisted as the new <references /> don't appear with
selser.
Change-Id: I79af2c34481cadbf0d68d9571928979adf559b58
One particular case is that Cite.php considers a name and its
encoding equal, i.e. "a & b" === "a &amp; b". Added a new test for
this case, but blacklisted it on html2wt, wt2wt and html2html due
to a different problem with how Parsoid encodes entities. This
will be investigated separately, as a simple fix could break
unrelated cases.
Also updated tests and blacklist to the new ids.
Change-Id: I87637a1dc812a3a8f29327b9e6c0040b22a651c4
Also encode cite ids properly as now they can contain arbitrary
text. Change in blacklist due to this.
TODO: Investigate if it would be better to do this directly in
the tokenizer.
Change-Id: Ic112124e90d256d73a351d0d57fe3c7546fa065f
* Although this resolves the crashes, I'm unsatisfied with it as a
proper fix to the underlying issue. There are many places throughout
the codebase where we serialize and then parse document fragments
that should be instrumented to store and unpack data-* attributes.
Bug: T76518
Change-Id: Idca1b0a37ec924a71cb51160d000c7de9717d422
* Currently, this mimics Cite.php behavior where "a b" and "a_b"
are considered identical ids.
* Added new parser test.
* Fixed output of another test.
* Fixed section name of a commented out test.
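The equivalence described in the first bullet, as a minimal model (not
Cite.php's actual code):
  function normalizeRefKey(key) {
    // spaces and underscores collapse to the same key, mirroring the described behaviour
    return key.replace(/[ _]/g, '_');
  }
  console.assert(normalizeRefKey('a b') === normalizeRefKey('a_b'));  // → the same <ref>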
Change-Id: I0c51404c3e659bbddfe9a8909aa6a109d368b762
* This is both faster and consistent with how we're accessing other
parsoid attributes. It's also a step towards not having this data in
the html output.
* Changes to parserTests and the blacklist are for attribute order.
* Requires upgrading domino to 1.0.18
https://github.com/fgnass/domino/pull/48
Change-Id: I1edbc260887d480adf04763b15043c374e27cceb
General changes
---------------
* Replaced the hacky 'inBlockNode' parser pipeline option with
a cleaner 'noPWrapping' option that suppresses paragraph wrapping
in sub-pipelines (ex: recursive link content, ref tags, attribute
content, etc.).
Changes to wt2html pipeline
---------------------------
* Fixed paragraph-wrapping code to ensure that there are no bare
text nodes left behind, but without removing the line-based block-tag
influences on p-wrapping. Some simplifications as well.
TODO: There are still some discrepancies around <blockquote>
p-wrapping behavior. These will be investigated and addressed
in a future patch.
* Fixed foster parenting code to ensure that fostered content is
added in p-tags where necessary rather than span-tags.
Changes to html2wt/selser pipeline
----------------------------------
* Fixed DOMDiff to tag mw:DiffMarker nodes with a is-block-node
attribute when the deleted node is a block node. This is used
during selective serialization to discard original separators
between adjacent p-nodes if either of their neighbors is a
deleted block node.
* Fixed serialization to account for changes to p-wrapping.
- Updated tag handlers for the <p> tag.
- Updated separator handling to deal with deleted block tags
and their influence on separators around adjacent p-tags.
- Updated selser output code to test whether a deleted block
tag forces nowiki escaping on unedited content from adjacent
p-tags.
Changes to parser tests / test setup
------------------------------------
* Tweaked selser test generation to ensure that text nodes are always
inserted in p-wrappers where necessary.
* Updated parser test output for several tests to introduce p-tags
instead of span-tags or missing p-tags, add html/parsoid section,
or in one case, add missing HTML output.
Parser Test Result changes
--------------------------
Newly passing
- 12 wt2html
- 1 wt2wt
- 3 html2html
- 3 html2wt
Newly failing
- 1 html2wt
"3. Leading whitespace in indent-pre suppressing contexts should not be escaped"
This is just normalization of output where multiple HTML forms
serialize to the same wikitext with a newline difference. It is not
worth the complexity to fix this.
- 1 wt2wt
""Trailing newlines in a deep dom-subtree that ends a wikitext line"
This is again normalization during serialization where an extra
unnecessary newline is introduced.
- A bunch of selser test changes.
182 +add, 188 -add => 6 fewer selser failures
- That is a lot of changes to sift through, and I didn't look at every
one of them, but a number of the changes seem to be harmless, and are
just changes to previously "failing" tests.
- "Media link with nasty text" test seems to have a lot of selser
changes, but the HTML generated by Parsoid seems to be "buggy" with
interesting DSR values as well. That needs investigation separately.
- "HTML nested bullet list, closed tags (bug 5497) [[3,3,4,[0,1,4],3]]"
has seen a degradation where a dirty diff got introduced.
Haven't investigated carefully why that is so.
Change-Id: Ia9c9950717120fbcd03abfe4e09168e787669ac4
* Nested <ref> tag support was broken in 69b6ec4d since that patch
effectively processed <ref>s only on the final DOM rather than
extracting <ref> information in sub-pipelines. In doing so, it
broke support for <ref> in <ref> tags which are supported by {{efn}}
templates and used as follows.
{{efn|A clarification.{{sfn|Smith|2009|p=2}}}}
The {{sfn}} tpl generates a <ref> tag inside another <ref> tag
generated by the {{efn}} tpl.
* This patch fixes the breakage by processing <ref> content after it
is extracted (in case it is known to have nested <ref> tags).
* This fixes the rendering on enwiki/Otto_I%2C_Holy_Roman_Emperor
and is an improvement over when it was broken. The nested <ref>
gets id 18 (just like the Cite.php handling) whereas it used to
get id 1 before the breakage.
* TODO:
- Figure out a way to add a test for this.
Change-Id: Ib82b75f66249b2133a123adbe6fd7acbfd8ec8fb
* Process <ref> and <references> tags on the top-level DOM only
and ignore the generateRefs pass when processing other content.
* This required a few fixes:
- ensure that DOMPostProcessor knows about the top-level.
- ensure that DOMVisitor knows about the top-level.
- cleanup pass leaves behind the ref-marker metas from DOMs from
non-top-level content.
- process nested references content.
* One of the references tests had incorrect parsed output. That test
has been updated to reflect the correct output from this patch.
* Barack Obama seems to now have the correct numbering on references.
Change-Id: I5465721d2fc715f2168f267e773a446bc37d198b
* Keep track of table nesting in token stream patcher and use it to
convert <td>, <tr>, and <th> tags to plain strings.
* This fix is only enabled on the top-level token stream.
To support this, fixed the resetState function in the parser
construction code to pass in a toplevel flag which lets the
token stream patcher know the context it is in.
* Fixes 29 (wt2html,wt2wt,html2html,selser) tests and improves
results of 1 previously blacklisted test. The failing selser
test is actually a false failure because selser is more accurate
than non-selser wts.
* Consolidated a few separate tests into a single test that covers
all this functionality.
- This new test fails wt2wt and html2wt modes because serializer
uses tokenizer information which continues to return table tokens
and results in <nowiki> wrappers.
Bug: 66489
Bug: 66498
Change-Id: I9f42354ea9efb0f8adfc96c23760012220d00dd4
The Cite extension does not currently handle resetState calls in
sub-pipelines, and relies on sharing a single Cite instance between all
pipelines. Fixing this is a longer project, so this patch works around the
issue for now by passing a flag indicating resetState calls in sub-pipelines
and ignoring the call in Cite in that case.
Change-Id: If3d426a5311a55d1c1530860d2b665d3681f1aa9
* Entities in ref name weren't expected
* Fixes the crash from arwiki:تأثير_الدمعة_السوداء
* Makes use of the fix from de3642b8dd4a804ac654f2943a900496f2c8b3f3
Bug: 63790
Change-Id: Icb8781b4d9decc5a8b115d0b11def4d18f5d5025
* Thus far, <references> tag content was being parsed to
stage 2 and merged into the main pipeline. This patch takes
it all the way to the DOM. This required some tweaks to
handling of <ref>s nested inside <references>.
* Fixed up a buggy parser test in the bargain -- the old parsoid
result was buggy as well. I verified output in the enwp
sandbox.
Change-Id: Iff6c528066b71ce1b00dd769910a04ee66623340
* The current fix is a hack to fix dsr issues right away.
Meanwhile, will investigate a fix that will involve processing
<references> in its own subpipeline and persisting state into
the top-level page.
* Fixes a known selser failure from bug 62025.
Change-Id: I0f80d68e927f500939a44af401cc73c07e24721f
* Renamed buildDOMFragmentForTokenStream --> buildDOMFragmentTokens
and made env. the first arg.
* Added documentation to buildDOMFragmentTokens and encapsulateHTML
Change-Id: I7eccfd3f4dc5b4b91d20d1d24d98ec514df6dfbc
* Removed manager and passed in env and parent-frame to all
utilities that process content in new pipelines.
* Added more documentation to mediawiki.Util.js.
* Renamed processAttributeToDOM to a more appropriate name.
* Added pipelineFactory property to env and used that to
construct parsing pipelines everywhere.
Change-Id: Ic612e5630d19d4e3f5d6388bc5cd117d337fd799
* This patch adds a flag to DOM fragment unpacking to update
fragment DSR in cases where the fragment needs it (references
block + reused template/extension/images content). In other
cases of dom-fragment use, the DSR should not be updated.
Ex: All cases where fragments are used to implement parsing
scopes (all A-tag content currently) since DSR computation
is set up with offsets in the top-level source.
This is not just an optimization, but a correctness issue
since the fragment unwrapper always sets the fragment DSR
on the first node which would be incorrect in scenarios where
the fragment DOM has multiple top-level nodes.
* Parser test runs now have better DSR values in certain cases.
Change-Id: If1f5bf98dab246a3c8a1869b38335e90268cb5c5
This reads better than manually testing the constructor, and often
leads to terser code since we don't have to check whether the argument
is a non-null object before querying the constructor field.
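A small before/after illustration, assuming the `instanceof`-style check
this refers to (the type name is made up):
  function Token() {}
  var values = [ new Token(), null, undefined, {} ];
  values.forEach(function(x) {
    // Before: the guard is needed because reading .constructor on null/undefined throws
    var oldCheck = !!(x && typeof x === 'object' && x.constructor === Token);
    // After: null-safe and terser
    var newCheck = (x instanceof Token);
    console.assert(oldCheck === newCheck);
  });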
Change-Id: I53ec87d6e80d658aa3d26dc2b613dc6c58e2d026
In particular, use `Array.isArray` instead of `$.isArray`, and
`Object.assign` instead of `$.extend`. `Object.assign` operates only on own
properties, so use `Object.create` on the prototype where necessary to
get inherited properties. `Object.assign` does a simple assignment and
is appropriate in most places, but be careful if we ever install
getters/setters on a prototype.
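A small illustration of the own-vs-inherited difference and the
`Object.create` workaround mentioned above:
  var proto = { inherited: true };
  var src = Object.create(proto);
  src.own = 1;
  // $.extend-style copying picked up inherited properties; Object.assign does not:
  var a = Object.assign({}, src);
  // 'inherited' in a === false; only the own property { own: 1 } was copied
  // Where inherited defaults matter, keep the prototype chain explicitly:
  var b = Object.assign(Object.create(proto), src);
  // 'inherited' in b === true; the value comes via b's prototype chain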
Implement `Util.clone()` from scratch to have a jquery-compatible deep
clone operation. In particular, this needs to ignore objects which
aren't "plain objects", so we don't try to clone DOM nodes. Our
definition of a "plain object" is compatible with jquery/zepto.js, and
is thus something of a hack. We should eventually replace this with a
`console.assert()` and remove/rewrite the places where we try to clone
objects which contain DOM trees and other cruft.
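A sketch of the jquery/zepto-compatible "plain object" test described
here (illustrative; the real check lives alongside `Util.clone()`):
  function isPlainObject(obj) {
    if (obj === null || typeof obj !== 'object') { return false; }
    var proto = Object.getPrototypeOf(obj);
    return proto === Object.prototype || proto === null;
  }
  isPlainObject({ a: 1 });                                        // true  → deep-clone it
  isPlainObject(require('domino').createDocument('<p>x</p>').body);  // false → keep the reference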
Change-Id: I88c8fe41a9be84c167d5a0ea1187fd258f077968
* For now it stores in .dataobject, freeing up .data to
handle <object> elements.
* Adds a test that crashes master.
Bug: 57394
Change-Id: I4207d76ad9dab660e890008b2ee5014554ce52c8
* Adds an index of all the references on a page in order to avoid
repeating attrs when multiple <references /> tags are present.
* Update tests to reflect the new behaviour.
Bug: 59782
Change-Id: Ia44bf59a9304788aca170041d3b85f53557151fc