Emit them earlier (before post-processing) so that attributes don't need
to be migrated.
Bug: T181408
Change-Id: I2b5c7ff552b3322be74f79a81936c41d58fecabc
This change was prompted by a request to follow the PHP Cite
extension's lead in using <sup> for linkbacks. Also, using superscript
for notations/citations is semantically appropriate and follows style
guide conventions.
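For reference, the linkback markup in question looks roughly like this
(illustrative, not exact output):

  <sup id="cite_ref-1" class="reference"><a href="#cite_note-1">[1]</a></sup>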
Change-Id: I7c83d12dd900682799c124ddae1a8689969d5e8c
This better matches the behaviour of the PHP extension. An instructive
example is,
<ref>123 {{#tag:ref|321}}</ref>
{{#tag:ref|456 <ref>654</ref>}}
The PHP implementation only parses the ref contents when producing the
output for reference tags, at which point nested refs are ignored,
as in the former case: the "321" is dropped.
The latter case is special: since the extension tag takes precedence
over the parser function, the inner ref will already have been
processed by the time the outer one is added to the stack, and hence
the nesting is permitted. This is why the inner ref precedes the outer
in the references list (it has a lower number). Unfortunately, Parsoid
doesn't yet get that ordering right.
Change-Id: Ieb0e418cca634605c2a9f1487139b15095f38d81
* <ref> content ends up in the references section. The original <ref>
site only has a backlink to that references section.
* Given this, when the references tag itself comes from a template,
this throws off the linter attribution code. It assigns the
references template's DSR to the lints from the <ref> tags.
* A clean way of dealing with this would be to treat extension
content as its own independent document (which we already do in our
wt2html code path) and ask the extension to process its content while
giving it the handler for linting an individual node.
* This patch implements special handlers for <ref> and <references>
content in the Cite extension and registers them (sketched after this
list).
* Move cite-specific lint handling code to the Cite extension.
* New mocha tests spec this behavior.
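A rough sketch of what such a handler looks like (the names and the
exact hook are illustrative, not the actual API):

  // Lint a <ref>'s body, which lives in data-mw, as its own document,
  // so lints get the ref's DSR rather than the enclosing template's.
  function refLintHandler(refNode, lintNode) {
      var dataMw = JSON.parse(refNode.getAttribute('data-mw') || '{}');
      if (dataMw.body && dataMw.body.html) {
          lintNode(refNode, dataMw.body.html);
      }
  }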
Change-Id: Ib57f0303605a73408333e133f8051be0b8d45d69
The citation portion contains fixes for []-encoding in attributes,
based on Iec3439f76ecc2a3543b30b35f8735c92b0cfb711.
Fix some other no- or double-encoding issues with HTML entities in the
process, based on I88e8e2077e6f5eec2b232391f7818370894a62dc
(T103714/T104196).
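For instance, since a literal "]" would prematurely close the bracketed
external link syntax, such hrefs have to be percent-encoded when
serialized to wikitext:

  [http://example.com/foo%5Bbar%5D label]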
Unrelated parsertest fix which needed an id fix -- an incorrect id
added by the section-tag patches was exposed here.
Bug: T152540
Bug: T103714
Change-Id: I12b2a148f7170d20bd9aacd3b5b8ee1965859592
* Follow-up to f7594328
* Fixes the regression found in rt:
node bin/roundtrip-test.js --domain ru.wikipedia.org "Феодосия"
* Simple test case (do we really not have one!?):
test <ref>haha{{#tag:ref|ok}}</ref>
Change-Id: Ie0a53a8e885a6d94769034cce4ef432773635842
* The contents of "mw:dom-fragment-token"s were being serialized
after processing to the DOM and stored on the token to be
shuttled through tree building, only to be reparsed in the
unpacking phase.
* Here we store a pointer to the contents in a fragment map instead
(sketched below).
* Doing less work results in a performance improvement, though only
a slight one, because the content still needs to be adopted by the
main document.
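Roughly (the data structures here are illustrative):

  // Instead of serializing the fragment's DOM onto the token and
  // reparsing it later, keep the DOM alive in a map and shuttle
  // just the key through tree building.
  var fragmentMap = new Map();
  var id = 'mwf' + fragmentMap.size;
  fragmentMap.set(id, fragmentBody);  // store a pointer to the DOM
  // Unpacking: adopt the stored content into the main document.
  var content = doc.adoptNode(fragmentMap.get(id));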
Change-Id: Ia0aec7de469101a2a93342ea89daac0f0e73cf1a
* Replace ref markers instead of waiting for cleanup to remove them,
since that doesn't happen on embedded html.
Change-Id: Ied746f025a0ac7f14d922aff6640fef3aa4b55b0
* Also, change the input parameter of buildDOMFragmentTokens
to uniformly accept a <body>, rather than a doc or string,
so that we can pass it nodes from our dummy document.
Change-Id: I4bb44573fe7203277d51e804a4a6423100a34f03
These calls force us to allocate a backing array for childNodes,
deoptimizing a linked list representation. Use the standard
Node#hasChildNodes() method instead, which can be efficiently
implemented without allocating a backing array.
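Before/after sketch (the call site is illustrative):

  // Before: reading childNodes forces domino to materialize a backing
  // array of children, deoptimizing its linked-list representation.
  if (node.childNodes.length > 0) { processChildren(node); }

  // After: answered from the firstChild pointer, with no allocation.
  if (node.hasChildNodes()) { processChildren(node); }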
Change-Id: I1706bfa15263564bc981d689947835a3d0d4a68f
* Ports commit 04c3ad01 in core's Cite extension.
* Until $wgCiteResponsiveReferences is exported, a number of wikis
which have enabled this won't be getting the right default.
https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/InitialiseSettings.php#L14945-L14970
However, it looks like enwiki's {{reflist}} explicitly sets the
parameter either way, so not a bad start.
* The "ext.cite.styles" css resource is added from core's Cite
extension 05cb5cc1, since that's where the responsive css lives.
* The blacklist changes for existing tests are because we now only
serialize the children of the div wrappers (see the illustrative
markup below). Those tests probably deserve Parsoid-specific sections.
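For context, the responsive output wraps the reference list in a div,
roughly (illustrative markup):

  <div class="mw-references-wrap mw-references-columns">
    <ol class="mw-references">...</ol>
  </div>

and only the div's children are serialized back to wikitext.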
Depends-On: I2404999ab11b5cf7b740ae43696c4676ab1b6d22
Change-Id: I8f9277b3ecb253e0bee7ee55eef7af4935821527
* Lots of overrides here that we should either fix, or be explicit
about wanting to diverge on and perhaps discuss upstreaming.
Change-Id: I927dd325e49ef72ccbe5e8b926f7984b82dd0f2e
* .eslintrc was generated from .jshintrc and .jscsrc using polyjuice
and some manual tweaking.
* Forced to follow the convention in eslint #4174
* However, in a follow-up, we'll use wikimedia/eslint-config-node-services
Change-Id: I2500a6520a5c9f41d5333e937c151228aec88be0
* Without this, refs like <ref name=":0"> won't generate the same
links that the PHP parser generates.
* Updated an existing test to add the :0 key, which requires escapeId
to encode it properly.
Change-Id: I69e5f16ccf64bd1c9cf05bdea7a379e679d36b1a
* With Parsoid's base href pointing to the wiki, plain #-fragment
links won't resolve properly. Add the page title to the href
so the links start resolving properly again (example below).
* Updated parser tests accordingly.
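For example, a linkback that used to be emitted as

  <a href="#cite_note-1">

now carries the page title (a placeholder here) so it resolves against
the page rather than the base href:

  <a href="./Some_Page#cite_note-1">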
Change-Id: I280c41a0382bd2acd82cc586212695aa3b920171
* To support the #tag parser function.
* An aside: #tag is nutty,
{{#tag:nowiki|test<ref>haha</ref>}}
Change-Id: I0ab93f6ae959b30a887e967952c904ef0400b189
* Updated HISTORY file based on Parsoid deployment logs and
added only the most pertinent entries.
Change-Id: If07bc75fcb09507cae30fc82d7d89b8d87a4c69b
* Use nestedRefsHTML.length to determine when we need a dataMw.body;
the regexps there were unnecessary (see the sketch below).
* Pulled out of https://gerrit.wikimedia.org/r/#/c/264026/
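Roughly (illustrative):

  // An empty array means there's no nested ref content to preserve,
  // so no dataMw.body is needed; no regexp tests required.
  if (nestedRefsHTML.length > 0) {
      dataMw.body = { html: nestedRefsHTML.join('') };
  }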
Change-Id: I4602594e468322c8b6b8653eee33047ef9af9ebc
* FIXME: stop skipping jsapi tests
* DU.serializeNode is renamed to DU.toXML, to avoid confusion with
wikitext serialization (and a name clash when grepping).
* DU.serializeToXML is renamed to DU.ppToXML, to distinguish it both in
name and use from DU.toXML. This is a special XML serializer that's
meant to be used in Parsoid's DOM post-processing phase. It's aware
of a node's data object and automatically transfers that to
JSON-stringified attributes before serializing the node.
* DU.ppToDOM is added as a helper for loading attributes when parsing
html, as in DU.parseHTML. It's the converse of DU.ppToXML (usage
sketched below).
* Diff markers are no longer stored as JSON-stringified attributes on
nodes while serializing -- only when dom dumping, and even then only
on clones.
* Once T100856 is fixed, we can look into discarding data-parsoid from
html that's stored in data-mw (i.e. captions, references, etc.).
* Filed T133334 to note that some of the html we're stuffing in data-mw
needs further processing, since we're storing ref markers.
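Usage of the renamed helpers, roughly:

  var node = DU.ppToDOM(html);  // parse html + load data-parsoid/data-mw
                                // attributes into data objects
  var xml1 = DU.toXML(node);    // plain XML serialization
  var xml2 = DU.ppToXML(node);  // transfer data objects back into
                                // JSON-stringified attributes, then serialize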
Bug: T91700
Change-Id: Ia1a60951e6292b5eb073eedca0b79938094809d2
This parallels the autoload mechanism in mediawiki core. In fact, small
wikis can use the same extensions directory for both core and Parsoid.
We also export a very basic "extension API" for external extensions
and convert our current "native extensions" to use it, showing how
it's done (a hypothetical sketch follows).
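A hypothetical external extension module under this scheme (the shape
is illustrative, not the actual API):

  // extensions/example/index.js -- autoloaded from the extensions
  // directory, much as mediawiki core does it.
  module.exports = {
      tags: [ {
          name: 'example',
          // Hypothetical signature: receive the raw tag content and
          // return HTML for Parsoid to adopt.
          toHTML: function(content) {
              return '<span typeof="mw:Extension/example">' + content + '</span>';
          },
      } ],
  };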
Bug: T133320
Change-Id: I8e05d5bfdff873f28a58dead68aaca0e4823cf32
* And use it where appropriate. This has the effect of reducing the
dom size because of the use of smartQuote (see all the parserTests /
blacklist changes).
* The serializeChildren name clashes with the one in the
serializeState, causing confusion and making grepping harder.
* Bonus: removed some dsr info in parserTests and fixed an html2wt typo
that prevented a test from running.
Change-Id: I287f1efd4afe06b158844e0de9f16494d4a89f93
* An example of an href from eswiki/Kate_Gosselin?oldid=90347467
is <a href="#cite_note-13.3F_It_Can't_Be!-3">,
which domino should do a better job of validating, instead of throwing
"Object [object Object] has no method '''".
Change-Id: Ie4bd702d7eee23ad021f2ad31b47cfbada947cd9
VisualEditor requires this info, presumably other clients might too,
and it is, in reality, semantic information about wikitext.
The html2html failure is because:
* even when the references html has been autogenerated,
in non-rt-testing html2wt mode, we always generate the
<references /> tag.
* we moved the flag from data-parsoid to data-mw but ignore changes
to data-parsoid while comparing test output.
* So, in html -> wt mode, <references /> is always generated
=> data-mw.autoGenerated flag doesn't show up in the wt -> html
phase of the (html -> wt -> html) test
=> the html2html test fails.
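For reference, the flag is just a boolean in the references tag's
data-mw, roughly (illustrative markup):

  <ol typeof="mw:Extension/references"
      data-mw='{"name":"references","autoGenerated":true}'>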
Change-Id: I8e79f2a436a1ca276b9351228a3d8f02d7ebd0c6
* First pass moving around files into different directories.
* Renamed files to remove unnecessary prefixes or to align names more
closely with what the files contain.
* Added temporary soft links to bin/parse.js and bin/roundtrip-test.js
in the tests/ directory since jenkins jobs seem to have hardcoded
refs to those paths.
* Deleted:
- a couple of stale scripts in tests/ that are no longer relevant.
- a couple of stale scripts in api/ that didn't look relevant.
- swagger spec since it was incomplete, stale, and unmaintained.
Change-Id: I97c30467b190b417eec9e750238704330ae91137