These calls force us to allocate a backing array for childNodes,
deoptimizing a linked list representation. Use the standard
Node#hasChildNodes() method instead, which can be efficiently
implemented without allocating a backing array.
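An illustrative sketch of the pattern (node and processChildren are
placeholders, not actual call sites):

  // Before: inspecting childNodes just to test for emptiness can force
  // the implementation to materialize a backing array.
  if (node.childNodes.length > 0) {
    processChildren(node);
  }

  // After: a boolean check the DOM implementation can answer cheaply.
  if (node.hasChildNodes()) {
    processChildren(node);
  }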
Change-Id: I1706bfa15263564bc981d689947835a3d0d4a68f
* Ports commit 04c3ad01 in core's Cite extension.
* Until $wgCiteResponsiveReferences is exported, a number of wikis
that have enabled it won't get the right default.
https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/InitialiseSettings.php#L14945-L14970
However, it looks like enwiki's {{reflist}} explicitly sets the
parameter either way, so it's not a bad start.
* The "ext.cite.styles" css resource is added from core's Cite
extension 05cb5cc1, since that's where the responsive css lives.
* The blacklist changes for existing tests are because we now only
serialize the children of the div wrappers. Those tests probably
deserve Parsoid-specific sections.
Depends-On: I2404999ab11b5cf7b740ae43696c4676ab1b6d22
Change-Id: I8f9277b3ecb253e0bee7ee55eef7af4935821527
* Lots of overrides here that we should either fix or be explicit
about wanting to diverge on, and maybe talk about upstreaming the change.
Change-Id: I927dd325e49ef72ccbe5e8b926f7984b82dd0f2e
* .eslintrc was generated from .jshintrc and .jscsrc using polyjuice
and some manual tweaking.
* Forced to follow the convention in eslint #4174
* However, in a follow up, we'll use wikimedia/eslint-config-node-services
Change-Id: I2500a6520a5c9f41d5333e937c151228aec88be0
* Without this, refs like <ref name=":0"> won't generate the same
links that the PHP parser generates (see the sketch below).
* Updated an existing test to add the :0 key, which requires escapeId
to encode it properly.
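Roughly the idea, as a hypothetical sketch (escapeId stands in for
whatever helper does the MediaWiki-style id encoding; the id format
below is illustrative only):

  // <ref name=":0"> should link to the same escaped target the PHP
  // parser emits, not to a raw, unescaped "#cite_note-:0-...".
  function refLinkTarget(refName, index) {
    return '#cite_note-' + escapeId(refName) + '-' + index;
  }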
Change-Id: I69e5f16ccf64bd1c9cf05bdea7a379e679d36b1a
* With Parsoid's base href pointing to the wiki, plain #-fragment
links won't resolve properly. Add the page title to the href
for the links to start resolving properly again (see the sketch below).
* Updated parser tests accordingly.
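A minimal sketch of the idea (env.page.name and the "./" prefix are
assumptions for illustration, not the actual code):

  // A bare "#cite_note-1" resolves against the <base href>, i.e. the
  // wiki root; prefixing the page title keeps it on the current page.
  function fragmentHref(env, fragment) {
    return './' + env.page.name + '#' + fragment;  // "./Page_title#cite_note-1"
  }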
Change-Id: I280c41a0382bd2acd82cc586212695aa3b920171
* To support the #tag parser function.
* As an aside, #tag is nutty:
{{#tag:nowiki|test<ref>haha</ref>}}
Change-Id: I0ab93f6ae959b30a887e967952c904ef0400b189
* Updated HISTORY file based on Parsoid deployment logs and
added only the most pertinent entries.
Change-Id: If07bc75fcb09507cae30fc82d7d89b8d87a4c69b
* Use nestedRefsHTML.length to determine when we need a dataMw.body;
the regexps there were unnecessary (see the sketch below).
* Pulled out of https://gerrit.wikimedia.org/r/#/c/264026/
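Something along these lines (a sketch of the intent, not the actual
Cite code; the dataMw.body shape is assumed):

  // Only attach a body when there is nested refs HTML to carry,
  // instead of running regexps over the content to decide.
  if (nestedRefsHTML.length > 0) {
    dataMw.body = { html: nestedRefsHTML.join('') };
  }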
Change-Id: I4602594e468322c8b6b8653eee33047ef9af9ebc
* FIXME: stop skipping jsapi tests
* DU.serializeNode is renamed to DU.toXML, to avoid confusion with
wikitext serialization (and a name clash when grepping).
* DU.serializeToXML is renamed to DU.ppToXML, to distinguish it both in
name and use from DU.toXML. This is a special xml serializer that's
meant to be used in Parsoid's DOM post-processing phase. It's aware
of a node's .dataobject and automatically transfers that to json
stringified attributes before serializing a node.
* DU.ppToDOM is added as a helper for loading attributes when parsing
html, as in DU.parseHTML. It's the converse of DU.ppToXML (see the
sketch after this list).
* Diff markers are no longer stored as json stringified attributes on
nodes while serializing. Only when dom dumping, and even then only
on clones.
* Once T100856 is fixed, we can look into discarding data-parsoid from
html that's stored in data-mw (ie. captions, references, etc.)
* Filed T133334 to note that some of the html we're stuffing in data-mw
needs further processing, since we're storing ref markers.
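A sketch of how the renamed helpers relate (argument lists are assumed
from the descriptions above, not copied from the code):

  // Plain XML serialization of a DOM node (formerly DU.serializeNode).
  var xml = DU.toXML(node);

  // Post-processing-aware serialization (formerly DU.serializeToXML):
  // writes the node's data object out as JSON-stringified attributes
  // (data-parsoid, data-mw) before serializing.
  var ppXml = DU.ppToXML(node);

  // The converse: parse HTML and load those attributes back into the
  // node's data object.
  var doc = DU.ppToDOM(html);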
Bug: T91700
Change-Id: Ia1a60951e6292b5eb073eedca0b79938094809d2
This parallels the autoload mechanism in mediawiki core. In fact, small
wikis can use the same extensions directory for both core and Parsoid.
We also export a very basic "extension API" for external extensions
and convert our current "native extensions" to use this to show how it
is to be done.
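Purely as a hypothetical illustration of what an external extension
module might register under such an API (the 'tags' / 'tokenHandler'
shape below is an assumption, not the real interface):

  'use strict';
  module.exports = function MyExtension() {
    this.config = {
      tags: [{
        name: 'myext',
        // Convert the extension tag into tokens; a no-op for illustration.
        tokenHandler: function(manager, pipelineOpts, extToken, cb) {
          cb({ tokens: [] });
        },
      }],
    };
  };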
Bug: T133320
Change-Id: I8e05d5bfdff873f28a58dead68aaca0e4823cf32
* And use it where appropriate. This has the effect of reducing the
dom size because of the use of smartQuote (see all the parserTests /
blacklist changes, and the sketch below).
* The serializeChildren name clashes with the one in the
serializeState, causing confusion and making grepping harder.
* Bonus: removed some dsr info in parserTests and fixed an html2wt typo
that was preventing a test from running.
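The smart-quoting idea, sketched (not the serializer's actual
implementation): pick the attribute quote character that needs the
least escaping, which matters a lot for JSON-valued attributes like
data-mw.

  function quoteAttr(value) {
    if (value.indexOf('"') !== -1 && value.indexOf("'") === -1) {
      // e.g. data-mw='{"name":"ref"}' rather than
      //      data-mw="{&quot;name&quot;:&quot;ref&quot;}"
      return "'" + value + "'";
    }
    return '"' + value.replace(/"/g, '&quot;') + '"';
  }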
Change-Id: I287f1efd4afe06b158844e0de9f16494d4a89f93
* An example of an href from eswiki/Kate_Gosselin?oldid=90347467
is <a href="#cite_note-13.3F_It_Can't_Be!-3">
which domino should do a better job validating, instead of throwing
"Object [object Object] has no method '''".
Change-Id: Ie4bd702d7eee23ad021f2ad31b47cfbada947cd9
VisualEditor requires this info, and presumably other clients might as
well; this is, in reality, semantic information about wikitext.
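For illustration only (the surrounding data-mw shape is an assumption;
the autoGenerated flag is what this change is about), an auto-inserted
<references> wrapper might carry something like:

  var dataMw = {
    name: 'references',
    attrs: {},
    autoGenerated: true,  // set when Parsoid inserted <references /> itself
  };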
The html2html failure is because:
* even when the references html has been autogenerated,
in non-rt-testing html2wt mode, we always generate the
<references /> tag.
* we moved the flag from data-parsoid to data-mw but ignore changes
to data-parsoid while comparing test output.
* So, in html -> wt mode, <references /> is always generated
=> data-mw.autoGenerated flag doesn't show up in the wt -> html
phase of the (html -> wt -> html) test
=> the html2html test fails.
Change-Id: I8e79f2a436a1ca276b9351228a3d8f02d7ebd0c6
* First pass moving around files into different directories.
* Renamed files to remove unnecessary prefixes or align the name
closer to what the file contains.
* Added temporary soft links to bin/parse.js and bin/roundtrip-test.js
in the tests/ directory since jenkins jobs seem to have hardcoded
refs to those paths.
* Deleted:
- a couple of stale scripts in tests/ that are no longer relevant.
- a couple of stale scripts in api/ that didn't look relevant.
- swagger spec since it was incomplete, stale, and unmaintained.
Change-Id: I97c30467b190b417eec9e750238704330ae91137