* This makes wikilink attrs more similar to ext links.
* Added a 'content' key to ISBN links, but couldn't add it to regular
wikilinks yet because of the complexity of how they are handled in
the rest of the pipeline. Changing this requires fixing up other
parts further down the pipeline -- something for later.
* Fixed up the wikilink handler to use named lookup for 'href' and
'tail' rather than positional lookup; content lookup is still
positional, as before (see the sketch below).
Change-Id: I657b1f338d38df3cfdfa99f27ac46e7fe1c9fd65
Now that we have access to the contents, we can more easily compare the
content with link targets. That is still to do -- this commit only converts
the link handler to work on the collected tokens.
* Start to implement latest RDFa spec from
http://www.mediawiki.org/wiki/Parsoid/RDFa_vocabulary
* Capitalize types, add an mw:Entity type for HTML entities
* Detect changes to entities using tokenCollector and srcContent
Change-Id: I45429f4b930858a16e166ef8377c8f6f5114c414
Anything with data-gen="both" and dataAttribs.src defined serializes to
dataAttribs.src and drops its contents (if any). We can use this to round-trip
elements we don't properly parse or serialize yet. Without RDFa info, the
editor will not touch the contents after encountering data-gen="both".
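A minimal sketch of the rule, assuming a DOM node and its parsed data
attributes (function and variable names are illustrative):

  function serializeNode(node, dataAttribs) {
      if (node.getAttribute('data-gen') === 'both' &&
              dataAttribs.src !== undefined) {
          return dataAttribs.src;  // emit original source, drop contents
      }
      // ... otherwise serialize the node and its children normally ...
  }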
Change-Id: Ia39e5fdd765c2c9b36f26313455685d29f118839
* Don't consider them for auto-numbered links
* Don't insert a trailing space if the content is empty
These links are still wrapped in nowiki on round-tripping, since the
valid/invalid URL determination is done in the LinkHandler rather than the
Tokenizer, as it is configuration-dependent. This is not incorrect for
rendering (and perhaps easier for humans to understand too), but it might
still introduce a dirty diff. We'll still need reconciliation / damage
tracking in the end ;)
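For illustration, the configuration-dependent check might look roughly like
this (the protocol list here is an assumption; the real one comes from the
wiki's configuration):

  var protocols = ['http://', 'https://', 'ftp://', 'mailto:'];
  function isValidUrl(href) {
      return protocols.some(function (p) {
          return href.indexOf(p) === 0;
      });
  }
  isValidUrl('http://example.com');  // true
  isValidUrl('foo://example.com');   // false -> wrapped in nowiki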
Change-Id: I959ebc1b7f81d110a1141bb38ba5ee97f52ebf96
This is a zero-length tsr for now (and thus not 100% correct), but it will do
the job for establishing starttag / endtag ranges.
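For reference, a tsr is a [start, end] pair of source offsets; a zero-length
one collapses to a single position (the offsets here are made up):

  dataAttribs.tsr = [ 1234, 1234 ];  // start === end: a position, not a span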
Change-Id: Iedd50ad319aa8d5916434fb6744deb04e031e456
297 round-trip tests are passing with this patch.
TODO:
* generalize data-mw-gc handling in the serializer for any tag
* use data-mw-gc="both" and data-mw.src: 'the wikitext' for round-tripping of
wikitext structures, optionally with some presentational (but read-only)
content
* use span and data-mw-gc="both" for nowiki
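The nowiki case might then look roughly like this (the exact attribute
layout is speculative):

  <span data-mw-gc="both" data-mw='{"src":"<nowiki>x</nowiki>"}'>x</span>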
Change-Id: I700142a56818977c20c8c06e6a5f2e77a708d25e
* Removed the murky ' :' -> '&nbsp;:' replacement in the tokenizer. This
breaks four parser tests and should be fixed in a token stream transformer
or DOM postprocessor instead. The replacement clashes with round-tripping
and is not terribly important visually.
* Added a stx:row annotation to single-line dt/dd pairs and use it to
preserve the single-line syntax in the serializer (see the example below).
There is no attempt yet to support the addition of nested lists in an
originally single-line dd. We'd need to look ahead in the serializer to
support this. Perhaps the editor can simply drop data-mw in that case.
* Switched default dt/dd serialization to multi-line. This supports all nested
lists and multiple dds.
* Don't close dls when switching from dt to dd or back in the token stream
ListHandler.
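For illustration, the two serialization styles for a dt/dd pair:

  ; term : definition     <- single-line pair, preserved via stx:row

  ; term
  : definition            <- default multi-line serialization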
Overall, 290 round-trip tests are passing now (up from 284; some due to the
removed ' :' replacement, some due to lists). The number of passing parser
tests dropped slightly from 303 to 297 (or 301/295 on weekdays other than
Thursday).
Change-Id: I85ff40571833713388c6523e6a4ba2e94daa3807
* Don't escape html-syntax pre content for now; this should be parsed with a
new pre content production later (which needs to be split out of the regular
pre production in the tokenizer)
* Protect indent-pre content from start-of-line syntax escaping (see the
example below)
* Preserve extra leading spaces in the tokenizer
* Two more (now 284) round-trip tests are passing
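Example of the indent-pre case: a leading space makes the line preformatted,
so start-of-line syntax inside it is literal and must not be escaped:

   * foo     <- leading space: rendered as pre, '*' is not a list bullet
  * foo      <- no leading space: a real list item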
Change-Id: I199b89c0ee7fae12546df10c1b5117c97caccac5
* Started to add more complete tag source range (tsr) annotations to most
start / empty tags. These replace the old sourcePos and sourceTagPos
annotations, and look more promising for general round-tripping than block
source ranges (bsr). See
http://www.mediawiki.org/wiki/User:GWicke/Parsoid_source_ranges for some
notes on this.
* Added an escapeWikitext method in the serializer that tokenizes supposedly
text-only content from the DOM with the tokenizer and wraps runs of returned
non-text tokens in nowiki tags (see the sketch below). The source
corresponding to non-text tokens is retrieved using the tsr annotations.
* Removed old (unused) table productions to avoid confusion.
* 276 round-trip tests are passing, vs. 283 without escaping.
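Rough shape of the escapeWikitext idea (helper names are made up; this
assumes a synchronous tokenizer.tokenize() that returns a mixed array of
strings and tokens, and for brevity wraps each non-text token rather than
whole runs):

  function escapeWikitext(text) {
      var tokens = tokenizer.tokenize(text);
      var out = '';
      tokens.forEach(function (t) {
          if (t.constructor === String) {
              out += t;  // plain text is safe as-is
          } else {
              // Non-text token: recover its source via tsr, wrap in nowiki.
              var tsr = t.dataAttribs.tsr;
              out += '<nowiki>' + text.substring(tsr[0], tsr[1]) + '</nowiki>';
          }
      });
      return out;
  }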
Known issues:
* harmless for now, can be improved later: urllinks in external link captions
are wrapped in nowiki. Example HTML:
<a rel='mw:extLink' href="http://example.com">http://example2.com</a>
* some start-of-line syntax in wiki-syntax preformatted blocks might be
wrapped into nowiki when that would not really be needed. Example HTML DOM:
<pre>
* foo
* bar
</pre>
Change-Id: I01c34aedd5c566614d36924add47a6a960e91987
An improvement, but there are still some extra newlines inserted after
paragraphs. Example input:
-------
Foo:
{|
|foo
|}
-------
Extra newlines are inserted after the "Foo:" and after the "foo" in the
table. They are not fed as tokens or text to the tree builder, so there is
likely a bug in the html5 library or JSDom.
Change-Id: I83eb6180e3cd1c4e7f9b15b31d339e1d32bccd3f
* Possibly more efficient under heavy GC load -- untested.
* No change in time and memory use for single file parsing.
Change-Id: Id2f3f65cc0e5f38ed968bbda60b97e46523e700e
* Moved the tail attribute to the second attribute position (a bit cleaner)
* Disallowed newlines in the tail production (see the example below)
* Improved the selection of round-tripped href vs. generated content vs. href
in the serializer
* Renamed state.linkTail to state.dropTail
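For reference, a link tail extends the link text in wikitext:

  [[Foo]]bar   <- one link labeled "Foobar", pointing at "Foo"; "bar" is the tail

With newlines disallowed in the tail production, text after a line break no
longer counts as a tail: the link ends and ordinary text follows.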
Change-Id: I5d98c704b6ea566011e22237786f8da17548570f
* Don't explicitly add the newline in the pre, as we preserve newline tokens
now. This avoids doubling of newlines when round-tripping.
* Use the sHref attribute even if the href contains spaces.
Change-Id: I8bec8fbfd6a7836bf2e5eec20869a0edd95c93b6
* The state of syntax stops is now properly included in the cache key for the
tokenizer-internal backtracking cache (see the sketch below). This fixes
some mis-parses when re-parsing a bit of text with different flags.
* Clear the backtracking cache after each toplevelblock. This drops the peak
memory usage when expanding [[:en:Barack Obama]] from ~380M to ~110M.
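Sketch of the cache-key change (names are illustrative): the input position
alone is ambiguous when the same text is re-parsed under different
syntax-stop flags, so the stop state is folded into the key:

  var cache = {};
  function cacheKey(pos, syntaxStops) {
      return pos + ':' + JSON.stringify(syntaxStops);
  }
  cache[cacheKey(1742, { table: true, colon: false })] = '...cached result...';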
Change-Id: Icdb879cae5907e4595903dd6acba2e686e8c2e4b
This patch fixes a tokenizer syntax error encountered on
[[:en:Template:JacksonvilleWikiProject-Member]] and [[:en:Template:Infobox
former country]] by allowing optional whitespace before start-of-line template
syntax.
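Hypothetical shape of the input that used to fail (note the leading
whitespace before the start-of-line template syntax):

    {{SomeTemplate|param}}

The tokenizer now accepts the optional leading whitespace instead of raising
a syntax error.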
Change-Id: Ic214a731de58bf766e51f23d5e24ea2ce6788f58
254 round-trip tests (up from 184) are now passing.
Also:
* tweaked runtests.sh slightly (use less -R instead of -r).
* made sure the EOFTk is preserved in phase 3 transforms
Change-Id: I1de22186bdb78e52019370e43f096877005b8f5a
* Added a generic stx_v 'syntax variant' round-trip attribute
* For pre, use stx:'html' vs. no syntax annotation (see the example below).
This might not be 100% safe for arbitrary HTML input, so we might want to
flip this to stx:'wiki' later.
* 181 round-trip tests passing
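Illustrative shape of the annotations (attribute plumbing omitted):

  dataAttribs = { stx: 'html' };  // pre written as literal <pre> HTML
  dataAttribs = {};               // indent-pre: no syntax annotation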
Change-Id: If6080917a3a7c069066db3db60efe59b1f6c28d8
* Changed RDFa for links according to
http://www.mediawiki.org/wiki/Parsoid/RDFa_vocabulary
* Added basic support for internal/external link serialization
* Moved numbering of external links from tokenizer to LinkHandler
* Added round-tripping for generic HTML tags
* Replaced nowiki tag with <meta typeOf="mw:tag" content="nowiki"> and <meta
typeOf="mw:tag" content="/nowiki"> for now.
* 154 round-trip tests passing (node parserTests.js --roundtrip).
Change-Id: I16c4db21b1b543ee57c73e569c83025b64664542
* added stx: 'html' round-trip information for html tags
* added t_stx: 'row' info for row-wise table wiki syntax, and support for it
in the serializer (see the example below)
* the first table row is implicit in wikitext
* renamed lastToken to prevToken in serializer
* strip first newline in an initial chunkCB
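Example of the two table syntaxes (the first row needs no leading |-, as it
is implicit):

  {|
  | a || b      <- row-wise: cells share one line, annotated t_stx: 'row'
  |-
  | c
  | d           <- default: one cell per line
  |}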
Change-Id: I014b046539d1b674d830551c5fd1b74a67f81993
* Tokens are now immutable. The progress of transformations is tracked on
chunks instead of tokens. Tokenizer output is cached and can be directly
returned without a need for cloning. Transforms are required to clone or
newly create the tokens they modify (see the sketch below).
* Expansions per chunk are now shared between equivalent frames via a cache
stored on the chunk itself. Equivalence of frames is not yet ideal though,
as right now a hash tree of *unexpanded* arguments is used. This should be
switched to a hash of the fully expanded local parameters instead.
* There is now a vastly improved maybeSyncReturn wrapper for async transforms
that either forwards processing to the iterative transformTokens if the
current transform is still ongoing, or manages a recursive transformation if
needed.
* Parameters for parser functions are now wrapped in abstract Params and
ParserValue objects, which support some handy on-demand *value* expansions.
Keys are always expanded. Parser functions are converted to use these
interfaces, and now properly expand their values in the correct frame.
Making this expansion lazier is certainly possible, but would complicate
transformTokens and other token-handling machinery. Need to investigate if
it would really be worth it. Dead branch elimination is certainly a bigger
win overall.
* Complex recursive asynchronous expansions should now be closer to correct
for both the iterative (transformTokens) and recursive (maybeSyncReturn
after transformTokens has returned) code paths.
* Performance degraded slightly. There are no micro-optimizations done yet
and the shared expansion cache still has a low hit rate. The progress
tracking on chunks is not yet perfect, so there are likely a lot of unneeded
re-expansions that can be easily eliminated. There is also more debug
tracing right now. Obama currently expands in 54 seconds on my laptop.
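Clone-on-write sketch for transforms (the clone helper is hypothetical):

  function withAttribute(token, name, value) {
      var copy = token.clone();  // leave the cached original untouched
      copy.attribs = copy.attribs.concat([ { k: name, v: value } ]);
      return copy;
  }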
Change-Id: I4a603f3d3c70ca657ebda9fbb8570269f943d6b6
This makes it possible to transclude list items from a template.
Note: "5 quotes" test is broken by this patch, it appears that ListHandler
newline processing is changing some state which mysteriously affects the
QuoteTransformer. This is ominous, hopefully there's a simple explanation...
gwicke: fix a bug in the tokenizer triggered by definition lists like this:
**; foo : bar
Change-Id: I4e3a86596fe9bffcbfc4bf22895362c3bf742bad