Commit graph

147 commits

Author SHA1 Message Date
Subramanya Sastry 49ed0d3adf Fixed parser and serializer to deal with a 4+ length dash sequence.
Change-Id: If7caaefec1ad55e7604712ef959ff0c843392adf
2012-07-13 15:12:09 -05:00
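
Illustrative sketch (JavaScript, not code from this repository) of the round-tripping idea for dash runs of four or more characters: the tokenizer records the original dash count on the token, and the serializer re-emits exactly that many dashes. The token shape and the dashCount attribute name are assumptions.

// Hypothetical sketch: treat '----', '-----', ... as a single <hr> while
// preserving the original dash count for serialization.
function tokenizeDashLine(line) {
  const m = /^(-{4,})\s*$/.exec(line);
  if (!m) {
    return null; // not a horizontal rule
  }
  // Emit one hr token; remember how many dashes the source actually had.
  return { type: 'hr', dataAttribs: { dashCount: m[1].length } };
}

function serializeHr(token) {
  // Re-emit the original number of dashes (minimum four).
  const n = Math.max(4, (token.dataAttribs && token.dataAttribs.dashCount) || 4);
  return '-'.repeat(n);
}

console.log(serializeHr(tokenizeDashLine('------'))); // '------'
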
Gabriel Wicke 1736e52bfb Abstract out chunk emission from tokenizer
Patch by Adam Wight, fixes bug #35377.

Change-Id: I183baeed8dd78e7d3c775f44d62bec8e6f9fc608
2012-06-30 10:39:12 +02:00
Gabriel Wicke 9f753d8009 Source-based round-tripping for behavior switches
Change-Id: I46d12d338314a8dbfdc9a8448a74680e67c3a720
2012-06-28 18:20:13 +02:00
Gabriel Wicke ff414ad825 Add generic source round-trip mode, and use it for plain images (for now)
Anything with data-gen="both" and dataAttribs.src defined serializes to
dataAttribs.src and drops its contents (if any). We can use this to round-trip
elements we don't properly parse or serialize yet. Without RDFa info, the
editor will not touch the contents after encountering data-gen="both".

Change-Id: Ia39e5fdd765c2c9b36f26313455685d29f118839
2012-06-28 17:44:26 +02:00
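
A minimal sketch of the serializer branch described above, with assumed helpers (getDataAttribs, serializeChildren): when an element carries data-gen="both" and a recorded dataAttribs.src, emit the stored source and skip the element's contents.

// Sketch only: generic source round-tripping in the serializer.
// getDataAttribs() is an assumed helper that returns the element's parsed
// round-trip data (the dataAttribs mentioned above).
function serializeNode(node, emit, getDataAttribs, serializeChildren) {
  const da = getDataAttribs(node) || {};
  if (node.getAttribute && node.getAttribute('data-gen') === 'both' && da.src !== undefined) {
    emit(da.src); // emit the original wikitext source verbatim
    return;       // drop the rendered contents, if any
  }
  serializeChildren(node, emit);
}
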
Gabriel Wicke e1a7d10063 Fix round-tripping of invalid external links somewhat
* Don't consider them for auto-numbered links
* Don't insert a trailing space if the content is empty

These links are still wrapped in nowiki on round-tripping, since the
valid/invalid URL determination is configuration-dependent and is therefore
done in the LinkHandler rather than the Tokenizer. This is not incorrect for
rendering (and perhaps easier for humans to understand too), but might still
introduce a dirty diff. We'll still need reconciliation / damage tracking in
the end ;)

Change-Id: I959ebc1b7f81d110a1141bb38ba5ee97f52ebf96
2012-06-28 16:12:23 +02:00
Gabriel Wicke 198e55a32b Update nowiki handling to latest spec; some fixes to it
346 round-trip tests are passing now (up from 343).

Change-Id: I27bdc9c5e010a13c2b4dddc6f263cbf9d3adac36
2012-06-28 14:57:05 +02:00
Gabriel Wicke 17af335748 Fix a crasher in unbalanced heading tokenization
Example input:

=== foo ==

Old result:
http://www.mediawiki.org/w/index.php?title=VisualEditor:Test&diff=prev&oldid=554403

Change-Id: I0bc135884833607cedb62ec9c045310df3649dd8
2012-06-28 12:34:32 +02:00
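
For context, a stand-alone sketch of the usual unbalanced-heading rule: the heading level is the smaller of the two '=' runs, and the surplus '=' characters stay in the heading text. This is only an illustration, not the tokenizer's actual grammar.

// Sketch: derive heading level and content from a possibly unbalanced heading line.
function parseHeading(line) {
  const m = /^(=+)(.*?)(=+)\s*$/.exec(line);
  if (!m) {
    return null;
  }
  const level = Math.min(m[1].length, m[3].length, 6);
  // Surplus '=' characters on either side remain part of the content.
  const content = (m[1].slice(level) + m[2] + m[3].slice(0, m[3].length - level)).trim();
  return { level: level, content: content };
}

console.log(parseHeading('=== foo ==')); // { level: 2, content: '= foo' }
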
Gabriel Wicke 424a246b00 Rename data-mw-gc to data-gen. Credit to James!
Change-Id: Iacbe20b355ddf5f12fffb71ff4dd978ac4364928
2012-06-27 19:08:14 +02:00
Gabriel Wicke df26663a3f Add basic tsr on indent-pre end tag
This is a zero-length tsr for now (and thus not 100% correct), but will do the
job for starttag / endtag range establishment

Change-Id: Iedd50ad319aa8d5916434fb6744deb04e031e456
2012-06-27 18:08:49 +02:00
Gabriel Wicke c02218c736 Merge changes Idfa5d6a8,I700142a5
* changes:
  Represent nowiki as span instead of meta
  Round-trip html entities and introduce data-mw-gc attribute
2012-06-27 16:07:48 +00:00
Gabriel Wicke a1d05976ce Merge "Small (and incomplete) fix to table cell tsr" 2012-06-27 12:45:39 +00:00
Gabriel Wicke 53451bfc50 Small (and incomplete) fix to table cell tsr
Change-Id: I14347939de32af698d7ce0b649165982908c49aa
2012-06-27 14:45:12 +02:00
Gabriel Wicke 7108ee985a Represent nowiki as span instead of meta
Change-Id: Idfa5d6a8ee7b2d17205779361ca69d075a79964d
2012-06-27 13:59:14 +02:00
Gabriel Wicke 0b9a420129 Round-trip html entities and introduce data-mw-gc attribute
297 round-trip tests are passing with this patch.

TODO:
* generalize data-mw-gc handling in the serializer for any tag
* use data-mw-gc="both" and data-mw.src: 'the wikitext' for round-tripping of
  wikitext structures, optionally with some presentational (but read-only)
  content
* use span and data-mw-gc="both" for nowiki

Change-Id: I700142a56818977c20c8c06e6a5f2e77a708d25e
2012-06-27 12:52:52 +02:00
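
A hypothetical sketch of the entity round-tripping idea: keep the original entity source next to the decoded value so the serializer can emit the source verbatim rather than the decoded character. The attribute names are assumptions in the spirit of the data-mw-gc/src scheme sketched in the TODO above.

// Sketch: represent a decoded HTML entity together with its original source text.
function entityToken(src) {
  // Decode a small set of named entities, for illustration only.
  const map = { '&nbsp;': '\u00a0', '&amp;': '&', '&lt;': '<', '&gt;': '>' };
  return {
    type: 'entity',
    value: map[src] !== undefined ? map[src] : src,
    dataAttribs: { src: src } // remembered for round-tripping
  };
}

function serializeEntity(tok) {
  // Prefer the recorded source so '&nbsp;' does not turn into a bare space.
  return (tok.dataAttribs && tok.dataAttribs.src) || tok.value;
}

console.log(serializeEntity(entityToken('&nbsp;'))); // '&nbsp;'
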
Subramanya Sastry 1a504a5f54 Added tokenizer support for ----
Change-Id: Idc5519350d11ae91b2ec64553f847d56e22d63bb
2012-06-25 16:40:34 -05:00
Subramanya Sastry d5e6ec34aa Deleted dead PEG productions
Change-Id: I9b859f79f9900b3d320aa1ad0283a4b5ae6c4331
2012-06-25 13:17:01 -05:00
Gabriel Wicke b3bd2ffe8d Fix definition list parsing and round-trip single vs. multi-line dt/dd
* Removed murky ' :' -> '&nbsp;:' replacement in tokenizer. This breaks four
  parser tests, and should be fixed in a token stream transformer or DOM
  postprocessor. This replacement clashes with round-tripping, and is not
  terribly important visually.
* Added stx:row annotation to single-line dt/dd pairs and use it to preserve
  single-line syntax in the serializer. There is no attempt yet to support the
  addition of nested lists in an originally single-line dd. We'd need to look
  ahead in the serializer to support this. Perhaps the editor can simply drop
  data-mw in that case.
* Switched default dt/dd serialization to multi-line. This supports all nested
  lists and multiple dds.
* Don't close dls when switching from dt to dd or back in the token stream
  ListHandler.

Overall 290 round-trip tests are passing now (up from 284, some due to &nbsp;,
some due to lists). The number of passing parser tests dropped slightly from
303 to 297 (or 301/295 on weekdays other than Thursday).

Change-Id: I85ff40571833713388c6523e6a4ba2e94daa3807
2012-06-21 17:34:25 +02:00
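
To illustrate the stx:'row' annotation (only the stx name comes from the commit; everything else is assumed): a dt/dd pair marked as single-line serializes to '; term : definition' on one source line, while the default multi-line form puts the dd on its own ':' line.

// Sketch: choose single-line vs. multi-line dt/dd serialization based on the
// stx annotation described above.
function serializeDefinition(dt, dd) {
  const singleLine = dd.dataAttribs && dd.dataAttribs.stx === 'row';
  if (singleLine) {
    // '; term : definition' on one line
    return '; ' + dt.text + ' : ' + dd.text;
  }
  // Default: multi-line form, which also supports nested lists and multiple dds.
  return '; ' + dt.text + '\n: ' + dd.text;
}

console.log(serializeDefinition({ text: 'term' }, { text: 'def', dataAttribs: { stx: 'row' } }));
// '; term : def'
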
Gabriel Wicke 344fac19b5 Improve preformatted text handling
* Don't escape html-syntax pre content for now; this should be parsed with a new
  pre content production later (which needs to be split out of the regular pre
  production in the tokenizer)
* Protect indent-pre content from start-of-line syntax escaping
* Preserve extra leading spaces in the tokenizer
* Two more (now 284) round-trip tests are passing

Change-Id: I199b89c0ee7fae12546df10c1b5117c97caccac5
2012-06-20 19:28:34 +02:00
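
One way to picture the indent-pre protection above, as a sketch with an assumed state.inIndentPre flag: inside indent-pre content the leading space already neutralizes start-of-line wiki syntax, so no nowiki escaping needs to be applied there.

// Sketch: skip start-of-line escaping when serializing indent-pre content.
function escapeStartOfLine(text, state) {
  if (state.inIndentPre) {
    // The leading space of indent-pre already neutralizes SOL wiki syntax,
    // so '*', '#', ';', ':', '=' at line start can be left alone here.
    return text;
  }
  return text.replace(/^([*#:;=])/gm, '<nowiki>$1</nowiki>');
}
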
Gabriel Wicke c9d3db8f34 Fix a few round-tripping and list issues
At least partly fixes some bugs in
http://www.mediawiki.org/wiki/Parsoid/Bug_test_cases. 276 round-trip tests are
passing.

* Fixes
  http://www.mediawiki.org/wiki/Parsoid/Bug_test_cases#extra_newline_after_empty_dd,
  except for lost newline in 'working' example before next heading
* Fixes newlines in definition lists
  (http://www.mediawiki.org/wiki/Parsoid/Bug_test_cases#dd_indentation etc),
  but does not fix missing / incorrect bullets for those

Change-Id: I21f66e265e43e1d1a4c7da70984a9984b8e6d0dd
2012-06-20 13:53:47 +02:00
Gabriel Wicke e117f09362 Wikitext escaping and quite complete source range tracking
* Started to add more complete tag source range (tsr) annotations to most
  start / empty tags. These replace the old sourcePos and sourceTagPos
  annotations, and look more promising for general round-tripping than block
  source ranges (bsr). See
  http://www.mediawiki.org/wiki/User:GWicke/Parsoid_source_ranges for some
  notes on this.
* Added an escapeWikitext method in the serializer that tokenizes supposedly
  text-only content from the DOM with the tokenizer and wraps runs of returned
  non-text tokens into nowiki tags. The source corresponding to non-text
  tokens is retrieved using the tsr annotations.
* Removed old (unused) table productions to avoid confusion.
* 276 round-trip tests are passing, vs. 283 without escaping.

Known issues:
* harmless for now, can be improved later: urllinks in external link captions
  are wrapped in nowiki. Example HTML:

<a rel='mw:extLink' href="http://example.com">http://example2.com</a>

* some start-of-line syntax in wiki-syntax preformatted blocks might be
  wrapped into nowiki when that would not really be needed. Example HTML DOM:

<pre>
* foo
* bar
</pre>

Change-Id: I01c34aedd5c566614d36924add47a6a960e91987
2012-06-19 23:36:44 +02:00
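
A condensed sketch of the escapeWikitext strategy described above: re-tokenize the supposedly plain text, and wrap the source ranges (taken from tsr offsets) of any non-text tokens in nowiki. The tokenize helper and token shape are stand-ins rather than Parsoid's actual API, and merging adjacent non-text tokens into a single nowiki run is omitted for brevity.

// Sketch of the escapeWikitext idea. tokenize(text) is an assumed helper that
// returns tokens of the form { type, dataAttribs: { tsr: [start, end] } }.
function escapeWikitext(text, tokenize) {
  const tokens = tokenize(text);
  let out = '';
  let cursor = 0;
  for (const t of tokens) {
    if (t.type === 'text') {
      continue; // plain text needs no escaping
    }
    const tsr = t.dataAttribs.tsr;
    if (tsr[0] > cursor) {
      out += text.slice(cursor, tsr[0]); // copy untouched text
    }
    // Wrap the source of the non-text token in nowiki.
    out += '<nowiki>' + text.slice(tsr[0], tsr[1]) + '</nowiki>';
    cursor = Math.max(cursor, tsr[1]);
  }
  return out + text.slice(cursor);
}
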
Gabriel Wicke a146fcb8ad Improve the handling of newlines for round-tripping
An improvement, but there are still some extra newlines inserted after
paragraphs. Example input:

-------

Foo:
{|
|foo
|}
-------

Extra newlines are inserted after the Foo: and the foo in the table. They are
not fed as tokens or text to the tree builder, so there is likely a bug in the
html5 library or JSDom.

Change-Id: I83eb6180e3cd1c4e7f9b15b31d339e1d32bccd3f
2012-06-06 10:17:03 +02:00
Subramanya Sastry fe6f289486 Merge changes I5d98c704,Ib8d3de75
* changes:
  A few tweaks to link round-tripping
  Use word diff if --color is enabled
2012-06-05 16:04:23 +00:00
Subramanya Sastry b095db4303 Simpler implementation of flatten.
* Possibly more efficient under heavy GC load -- untested.
* No change in time and memory use for single file parsing.

Change-Id: Id2f3f65cc0e5f38ed968bbda60b97e46523e700e
2012-06-05 10:47:46 -05:00
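
Since the commit only names the change, here is a generic iterative flatten over nested token arrays, roughly the kind of utility in question; this is a plain sketch, not the committed implementation.

// Sketch: flatten arbitrarily nested arrays without deep recursion.
function flatten(arr) {
  const out = [];
  const stack = [[arr, 0]];
  while (stack.length) {
    const top = stack[stack.length - 1];
    const cur = top[0];
    const i = top[1];
    if (i >= cur.length) {
      stack.pop();
      continue;
    }
    top[1] = i + 1;
    const item = cur[i];
    if (Array.isArray(item)) {
      stack.push([item, 0]); // descend into the nested array
    } else {
      out.push(item);
    }
  }
  return out;
}

console.log(flatten([1, [2, [3, 4]], 5])); // [ 1, 2, 3, 4, 5 ]
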
Gabriel Wicke dc3168cf6d A few tweaks to link round-tripping
* Moved the tail attribute to the second attribute (a bit cleaner)
* Disallowed newlines in the tail production
* Improved the selection of round-tripped href vs. generated content vs. href
  in the serializer
* Renamed state.linkTail to state.dropTail

Change-Id: I5d98c704b6ea566011e22237786f8da17548570f
2012-06-05 17:26:27 +02:00
Gabriel Wicke d16032ae9a Track html syntax in block_tag production
Change-Id: If560523644f007485809762f12216e08fb3c3ed3
2012-06-05 12:39:56 +02:00
Gabriel Wicke 92f753a365 Pre and link target improvements
* Don't explicitly add the newline in the pre, as we preserve newline tokens
  now. This avoids doubling of newlines when round-tripping.
* Use the sHref attribute even if the href contains spaces.

Change-Id: I8bec8fbfd6a7836bf2e5eec20869a0edd95c93b6
2012-06-04 14:03:05 +02:00
Gabriel Wicke ece2b0f810 Tokenizer backtracking cache bug fix and memory savings
* The state of syntax stops is now properly included in the cache key for the
  tokenizer-internal backtracking cache. This fixes some mis-parses when
  re-parsing a bit of text with different flags.
* Clear the backtracking cache after each toplevelblock. This drops the peak
  memory usage when expanding [[:en:Barack Obama]] from ~380M to ~110M.

Change-Id: Icdb879cae5907e4595903dd6acba2e686e8c2e4b
2012-06-01 12:53:49 +02:00
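
A simplified sketch of the cache-key fix described above: include the active syntax-stop flags in the memoization key, so re-parsing the same offset under different flags cannot hit a stale entry, and clear the whole cache per top-level block to bound memory. All names here are illustrative.

// Sketch: position-based backtracking cache whose key also encodes parser flags.
function makeCache() {
  let store = Object.create(null);
  return {
    key: function (pos, stops) {
      // stops: e.g. { extlink: true, tableCellArg: false, ... }
      return pos + ':' + JSON.stringify(stops);
    },
    get: function (pos, stops) { return store[this.key(pos, stops)]; },
    set: function (pos, stops, result) { store[this.key(pos, stops)] = result; },
    // Called after each top-level block to keep peak memory down.
    clear: function () { store = Object.create(null); }
  };
}
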
Gabriel Wicke c5d7e01944 Another tokenizer robustness improvement
This patch fixes a tokenizer syntax error encountered on
[[:en:Template:JacksonvilleWikiProject-Member]] and [[:en:Template:Infobox
former country]] by allowing optional whitespace before start-of-line template
syntax.

Change-Id: Ic214a731de58bf766e51f23d5e24ea2ce6788f58
2012-05-30 18:38:23 +02:00
Gabriel Wicke a133768781 Don't eat '}}' in generic attributes and similar productions
This fixes some syntax errors, at least one in Template:Geobox.

Change-Id: I32338febe25d0833c1d9bc4de293cd15b4cbb7be
2012-05-30 17:37:10 +02:00
Gabriel Wicke 36084c5d93 Preserve original newlines in HTML and serialization
254 round-trip tests (up from 184) are now passing.

Also:
* tweaked runtests.sh slightly (use less -R instead of -r).
* made sure the EOFTk is preserved in phase 3 transforms

Change-Id: I1de22186bdb78e52019370e43f096877005b8f5a
2012-05-29 23:29:03 +02:00
Gabriel Wicke b2adee0ae7 Basic rt support for indent pre variant
* Added a generic stx_v 'syntax variant' round-trip attribute
* For pre, use stx:'html' vs. no syntax annotation. This might not be 100%
  safe for arbitrary html input, so we might want to flip this to stx:'wiki'
  later.
* 181 round-trip tests passing

Change-Id: If6080917a3a7c069066db3db60efe59b1f6c28d8
2012-05-25 18:55:38 +02:00
Gabriel Wicke a31ccaabe4 Support definition lists with empty definition
Change-Id: I81c39a7e49f2ea7ce32cdd3600caeb5eb9f50d84
2012-05-25 15:40:32 +02:00
Gabriel Wicke 39c6f42879 Link round-tripping and other improvements
* Changed RDFa for links according to
  http://www.mediawiki.org/wiki/Parsoid/RDFa_vocabulary
* Added basic support for internal/external link serialization
* Moved numbering of external links from tokenizer to LinkHandler
* Added round-tripping for generic HTML tags
* Replaced nowiki tag with <meta typeOf="mw:tag" content="nowiki"> and <meta
  typeOf="mw:tag" content="/nowiki"> for now.
* 154 round-trip tests passing (node parserTests.js --roundtrip).

Change-Id: I16c4db21b1b543ee57c73e569c83025b64664542
2012-05-22 13:36:06 +02:00
Gabriel Wicke 7e21b7380a Merge "Round-trip nowiki" 2012-05-21 17:16:56 +00:00
Gabriel Wicke fb7d5418a5 Round-trip nowiki
Change-Id: I5f7e6a43f5fdc1708ee710b2a601b20db733452c
2012-05-21 18:06:09 +02:00
Gabriel Wicke a6610e52c2 Serializer and table round-tripping improvements
* added stx: 'html' round-trip information for html tags
* added t_stx: 'row' info for row-wise table wiki syntax, and support for it
  in the serializer
* the first table row is implicit in wikitext
* renamed lastToken to prevToken in serializer
* strip first newline in an initial chunkCB

Change-Id: I014b046539d1b674d830551c5fd1b74a67f81993
2012-05-21 14:59:53 +02:00
Gabriel Wicke e2815b516c Start to handle links
Change-Id: I1fb975910651820fd889d77152562fd4fbcb5db8
2012-05-17 14:32:56 +02:00
Gabriel Wicke d918fa18ac Big token transform framework overhaul part 2
* Tokens are now immutable. The progress of transformations is tracked on
  chunks instead of tokens. Tokenizer output is cached and can be directly
  returned without a need for cloning. Transforms are required to clone or
  newly create tokens they are modifying.

* Expansions per chunk are now shared between equivalent frames via a cache
  stored on the chunk itself. Equivalence of frames is not yet ideal though,
  as right now a hash tree of *unexpanded* arguments is used. This should be
  switched to a hash of the fully expanded local parameters instead.

* There is now a vastly improved maybeSyncReturn wrapper for async transforms
  that either forwards processing to the iterative transformTokens if the
  current transform is still ongoing, or manages a recursive transformation if
  needed.

* Parameters for parser functions are now wrapped in abstract Params and
  ParserValue objects, which support some handy on-demand *value* expansions.
  Keys are always expanded. Parser functions are converted to use these
  interfaces, and now properly expand their values in the correct frame.
  Making this expansion lazier is certainly possible, but would complicate
  transformTokens and other token-handling machinery. Need to investigate if
  it would really be worth it. Dead branch elimination is certainly a bigger
  win overall.

* Complex recursive asynchronous expansions should now be closer to correct
  for both the iterative (transformTokens) and recursive (maybeSyncReturn
  after transformTokens has returned) code paths.

* Performance degraded slightly. There are no micro-optimizations done yet
  and the shared expansion cache still has a low hit rate. The progress
  tracking on chunks is not yet perfect, so there are likely a lot of unneeded
  re-expansions that can be easily eliminated. There is also more debug
  tracing right now. Obama currently expands in 54 seconds on my laptop.

Change-Id: I4a603f3d3c70ca657ebda9fbb8570269f943d6b6
2012-05-15 17:05:47 +02:00
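
One small piece of the design above, sketched with assumed names: tokens are frozen after construction, and transformation progress is tracked on the chunk (a rank field here) rather than on the tokens themselves, so cached tokenizer output can be reused without cloning.

// Sketch: immutable tokens plus per-chunk progress tracking.
function makeToken(type, props) {
  return Object.freeze(Object.assign({ type: type }, props));
}

function makeChunk(tokens) {
  // rank records how far along the transform pipeline this chunk is; transforms
  // that change a token must create a new token instead of mutating in place.
  return { tokens: tokens, rank: 0 };
}

const chunk = makeChunk([makeToken('text', { value: 'foo' })]);
chunk.rank = 2.0;                 // progress lives on the chunk...
// chunk.tokens[0].value = 'bar'; // ...while frozen tokens reject mutation
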
Adam Wight 0a7f0b7630 List markup is created during the sync23 phase.
This makes it possible to transclude list items from a template.

Note: "5 quotes" test is broken by this patch, it appears that ListHandler
newline processing is changing some state which mysteriously affects the
QuoteTransformer.  This is ominous, hopefully there's a simple explanation...

gwicke: fix a bug in tokenizer triggered by definition lists like this:
**; foo : bar

Change-Id: I4e3a86596fe9bffcbfc4bf22895362c3bf742bad
2012-05-08 11:39:36 +02:00
Gabriel Wicke 909633ea08 Improve template / tplarg precedence in tokenizer
Change-Id: If9b24b42ea223e0f30f906a83496d73ec60c4a0d
2012-05-04 13:17:06 +02:00
Gabriel Wicke 30a83d7fd7 Accept wikilink parameters with dangling equal ('|arg=|')
Change-Id: Ib4f6d186da2a74522b17c377dac5c9a7de7e5861
2012-04-27 11:35:00 +02:00
Gabriel Wicke 1d70e7b81c Disable preformatted text from indents in template args
Change-Id: I84144d3fab6541ed264d9b092806c8bf9de6e8b2
2012-04-27 10:45:08 +02:00
Gabriel Wicke 3be4992782 'Obama finally expands' ;) Misc fixes and documentation updates
* [[:en:Barack Obama]] can now be expanded in 77 seconds using 330MB RAM,
  while it would previously run out of RAM after ~30 minutes. Wohoooo!
  The token transform framework rework really paid off.
* 303 parser tests are passing in the new record time of 5.5 seconds. Two more
  tests are passing because they expect the day of the week to be Thursday,
  which won't be the case tomorrow.

Change-Id: I56e850838476b546df10c6a239c8c9e29a1a3136
2012-04-26 18:18:08 +02:00
Gabriel Wicke 8368e17d6a Biggish token transform system refactoring
* All parser pipelines including tokenizer and DOM stuff are now constructed
  from a 'recipe' data structure in a ParserPipelineFactory.

* All sub-pipelines of these can now be cached

* Event registrations to a pipeline are directly forwarded to the last
  pipeline member to save relatively expensive event forwarding.

* Some APIs for on-demand expansion / format conversion of parameters from
  parser functions are added:

  param.to('tokens/expanded', cb)
  param.to('text/wiki', cb) (this does not work yet)

  All parameters are additionally wrapped into a Param object that provides
  methods for positional parameter naming (.named()) and conversion to a dict
  (.dict()).

* The async token transform manager is now separated from a frame object, with
  the frame holding arguments, an on-demand expansion method and loop checks.

* Only keys of template parameters are now expanded. Parser functions or
  template arguments trigger an expansion on-demand. This (unsurprisingly)
  makes a big performance difference with typical switch-heavy template
  systems.

* Return values from async transforms are no longer used in favor of plain
  callbacks. This saves the complication of having to maintain two code paths.
  A trick in transformTokens still avoids the construction of unneeded
  TokenAccumulators.

* The results of template expansions are no longer buffered.

* 301 parser tests are passing

Known issues:

* Cosmetic cleanup remains to do
* Some parser functions do not support async expansions yet, and need to be
  modified.

Change-Id: I1a7690baffbe8141cadf67270904a1b2e1df879a
2012-04-25 16:51:36 +02:00
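
A toy sketch of the Param-style on-demand conversion API mentioned above: only the call shapes (to(format, cb), named(), dict()) follow the commit; the conversion logic is stubbed out and the expansion helper is an assumption.

// Sketch only: a Param-like wrapper offering on-demand format conversion.
function Param(key, value, expandFn) {
  this.key = key;           // parameter name ('' for positional parameters)
  this.value = value;       // unexpanded value, e.g. a token array
  this._expand = expandFn;  // assumed async expansion helper: (value, cb)
}

Param.prototype.to = function (format, cb) {
  if (format === 'tokens/expanded') {
    this._expand(this.value, cb);  // expand lazily, report via callback
  } else if (format === 'text/wiki') {
    cb(new Error('text/wiki conversion not implemented yet'));
  } else {
    cb(new Error('unsupported format: ' + format));
  }
};

Param.prototype.named = function (position) {
  // Give a positional parameter an explicit numeric name.
  return new Param(this.key || String(position), this.value, this._expand);
};

Param.prototype.dict = function () {
  const d = {};
  d[this.key] = this.value;
  return d;
};
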
Gabriel Wicke c688b039de Collected tweaks
* less verbose logging in noinclude processing and template expansion
* Give priority to the processing of templates transcluded from transclusions
  to get closer to depth-first processing. This serves to minimize memory
  usage from queued-up tokens.
* Increase the maximum outstanding requests per template retrieval. 10000
  amazingly proved too low a limit on some big pages.
* Only process a single template request callback at a time for now
* Add a debug print in the treebuilder wrapper
* Don't treat multiple comments on a single line as a single comment, to match
  the PHP parser's behavior

Change-Id: I9a86b6d7bec3b9e1f17415daf1bf74170240721a
2012-04-16 15:47:03 +02:00
Gabriel Wicke efd4c026ea Disallow &lt; and &gt; in external link urls
Change-Id: Id865c3d46b33b182bb5b244e77e815c0afd7fa49
2012-04-16 15:36:56 +02:00
Gabriel Wicke df050e4481 Convert external link syntax stops to stack
Eat unbalanced external link parts within template parameters. This does not
produce the same output as the PHP parser
(try echo '{{YouTube}}' | node parse.js), but preserves a level of sanity.
Need to check how common this is for external links. If it is rare enough,
moving the ']' after the parser function manually would fix the rendering for
the YouTube case.

Change-Id: I597d808efff36baa22191e7946a0061cc31120e8
2012-04-13 11:08:42 +02:00
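
A minimal sketch of stop flags kept as counters rather than booleans, in the spirit of converting the external link stops to a stack: nested contexts push and pop, so an inner context cannot accidentally clear an outer one. Entirely illustrative.

// Sketch: counting syntax 'stops' so nested push/pop behaves like a stack.
function SyntaxStops() {
  this.counts = Object.create(null);
}
SyntaxStops.prototype.push = function (name) {
  this.counts[name] = (this.counts[name] || 0) + 1;
};
SyntaxStops.prototype.pop = function (name) {
  if (this.counts[name]) { this.counts[name] -= 1; }
};
SyntaxStops.prototype.on = function (name) {
  return (this.counts[name] || 0) > 0;
};

const stops = new SyntaxStops();
stops.push('extlink');  // entering an external link
stops.push('extlink');  // re-entered, e.g. inside a template parameter
stops.pop('extlink');
console.log(stops.on('extlink')); // true: the outer context is still active
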
Gabriel Wicke bff43938f6 Support noinclude/includeonly/onlyinclude in attributes
Fun test case:
{|
|-<includeonly>
foo
</includeonly>
|Hello
|}

Change-Id: I353bb287d3967ade549fbcb4ae64511a1f1f7e36
2012-04-11 17:37:25 +02:00
Gabriel Wicke 5a33099875 Improve template tokenization in template arguments
Taxobox tables now render pretty much correctly.

Change-Id: I5a0564138ff0c688d8a5a69b7867646fd3763946
2012-04-10 16:40:49 +02:00
Gabriel Wicke dbdd320348 Improve parameter tokenization support especially for table rows
Change-Id: I961d69e228b96adc69ea9acb3733d13f5898602d
2012-04-05 16:00:26 +02:00