* The RDFa types used for links are now identical to those listed in
http://www.mediawiki.org/wiki/Parsoid/RDFa_vocabulary, and are supported for
serialization.
* Editors are responsible for adjusting the type when converting between link
types. For example, adding a caption to an mw:UrlLink should convert it into
an mw:ExtLink.
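As an illustration of the distinction (markup details here are assumed, not
taken from this patch), the two types might look roughly like:

  http://example.com           ==> <a rel="mw:UrlLink" href="http://example.com">http://example.com</a>
  [http://example.com caption] ==> <a rel="mw:ExtLink" href="http://example.com">caption</a>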
Update: rebased on top of trace patches
Change-Id: Ie1b882e2b3fbad08be94769e1167dccd8dfea65d
* Source-based round-tripping now uses typeof="mw:Placeholder" instead of
data-gen (see the sketch below).
* mw:Image is supported for round-tripping, but not yet for modifications as
it is still source-based
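A rough sketch of a source-based placeholder (the attribute carrying the
original source is an assumption):

  <span typeof="mw:Placeholder" data-parsoid='{"src":"{{unhandled markup}}"}'>...</span>

On serialization, the stored src string is emitted verbatim, so constructs we
don't handle yet survive the round-trip unchanged.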
Change-Id: Ie5cf4e54de0163168c25c2b5c09380657a15970f
* This makes wikilink attrs more similar to those of ext links.
* Added a 'content' key to ISBN links, but couldn't add it to regular
wikilinks yet because of the complexity of how they are handled in
the rest of the pipeline. Changing this requires fixing up other
parts further down the pipeline -- something for later.
* Fixed up wikilink handler to use named lookup for 'href' and
'tail' rather than positional lookup. Content lookup is still
positional as before.
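A minimal sketch of the lookup change (the helper name and attribute shape
are illustrative, not verbatim from the patch):

  // before: positional lookup, fragile if attribute order changes
  var href = token.attribs[0].v;
  // after: named lookup by key
  var href = Util.lookup( token.attribs, 'href' );
  var tail = Util.lookup( token.attribs, 'tail' );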
Change-Id: I657b1f338d38df3cfdfa99f27ac46e7fe1c9fd65
* Copied utility methods over from mediawiki.parser.environment.js
to ext.Util.js.
* Moved a utility method over from mediawiki.parser.defines.js to
ext.Util.js.
* Converted Util to be a singleton object rather than an allocatable
class (see the sketch below). There is no reason to allocate a new
utility instance everywhere, since this utility object has no useful state.
* Fixed up use of utility methods to use Util rather than env.
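A minimal sketch of the singleton conversion (method names are illustrative):

  // before: every consumer allocated its own instance
  var util = new Util();

  // after: one shared, stateless object
  var Util = {
    lookup: function ( kvs, key ) { /* ... */ }
  };
  module.exports.Util = Util;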
Change-Id: Ib81f96b894f6528f2ccbe36e1fd4c3d50cd1f6b7
- Added an extra debug_name parameter to addTransform, which is
used to output useful trace info.
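A hypothetical call site (the surrounding signature and handler names are
assumptions):

  // debug_name ('LinkHandler:onWikiLink') appears only in trace output
  manager.addTransform( this.onWikiLink.bind( this ),
      'LinkHandler:onWikiLink', this.rank, 'tag', 'wikilink' );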
Change-Id: I160ba0c45f681149375e32ab19f97baa439b09a8
Now that we have access to the contents, we can more easily compare the
content with link targets. This is still to do; this commit only converts the
link handler to work on the collected tokens.
* Start to implement latest RDFa spec from
http://www.mediawiki.org/wiki/Parsoid/RDFa_vocabulary
* Capitalize types, add an mw:Entity type for HTML entities (sketched below)
* Detect changes to entities using tokenCollector and srcContent
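A rough sketch of the entity annotation (markup details assumed):

  &nbsp; ==> <span typeof="mw:Entity">&#160;</span>

Keeping the original source content around lets the token collector detect
whether an editor changed the entity, and re-emit the original entity form
when it did not.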
Change-Id: I45429f4b930858a16e166ef8377c8f6f5114c414
* Looks like I misled myself in commit 88fc91 -- that wikitext
round-tripped perfectly because it went through the 'src' route:
it was a thumbnail with an explicit image, which doesn't
go through renderThumb -- so the serializer simply spat out the
original 'src' string, and hence the perfect rt :).
* More whitespace preserving fixes in LinkHandler.
* Also changed the resource value in the img tag to use the original
filename rather than the normalized, capitalized filename (example below).
* 2 more parser tests rt -- now up to 400.
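An illustrative example of the resource change (exact attribute output
assumed):

  [[Image:foo.jpg|...]] ==> <img resource="Image:foo.jpg" ...>
  (previously resource="Image:Foo.jpg" with normalized capitalization)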
Change-Id: I144a6486dd9d07da8a74a68700fe96c78d192826
* Changed PrefixImageOptions so that thumb and thumbnail are
distinct key-value pairs. Without this fix, we cannot distinguish
between thumb=foo.jpg and thumbnail=foo.jpg.
* Fixed the link handler so that whitespace is preserved around prefixed image
options.
* Fixed the figure handler to process the 3 different kinds of image options:
size, simple image options, and prefixed image options.
* There is a hack/fixme for the "upright: aspect" prefixed image option
which needs to be looked into.
* Still need to fix uppercasing of the image resource name.
With these fixes, the following wikitext round-trips perfectly
(after newline breaks are removed):
[[Image:Foo.jpg|thumbnail = 'baby.jpg'|100x100px|center| alt =bbbbb|
upright=true|bottom|link='http://foo.bar'|
This is a [[Linked Caption]] in the image]]
Change-Id: I6606df56874c2b97f00f08cb6bbeaec9878167d3
* For now, extracted image markup options out of the link handler
(structure sketched below).
* This info will also be used by the serializer.
* More properties/global constants can be moved into this structure
over time.
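A sketch of what the extracted structure might look like (names are
illustrative, not the actual identifiers):

  var ImageOptions = {
    prefix: { 'link': 'link', 'alt': 'alt', 'page': 'page', 'thumbnail': 'thumb' },
    simple: { 'left': 'halign', 'right': 'halign', 'center': 'halign', 'border': 'border' }
  };

Both the link handler and the serializer can then consult the same tables
instead of hard-coding the option names separately.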
Change-Id: I4cfbfd703f42e93fbad52b38b435f68d8a5c22ee
* Minor refactoring
* Cleared src in dataAttribs in renderThumb since we can serialize
thumbs now (or at least we can once all bugs are fixed and missing
pieces are handled).
Change-Id: If18865801cdd3d89c1477e68bfa3e13107c45b40
Anything with data-gen="both" and dataAttribs.src defined serializes to
dataAttribs.src and drops its contents (if any). We can use this to round-trip
elements we don't properly parse or serialize yet. Without RDFa info, the
editor will not touch the contents after encountering data-gen="both".
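A hypothetical example of the mechanism (the attribute carrying
dataAttribs.src is an assumption):

  <table data-gen="both" data-parsoid='{"src":"{| ... |}"}'>
    ...parser-generated contents, dropped on serialization...
  </table>
  ==> serializes back to the original "{| ... |}" source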
Change-Id: Ia39e5fdd765c2c9b36f26313455685d29f118839
* Don't consider them for auto-numbered links
* Don't insert a trailing space if the content is empty
These links are still wrapped in nowiki on round-tripping, since the
valid/invalid URL determination is done in the LinkHandler and not the
Tokenizer, as it is configuration-dependent. This is not incorrect for
rendering (and is perhaps easier for humans to understand too), but it might
still introduce a dirty diff. We'll still need reconciliation / damage
tracking in the end ;)
Change-Id: I959ebc1b7f81d110a1141bb38ba5ee97f52ebf96
* Moved the tail attribute to the second position (a bit cleaner)
* Disallowed newlines in the tail production
* Improved the selection of round-tripped href vs. generated content vs. href
in the serializer
* renamed state.linkTail to state.dropTail
Change-Id: I5d98c704b6ea566011e22237786f8da17548570f
If the href would not denormalize, add a copy of the original href in data-mw
and use it to preserve non-conventional capitalization etc.
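One way this situation arises (the data-mw key name is an assumption): in a
piped link the displayed content no longer records how the target was
capitalized, so the original href is kept around:

  [[foo bar|some text]] ==>
    <a rel="mw:WikiLink" href="Foo_bar" data-mw='{"origHref":"foo bar"}'>some text</a>

which allows the serializer to emit [[foo bar|some text]] again instead of
the normalized [[Foo bar|some text]].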
Change-Id: Ifef50eec7343b0e6b0ba66b6d19a8a3e8c9f8001
- Added a 'tail' JSON attribute for wikilinks
- During serialization, this attribute is used to strip the tail from
the link target and render it after the link
[[hen]]s ==> <a ... data-mw="{gc:1, tail: 's'}" ...>hens</a>
==> [[hen]]s
- 2 more roundtrip tests green
Change-Id: I84f3dabaf0271f7a67641a00148467daa8310eb0
* Changed RDFa for links according to
http://www.mediawiki.org/wiki/Parsoid/RDFa_vocabulary
* Added basic support for internal/external link serialization
* Moved numbering of external links from tokenizer to LinkHandler
* Added round-tripping for generic HTML tags
* Replaced the nowiki tag with <meta typeOf="mw:tag" content="nowiki"> and
<meta typeOf="mw:tag" content="/nowiki"> for now (sketched below).
* 154 round-trip tests passing (node parserTests.js --roundtrip).
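A rough sketch of the nowiki meta placeholders (output details assumed):

  <nowiki>[[not a link]]</nowiki> ==>
    <meta typeOf="mw:tag" content="nowiki">[[not a link]]<meta typeOf="mw:tag" content="/nowiki">

The paired meta tags mark the region so the serializer can re-wrap it in
<nowiki> on the way back out.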
Change-Id: I16c4db21b1b543ee57c73e569c83025b64664542
* Tokens are now immutable. The progress of transformations is tracked on
chunks instead of tokens. Tokenizer output is cached and can be directly
returned without a need for cloning. Transforms are required to clone or
newly create tokens they are modifying.
* Expansions per chunk are now shared between equivalent frames via a cache
stored on the chunk itself. Equivalence of frames is not yet ideal though,
as right now a hash tree of *unexpanded* arguments is used. This should be
switched to a hash of the fully expanded local parameters instead.
* There is now a vastly improved maybeSyncReturn wrapper for async transforms
that either forwards processing to the iterative transformTokens if the
current transform is still ongoing, or manages a recursive transformation if
needed.
* Parameters for parser functions are now wrapped in abstract Params and
ParserValue objects, which support some handy on-demand *value* expansions
(see the sketch after this list). Keys are always expanded. Parser functions
are converted to use these interfaces, and now properly expand their values
in the correct frame.
Making this expansion lazier is certainly possible, but would complicate
transformTokens and other token-handling machinery. Need to investigate if
it would really be worth it. Dead branch elimination is certainly a bigger
win overall.
* Complex recursive asynchronous expansions should now be closer to correct
for both the iterative (transformTokens) and recursive (maybeSyncReturn
after transformTokens has returned) code paths.
* Performance degraded slightly. There are no micro-optimizations done yet
and the shared expansion cache still has a low hit rate. The progress
tracking on chunks is not yet perfect, so there are likely a lot of unneeded
re-expansions that can be easily eliminated. There is also more debug
tracing right now. Obama currently expands in 54 seconds on my laptop.
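A hypothetical usage sketch for the on-demand value expansion (the method
name follows the param.to() API described in the next commit below; details
assumed):

  // expand a parameter's value only when it is actually needed
  pv.to( 'tokens/expanded', function ( tokens ) {
    // ...use the fully expanded tokens from the correct frame...
  } );

Keeping expansion behind a callback is what enables dead branch elimination:
#if / #switch branches that are never taken are never expanded.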
Change-Id: I4a603f3d3c70ca657ebda9fbb8570269f943d6b6
* All parser pipelines, including the tokenizer and DOM stuff, are now
constructed from a 'recipe' data structure in a ParserPipelineFactory (see
the sketch after this list).
* All sub-pipelines of these can now be cached
* Event registrations to a pipeline are directly forwarded to the last
pipeline member to save relatively expensive event forwarding.
* Some APIs for on-demand expansion / format conversion of parameters from
parser functions are added:
param.to('tokens/expanded', cb)
param.to('text/wiki', cb) (this does not work yet)
All parameters are additionally wrapped into a Param object that provides
methods for positional parameter naming (.named()) and conversion to a dict
(.dict()).
* The async token transform manager is now separated from a frame object, with
the frame holding arguments, an on-demand expansion method and loop checks.
* Only keys of template parameters are now expanded. Parser functions or
template arguments trigger an expansion on-demand. This (unsurprisingly)
makes a big performance difference with typical switch-heavy template
systems.
* Return values from async transforms are no longer used; plain callbacks
are used instead. This saves the complication of having to maintain two code
paths.
A trick in transformTokens still avoids the construction of unneeded
TokenAccumulators.
* The results of template expansions are no longer buffered.
* 301 parser tests are passing
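The 'recipe' construction mentioned in the first bullet might look roughly
like this (structure and names are assumptions, not the actual code):

  var recipe = [
    'Tokenizer',
    'SyncTokenTransformManager',
    'AsyncTokenTransformManager',
    'TreeBuilder',
    'DOMPostProcessor'
  ];
  var pipeline = pipelineFactory.makePipeline( 'text/wiki/full' );

Describing pipelines as data is what makes sub-pipelines cacheable by key
instead of being rebuilt for every expansion.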
Known issues:
* Cosmetic cleanup remains to be done
* Some parser functions do not support async expansions yet, and need to be
modified.
Change-Id: I1a7690baffbe8141cadf67270904a1b2e1df879a
* Ignore safesubst for now
* Remove an unneeded whitelist entry
* Make sure the caption is not lost for thumbs (fix to the last commit) and
remove a debug print
Change-Id: I243584ed0838cf7c3b4110fe9cdf869272477312
The HTML5 parser we are using to normalize expected HTML output in parserTests
reverses the order of attributes (see
https://github.com/aredridel/html5/pull/53 for the fix). Remove whitelist
entries concerned with this and use the proper order in external image
attributes.
Change-Id: If1868cae05396a150757c85a20473ab756cbcd97
* DOM based on Wikia's thumb output: HTML5, clean caption without magnify
icon (rough shape sketched below).
* basic RDFa annotations, but most options additionally in a data-mw object;
might want to move more (or all?) of those into RDFa data using meta tags.
* no support yet for framed or other formats, image scaling etc.
* also tweaked some config options in the environment
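A rough sketch of the intended thumb DOM (class and attribute names are
assumptions):

  <figure class="thumb" typeof="mw:Image/Thumb">
    <a href="./File:Foo.jpg"><img resource="./File:Foo.jpg" src="..."></a>
    <figcaption>clean caption, without the magnify icon</figcaption>
  </figure>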
Change-Id: Ie461fcdce060cfc2dec65cc057709ae650ef3368
Make wgUploadPath configurable. Also change the hard-coded fall-back image
sizes to sensible defaults. This breaks three parser tests until image size
retrieval from the wiki is implemented.
Implemented URL encoding canonicalization per the 'URL manipulation and
construction' part of the HTML5 spec:
http://www.whatwg.org/specs/web-apps/current-work/multipage/urls.html#url-manipulation-and-creation
Removed a few whitelisted test cases that are now passing directly.
The encoding canonicalization could also be moved to the Sanitizer. Doing
this early in token stream processing, however, has the advantage of providing
further transformations with uniform data to work with. We could even consider
moving this further into the tokenizer.
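A made-up example of the kind of encoding canonicalization meant (exact
rules per the spec linked above):

  href "http://example.com/a b"    ==> "http://example.com/a%20b"
  href "http://example.com/a%20b"  ==> unchanged (already canonical)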
This makes it possible to support template / template argument expansion in
image options, and causes little trouble for wikilinks. Non-image wikilinks
with multiple text pipes are quite rare in the dumps, and concatenating
description tokens with a plain '|' is quite easy. 261 parser tests passing.