Latest commit e2ca8c24c7:

This is a bit better than cloning tokens wholesale, but not by much. There is a lot of potential for much better per-token caching with reduced token cloning. We still need to map out all dependencies besides token attributes expanded from template parameters or other scoped state. Even if tokens themselves don't need transformation, they might still need to be considered by other token transformers, so simply keeping the final rank won't quite work even when a token itself is fully transformed. At a minimum, a shallow clone would need to be made and the rank reset (as in env.cloneTokens).

Change-Id: I4329113bb21750bae9a635229ed1b08da75dc614
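To illustrate the idea, here is a minimal sketch of a shallow per-token clone with rank reset, along the lines the commit message describes for env.cloneTokens. The token shape (an object carrying a `rank` property) and the function name are assumptions for the example, not the actual Parsoid API:

```js
// Illustrative sketch only, not the real env.cloneTokens implementation:
// make shallow per-token copies and reset the transform rank so downstream
// transformers reconsider each token.
function shallowCloneAndResetRank( tokens ) {
	return tokens.map( function ( token ) {
		// Plain strings (text tokens) carry no rank; reuse them unchanged.
		if ( typeof token !== 'object' || token === null ) {
			return token;
		}
		// Shallow copy: attributes and other properties are shared, not deep-cloned.
		var clone = Object.assign( Object.create( Object.getPrototypeOf( token ) ), token );
		// Reset the rank so the token is re-examined by later transformers.
		clone.rank = undefined;
		return clone;
	} );
}
```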
Files:

- html5
- ext.Cite.js
- ext.cite.taghook.ref.js
- ext.core.AttributeExpander.js
- ext.core.BehaviorSwitchHandler.js
- ext.core.LinkHandler.js
- ext.core.NoIncludeOnly.js
- ext.core.ParserFunctions.js
- ext.core.PostExpandParagraphHandler.js
- ext.core.QuoteTransformer.js
- ext.core.Sanitizer.js
- ext.core.TemplateHandler.js
- ext.Util.js
- ext.util.TokenCollector.js
- mediawiki.DOMConverter.js
- mediawiki.DOMPostProcessor.js
- mediawiki.HTML5TreeBuilder.node.js
- mediawiki.LinearModelConverter.js
- mediawiki.parser.defines.js
- mediawiki.parser.environment.js
- mediawiki.parser.js
- mediawiki.Title.js
- mediawiki.tokenizer.peg.js
- mediawiki.TokenTransformManager.js
- package.json
- parse.js
- pegTokenizer.pegjs.txt
- README.txt
A combined MediaWiki and HTML parser in JavaScript running on Node.js. Please see https://www.mediawiki.org/wiki/Future/Parser_development for an overview of the current implementation and for instructions on running the tests.

You might need to set the NODE_PATH environment variable:

    export NODE_PATH="node_modules"

Download the dependencies:

    npm install

Run the tests:

    npm test