An improvement, but there are still some extra newlines inserted after
paragraphs. Example input:
-------
Foo:
{|
|foo
|}
-------
Extra newlines are inserted after the Foo: and the foo in the table. They are
not fed as tokens or text to the tree builder, so there is likely a bug in the
html5 library or JSDom.
Change-Id: I83eb6180e3cd1c4e7f9b15b31d339e1d32bccd3f
* Possibly more efficient under heavy GC load -- untested.
* No change in time and memory use for single file parsing.
Change-Id: Id2f3f65cc0e5f38ed968bbda60b97e46523e700e
* Moved the tail attribute to the second position in the attribute list (a bit cleaner)
* Disallowed newlines in the tail production
* Improved the selection of round-tripped href vs. generated content vs. href
in the serializer
* renamed state.linkTail to state.dropTail
Change-Id: I5d98c704b6ea566011e22237786f8da17548570f
Page titles with a Wikipedia interwiki prefix now load the page from the
corresponding Wikipedia. Links in a page then stay within the given language.
Note that Parsoid currently makes no effort to recognize localized namespaces,
so it won't render media files, categories, etc. correctly.
Change-Id: I7bc4102e81a402772ea23231170734d580ea15b9
Functional changes (fixes):
* Make writeElement() also update parentNode and parentType for openings
* Also add to fixupStack when opening a wrapper for a text node
Non-functional changes (cleanup & docs):
* Document all variables at the beginning of the function
* Group variables according to where/how they're used
* Move expectedType into writeElement()
* Kill node; it duplicated parentNode unnecessarily
* Kill paragraphOpened; it was misnamed and unnecessary
* Rename closedElements to reopenElements
Change-Id: Ie5b4e4f30b267943048fdc170accb29139039192
* Push entire elements onto openingStack rather than type strings
* When closing an element, build a clone of the opening and push it onto
closedElements, then insert that clone when reopening the element
Change-Id: I8b0fb44394aed6c471dc6dacaab03e44c2333733
* Don't explicitly add the newline in the pre, as we preserve newline tokens
now. This avoids doubling of newlines when round-tripping.
* Use the sHref attribute even if the href contains spaces.
Change-Id: I8bec8fbfd6a7836bf2e5eec20869a0edd95c93b6
Lists interrupted by non-empty lines were not being closed properly. Register
for any token instead of just for newlines, and close the list if no listItem
follows the newline.
Change-Id: I1743901e3db541bbeda78d17707db943e6ceb9b9
If the href would not denormalize, add a copy of the original href in data-mw
and use it to preserve non-conventional capitalization etc.
Change-Id: Ifef50eec7343b0e6b0ba66b6d19a8a3e8c9f8001
A tail containing regexp syntax (a ? in [[:en:Main Page]]) would crash the
serializer. Use substr instead.
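A minimal sketch of the failure and the fix (hypothetical variable names):
  // broken: the tail is interpreted as a regexp, so '?' is special
  target = target.replace( new RegExp( tail + '$' ), '' );
  // safe: plain string comparison and slicing
  if ( target.substr( target.length - tail.length ) === tail ) {
      target = target.substr( 0, target.length - tail.length );
  }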
Change-Id: I8519aec9c07dfe31893d676b1c936a42d2af74a0
- Added a tail JSON attribute for wikiLinks
- During serialization, this attribute is used to strip the tail from
the link target and render it after the link (see the sketch below)
[[hen]]s ==> <a ... data-mw="{gc:1, tail: 's'}" ...>hens</a>
==> [[hen]]s
- 2 more roundtrip tests green
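A minimal sketch of the tail-stripping step (hypothetical helper, not the
actual serializer code):
  function serializeWikiLink( target, tail ) {
      // render [[hen]]s rather than [[hens]] when tail === 's'
      if ( tail && target.substr( target.length - tail.length ) === tail ) {
          return '[[' + target.substr( 0, target.length - tail.length ) + ']]' + tail;
      }
      return '[[' + target + ']]';
  }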
Change-Id: I84f3dabaf0271f7a67641a00148467daa8310eb0
This allows us to check the watchlist checkbox on the save dialog.
Added watchlist toggling to the VE save API.
Added some i18n messages to core integration.
Change-Id: Ibed8edb2c59ad49e1738c937c3bea518238d0845
* The state of syntax stops is now properly included in the cache key for the
tokenizer-internal backtracking cache. This fixes some mis-parses when
re-parsing a bit of text with different flags (see the sketch after this list).
* Clear the backtracking cache after each toplevelblock. This drops the peak
memory usage when expanding [[:en:Barack Obama]] from ~380M to ~110M.
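A hedged sketch of the cache-key change from the first point (illustrative
names, not the actual tokenizer internals):
  // before: equal positions collide even when the stop flags differ
  var key = pos + ':' + ruleName;
  // after: different syntax-stop states get distinct cache entries
  key = pos + ':' + ruleName + ':' + JSON.stringify( syntaxStops );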
Change-Id: Icdb879cae5907e4595903dd6acba2e686e8c2e4b
* Added converters to all relevant node implementations
* Added new annotation objects with their own factory
Change-Id: I9870d6d5eac45083929d74d2e58917d0939ca917
Also:
* Refactored tests
* Added tests for ve.dm.Transaction.newFromInsertion
* Added tests for ve.dm.Transaction.newFromRemoval
* Fixed problems with ve.dm.Transaction.newFromInsertion
* Added ve.dm.Node.canBeMergedWith, which is partially a port of ve.Node.getCommonAncestorPaths merged with the canMerge logic from ve.dm.DocumentNode.prepareRemoval in the old ve codebase
Change-Id: Ibbc3887d08286d8ab33fd6296487802d65b319fa
* This routine attempts to rewrite the DOM to maximize tag overlap
and thus minimize tag uses.
* This takes as input a set of tags which participate in the
minimization.
* Tested on the following example
<b><i><u><s>BIUS</s></u></i></b><b><i><s>BIS</s></i></b><b><u><s>BUS</s></u></b><u><i>UI</i></u>
with multiple combinations of the 2^4 possible variations of i,b,u,s
tags: [], ['i','b','u','s'], ['i'], ['b','s'], ['i','b','u']
- But I am not fully sure whether this implements the right behavior when
only a subset of inline tags is provided. Needs discussion and tweaking
as necessary.
* Also tested on a few others:
<b>B</b><b><i>BI</i></b><b><i><u>BIU</u></i></b><b><i><u><s>BIUS</s></u></i></b>
<s><i><b>SIB</s></i></b><s><i><u>SIU</u></i></s><i><u>IU</u></i><i>I</i>
* The previous pairwise tag rewriting version fails on several of these
examples, so this new version is a definite improvement.
* No change in parserTests run (203 passing before and after).
* Possible improvements that could/should be undertaken:
- get rid of useless/idempotent add/remove of nodes that don't change
the DOM.
- ensure that node attributes post-restructuring are correct.
Change-Id: Ib4a8b39583fa96a2be880a77021ca81cefa06484
Copy-pasting things like "text<IMAGE>moretext" failed spectacularly;
this commit fixes that.
* Check for content rather than structure in the inserted/removed data
* In the content case
** Run selectNodes() over the removal range, rather than just the cursor
*** i.e. no longer assume that content replacements only affect one node
** If there is structure involved, rebuild all affected nodes
Change-Id: I80e40b5b7c514a3fb105d57e4a17770d0fefaaea
Some of the replacement code was assuming that "does not contain
elements" and "is content" were the same. They no longer are, because we
have content nodes (like images) now, so a separate function is needed
to distinguish between these cases.
Change-Id: I206ccdf082b7baddf99d382eb3cdd77ea34fb479
If the last element of the input data array was text, the resulting text
node would have length=0 rather than the expected length value.
Change-Id: I3d089a80b8a447a12ba411b2e11c1b84f14f2959
To allow non-sysops to save via VE, refactored the VE save API
to use doEdit, which bypasses namespace protection.
Add an edit link in the view nav for non-sysops so that they may edit.
Add a View source link in the dropdown for non-sysops.
Add an Edit source link in the dropdown for sysops.
Cleaned up some of the integration core code.
UI tweaks.
Change-Id: Ib4249bc5fb7ffa6410e4f2d278aafbb871800981
WARNING: This is not as fast as the implementation of getNodeFromOffset in dm
Change-Id: I5fbe9b6edc66169b9caaa6751fde1b7b752814d1
NOTE: ve.ce.getNodeFromOffset and ve.dm.getNodeFromOffset should be renamed to getBranchNodeFromOffset to clarify that they only return branch nodes.
This patch fixes a tokenizer syntax error encountered on
[[:en:Template:JacksonvilleWikiProject-Member]] and [[:en:Template:Infobox
former country]] by allowing optional whitespace before start-of-line template
syntax.
Change-Id: Ic214a731de58bf766e51f23d5e24ea2ce6788f58
254 round-trip tests (up from 184) are now passing.
Also:
* tweaked runtests.sh slightly (use less -R instead of -r).
* made sure the EOFTk is preserved in phase 3 transforms
Change-Id: I1de22186bdb78e52019370e43f096877005b8f5a
- This is implemented as a post-processing pass.
- Might require additional checks to verify rewriteability.
- Implemented as a pair-wise tag DOM minimization strategy,
i.e. it takes tag pairs (B, I) for example, and attempts to
normalize the tree just for those tag pairs. Normalizing
across multiple tags is implemented as pairwise rewriting
across all pairs, e.g. (b,i), (b,u), (i,u) for (b,i,u)
- Copied over attributes as part of rewriting, but some of the
attributes lose their meaning on rewriting since tags are
reordered (e.g. sourcePosn, sourceTagPosn). How do we handle this?
Output examples and possible issues to fix:
<i><b><u>biu</u></b></i><b><u>bu</u></b><u>u</u>
gets rewritten to:
<u><b><i>biu</i>bu</b>u</u>
But, the equivalent wikitext form:
'''''<u>biu</u>''''''''<u>bu</u>'''<u>u</u>
does not get rewritten because of parsing differences.
This wikitext gets parsed into:
<i><b><u>biu</u>'''</b></i><u>bu<b>u</b></u>
The extra ''' token in the middle thwarts DOM rewriting.
However, a slightly different version:
"'''''<u>biu</u>''<u>bu</u>'''<u>u</u>"
gets properly normalized to:
<u>'''''biu''bu'''u</u>
An alternative but fun strategy to play with is to use the following
two normalization primitives: S(wap) and M(erge).
- S rewrites T1(T2(x)) into T2(T1(x))
(ex: <b><i>foo</i></b> ==> <i><b>foo</b></i>)
- M rewrites (T(x),T(y)) into (T(x,y)).
(ex: <b>foo</b><b>bar</b> ==> <b>foobar</b>)
The current rewriting strategy could possibly be re-implemented as S-M
rewriting. The problem to solve there would be to find an efficient
rewriting strategy that is guaranteed to lead to a normal form. I may
not play with it now, but just documenting it for later (to play with
in my spare time).
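A rough sketch of the two primitives on a DOM, assuming the caller has
already checked the single-child/same-tag preconditions (illustrative only):
  // S(wap): T1(T2(x)) ==> T2(T1(x))
  function swap( outer ) {
      var inner = outer.firstChild; // precondition: sole element child
      outer.parentNode.replaceChild( inner, outer );
      while ( inner.firstChild ) {
          outer.appendChild( inner.firstChild );
      }
      inner.appendChild( outer );
  }
  // M(erge): (T(x), T(y)) ==> T(x, y)
  function merge( first, second ) { // precondition: same tag, adjacent siblings
      while ( second.firstChild ) {
          first.appendChild( second.firstChild );
      }
      second.parentNode.removeChild( second );
  }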
This commit is just as a record of fun/experimental code where I get to
learn details of JS, wikitext, parsing, and DOM manipulation. Next
version of this code will attempt to introduce minimal DOM restructuring
across multiple tags at once which can be more efficient.
gwicke: Removed now passing test from whitelist, and updated another whitelist
entry which is now improved.
Change-Id: Ie97bcb164eb62c34ba61aa76ba2f4c232aa713d8
Save action:
1) posts HTML to the Parsoid service in exchange for wikitext
2) saves the wikitext
3) returns parsed content.
On save, VE is hidden and the page content is replaced.
Demoing save in the toolbar; a follow-up commit will redesign the save options.
Change-Id: Ibfbe52de08e3483e1a33f0740c03f96ec2b7f90a
This was caused by the fact that a non-structural leaf cannot have children, which makes it appear incompatible as a sibling to an arbitrary structural element (like a paragraph); but since it cannot contain content, we can check for that instead.
Change-Id: Ie3c58b4b43f2aa6921f8f82aa82511e231207854
* Added a generic stx_v 'syntax variant' round-trip attribute
* For pre, use stx:'html' vs. no syntax annotation. This might not be 100%
safe for arbitrary html input, so we might want to flip this to stx:'wiki'
later.
* 181 round-trip tests passing
Change-Id: If6080917a3a7c069066db3db60efe59b1f6c28d8
* Very basic support for attribute key-value pairs emitted from templates
* Add TALKPAGENAME stub implementation
* Only show 'no revisions' message for top-level pages
Change-Id: I4b4ac0c7b2c0531ac4b39f0f49f4217302576ab9
Using this argument will only return true if the offset is a place you can add any element to (hence the unrestricted part of it). This is good for testing if a paragraph could potentially be inserted there.
Change-Id: I6cc91da437c52493de03eb687b28966198270fea
This code still needs a lot of work, but it seems to work for most
cases. Things that still need to be done:
* Documentation and comments
* Handling of content and text nodes
** Use Trevor's isContent/canContainContent code which I don't have yet
* Preserve attributes when reopening closed elements
* Tests :)
Change-Id: I3bc16c964ef158693490a61ce12beb21e6fe2a9d
The 'insert' and 'remove' operations weren't implemented in the
transaction processor and were a holdover from the old DM
implementation.
Also migrated the tests, especially those that asserted that consecutive
insert/remove operations were combined (this is no longer the case now
that they are replace operations)
Change-Id: I2379fe92b331c5316f70f4b695397da41581cce9
Removed hard-coding of alien nodes; aliens are now automatically used for anything unknown, and block or inline aliens are selected based on whether the parent element can contain content or not.
Change-Id: I5d2a521ead4f4c96cb44d084a5c160cc20d8048e
* After installing Parsoid (sudo npm install -g in modules/parser), run 'node
server.js' from the api directory and navigate to http://localhost:8000/ and
follow the directions. You can start to navigate the English wikipedia at
http://localhost:8000/Main_Page, or manually enter wikitext or HTML DOM to
convert.
* Uses the express framework, could also use just connect
* Uses the cluster module to manage workers per-core and restart those on
failure
Change-Id: I443f2996ed3df00826b038b7476a2f966ab0c425
* Changed RDFa for links according to
http://www.mediawiki.org/wiki/Parsoid/RDFa_vocabulary
* Added basic support for internal/external link serialization
* Moved numbering of external links from tokenizer to LinkHandler
* Added round-tripping for generic HTML tags
* Replaced nowiki tag with <meta typeOf="mw:tag" content="nowiki"> and <meta
typeOf="mw:tag" content="/nowiki"> for now.
* 154 round-trip tests passing (node parserTests.js --roundtrip).
Change-Id: I16c4db21b1b543ee57c73e569c83025b64664542
Create context instance in surface.
Move over getSelectionRect into ce.surface
Cleanup ve.surface constructor class
Move linmod/html test objects to sandbox.js
Change-Id: I0cf602ef991100bf6128c68750b02a00566911dc
Switch sandbox demo to use new ui modules
Update VisualEditor.php to use ve2 modules
SpecialPageSandbox working
Change-Id: I8261d6bf6ceb6ae7b7bfa5f61aec6a0121906765
* added stx: 'html' round-trip information for html tags
* added t_stx: 'row' info for row-wise table wiki syntax, and support for it
in the serializer
* the first table row is implicit in wikitext
* renamed lastToken to prevToken in serializer
* strip first newline in an initial chunkCB
Change-Id: I014b046539d1b674d830551c5fd1b74a67f81993
in the new DM. Renamed getAnnotationRangeFromOffset to
getAnnotatedRangeFromOffset. Wrote tests.
Change-Id: I7028803065409e271ceced73e4803954d4a956dc
Lists are a bit tricky, as nested lists are not wrapped in a separate list
item. Should work now though.
Change-Id: I2e5f29f6afa6bdd2d5e5c0c5d019b70c611b73d1
Splits and merges now work, or at least the tests for them pass.
The strategy I used is to gather the affected ranges for each of the
following:
* removed stuff
* the entirety of each node touched by a non-zero removal
* if the inserted data busts out of its parent, the entirety of that
parent node (the 'scope')
then get the covering range of all those ranges, and rebuild that.
Change-Id: I7c3b421abc0ba134157ac8b59042675bb1b5073c
getAnnotationRangeFromOffset and offsetContainsAnnotation,
which deprecate getAnnotationBoundaries and getIndexOfAnnotation.
Wrote unit tests as proof.
Change-Id: I6c0d4e3ca96dd569b1909cd22fce68c3a6fe382c
This fix only affects following transforms, of which there are few right now.
Also removed a stray token mutation in QuoteTransformer.
Change-Id: Id6d4adce944b06fc1a3651cfbf63fc2670125225
* Tokens are now immutable. The progress of transformations is tracked on
chunks instead of tokens. Tokenizer output is cached and can be directly
returned without a need for cloning. Transforms are required to clone or
newly create tokens they are modifying.
* Expansions per chunk are now shared between equivalent frames via a cache
stored on the chunk itself. Equivalence of frames is not yet ideal though,
as right now a hash tree of *unexpanded* arguments is used. This should be
switched to a hash of the fully expanded local parameters instead.
* There is now a vastly improved maybeSyncReturn wrapper for async transforms
that either forwards processing to the iterative transformTokens if the
current transform is still ongoing, or manages a recursive transformation if
needed.
* Parameters for parser functions are now wrapped in abstract Params and
ParserValue objects, which support some handy on-demand *value* expansions.
Keys are always expanded. Parser functions are converted to use these
interfaces, and now properly expand their values in the correct frame.
Making this expansion lazier is certainly possible, but would complicate
transformTokens and other token-handling machinery. Need to investigate if
it would really be worth it. Dead branch elimination is certainly a bigger
win overall.
* Complex recursive asynchronous expansions should now be closer to correct
for both the iterative (transformTokens) and recursive (maybeSyncReturn
after transformTokens has returned) code paths.
* Performance degraded slightly. There are no micro-optimizations done yet
and the shared expansion cache still has a low hit rate. The progress
tracking on chunks is not yet perfect, so there are likely a lot of unneeded
re-expansions that can be easily eliminated. There is also more debug
tracing right now. Obama currently expands in 54 seconds on my laptop.
Change-Id: I4a603f3d3c70ca657ebda9fbb8570269f943d6b6
This means inserting things like </p><p> is now synced correctly and
splits the paragraph in the model tree. Merges (removing e.g. </p><p>)
aren't supported yet.
Also, this needs tests; Trevor tells me he's working on porting replace
tests from the old ve/ directory.
Change-Id: Ic5050849d7d007a1696dc36548654979aedb53a8
The tree sync for content replacements was adjusting the parent of the
text node affected, rather than the text node itself. This was because
it called getNodeFromOffset(), which returns branch nodes. Switched it
to use selectNodes() in leaves mode
Change-Id: I50a9be18151a1b75815ab19b787b16b6be385bf9
This was because the while loop was never entered as end >= left was
true from the start. Convert the while loop to a do-while loop to make
sure it runs at least once
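Schematically (the actual loop condition is omitted):
  // before: the body was skipped because the condition was already false
  while ( !done() ) {
      step();
  }
  // after: the body runs at least once
  do {
      step();
  } while ( !done() );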
Change-Id: I9c6436a7b296e65a36b8301095b6edd00507d321
Now returning an empty array when a non-annotated character is found
in the range. No longer looping through each annotation; simply
comparing to the previous character's annotations and trimming differences.
Wrote an additional test.
Change-Id: I41d2422a931a74325693edca409aed6d5da20ba8
This makes TransactionProcessor work for regular replacements, as well
as insertions and deletions of self-contained pieces of data. This does
NOT yet work for inserting and deleting unbalanced data
(splitting/merging nodes).
I've tested this from the console for insertions and deletions and
simple replacements, but I haven't tested wrappings. We should write a
bunch of unit tests for this some time :)
Change-Id: Ic2fd75d1cf2e127bc9ae58debce67576be2c912f
This is for the case where we have a zero-length range in between two
siblings, and we need to know what index that corresponds to in order to
be able to insert nodes there (rebuildNodes() will use it for this
purpose)
Change-Id: I357d1cd665667a76f955a10b8d9d2810976cdbd7
* Initialize startOffset to 0 not 1, don't know what I was thinking
* Use currentFrame for nodeRange instead of parentFrame, don't know what
I was thinking there either
* If the returned node has no parent (is the document node), don't
attempt to access parentFrame and don't set index
Change-Id: Iad969a7c29436cdf4151ead7e9d3d8e2a30befb3
Selecting a zero-length range at the start or end of a text node
(e.g. (1,1)) would return the text node's parent instead of the text
node
Change-Id: I7fe089bf66b93185dd3415eff53aa7e04e3ffdb2
* Needs to be initialized to 1, not 0
* Needs to be stored *after* left is incremented to account for an
opening
Change-Id: I7978ae241578a8a17120e494684e6e93626a8529
* Text nodes do not have a wrapper to set classes on
* Use CSS class names that are equivalent to JS class names, swapping . with -
Change-Id: I49c877dd5c9b5dd2a9afad3137f12b14883043a1
Trying something trivial like echo 'Hello world' | node parse.js
would throw TypeError: Function.prototype.apply: Arguments list has wrong type
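V8 raises that error when the second argument to .apply() is neither an
array nor an arguments object; a hedged sketch of the kind of fix involved
(illustrative, not the actual call site):
  // throws if chunk is e.g. a string instead of an array
  cb.apply( null, chunk );
  // wrapping the value in an array avoids the TypeError
  cb.apply( null, [ chunk ] );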
Change-Id: Ia0a1154b0f3edbfb1f228a1d2072fced1b147141
* Setting the rank on tokens is still used currently, but will be phased out
in favor of setting it on chunks. Tokens will be immutable to allow sharing
and caching without a need for cloning.
* Only register for newline and end tokens in QuoteTransformer when active.
Change-Id: I2c45bc7e4a105219a1404ab221eed7f242128f1e
I was using data.length to check if the range was out of bounds, but
this is a problem when using selectNodes() inside of tree sync code
(which happens when performing rebuilds). While tree sync is in
progress, the model tree and the linear model don't match, so we
shouldn't be looking at the linear model for information about the model
tree. Instead, get the length of the DocumentNode and use that.
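Schematically, under assumed accessor names:
  // before: consults the linear model, which may be stale mid-rebuild
  if ( range.end > this.data.length ) {
      throw 'range is out of bounds';
  }
  // after: ask the model tree itself
  if ( range.end > this.getDocumentNode().getLength() ) {
      throw 'range is out of bounds';
  }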
Change-Id: I11a378544ce1281a89cdcd4363c5cb1bf56f3434
This makes it possible to transclude list items from a template.
Note: "5 quotes" test is broken by this patch, it appears that ListHandler
newline processing is changing some state which mysteriously affects the
QuoteTransformer. This is ominous, hopefully there's a simple explanation...
gwicke: fix a bug in tokenizer triggered by definition lists like this:
**; foo : bar
Change-Id: I4e3a86596fe9bffcbfc4bf22895362c3bf742bad
Currently only implements mode=='leaves', i.e. traverse all leaf nodes.
Seems to work from casual testing, but is missing unit tests. See also
other TODO comments in this commit
Change-Id: I41292c21c627a18af7985e8ef9e23c7b14252b21
This gets rid of the length == outerLength-2 hack in getDataFromNode()
and will make it easier to implement similar logic in selectNodes()
Change-Id: I1294350b67ca3eefde2b7fe9fea0bc6d8b90f772
* Moved implementation of getting and updating a DOM wrapper to ve.ce.BranchNode
* Updated ve.ce.BranchNode tests
* Renamed ve.NodeFactory.createNode to ve.NodeFactory.create
* Added ve.NodeFactory.lookup which gets the constructor of a type
* Added attribute pass-through to ve.dm.BranchNodeStub
Change-Id: I8f5b7d3d3ae616cc5f39828b24b655163d782ae5
The bubbling update events are not needed with ce or dm, but were once upon a time useful for es; this just eliminates some unused cruft that was costing extra function call overhead.
Change-Id: Ia16d0f4cd74c84cded5caecada33ee83d0882f30
* Makes it simpler in the linear model because we don't have to use style: "item" for regular list items and style: "definition" for definition lists
* Enforces correct nesting through existing node rules systems
* Updates tests accordingly
Change-Id: I64d80af938e325f1961226505bdc386bb35ccdda
* Also fixed calls to addListenerMethod
* Also routed adding children in the constructor of ve.dm.BranchNode to the splice method
* Renamed types of ve.dm stub nodes to avoid collisions (since we have to register ce nodes by the same names for them to be generated by onSplice)
Change-Id: Ia2e75cf0a62186cc0e214683feb25c619590318a
* Also renamed convertDomElement to replaceDomWrapper in ve.ce.BranchNode
* Also added extra documentation for node rules
Change-Id: Ia8ac6be34e2b021be96974ac1ba9119bd8077d60
By using this.$ as a selection of contiguous top-level nodes (text or otherwise) we can avoid wrapping each text node in a span
Change-Id: I141c2df8f13646db3fff0da93d218c2dcf154c8a
* Fixed constructor of ve.ce.BranchNode which was calling the wrong method to perform an onSplice and with the wrong arguments
* Removed/renamed events emitted from ve.ce.BranchNode.onSplice
* Reintroduced .$ to all ce nodes
* Ported over functionality for DOM node type variance used in headings, lists and list items
* Moved the old ve.ce.Content guts to ve.ce.TextNode
* Added getOffsetFromNode and getDataFromNode to ve.dm.DocumentFragment
* Added setDocument and getDocument to dm nodes
Change-Id: I185423ba2f1a858dde562cb2f5bc3852aec930db
* Add .process() and its helpers from ve1
* Fix applyAnnotations()
** Use data[i] rather than data[j]
** Don't add empty annotation objects for no reason
** Remove empty annotation objects
*** I thought this wasn't needed, but it is needed for clean rollbacks
* Remove unused second parameter in applyAnnotations() call
Change-Id: Ia338f62d2eaf2a76f8ef653eead05bc44757a122
* Also removed beforeSplice and afterSplice in favor of a plain splice event, which is the same as afterSplice used to be; beforeSplice was never used and was making things look more complex than needed
Change-Id: Icbbc57eac73a2a206ba35409ab57b3d1a49ab1a5
They are used by ve.dm.factory so this might make it easier for people to understand what's going on.
Change-Id: I490627e3bfc55ca9c96fdc4f5d047737b6a3db8c
* Added support for asking whether a given node type can have children or grandchildren, and what types of nodes can be its parent or child
* Removed canHaveChildren methods from leaf and branch nodes and converted use of them to depend on factory to read static rules from constructor lookup by type
Change-Id: I9769f95647066576416bacb791c4b68dd0285b35
* Moved node tree assertion to ve.dm.example
* Added rebuildNodes test
* Fixed some typos in rebuildNodes
Change-Id: I4853ded4b062aaa3758435093368bc23667ca3bf
Image nodes are leaves, so providing an empty array to their children/length "contents" constructor argument ends up setting the numeric length to [], which casts to an empty string in arithmetic, causing all further calculations to be string concatenations instead of additions.
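The JS coercion at work here:
  1 + [];          // "1" -- [] casts to the empty string
  "1" + 2;         // "12" -- string concatenation from here on
  1 + Number([]);  // 1 -- an explicit numeric length stays arithmetic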
Change-Id: I40e1ea2295f6095318bc4c24185cadfdfb684557
Within ve.dm.DocumentFragment it makes more sense to call the root node (which is always a document node) a document node, especially since there may be a different node used as a root.
This commit also adds tests for getDocumentNode and getNodeFromOffset which use the offset map.
Change-Id: Ic4609233cedc41f7e5a5f8fdb0e6178652c95554
And fixed the ve.dm.DocumentFragment constructor to generate a correct offset map that references branch nodes only
Change-Id: If9e515be0c63d272bfed9bf4da625a48edd36f48
* parentCB (if set) is called with { async: true } if expansion is going to be
asynchronous.
* Strings are handled efficiently
* all value parameter chunks can now be converted using .to().
Change-Id: Ib013e1bc3d8e7f692009038209db6a056887326e
Now setting up multiple toolbars per config
Tools & Modes are now configurable per toolbar per instance
Base elements are created on demand and are no longer id-specific
Note: There are some bugs with multiple instances.
Change-Id: Id0bbbca2d1b76fd2db3f3b0f9abd90194930b610
* [[:en:Barack Obama]] can now be expanded in 77 seconds using 330MB RAM,
while it would previously run out of RAM after ~30 minutes. Wohoooo!
The token transform framework rework really paid off.
* 303 parser tests are passing in the new record time of 5.5 seconds. Two more
tests are passing since these tests expect the day of the week to be
Thursday. Won't be the case tomorrow.
Change-Id: I56e850838476b546df10c6a239c8c9e29a1a3136
* All parser pipelines including tokenizer and DOM stuff are now constructed
from a 'recipe' data structure in a ParserPipelineFactory.
* All sub-pipelines of these can now be cached
* Event registrations to a pipeline are directly forwarded to the last
pipeline member to save relatively expensive event forwarding.
* Some APIs for on-demand expansion / format conversion of parameters from
parser functions are added:
param.to('tokens/expanded', cb)
param.to('text/wiki', cb) (this does not work yet)
All parameters are additionally wrapped into a Param object that provides
methods for positional parameter naming (.named()) and conversion to a dict
(.dict()).
* The async token transform manager is now separated from a frame object, with
the frame holding arguments, an on-demand expansion method and loop checks.
* Only keys of template parameters are now expanded. Parser functions or
template arguments trigger an expansion on-demand. This (unsurprisingly)
makes a big performance difference with typical switch-heavy template
systems.
* Return values from async transforms are no longer used in favor of plain
callbacks. This saves the complication of having to maintain two code paths.
A trick in transformTokens still avoids the construction of unneeded
TokenAccumulators.
* The results of template expansions are no longer buffered.
* 301 parser tests are passing
Known issues:
* Cosmetic cleanup remains to do
* Some parser functions do not support async expansions yet, and need to be
modified.
Change-Id: I1a7690baffbe8141cadf67270904a1b2e1df879a
Renamed a Selection method to a more suitable name.
Misc cleanup
Patchset 2, whitespace cleanup
Patchset 3: Change values used with selection direction to -1 or 1
1 for left to right (normal)
-1 for right to left (opposite)
Change-Id: If9ecc721ace1c7550903170f92395947f1ccc22c
It's not being used at all, and it's broken because
this.lengthDifference is set to zero regardless of what length
difference the operations passed into the constructor might cause
Change-Id: I3b7a312a1920347e7bf34df88a05bf6f2ff11f7d
* Changed splice to check that all elements about to be inserted are allowed before inserting any of them, so that catching an exception leaves you in a sane state (see the sketch after this list)
* Fixed the order of execution of parent class constructors in ve.dm.LeafNode and ve.dm.TwigNode so that canHaveChildren and canHaveGrandchildren produce correct values and added tests to ensure these methods are correctly inherited in subclasses
* Added tests that check for exceptions when adding nodes that can have children to nodes that cannot have grandchildren
* Added tests that check for events being emitted before and after splicing, including that beforeSplice should be emitted even in cases where a splice fails and throws an exception because the nodes are incompatible (but afterSplice is not called in this case), since beforeSplice might modify the nodes in some way before the compatibility tests are run
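A condensed sketch of the check-then-splice pattern from the first point
(canHaveChildType is an assumed helper name):
  ve.dm.BranchNode.prototype.splice = function ( index, howmany /*, nodes... */ ) {
      var i, nodes = Array.prototype.slice.call( arguments, 2 );
      // validate every incoming node first, so a thrown exception
      // leaves the children array untouched
      for ( i = 0; i < nodes.length; i++ ) {
          if ( !this.canHaveChildType( nodes[i].getType() ) ) {
              throw 'Node type not allowed: ' + nodes[i].getType();
          }
      }
      return Array.prototype.splice.apply( this.children, arguments );
  };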
Change-Id: Id12aea995a42c26ff63a74ae3d31f2bf455759e3
* Moved getParent and getRoot from ve.dm.Node back to ve.Node
* Fixed use of getElementLength that should have been changed to getOuterLength, but was changed to getLength (oops)
Change-Id: Ibe5b855aef533dcd493f762a8a02c6a11ce6e7de
In this commit several methods (child node add/remove and parent/root modification) were also moved to ve.dm.BranchNode and ve.dm.Node respectively. ve.Node and ve.BranchNode are immutable. ve.dm.Node and ve.dm.BranchNode are mutable. Other subclasses of ve.Node and ve.BranchNode should implement functionality to mimic changes made to a data model.
Change-Id: Ia9ff78764f8f50f99fc8f9f9593657c0a0bf287e
Ground-up rewrite of the data model. Putting this in the ve2 directory for now so we still have the old code floating around.
Main changes so far in this rewrite:
* Renamed hasChildren() to canHaveChildren()
* Added canHaveGrandchildren()
* Added a new node type TwigNode that can have children but not grandchildren (so all of its children must be LeafNodes)
* Implemented push/pop/shift/unshift as wrappers around splice()
* Renamed getElementType() to getType(). Nodes now take a string as a type, and the element stuff is gone and won't be back
* Removed clearRoot(), replaced it with setRoot( this ) where needed
Change-Id: I23f3bb1b4a2473575e5446e87fdf17af107bacf6
This is a bit better than cloning tokens wholesale, but not by much. There is
a lot of potential for much better per-token caching with reduced token
cloning. Need to map out all dependencies besides token attributes expanded
from template parameters or other scoped state. Even if tokens themselves
don't need transformation, they might still need to be considered for other
token transformers, so simply keeping the final rank won't quite work even if
the token itself is fully transformed. As a minimum, a shallow clone would
need to be made and the rank reset (as in env.cloneTokens).
Change-Id: I4329113bb21750bae9a635229ed1b08da75dc614
* Added an LRU cache (using the lru-cache node module) for tokenizer output (usage sketched after this list)
* Mutation of nested attributes now replaces the containers. A shallow copy of
tokens is sufficient to isolate token transformations. Need to investigate
if we can actually get away without isolation and re-transformation for most
ordinary tokens.
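Roughly how the lru-cache module is used (a sketch against the old lru-cache
API; the real keying and sizing may differ):
  var LRU = require( 'lru-cache' ),
      tokenizerCache = LRU( 100 ); // keep the 100 most recent inputs

  function tokenize( input ) {
      var tokens = tokenizerCache.get( input );
      if ( tokens === undefined ) {
          tokens = reallyTokenize( input );
          tokenizerCache.set( input, tokens );
      }
      return tokens;
  }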
Change-Id: I9136b1d7a1fbcc538183a319d4ecaa290d616fdf
* Ignore safesubst for now
* Remove an unneeded whitelist entry
* Make sure the caption is not lost for thumbs (fix to last commit) and remove
debug print
Change-Id: I243584ed0838cf7c3b4110fe9cdf869272477312
The HTML5 parser we are using to normalize expected HTML output in parserTests
reverses the order of attributes (see
https://github.com/aredridel/html5/pull/53 for the fix). Remove whitelist
entries concerned with this and use the proper order in external image
attributes.
Change-Id: If1868cae05396a150757c85a20473ab756cbcd97
* less verbose logging in noinclude processing and template expansion
* Give priority to the processing of templates transcluded from transclusions
to get closer to depth-first processing. This serves to minimize memory
usage from queued-up tokens.
* Increase the maximum outstanding requests per template retrieval. 10000
amazingly proved too low a limit on some big pages.
* Only process a single template request callback at a time for now
* Add a debug print in the treebuilder wrapper
* Don't treat multiple comments on a single line as a single comment to match
the PHP parser's behavior
Change-Id: I9a86b6d7bec3b9e1f17415daf1bf74170240721a
This has some TODOs still but I want to land it now anyway, and fix the
TODOs later.
* Add this.offsetMap which maps each linear model offset to a model tree node
* Refactor createNodesFromData()
** Rename it to buildSubtreeFromData()
** Have it build an offset map as well as a node subtree
** Have it set the root on the fake root node so that when the subtree
is attached to the main tree later, we don't get a rippling root
update all the way down
** Normalize the way the loop processes content, that way adding offsets
for content is easier
* Add rebuildNodes() which uses buildSubtreeFromData() to rebuild stuff
* Use rebuildNodes() in DocumentSynchronizer
* Use pushRebuild() in TransactionProcessor
* Optimize setRoot() for the case where the root is already set correctly
Change-Id: I8b827d0823c969e671615ddd06e5f1bd70e9d54c
In experiments this dropped the memory consumption further, and reduces the
queuing overhead in the node reactor.
Change-Id: I9409b6ca863b43b7557663bbec9572365059c078
Only call back a few callbacks per reactor iteration from the template fetch
request queue. This changes the expansion pattern from a (memory intensive)
breadth-first expansion to something quite close to depth-first expansion.
Additionally, retrieved pages are quickly added to the page cache so that a
lot of request queuing is avoided in favor of synchronous expansion from the
cache. On pages like Barack Obama that previously ran out of memory after
consuming node's 1.6G heap limit, expansion now runs in relatively constant
100-300M resident (so far, still running).
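A schematic of the throttling pattern (illustrative names; the real queue
lives in the template fetcher):
  var waiters = [];
  function queueCallback( cb ) {
      waiters.push( cb );
      if ( waiters.length === 1 ) {
          process.nextTick( processSomeWaiters );
      }
  }
  function processSomeWaiters() {
      // call back only a few requests per reactor iteration, so each
      // expansion can largely complete before the next one starts
      var i, n = Math.min( waiters.length, 2 );
      for ( i = 0; i < n; i++ ) {
          waiters.shift()();
      }
      if ( waiters.length ) {
          process.nextTick( processSomeWaiters );
      }
  }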
Change-Id: Ie34a1eeff00d868416de45ef8d289898258f560c
Eat unbalanced external link parts within template parameters. This does not
produce the same output as the PHP parser
(try echo '{{YouTube}}' | node parse.js), but preserves a level of sanity.
Need to check how common this is for external links. If it is rare enough,
moving the ']' after the parser function manually would fix the rendering for
the YouTube case.
Change-Id: I597d808efff36baa22191e7946a0061cc31120e8
Effectively stopping & starting polling prior to conversion.
Getting Selection from the model.
Reselecting after conversion (TODO: modify selection to the entire block?)
Change-Id: I9ba331b5393bf568cc8d137646b43244ae2640a8
* add fast paths for empty arguments etc.
* cache attribute token transform pipelines
* fix bugs in TokenCollector and NoIncludeOnly handler, and improve its
efficiency by only registering for 'end' tokens on demand
* Remove empty reset methods from a few handlers
* Add a simple 'ap' debug print function that makes it easy to only print some
debug prints by temporarily changing 'dp' to 'ap'
* Improvements and bug fixes in AttributeExpander
Change-Id: Ie69729c8f62d48bba922712e44ebce484c621c50
Non-include attribute pipelines are not cached for now. Adding separate
caching for non-include attribute pipelines is very likely worth it, but
deferred for now.
Change-Id: I13f949d9f0a04536f9ccfcb73a2be69c5c08be01
This was an artifact from experimentation with multiple cursors long long ago in a land far far away
Change-Id: I14491c4adbd40bb8df4b1c31725cb1621351bef2
* Convert isNoInclude logic to positive isInclude throughout and set it
properly on attribute pipelines. Also don't cache non-include pipelines.
* Add a --pagename parameter to parse.js, which sets the page name in the
environment. This is then returned by {{PAGENAME}}. Not the final solution,
but useful for taxobox testing as taxons are selected based on PAGENAME.
* Add rudimentary pagenamebase parser function
Change-Id: If9c0be4c255200d0f2a30f02e5619437b4fd8f12
* DOM based on Wikia's thumb output: HTML5, clean caption without magnify
icon.
* basic RDFa annotations, but most options are additionally in the data-mw
object; might want to move more (or all?) of those into RDFa data using meta tags.
* no support yet for framed or other formats, image scaling etc
* also tweaked some config options in the environment
Change-Id: Ie461fcdce060cfc2dec65cc057709ae650ef3368
This makes it possible to get identical rendering in the editor, but may make other things more complex. The Wikitext serializer is no longer compatible for rendering lists so it's been stubbed out. Also the way the toolbar works with lists is broken, so that's been disabled. The HTML serializer has been fixed to work correctly and no-longer-used styles have been removed.
Change-Id: If156f55068b1f6d229b3fa789164f28b2e3dfc76
Also:
* Simplified ve.ce.Surface.getLeafNode, which might be better removed entirely and inlined in the few places it's used.
* Removed method wrapper for static function ve.ce.Surface.getLeafNode
Change-Id: I1d4cf0bb7ecc8f07f030753e40a13ebef7d02daa
Behavior switches are converted to tokens which set parser.environment flags during the async transformation stage.
The next step would be for handlers in the sync23 stage to generate the TOC, section edit links, and so on according to these directives.
No tests written, because the switches are consumed and don't appear in rendered html. We can test the magic word layout controls individually, once they're implemented.
Another small change was to store option flags directly in the environment object, not that it makes much difference.
Change-Id: I863fbf4be1a17d2f6c31158298dd301f19ae1137
Explained in the README how to use npm to load the dependencies and run tests. Too bad about NODE_PATH...
Don't try to find parserTests.txt in assorted places--if it isn't present, fetch from gerrit. You can symlink from core if you're developing on both parsers, and the fetch script will not overwrite.
Use __dirname in parserTests.js to allow the script to run independent of current working directory.
Change-Id: I4c8b884e91f4fdeae385c7697aff768bdd199dd5
Match pairs of {{!}} or | for template productions, but not a mix of the two.
Example:
{{#if:1|{{!}}-
{{!}} {{#if:1|style="color: red"{{!}}|}}
}}
Note that the style parameter ends up as the *key* of an empty-valued
attribute on the table cell currently.
Change-Id: I5f9357dd1645ef97b0af89f32e8d92ae49218c72
Parser functions which only accept positional arguments now return both the
key and value of arguments. Complete attributes (key and value) for templates
and the like from parser functions are not yet supported though.
Change-Id: I3f81bb35acd27186222ce6d5217e820042527c01
Instead of a proliferation of data-mw-* attributes, it should be easier to
stash all private / non-semantic round-trip information in a JSON object
stored in data-mw.
Change-Id: Id200a6a8789fa152f29ea530e5a24b6ee7b4b285
* This high level surface object is responsible for creating & managing editor instances
* Revised Sandbox demo to invoke in this way.
Change-Id: I4043779af9a2ab964deaf26079a992e82ebeef27
* Configured VisualEditorSandbox to use es
* Reconfigured the ce demo to share the sandbox module
* Removed es demo
* Renamed ce demo to ve (es is broken anyways)
Patchset 2: squashed in https://gerrit.wikimedia.org/r/3953
Change-Id: If8d13bf7011616d222be78899b23186859d5ed70
Also, in ParserPipeline:
* Import the LM converter and expose it through getLinearModel()
* Fix getWikiDom() to actually work (still unused)
In parse.js:
* Add --help option that prints usage information (was unreachable)
* Add --linearmodel option to output linear model JSON instead of HTML
Change-Id: Ic534e03ff40a7c9117bb63f0c635a4213d5e3406
To handle replace operations that are not themselves consistent (these
are common, for instance when replacing an opening element in one place,
then replacing the closing element somewhere else), we process
subsequent replace operations inside the first one until things are
balanced again, then issue a single rebuild for the whole thing.
Change-Id: Ide4613f046fabfeeef383138c39e350b1b710033
gets a bit closer to supporting table fragments passed through template
arguments. Next, we'll need a way to indicate start-of-line position to
enable sol block-levels in template parameters.
Example:
{|
{{#if: true|{{!}}Table cell|}}
|}
re-processing in a phase is wanted. By default, after a token type change or
the return of multiple tokens only the remaining transforms with higher ranks
are applied.
Updated a few comments as well.
to maximize IO concurrency. Signal that all tokens are fully transformed to
callbacks called from TokenAccumulator._returnTokens. The result should be a
single re-transformation when entering the callback chain, and only if the
transform does not signal that it took care of full transformation itself.
Template expansion would set this flag, as the nested transform pipeline
processes all tokens to the end of phase async12.
to callback which lets transforms indicate if their returned tokens are fully
processed for their phase. If not, the callback re-processes them so that any
remaining transforms are applied.
wgUploadPath configurable. Also change the hard-coded fall-back image sizes to
sensible defaults. This breaks three parser tests until image size retrieval
from the wiki is implemented.
construction' part of the HTML5 spec:
http://www.whatwg.org/specs/web-apps/current-work/multipage/urls.html#url-manipulation-and-creation
Removed a few whitelisted test cases that are now passing directly.
The encoding canonicalization could also be moved to the Sanitizer. Doing this
early in token stream processing however has the advantage of providing further
transformations uniform data to work with. We could even consider to move this
even further into the tokenizer.
possible to support template / template argument expansion in image options,
and causes little trouble for wikilinks. Non-image wikilinks with multiple
text pipes are quite rare in the dumps, and concatenating description tokens
with a plain '|' is quite easy. 261 parser tests passing.
mediawiki.tokenizer.js module, and pass a reference to parse(). Faster
inline_breaks production using a JS function which seems to be generally
correct, but still breaks five tests when enabled. Seems to be some weird
interaction with peg.js, possibly something to do with caching.
* Convert all attributes into strings in Sanitizer
* Use strict comparison against empty string in tokenizer
* Add very simple sitename parserfunction
* 138 tests passing
wrapper. HTML is now the only supported format. The DOMConverter is now no
longer used. Roan, feel free to remove / butcher it for direct HTML to linear
model conversion.
serialized into a single data-mw-rt attribute if present. Update parserTests
to ignore this attribute for comparisons with expected parser output.
A few more tweaks and notes are thrown into this commit too. 233 tests are
passing now.
only expand used branches selected by parser functions. Template (and
-argument) expansion is simply registered before general expansion.
Additionally, a few more simple time-based magic words are added in
ParserFunctions.
values. This includes comments, templates and template arguments.
This also replaces the specialized expansion logic in the TemplateHandler. The
removal of link validation lets one more parser test fail for now. External
link target validation will need to be implemented in the token stream handler
for links. This is noted as TODO in
https://www.mediawiki.org/wiki/Future/Parser_development#Token_stream_transforms.
functionality (comments, templates, template arguments) in arbitrary
attributes. The grammar for this is still quite rough, will need to
consolidate that area.
other tokens. This is only the first half of the conversion. The next step is
to drop the type attribute on most tokens and match on the constructor in the
token transform machinery.
improvements to parser functions on the way to support the cite extensions.
Preparation for generic template and template arg in attribute support. 222
parser tests now passing.
* Fix getScope()
** Drop the -1 which caused the result to be off by one level
** Prevent JS errors from occurring if bad input causes the loop to try to traverse up above the root node
* insert()
** Detect the case where the input data tries to close the containing element; in that case, we'll get scope != node
** Move the getNodeFromOffset() and getScope() calls up and out of the conditionals
** Remove unnecessary parent==model conditional, no longer needed now that getScope() can safely handle things that try to traverse too far up
** Add some comments to explain what's going on. I'll restructure this function a bit more shortly
page like this:
cd extensions/VisualEditor/modules/parser
echo '{{:Main Page}}' | node parse.js
echo '{{:Main Page}}' | node parse.js --html
echo '{{:Main Page}}' | node parse.js --debug
Even the date-based includes work somewhat, although they don't yet accept
passed-in dates.
directly to WikiDom from enwiki using a commandline like this:
echo '{{User:GWicke/Test}}' | node parse.js
Wohoo!
Complex pages with templates won't render properly yet, as noinclude /
includeonly and parser functions are not yet implemented. As a result, the
parser will run out of memory or hit the currently low expansion depth limit
as it tries to expand documentation for all templates.
disable it by default in parserTests as it tries to fetch all sorts of parser
functions and is not yet fully supported in parserTests. The next step will be
to build a list of parser functions (to avoid fetching them as templates) and
pushing the event interface into parserTests.
characters from host portions of link hrefs for now. This module needs to be
filled up with pretty much everything Sanitizer.php does, including tag and
attribute whitelists and attribute value sanitation (especially for style
attributes).
We'll also need to think about round-tripping of sanitized tokens.
* Add handler for post-expand paragraph wrapping on token stream, to handle
things like comments on its own line post-expand
* Add general Util module
* Fix self-closing tag handling in HTML5 tree builder
* Created AttributeTokenTransformManager for generic attribute conversion, and
removed { title, template argument {key, value} } expansion from
TemplateHandler.
* Added caching for attribute and input sub-pipelines. Especially attribute
pipelines would otherwise be recreated for each attribute value and key.
* TokenTransformDispatcher is now renamed to TokenTransformManager, and is
also turned into a base class
* SyncTokenTransformManager and AsyncTokenTransformManager subclass
TokenTransformManager and implement synchronous (phase 1,3) and asynchronous
(phase 2) transformation stages.
* Communication between stages uses the same chunk / end events as all the
other token stages.
* The AsyncTokenTransformManager now supports the creation of nested
AsyncTokenTransformManagers for template expansion.
The AsyncTokenTransformManager object takes on the responsibilities of a
preprocessor frame. Transforms are newly created (or potentially resurrected
from a cache), so that transforms do not have to worry about concurrency.
* The environment is pushed through to all transform managers and the
individual transforms.
are now merged with specific registrations by rank. Not yet clear if that is a
good idea overall, need to check use cases when implementing template expansion
and other functionality.
183 parser tests now passing.
The TokenTransformDispatcher now actually implements an asynchronous, phased
token transformation framework as described in
https://www.mediawiki.org/wiki/Future/Parser_development/Token_stream_transformations.
Additionally, the parser pipeline is now mostly held together using events.
The tokenizer still emits a single lame event with all tokens, as block-level
emission failed with scoping issues specific to the PEGJS parser generator.
All stages clean up when receiving the end tokens, so that the full pipeline
can be used for repeated parsing.
The QuoteTransformer is not yet 100% fixed to work with the new interface, and
the Cite extension is disabled for now pending adaptation. Bold-italic related
tests are failing currently.
tests now passing.
Link trails depend on language-dependent positive character classes in the PHP
parser. These classes all seem to disallow punctuation implicitly and list
differing plain text characters instead, so it might be possible to get away
with identifying a common class of non-trail punctuation instead. This would
help to keep the tokenizer independent of configurations, which is very
desirable for caching and simplified external parsing.
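For reference, the PHP parser's English linktrail is /^([a-z]+)(.*)$/sD
(MessagesEn.php); a hypothetical configuration-independent approximation that
stops the trail at punctuation instead of whitelisting letters:
  // match a trail of anything that is not whitespace or punctuation
  var trailMatch = text.match( /^([^\s.,;:!?'"()\[\]{}|<>]+)/ );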
start / row / table end). The old productions are not deleted yet to make it
easy to compare the output on more complex articles. 181 tests passing after
adding two table tests with whitespace-only differences to the whitelist.
is in its early stages and nowhere near deployment, please Be Bold and just
commit things like this directly! IMHO it makes more sense to fully review this
once it settles down a bit.
This required a few further additions to the TokenTransformDispatcher. In
particular, there is now an 'any' token match whose callbacks are executed
before more specific callbacks. This is used by the Cite extension to eat all
tokens between ref and /ref tags. This need is very common, so should be
broken out to an intermediate layer in the future.
In general, the requirements for the TokenTransformDispatcher API are now
clearer, and the API should likely be cleaned up / simplified.
token stream. This is the first token transformation exercising the
TokenTransformer class as its dispatcher. Template expansions, wiki link
formatting, tag sanitation and extensions should be able to use the same
dispatcher by registering for specific token types.
The parser performance is very slightly improved as the token stream is only
traversed once.
token type, and supports asynchronous token expansion (for example for async
template expansion). This code is not yet tested or used. The interface for
token insertion from transformation functions will be expanded as needed.
html markup handling.
* Remove global 'use strict' declarations from html5 parser.
* Add trailing whitespace handling in dt
Overall, 55 parser tests are now passing.
HTML is parsed using an HTML parser and re-serialized, and the output compared
to the serialization of the new parser's DOM. Newline normalization is a
cheap hack for now; need to improve that later.