* added stx: 'html' round-trip information for HTML tags
* added t_stx: 'row' info for row-wise table wiki syntax, and support for it
in the serializer (see the sketch after this list)
* the first table row is implicit in wikitext
* renamed lastToken to prevToken in the serializer
* stripped the first newline in an initial chunkCB
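A minimal sketch (all names and object shapes hypothetical) of how the
serializer can use these annotations when emitting a table cell:

    // Hypothetical cell shape: { dataAttribs: { stx, t_stx }, content }
    function serializeCell( cell, isFirstInRow ) {
        var da = cell.dataAttribs || {};
        if ( da.stx === 'html' ) {
            // The cell was written as an HTML tag, so round-trip it as one.
            return '<td>' + cell.content + '</td>';
        }
        if ( da.t_stx === 'row' && !isFirstInRow ) {
            // Row-wise wiki syntax: later cells share the line: | a || b
            return ' || ' + cell.content;
        }
        return '\n| ' + cell.content;
    }

For example, serializeCell( { dataAttribs: { t_stx: 'row' }, content: 'b' },
false ) returns ' || b'.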
Change-Id: I014b046539d1b674d830551c5fd1b74a67f81993
- Just a quick first pass updating the parserTests.js script so we can
test DOM -> wikitext serialization (which in effect tests
round-tripping).
- There is no output normalization yet, which is needed for now since we
are not yet preserving whitespace; a possible stopgap is sketched below.
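Until whitespace is preserved, a normalization step along these lines
(hypothetical helper, not part of the script yet) could be applied to both
sides before comparing:

    // Collapse whitespace differences between serializer output and the
    // expected wikitext.
    function normalizeOut( wikitext ) {
        return wikitext
            .replace( /[ \t]+/g, ' ' )    // collapse runs of spaces and tabs
            .replace( /\n{2,}/g, '\n' )   // collapse blank lines
            .replace( /^\s+|\s+$/g, '' ); // trim leading/trailing whitespace
    }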
Change-Id: Ie52058e0dc3330f852c24fa05641dced19f950e0
in the new DM. Renamed getAnnotationRangeFromOffset to
getAnnotatedRangeFromOffset, and wrote tests.
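A usage sketch under the new name (the annotation argument shape is an
assumption):

    // Given an offset inside a bold run, fetch the contiguous range around
    // that offset which carries the annotation.
    var range = doc.getAnnotatedRangeFromOffset( offset, 'textStyle/bold' );
    // range now spans the whole annotated run containing the offset.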
Change-Id: I7028803065409e271ceced73e4803954d4a956dc
Lists are a bit tricky, as nested lists are not wrapped in a separate list
item. They should work now, though.
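A minimal sketch (hypothetical node shape) of what this means for the
serializer, with the nested list living inside its parent item:

    // Wikitext "* a\n** b" corresponds to
    // <ul><li>a<ul><li>b</li></ul></li></ul>, so child lists are emitted
    // before their containing item ends, not as a separate item.
    function serializeList( list, bullets ) {
        var out = '';
        list.items.forEach( function ( item ) {
            out += bullets + ' ' + item.text + '\n';
            item.sublists.forEach( function ( sub ) {
                // Deepen the bullet prefix; no new list item is opened.
                out += serializeList( sub, bullets + '*' );
            } );
        } );
        return out;
    }

Calling it on the tree for an item "a" with a nested item "b" returns
"* a\n** b\n".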
Change-Id: I2e5f29f6afa6bdd2d5e5c0c5d019b70c611b73d1
Splits and merges now work, or at least the tests for them pass.
The strategy I used is to gather the affected ranges for each of the
following:
* removed stuff
* the entirety of each node touched by a non-zero removal
* if the inserted data busts out of its parent, the entirety of that
parent node (the 'scope')
then get the covering range of all those ranges, and rebuild that.
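The covering-range step is straightforward; a sketch with a hypothetical
range shape:

    // Union bounds of all affected ranges; the rebuild then replaces
    // exactly this span of the document.
    function coveringRange( ranges ) {
        return {
            start: Math.min.apply( null, ranges.map( function ( r ) {
                return r.start;
            } ) ),
            end: Math.max.apply( null, ranges.map( function ( r ) {
                return r.end;
            } ) )
        };
    }
    // coveringRange( [ { start: 4, end: 7 }, { start: 2, end: 5 } ] )
    // returns { start: 2, end: 7 }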
Change-Id: I7c3b421abc0ba134157ac8b59042675bb1b5073c
Added getAnnotationRangeFromOffset and offsetContainsAnnotation,
which deprecate getAnnotationBoundaries and getIndexOfAnnotation.
Wrote unit tests as proof.
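A usage sketch of the new pair (signatures assumed):

    // Instead of walking getAnnotationBoundaries() or calling
    // getIndexOfAnnotation(), ask directly whether the offset is annotated,
    // then fetch the containing range.
    if ( doc.offsetContainsAnnotation( offset, 'textStyle/bold' ) ) {
        var boldRange = doc.getAnnotationRangeFromOffset( offset,
            'textStyle/bold' );
    }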
Change-Id: I6c0d4e3ca96dd569b1909cd22fce68c3a6fe382c
This fix only affects transforms that run after this one, of which there are few right now.
Also removed a stray token mutation in QuoteTransformer.
Change-Id: Id6d4adce944b06fc1a3651cfbf63fc2670125225
* Tokens are now immutable. The progress of transformations is tracked on
chunks instead of tokens. Tokenizer output is cached and can be directly
returned without a need for cloning. Transforms are required to clone or
newly create tokens they are modifying.
* Expansions per chunk are now shared between equivalent frames via a cache
stored on the chunk itself (see the sketch after this list). Equivalence of
frames is not yet ideal though, as right now a hash tree of *unexpanded*
arguments is used. This should be switched to a hash of the fully expanded
local parameters instead.
* There is now a vastly improved maybeSyncReturn wrapper for async transforms
that either forwards processing to the iterative transformTokens if the
current transform is still ongoing, or manages a recursive transformation if
needed.
* Parameters for parser functions are now wrapped in abstract Params and
ParserValue objects, which support some handy on-demand *value* expansions.
Keys are always expanded. Parser functions are converted to use these
interfaces, and now properly expand their values in the correct frame.
Making this expansion lazier is certainly possible, but would complicate
transformTokens and other token-handling machinery. Need to investigate if
it would really be worth it. Dead branch elimination is certainly a bigger
win overall.
* Complex recursive asynchronous expansions should now be closer to correct
for both the iterative (transformTokens) and recursive (maybeSyncReturn
after transformTokens has returned) code paths.
* Performance degraded slightly. No micro-optimizations have been done yet,
and the shared expansion cache still has a low hit rate. The progress
tracking on chunks is not yet perfect, so there are likely many unneeded
re-expansions that can be easily eliminated. There is also more debug
tracing right now. The Obama article currently expands in 54 seconds on my
laptop.
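A rough sketch (all names hypothetical) of the per-chunk expansion cache
described above, keyed for now by a hash of the unexpanded arguments:

    // Stand-in for the hash tree of *unexpanded* arguments.
    function hashArgs( args ) {
        return JSON.stringify( args );
    }
    // Cache expansions on the chunk itself so that equivalent frames
    // share the work of expanding it.
    function getExpansion( chunk, frame, expand ) {
        chunk.expansionCache = chunk.expansionCache || {};
        var key = hashArgs( frame.args );
        if ( !chunk.expansionCache.hasOwnProperty( key ) ) {
            chunk.expansionCache[ key ] = expand( chunk, frame );
        }
        return chunk.expansionCache[ key ];
    }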
Change-Id: I4a603f3d3c70ca657ebda9fbb8570269f943d6b6
By first summarizing a node tree or node selection and then comparing the
summaries with the normal deepEqual function, we can use native QUnit
functionality to make diagnosing failures easier and to reduce the verbosity
of the output when tests pass.
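A minimal sketch of the approach (the summarizer shape is an assumption):

    // Reduce each node to a plain { type, children } object and let QUnit's
    // deepEqual produce a readable diff over the summaries.
    function summarize( node ) {
        return {
            type: node.getType(),
            children: ( node.getChildren ? node.getChildren() : [] )
                .map( summarize )
        };
    }
    deepEqual( summarize( actualTree ), summarize( expectedTree ),
        'node tree summaries match' );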
Change-Id: I4af5d69596cf5459aa32f61ee6d5b8355233c3df