Stripping all HTML attributes (to avoid CE-added styles such as
'font-size: 1em;') also strips data-parsoid, which can cause
round-trip errors. As an improvement, only strip the style
attribute.
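A minimal sketch of the narrower stripping (variable and selector
names are illustrative, not the actual code):
// Remove only the CE-added style attribute; data-parsoid survives:
Array.prototype.forEach.call(
    element.querySelectorAll( '[style]' ),
    function ( el ) { el.removeAttribute( 'style' ); }
);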
Bug: 58136
Change-Id: I34386bd847d1cf0583317a8b07916e43ff7af029
Let's experiment with this via our local Gruntfile. If it works
fine we can install it in Jenkins (similar to node-csslint).
Verify through $ npm install && npm test;
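A minimal Gruntfile sketch of what this could look like (plugin and
task names are assumptions, not taken from this change):
// Gruntfile.js sketch; 'grunt-jscs-checker' and the 'jscs' task
// name are assumed here, not confirmed by this commit.
module.exports = function ( grunt ) {
    grunt.loadNpmTasks( 'grunt-jscs-checker' );
    grunt.initConfig( {
        jscs: {
            all: [ '*.js', 'modules/**/*.js' ]
        }
    } );
    grunt.registerTask( 'test', [ 'jscs' ] );
};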
Fixed all outstanding violations.
Also:
* Added syntaxhighlight to ignore.
* Added imetests (which contain unformatted JSON) to ignore.
* In ve.dm.ModelRegistry#matchTypeRegExps, removed the redundant
!! cast from the [+!!withFunc] statement, which was hitting
a bug in node-jscs. All callers of this local private function
pass a literal boolean true/false, so there is no need to cast it.
* Removed "/* key .. , value */" from ve.setProp, though this
wasn't caught by node-jscs, found it when searching for " , ".
* Pinned npm devDependencies to fixed versions instead of
tilde-ranges, which too often lead to strange bugs or sudden
changes. Fixed them at the versions they were currently
resolving to.
Bug: 54218
Change-Id: Ib2630806f3946874c8b01e58cf171df83a28da29
It's not much of an optimisation to combine these loops but
separating them gives us greater flexibility.
Move the building of the node tree to happen lazily when
getDocumentNode is called.
In the rich paste path we can now create the DM without building
the node tree and remove the metadata.
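Roughly (method and property names are illustrative):
ve.dm.Document.prototype.getDocumentNode = function () {
    if ( !this.documentNode ) {
        // Built lazily on first access instead of in the constructor
        this.buildNodeTree();
    }
    return this.documentNode;
};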
Change-Id: I10b4bc486ff8ff8037158aa6dfd45aac87557d42
Following on from getDomFromModel, this returns a document model
instead of element linear data. The only instance that hasn't been
replaced is in rich paste, where we need to sanitize the converted
data before constructing the document model.
This should be cleaned up in a later commit.
Change-Id: I37a2b641632af2cb515e3409deed5cd1fa358af5
Currently it takes four arguments, all of which are properties of
the document model, so just pass the model instead and access the
properties inside the converter. Rename to getDomFromModel.
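Call-site change, roughly (the old method and argument names are
illustrative):
// Before:
dom = converter.getDom( doc.data, doc.store, doc.internalList, doc.innerWhitespace );
// After: pass the model and let the converter read those properties
dom = converter.getDomFromModel( doc );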
Change-Id: I0c378a04dc08b9b90bdc3984f8fa8c4acfe0b667
Register ctrl/cmd+shift+v as a trigger which sets a flag for the
next paste event.
When the paste special flag is set, modify the sanitizeData method
to strip all annotations, and any elements other than paragraphs.
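Roughly (flag and option names are illustrative, not the actual
signature):
// Trigger handler: arm the flag for the next paste only
surface.pasteSpecial = true;
// In sanitizeData, when the flag is set, tighten the rules:
if ( this.pasteSpecial ) {
    rules = { removeAnnotations: true, keepOnlyParagraphs: true };
    this.pasteSpecial = false;
}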
Bug: 53781
Change-Id: If814e1786ffa805b52ab32f4a06f52da743fd9af
Allow pasting of rich (HTML) content.
ve.ce.Surface
* Use a sliced document clone for converting to DM HTML (copy)
* Add full context to pasteTarget before copying
* Add ve-pasteProtect class to spans to prevent them being dropped
* Implement external paste by converting HTML to data and inserting
with newFromDocumentInsertion
* Remove the clipboard key placeholder after reading it so it isn't
picked up by rich paste. The hash no longer includes the placeholder.
* Detect corruption of important spans and fall back to clipboard
data HTML if available.
ve.dm.LinearData
* Add clone method for copy
ve.dm.ElementLinearData
* Add compareUnannotated for use by context diffing.
* Add sanitize method for cleaning data according to a set of rules.
ve.dm.Transaction
* Add range parameter for inserting a range of a document only,
e.g. stripping the paste context.
ve.dm.Document
* Implement sliced document clone creation so that DM HTML
is generated correctly in onCopy
ve.dm.DocumentSlice
* Replaces LinearDataSlice. Now has two ranges for balanced data
and data with a full context.
ve.init.Target.js
* Define default, loose paste rules (just remove aliens); see the
sketch after this list.
ve.init.mw.ViewPageTarget
* Define strict MW paste rules:
+ no links, spans, underlines
+ no images, divs, aliens
+ strip extra HTML attributes
ve.init.sa.Target, ve.init.mw.ViewPageTarget, ve.ui.Surface
* Pass through and store paste rules.
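A sketch of the kind of rules object involved (property names are
illustrative, not the exact schema):
// ve.init.Target — loose default: just remove aliens
pasteRules = { blacklist: [ 'alienInline', 'alienBlock' ] };
// ve.init.mw.ViewPageTarget — strict MW rules
pasteRules = {
    blacklist: [ 'link', 'textStyle/span', 'textStyle/underline',
        'image', 'div', 'alienInline', 'alienBlock' ],
    removeHtmlAttributes: true
};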
Bug: 41193
Bug: 48170
Bug: 50128
Bug: 53828
Change-Id: I38d63e31ee3e3ee11707e3fffed5174e1d633b42
In order to do this we have to separate out the removal
operation from NFDR (newFromDocumentReplace), so it becomes
newFromDocumentInsertion
(again, although actually, for the first time). As NFDI is
an insertion we can just run fixUpInsertion on the data
part of it.
In order for the removal operation to be a proper removal
we have to allow metadata removal (the default is to merge it).
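A sketch of how the two pieces now compose (argument lists are
illustrative):
// Removal is a plain removal transaction that really removes
// metadata (rather than merging it):
removeTx = ve.dm.Transaction.newFromRemoval( doc, range, true /* removeMetadata */ );
// The insertion half is just data, so fixUpInsertion can normalise it:
insertTx = ve.dm.Transaction.newFromDocumentInsertion( doc, offset, newDoc );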
Change-Id: I16d575b61b9796e7e889f2c27cfe02b4a40b7639
This fixes some of the problems with pasting references.
It's a bit overzealous in that references get renumbered even when
replacing, which is unnecessary but doesn't actually have any
noticeable effect.
Unfortunately, the internal list state depends so much on the converter
having run that we now need to add yet another hack, to set the counter
to the appropriate value.
Change-Id: I3c6514ce600af4f4c037f419554d34b5a5c86a63
Calculate and store the two inner whitespace values of the body in the
dm.Document. When converting back, make sure the first/last nodes'
pre/post outer whitespace matches the inner left/right whitespace
of the body.
Bug: 54964
Change-Id: I45f1ffd63669f25a6cae878400bfe21719ed58ee
Add it as an optional parameter to the constructor, and create a new
one if omitted.
This is going to be used to resolve URLs according to the right <base>,
but really that's a hack and we should come up with a better way to
track metadata from the <head>.
Change-Id: I49dfc81ff793d73e08a20e502d681a15613d23f7
Because that's what it is now since 'head' was added. Also removed
the wrapping <body> tag (now added by the test runner) and renamed
normalizedHtml to normalizedBody.
Change-Id: I5624ae076c5e661d2789e499cd28e8282c885409
This is done by using the computed property value rather than the
literal attribute value when rendering href and src attributes.
Helpfully, this provides perfect URL resolution natively in the browser,
which means the document's <base> is respected and all that good stuff.
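For example (URL and variable names are illustrative), given
<base href="https://example.org/wiki/"> in targetDoc:
var a = targetDoc.createElement( 'a' );
a.setAttribute( 'href', './Foo' );
a.getAttribute( 'href' ); // './Foo' — the literal attribute value
a.href; // 'https://example.org/wiki/Foo' — computed, resolved against <base>
The resolution uses the <base> of the element's owner document,
which is why parsing in the context of the correct document matters
(see below).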
For GeneratedContentNodes, we also need to find all DOM elements inside
the rendered DOM that have href or src attributes and resolve those.
This is done in the new getRenderedDomElements() function, which the
existing cleanup steps (remove <link>/<meta>/<style>, clone for
correct document) were moved into.
In order to make sure that the computed values are always computed
correctly, we need to make sure that in cases where HTML strings
in data-mw are parsed, they're parsed in the context of the correct
document so the correct <base> is applied.
We still need to solve this problem for models that actually store and
edit an href or src as an attribute. I'll post more about that on
bug 48915.
Bug: 48915
Change-Id: Iaccb9e3fc05cd151a0f5e632c8d3bd3568735309
* Replace surface 'transact' event with 'documentUpdate' event
* Have surface listen for all document transactions and update selection
as appropriate (as well as emitting 'documentUpdate')
* Implement change() in terms of setSelection()
** Queue 'contextChange' events so contextChange is only emitted once
** Use this.transacting flag to prevent setSelection() (which is called
because the model emits transact events) from doing too much
** Behavioral change: lock/unlock now emitted separately for
transaction and selection changes
Change-Id: I44425d260ec70758f99d13f99e41f5c993d260c2
ve.dm.Surface.js:
* Stop emitting 'change' and remove its event documentation
ve.ce.Surface.js:
* Listen to 'select' instead of 'change'
* Perform a CE surface update after model-based keydown handling
ve.dm.Surface.test.js:
* Stop asserting that 'change' is emitted
Change-Id: I8f16289493e835d890709c6dfe093d04c18522b6
Previously we returned ElementLinearData from the converter, then
stripped out the MetaLinearData. This meant that, before processing,
the ElementLinearData from the converter actually contained metadata,
which was confusing.
The new document constructor stores the converter results in a
FlatLinearData object and simultaneously populates element and meta
data stores.
Also in this commit I have moved various methods from ElementLinearData
to FlatLinearData, from which ElementLinearData inherits.
Change-Id: I64561bde2c31d8f703c13ac7b0a0c5f7ade9f3d4
InternalList.clone() assumed that all properties are automatically rebuilt
when a new document is built, but that's not true for .nextUniqueNumber
(or for .itemHtmlQueue for that matter). This meant that, in practice,
.nextUniqueNumber was being reset to 0 after auto/N numbers for existing
references had been assigned, but before assigning numbers to newly
created references. This caused all sorts of naming collision fun.
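A sketch of the kind of fix involved (constructor and property
handling are illustrative):
ve.dm.InternalList.prototype.clone = function ( doc ) {
    var clone = new this.constructor( doc );
    // Not rebuilt automatically when the new document is built,
    // so carry it over explicitly:
    clone.nextUniqueNumber = this.nextUniqueNumber;
    return clone;
};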
Bug: 54712
Change-Id: I1d087a5f3c23979d7d488e3ab32eb064ebc23e94
Document slice only ever contained linear data, with extra functionality
to preserve the range. It pre-dated LinearData, but now we should
refactor it to reflect its purpose.
Change-Id: Ifc908f7526c83a43a51372c8d2494d7260e7facd
We already have getSlice, which returns a ve.dm.DocumentSlice, so using
the word slice in this method is very confusing. What we are actually
doing is creating a ve.dm.Document from a range. Also remove argument
overloading as it's not particularly helpful and would make the new
name a lie.
Change-Id: I93da3419510410b170396e6765fbe2a87f9795be
The way we implemented undoing transactions was horrible. We'd process
the original transaction, but with a reversed=true flag. That meant we
had to keep track of the 'reversed' flag everywhere, and use ternaries
like insert = reversed ? op.remove : op.insert; all over the place to
access transaction operations. Redo then worked by reapplying the
transaction. We would verify that this was OK by tracking whether the
transaction was in an applied state or an undone state.
This commit makes it so every transaction can only be applied once. To
undo, you obtain a mirror image of the transaction with tx.reverse(),
then apply that. To redo, you clone the original transaction with
tx.clone() and apply that. All the code that had to use ternaries to
check whether the transaction was being applied in reverse or not is
gone now, because you can only apply a given transaction forwards,
never in reverse.
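Concretely, the pattern is now roughly:
// Undo: build and apply the mirror image of the transaction
doc.commit( tx.reverse() );
// Redo: apply a fresh copy of the original transaction
doc.commit( tx.clone() );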
Bonus:
* Make ve.dm.Document's .completeHistory a simple array of
transactions, rather than transaction/boolean pairs
* In the protection of double application test, clone the example
document properly; it modified ve.dm.example.data, which was "fine"
because it ran .commit() and .rollback() the same number of times
Change-Id: I3050c5430be4a12510f22e20853560b92acebb67
Replaces newFromNodeReplacement(). newFromNodeReplacement was very
simplistic and didn't support metadata or internal list items, so
if you had comments or references inside of the data you were editing
(reference contents or an image caption), they'd get mangled.
With this, you can do:
newDoc = doc.getDocumentSlice( node );
// Edit newDoc
tx = ve.dm.Transaction.newFromDocumentReplace( doc, node, newDoc );
surface.change( tx );
and that takes care of metadata, internal list items, and things like
references that reference internal list items.
ve.dm.Document.js:
* In getDocumentSlice(), store a reference to the original document
and the number of items in its InternalList at the time of slicing
in the created slice. This is used for reconciliation when the
modified slice is injected back into the parent document with
newFromDocumentReplace().
ve.dm.InternalList.js:
* Add a method for merging in another InternalList. This provides a
mapping from old to new InternalList indexes so the linear model data
being injected by newFromDocumentReplace() can have its InternalList
indexes remapped.
ve.dm.Transaction.js:
* Replace newFromNodeReplacement() with newFromDocumentReplace()
ve.ui.MWMediaEditDialog.js, ve.ui.MWReferenceDialog.js:
* Use getDocumentSlice/newFromDocumentReplace for editing captions/refs
* Change insertion code path to insert an empty internalItem/caption, then
newFromDocumentReplace into that
* Add empty internalList to new mini-documents
ve/test/dm/ve.dm.Transaction.test.js:
* Replace newFromNodeReplacement tests with newFromDocumentReplace tests
ve-mw/test/dm/ve.dm.Transaction.test.js (new):
* Add tests for newFromDocumentReplace with mwReference nodes
ve.dm.mwExample.js:
* Add data for newFromDocumentReplace with mwReference tests
VisualEditor.hooks.php:
* Add new test file
Bug: 52102
Change-Id: I4aa980780114b391924f04df588e81c990c32983
Code with a similar purpose was added in 568e0e5701 but got lost
when some things were moved from ve.Surface to ve.ce.Surface in
5012ed10.
Initializing the selection at (0,0) was known to cause problems
before, and since 789d0caf09 it breaks editing of empty documents:
typing in an empty document begins in an inline slug, but
SurfaceObserver doesn't notice typing in an inline slug unless
ce.Surface pawns it. That would normally be fine, because insertions
in slugs are always pawned, but the pawning logic believes the cursor
to be at offset 0, where there is no slug (it's at offset 1), and so
it doesn't pawn.
Bonus: update tests and add descriptions for dm.Surface.change tests
Change-Id: Id72314d0fe650dacc7cdb842f5cea2f3bfba5145
The previous implementation couldn't deal with transactions that
replaced both data and metadata at the same time (rather than replacing
data while moving metadata around), and extending its approach to
deal with that case would have made it much more complex.
So I rewrote the algorithm from scratch. The previous implementation
scheduled deferred moves for existing items, but immediately processed
insertions and removals. This is problematic for replacements and
maintaining the order in the binary search list. So instead, this new
implementation builds an array representing what the new item list
should be, then processes insertions, removals and moves in the correct
order to achieve that state.
It looks like the previous implementation didn't always work correctly,
which was masked because the test suite passed full=false to
assertItemsMatchMetadata(). This rewrite fixes that too.
Also remove setMove/applyMove from MetaItem, because we don't need them
anymore and they're evil anyway; and add isAttached(), because the new
algorithm needs it.
Change-Id: I899d2b3c94c2cfa55823879bca95456750f64382
Transactions that replaced metadata twice at the same offset
(with retainMeta-replaceMeta-retainMeta-replaceMeta) were broken
because the processor for replaceMetadata didn't advance the
metadata cursor.
Change-Id: I7ad24e7ffb4c39b40ec9c347db301f8e28f3692d
Add some test cases for documents with trailing metadata, and fix an
off-by-one error in the metadata-mutating transactions (since the
document metadata array is one larger than the document data array).
Change-Id: I8f049466e03ed55010dfcf0a35702536edfa7b0a
This version pushes a `replaceMetadata` operation after a `replace` to
fixup trailing metadata if there is no inserted region and the removed
region includes metadata. This avoids a corner case where the sizes
of the metadata arrays inserted/removed in `Transaction.pushReplace()`
do not match the sizes of the data arrays inserted/removed.
We remove the now-unused `Document.getMetadataReplace()` method.
We also adjust `MetaList.onTransact()` so that it continues to work
properly when the number of metadata entries in `replace.insert` is
not the same as the number of metadata entries in `replace.remove`.
Change-Id: I1d600405b855ca1cb569853bb885b0752df47173
The `Transaction.pushReplace` method has a corner case if the removed
region has metadata and the inserted region is empty. This works fine
unless there are two adjacent `pushReplace` operations, which can occur
in `Transaction.newFromUnwrap`. Fix this by having `pushReplace` look
at a preceding replace and correctly merge the two operations if
possible (in particular in the tricky case where the previous replace has
a zero-length insertion). Pleasantly, this can be done without a lot of
special-casing code in `pushReplace` or `newFromUnwrap`.
Add test cases verifying that `newFromUnwrap` works correctly (both
in commit and in rollback) when there is metadata present.
Change-Id: I6cfec0d2b1823dad724422f018a3c73dc0c7f186
Doing retainMetadata followed by a replace (not replaceMetadata)
doesn't make any sense and its behavior is undefined, so don't do that
in the test suite.
Change-Id: Ica032b0a5122d24e40e43e3eb43fea940270aece
In ve.dm.Document.getMetadataReplace(), we used to only merge metadata
if the amount removed is larger than the amount inserted. But this
could end up putting metadata in odd positions, for example if you
have Foo[[Category:Bar]]Baz and you delete 'ooBa' and replace it with
{image}xxx{/image}, then the category ends up inside the image.
We should always merge metadata when a segment is deleted, so that it
appears outside any new structure added.
There's a weird corner case here when a segment is removed but no
insertion is made: the removed metadata then needs to get glommed onto
the next element. We extend the insert/replace metadata array
when this happens.
Bug: 53444
Bug: 53445
Change-Id: I51d55fb370b473273f9cf152fdd0f356377d4109
This avoids problems when unnamed references are copy-pasted.
Knowing that key is always non-null simplifies a lot of logic
elsewhere.
Bug: 53365
Change-Id: I3a23123ae732d9583814d38dd880a0cdf691fd5d
Currently ignores all non-element data, but element content (e.g.
images) can be annotated.
Added test cases and updated the test runner to only compare
store indexes for a more readable output.
Bug: 50127
Change-Id: I234586a28072811c8288aab56f6abaaa0da0c88d
This is a bit of a hack, as leading whitespace could be
significant if styled with white-space:pre.
Long term, VE shouldn't be editing the user's HTML, and
should just highlight potential formatting issues.
We avoid the stripping in preformatted elements as we
expect they will have that styling.
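In effect (flag name is illustrative):
// Strip leading whitespace from content, except inside preformatted
// elements where it is expected to be significant:
if ( !contextIsPreformatted ) {
    text = text.replace( /^\s+/, '' );
}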
Bug: 51462
Change-Id: I654d98e17dd604cb2a192831ff3f3597f95b9962
These have been pointing to the same method for a while now, so
we can safely remove these obsolete aliases and just use it as a
generic copy.
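Usage change, in short (the 'before' pattern paraphrases the
redundant conditionals mentioned below):
// Before:
copy = ve.isArray( data ) ? ve.copyArray( data ) : ve.copyObject( data );
// After:
copy = ve.copy( data );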
* Each file touched by my editor had its newline at EOF fixed
where absent
* Don't copy an otherwise unused empty object
(ve.dm.Converter)
* Use common ve#copy syntax instead to create a link
(ve.dm.Document, ve.dm.example)
* Remove redundant conditionals for isArray/copyArray/copyObject
(ve.dm.example)
Change-Id: If560e658dc1fb59bf01f702c97e3e82a50a8a255