Arrays were originally introduced under the name "lists". While that
may look more user-friendly, it is the wrong name: lists are a
different data structure from arrays. I ran a grep and should have
replaced every occurrence, and everything seems to work, but a double
check wouldn't hurt.
Change-Id: I6a858f02f5dd9250ba7e1abf9c6422fd98758c9e
This is taken from I6a57a28f22600aafb2e529587ecce6083e9f7da4 and makes
all the changes needed for phan to pass. Seccheck will still fail, but
since it's not clear how to fix it (and it is non-voting), for the
moment we can merge this and enable phan in CI.
Bug: T192325
Change-Id: I77648b6f8e146114fd43bb0f4dfccdb36b7ac1ac
I added an example of a comparison with an empty array on MW; we
should keep it in the dedicated test as well.
Change-Id: Ifa4bca85c8978ef24ed5bb26787730bb4521261f
Introduce a new function which can be used to group multiple
comparisons into a single condition. In particular,
equals_to_any(S, A, B) is equivalent to S === A || S === B. This is
especially useful when checking for multiple namespaces, as proposed
in the Community health initiative.
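As an illustration (the variable and values here are just an example):
  equals_to_any(page_namespace, 0, 2, 4)
behaves like
  page_namespace === 0 | page_namespace === 2 | page_namespace === 4
but is shorter and groups the whole check into one call.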
Change-Id: I9dcfe303eb5e51e1882fe4a65fa876aa93db7686
I left the checks between an array and a non-array as a ToDo. With
this patch, it will work like PHP: the result is true iff the
comparison is loose, the array is empty, and the other operand is
either false or null.
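For reference, plain PHP gives the same outcomes in the cases
described above (illustration only, not parser code):
  var_dump( [] == false );  // true: loose, empty array, operand is false
  var_dump( [] == null );   // true: loose, empty array, operand is null
  var_dump( [] == 0 );      // false: 0 is neither false nor null
  var_dump( [] === false ); // false: strict comparison never matches across types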
Change-Id: Idc5cadb697ed4fc7f4856967274169f77495ed9f
This should fix every error with the excluded rules, leaving only the
one for $wgTitle. A double check would be nice to avoid regressions
due to silly mistakes.
Bug: T178007
Change-Id: I22c179f3a01d652640304b59e43fcb5b5a9abac3
Some of them are actually too simple, and may be of little use in
tricky situations. This patch adds a lot of test cases to provide
(almost) bombproof coverage for future patches.
Depends-On: I0bb1ed0109af66997e238b532d342d82d4c4ae19
Change-Id: I274ef306775c36be20acb662353f6537ff3f1a33
So that both type and value will be identical to PHP's.
Bug: T191688
Depends-On: I1140900cdda63eed292d9f20aefd721ef9247fcd
Change-Id: I398c9a972b7e9fcb27d055d23939be2b8bb68244
This feature was never implemented. I'm not sure whether we need a way
to compare arrays with other variable types (left as a ToDo), since
e.g. in PHP such a comparison is always false.
Bug: T179238
Change-Id: I5d2c33fd117e69cbc84c0b04b6cb82edbdcadf16
The following sniffs are failing and were disabled:
* MediaWiki.Commenting.LicenseComment.InvalidLicenseTag
The following sniffs now pass and were enabled:
* MediaWiki.Commenting.FunctionComment.MissingParamComment
Change-Id: I38c334ea6c6ff07dfcb64d551413a02dc8c5e51e
Added the contains_all function, with basically the same role as
contains_any but using logical AND instead of OR. Also added
ccnorm_contains_all, which is the same as ccnorm_contains_any but in
AND mode. Finally, fixed three wrong task IDs.
Co-authored with Valerio Bozzolan.
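For illustration (variable and strings chosen arbitrarily):
  contains_all(added_lines, 'foo', 'bar')
is equivalent to
  added_lines contains 'foo' & added_lines contains 'bar'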
Bug: T21176
Change-Id: Ib0a8b783db6ce0d5db64771c8e0c70f0f8d13d36
Use the new equivset library instead of AntiSpoof.
Bug: T175413
Change-Id: I439387deeba99543e194c210953ac73ff98bc5b7
Depends-On: I977d3498b2084a426e2ab4d85c000d1b9dcfe824
This filter is fully functional. The old filter is still enabled by
default for a transitional period in case the new one suddenly has
issues.
Change-Id: I4aea5f00c62420108030e60e79d5bf34e913e95d
I am pretty sure all of the behavior documented in these tests is a bad
idea. It is possible that we can fix it since some of those features
are probably unused, but for now those tests will serve as
documentation of the current behavior.
Change-Id: Ia2a2f57a538d7aef2ac73fb2e47fe82dd5d5e09a
substr_count() is just as fast as looped strpos() when there are no
matches, and gets faster as the number of matches increases.
Note that this introduces a small change in behavior when the needle
is composed of repeated substrings, e.g. 'asdasdasd' or 'aa', and the
haystack is such that the needle can be matched at overlapping
positions, e.g. 'asdasdasdasd' or 'aaaaa'. The old implementation
counted overlapping matches, the new one doesn't. I don't think this
behavior was intentional and I don't think this change will cause any
real problems.
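For illustration, with needle 'aa' and haystack 'aaaaa':
substr_count( 'aaaaa', 'aa' ) returns 2 (non-overlapping matches at
offsets 0 and 2), while a strpos() loop that advances one character at
a time would also count the overlapping matches at offsets 1 and 3,
giving 4.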
Change-Id: Icc905ca34bf08d63e969787a5e3c119d498bf878
Previously, 'false & a == b' would actually execute the comparison and
count it against the condition limit, while 'false & (a == b)' wouldn't.
They behave the same now.
mShortCircuit was only checked for the most potentially expensive
operations (computing functions and getting variables); all the other
operations on the bogus values this generated would still be executed,
with the results ignored later.
This probably doesn't noticeably improve performance, but it corrects
how the condition limit is counted.
Bug: T43693
Change-Id: Id1d5f577b14b6ae6d987ded12689788eb7922474
With the new behavior, the number of conditions is incremented when:
* Evaluating a function
* Evaluating a comparison operator (== === != !== < > <= >= =)
* Evaluating a keyword (in like matches contains rlike irlike regex)
Previously, the number of conditions was incremented when:
* Evaluating a function
* Entering the comparison operator evaluation mode
This resulted in a number of surprising behaviors. In particular:
* '(((a == b)))' counted as 4 conditions, not 1
* 'contains_any(a, b, c)' counted as 5 conditions, not 1
* 'a == b == c' counted as 1 condition, not 2
* 'a in b + c in d + e in f' counted as 1 condition, not 3
* 'true' counted as 1 condition, not 0
It is still possible to easily cheat the count by rewriting
comparisons as arithmetic operations. I believe the count is meant to
make users aware of the complexity of their rules, not to enforce
strict limits.
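One plausible way to cheat (illustrative only, untested):
!(user_editcount - 5) behaves like user_editcount == 5, yet contains
no function, comparison operator or keyword, so it adds nothing to the
count.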
Bug: T132190
Change-Id: I897769db4c2ceac802e3ae5d6fa8e9c9926ef246
* Move AbuseFilterParser::nextToken() and the various AbuseFilterParser
properties that accompanied it to a new class, AbuseFilterTokenizer.
* Tokenize rules eagerly and cache the result in APC.
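A rough sketch of the caching idea (the function name, key and
tokenizer entry point below are illustrative, not the actual extension
code), assuming APCu is available:
  function getCachedTokens( $ruleText ) {
      $key = 'abusefilter-tokens-' . sha1( $ruleText );
      $tokens = apcu_fetch( $key, $found );
      if ( !$found ) {
          // Hypothetical entry point; the real tokenizer API may differ.
          $tokens = AbuseFilterTokenizer::tokenize( $ruleText );
          apcu_store( $key, $tokens, 3600 ); // cache the eagerly produced token stream
      }
      return $tokens;
  }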
Change-Id: I15f5b5b65e8c4ec4fba3000d7c9fd78b98967d1d
No functional changes.
* Don't include $code as part of the return value; it is ignored anyway.
* Removed AbuseFilterParser::lastHandledToken and AFPParserState::lastInput,
because AbuseFilterParser::nextToken() no longer calls itself recursively.
* The regular expression that matches operators is no longer constructed
dynamically, but hard-coded into the class. To make sure it does not drift
apart from the more legible AbuseFilterParser::$mOps, add a unit test that
constructs the regex dynamically as before and compares it to
AbuseFilterParser::OPERATOR_RE (see the sketch after this list).
* AbuseFilterParser::RADIX_RE ditto.
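A rough sketch of what that drift check could look like (assuming
$mOps stays a publicly readable static list of operator strings; the
real property layout and regex format may differ):
  public function testOperatorRegexMatchesOpsList() {
      $ops = AbuseFilterParser::$mOps;
      // Longest operators first, so that '===' is tried before '=='.
      usort( $ops, static function ( $a, $b ) {
          return strlen( $b ) - strlen( $a );
      } );
      $quoted = array_map( static function ( $op ) {
          return preg_quote( $op, '/' );
      }, $ops );
      $expected = '/(' . implode( '|', $quoted ) . ')/A';
      $this->assertSame( $expected, AbuseFilterParser::OPERATOR_RE );
  }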
Change-Id: I9c23b60759ed2f4c73a9b480243b16bbce5a208f
If we don't map '\-' and '\+' to themselves, the leading backslash gets
escaped, and the resulting pattern only matches a literal backslash.
Bug: 67670
Change-Id: Ifa1e3edd6f41985a3bb97bfb1497985f8fa64af5
I've also added myself to the credits file, as I've been the only
maintainer of this extension for a while now.
Change-Id: Id998172ea2abd70b8243de9db1a96cc2cfa47a64
The abusefilter array test failed because length( ['a', 'b', 'c'] )
returned 12 instead of 6. That was due to the array being converted to
a string with newline-separated values before the string length was
measured. Changed that behaviour to act like PHP's count() function or
Python's len() function, which seems far more useful to me. The old
behaviour can be restored using length( string( array ) ).
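For illustration, length( ['a', 'b', 'c'] ) now returns 3 (the number
of elements), while length( string( ['a', 'b', 'c'] ) ) still measures
the length of the joined string as before.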
Change-Id: I16646891837c9743ca5af2dd328077a7225bb5f1
ATTENTION! This may break filters that rely on "added_lines contains 'bla-bla'" syntax. They'll need to be replaced with "string(added_lines) contains 'bla-bla'"
* Introduce := operator for setting variables (see the example after this list)
* Throw an exception when user tries to override built-in variable
* Fix UTF-8 handling in fnmatch() fallback
* Copy three main abuse filters from enwiki to test suite
* Fix update.php integration
* Use strcspn to scan ahead for long regions of uninteresting text in string handling (performance).
* Remove cruft specific to my system in phpTest.php.
* Remove a test that used incorrect syntax and was useless without variable support.
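An example of the new assignment syntax (variable name and values
chosen arbitrarily):
  line_count := length(added_lines);
  line_count > 100 & line_count < 500
The first statement stores a value; attempting the same with a
built-in variable such as added_lines now throws an exception.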
I've made it more performant and fixed a few bugs by using regexes
instead of PHP loops, where possible, under the assumption that the
PCRE parser is more efficient than the same thing implemented in pure PHP.
Also, I'm now passing the same string around and calculating offsets,
which Tim tells me is far more performant than continually truncating
the string. All tests still pass, except string.t, which I've modified
to remove the offending code; that code never worked.
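A minimal sketch of the offset idea (simplified, not the actual parser
code): instead of repeatedly calling substr() to chop off consumed
input, keep the full string and anchor each match at a moving offset:
  $code = '123 + 456';
  $offset = 6;   // suppose everything before '456' has already been consumed
  if ( preg_match( '/\d+/A', $code, $m, 0, $offset ) ) { // /A anchors at $offset
      $offset += strlen( $m[0] );   // advance past the token, no string copy
  }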