4.4.1 SequenceMatcher Objects

The SequenceMatcher class has this constructor:

class SequenceMatcher([isjunk[, a[, b]]])
Optional argument isjunk must be None (the default) or a one-argument function that takes a sequence element and returns true if and only if the element is "junk" and should be ignored. Passing None for isjunk is equivalent to passing lambda x: 0; in other words, no elements are ignored. For example, pass:

lambda x: x in " \t"

if you're comparing lines as sequences of characters, and don't want to synch up on blanks or hard tabs.

The optional arguments a and b are sequences to be compared; both default to empty strings. The elements of both sequences must be hashable.
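
For example, a quick comparison of two lines of code, treating blanks as junk (the strings are arbitrary; ratio() is described below):

>>> s = SequenceMatcher(lambda x: x == " ",
...                     "private Thread currentThread;",
...                     "private volatile Thread currentThread;")
>>> print(round(s.ratio(), 3))
0.866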

SequenceMatcher objects have the following methods:

set_seqs(a, b)
Set the two sequences to be compared.

SequenceMatcher computes and caches detailed information about the second sequence, so if you want to compare one sequence against many sequences, use set_seq2() to set the commonly used sequence once and call set_seq1() repeatedly, once for each of the other sequences.
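
A sketch of that pattern, scoring several candidate strings against one target (the word list here is arbitrary):

>>> words = ["apple", "applet", "ample", "orange"]
>>> s = SequenceMatcher()
>>> s.set_seq2("apple")            # cached: compared against every candidate
>>> for w in words:
...     s.set_seq1(w)              # only the first sequence changes each time
...     print("%-8s %.3f" % (w, s.ratio()))
apple    1.000
applet   0.909
ample    0.800
orange   0.364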

set_seq1(a)
Set the first sequence to be compared. The second sequence to be compared is not changed.

set_seq2(b)
Set the second sequence to be compared. The first sequence to be compared is not changed.

find_longest_match(alo, ahi, blo, bhi)
Find longest matching block in a[alo:ahi] and b[blo:bhi].

If isjunk was omitted or None, find_longest_match() returns (i, j, k) such that a[i:i+k] is equal to b[j:j+k], where alo <= i <= i+k <= ahi and blo <= j <= j+k <= bhi. For all (i', j', k') meeting those conditions, the additional conditions k >= k', i <= i', and if i == i', j <= j' are also met. In other words, of all maximal matching blocks, return one that starts earliest in a, and of all those maximal matching blocks that start earliest in a, return the one that starts earliest in b.

>>> s = SequenceMatcher(None, " abcd", "abcd abcd")
>>> s.find_longest_match(0, 5, 0, 9)
(0, 4, 5)

If isjunk was provided, first the longest matching block is determined as above, but with the additional restriction that no junk element appears in the block. Then that block is extended as far as possible by matching (only) junk elements on both sides. So the resulting block never matches on junk except as identical junk happens to be adjacent to an interesting match.

Here's the same example as before, but considering blanks to be junk. That prevents ' abcd' from matching the ' abcd' at the tail end of the second sequence directly. Instead only the 'abcd' can match, and matches the leftmost 'abcd' in the second sequence:

>>> s = SequenceMatcher(lambda x: x==" ", " abcd", "abcd abcd")
>>> s.find_longest_match(0, 5, 0, 9)
(1, 0, 4)

If no blocks match, this returns (alo, blo, 0).
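
For example, two sequences with no elements in common return a zero-length match anchored at (alo, blo):

>>> s = SequenceMatcher(None, "ab", "cd")
>>> s.find_longest_match(0, 2, 0, 2)
(0, 0, 0)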

get_matching_blocks()
Return list of triples describing matching subsequences. Each triple is of the form (i, j, n), and means that a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in i and j.

The last triple is a dummy, and has the value (len(a), len(b), 0). It is the only triple with n == 0.

If (i, j, n) and (i', j', n') are adjacent triples in the list, and the second is not the last triple in the list, then i+n != i' or j+n != j'; in other words, adjacent triples always describe non-adjacent equal blocks. Changed in version 2.5: The guarantee that adjacent triples always describe non-adjacent blocks was implemented.

>>> s = SequenceMatcher(None, "abxcd", "abcd")
>>> s.get_matching_blocks()
[(0, 0, 2), (3, 2, 2), (5, 4, 0)]
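
The triples can be used to pull the matching pieces out of the sequences directly, for example:

>>> a, b = "abxcd", "abcd"
>>> s = SequenceMatcher(None, a, b)
>>> for i, j, n in s.get_matching_blocks():
...     print("a[%d:%d] == b[%d:%d]: %r" % (i, i + n, j, j + n, a[i:i + n]))
a[0:2] == b[0:2]: 'ab'
a[3:5] == b[2:4]: 'cd'
a[5:5] == b[4:4]: ''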

get_opcodes()
Return list of 5-tuples describing how to turn a into b. Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple has i1 == j1 == 0, and remaining tuples have i1 equal to the i2 from the preceding tuple, and, likewise, j1 equal to the previous j2.

The tag values are strings, with these meanings:

Value       Meaning
'replace'   a[i1:i2] should be replaced by b[j1:j2].
'delete'    a[i1:i2] should be deleted. Note that j1 == j2 in this case.
'insert'    b[j1:j2] should be inserted at a[i1:i1]. Note that i1 == i2 in this case.
'equal'     a[i1:i2] == b[j1:j2] (the sub-sequences are equal).

For example:

>>> a = "qabxcd"
>>> b = "abycdf"
>>> s = SequenceMatcher(None, a, b)
>>> for tag, i1, i2, j1, j2 in s.get_opcodes():
...    print ("%7s a[%d:%d] (%s) b[%d:%d] (%s)" %
...           (tag, i1, i2, a[i1:i2], j1, j2, b[j1:j2]))
 delete a[0:1] (q) b[0:0] ()
  equal a[1:3] (ab) b[0:2] (ab)
replace a[3:4] (x) b[2:3] (y)
  equal a[4:6] (cd) b[3:5] (cd)
 insert a[6:6] () b[5:6] (f)
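
Continuing the session, the opcodes carry enough information to rebuild b from a; a minimal sketch:

>>> pieces = []
>>> for tag, i1, i2, j1, j2 in s.get_opcodes():
...     if tag == 'equal':
...         pieces.append(a[i1:i2])        # unchanged material, taken from a
...     elif tag in ('replace', 'insert'):
...         pieces.append(b[j1:j2])        # new material, taken from b
...     # a 'delete' opcode contributes nothing to the result
...
>>> "".join(pieces) == b
True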

get_grouped_opcodes([n])
Return a generator of groups with up to n lines of context; n defaults to 3.

Starting with the groups returned by get_opcodes(), this method splits out smaller change clusters and eliminates intervening ranges which have no changes.

The groups are returned in the same format as get_opcodes(). New in version 2.3.
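
For example, two changes far apart in otherwise identical sequences fall into separate groups with the default three lines of context (the sequences here are arbitrary):

>>> a = [str(i) for i in range(1, 12)]
>>> b = a[:]
>>> b[1] = 'two'
>>> b[9] = 'ten'
>>> groups = list(SequenceMatcher(None, a, b).get_grouped_opcodes())
>>> len(groups)
2
>>> groups[0]
[('equal', 0, 1, 0, 1), ('replace', 1, 2, 1, 2), ('equal', 2, 5, 2, 5)]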

ratio()
Return a measure of the sequences' similarity as a float in the range [0, 1].

Where T is the total number of elements in both sequences, and M is the number of matches, this is 2.0*M / T. Note that this is 1.0 if the sequences are identical, and 0.0 if they have nothing in common.

This is expensive to compute if get_matching_blocks() or get_opcodes() hasn't already been called, in which case you may want to try quick_ratio() or real_quick_ratio() first to get an upper bound.

quick_ratio()
Return an upper bound on ratio() relatively quickly.

This isn't defined beyond that it is an upper bound on ratio(), and is faster to compute.

real_quick_ratio()
Return an upper bound on ratio() very quickly.

This isn't defined beyond that it is an upper bound on ratio(), and is faster to compute than either ratio() or quick_ratio().

The three methods that return the ratio of matching to total characters can give different results due to differing levels of approximation, although quick_ratio() and real_quick_ratio() are always at least as large as ratio():

>>> s = SequenceMatcher(None, "abcd", "bcde")
>>> s.ratio()
0.75
>>> s.quick_ratio()
0.75
>>> s.real_quick_ratio()
1.0
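
Because the cheaper methods never return a value smaller than ratio(), they can serve as a quick filter before paying for the full computation; a sketch of that idiom (the helper name and cutoff are illustrative, not part of the module):

>>> def is_close(a, b, cutoff=0.6):
...     s = SequenceMatcher(None, a, b)
...     # Cheapest bound first; only fall through to ratio() when a
...     # match is still possible.
...     return (s.real_quick_ratio() >= cutoff and
...             s.quick_ratio() >= cutoff and
...             s.ratio() >= cutoff)
...
>>> is_close("abcd", "bcde")
True
>>> is_close("abcd", "wxyz")
False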
