* Fix ast.GroupDefinition.glyphSet() by using ast.GlyphName,
ast.GroupName and ast.Range in Parser.parse_coverage_(), and by making it
return an ast.Enum (see the first sketch after this list).
* Add ast.Enum.__len__() to fix the calculation of max_src and max_dest
in Parser.parse_substitution_(). I’m not sure I fully understand the logic
of this many-to-many check; will double-check later.
* Update the test suite to reflect this. Had to add ast.Enum.__eq__() to
make it less painful, and __hash__() as otherwise ast.Enum couldn’t be
used as a key in dicts (not sure this is a good idea either, will
double-check later). The new dunder methods are sketched below as well.
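
As a rough illustration of the glyphSet() change, here is a minimal,
self-contained sketch of the idea. The real classes live in
fontTools.voltLib.ast and carry source locations and more fields, so the
constructors and attribute names below are simplified assumptions:

    class GlyphName:
        def __init__(self, glyph):
            self.glyph = glyph

        def glyphSet(self):
            return (self.glyph,)

    class Enum:
        def __init__(self, enum):
            self.enum = tuple(enum)

        def glyphSet(self):
            # Flatten every contained node into one tuple of glyph names.
            return tuple(g for item in self.enum for g in item.glyphSet())

    class GroupDefinition:
        def __init__(self, name, enum):
            self.name = name
            self.enum = enum  # parse_coverage_() now produces an Enum

        def glyphSet(self):
            # Delegate to the Enum instead of re-resolving the items here.
            return self.enum.glyphSet()

    # GroupDefinition("grp", Enum([GlyphName("a"), GlyphName("b")])).glyphSet()
    # evaluates to ("a", "b").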
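
Similarly, a hedged sketch of the dunder methods added to ast.Enum, using
plain strings as stand-ins for the contained nodes (the actual class stores
ast nodes and location data, so this is only the shape of the change):

    class Enum:
        def __init__(self, enum):
            self.enum = tuple(enum)

        def __len__(self):
            # Lets parse_substitution_() use len() on source/destination
            # coverages when computing max_src and max_dest.
            return len(self.enum)

        def __eq__(self, other):
            return isinstance(other, Enum) and self.enum == other.enum

        def __hash__(self):
            # Defining __eq__ alone would set __hash__ to None and make the
            # class unhashable, so it could no longer be used as a dict key.
            return hash(self.enum)

    a, b = Enum(["f", "i"]), Enum(["f", "i"])
    assert len(a) == 2
    assert a == b and hash(a) == hash(b)
    assert {a: "liga"}[b] == "liga"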
The parser was still trying to read the next token even when the current
token was END, but I think it should just stop reading there. When
reading from the TSIV table there can be trailing null bytes which would
cause an exception in the lexer.
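
A minimal sketch of the idea, assuming the parser keeps a one-token
lookahead pulled from a lexer iterator; the method and attribute names are
illustrative, not the actual fontTools.voltLib code:

    class ParserSketch:
        END = "END"

        def __init__(self, lexer):
            self.lexer_ = lexer          # any iterator of tokens
            self.cur_token_ = None
            self.next_token_ = None

        def advance_lexer_(self):
            self.cur_token_ = self.next_token_
            if self.cur_token_ == self.END:
                # Stop here: anything after END (e.g. NUL padding in a
                # TSIV dump) must not reach the lexer.
                self.next_token_ = None
                return
            try:
                self.next_token_ = next(self.lexer_)
            except StopIteration:
                self.next_token_ = None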
When checking for duplicate anchors, the component number should be
taken into account, since the same anchor can be used for different
components, e.g. on ligatures.
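
A hedged sketch of how such a check might key on the component as well;
the function name and tuple layout are assumptions for illustration, not
the actual voltLib code:

    def check_duplicate_anchors(anchor_defs):
        """anchor_defs: iterable of (glyph, anchor_name, component) tuples."""
        seen = set()
        for glyph, name, component in anchor_defs:
            # The component index is part of the key, so the same anchor
            # name may legitimately appear once per ligature component.
            key = (glyph, name, component)
            if key in seen:
                raise ValueError("duplicate anchor %r (component %d) on %r"
                                 % (name, component, glyph))
            seen.add(key)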