* Revert "load_masters: actually assign font attributes"
This reverts commit ef1d4cd02d1e46f5dac3914f547a6e4275cf3077, which caused a
crash in `interpolate_layout()` when `deepcopy`ing OTFs.
Amend code and tests while I work on a real fix.
Glyph names matching reserved keywords were not consistently escaped;
they were escaped in GDEF classes but not elsewhere. Call the module’s
asFea() function in GlyphName.asFea() to ensure they are consistently
escaped.
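For illustration, a minimal sketch of the idea, with a made-up keyword list
and simplified classes rather than the actual feaLib code:

```python
# Hypothetical, reduced version of the escaping logic; the keyword list and
# class layout are assumptions for illustration only.
RESERVED_KEYWORDS = {"table", "lookup", "feature", "anchor"}  # illustrative subset

def asFea(g):
    """Escape a glyph name that collides with a reserved FEA keyword."""
    return "\\" + g if g in RESERVED_KEYWORDS else g

class GlyphName:
    def __init__(self, glyph):
        self.glyph = glyph

    def asFea(self, indent=""):
        # Delegate to the module-level helper so every call site escapes
        # glyph names the same way, not just the GDEF class code path.
        return asFea(self.glyph)
```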
When a MarkClassDefinition is inside a block, it gets double indentation
compared to the rest of the block. It should ignore the indent argument
like other similar statements.
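Roughly, the change amounts to the pattern below (a sketch with simplified
fields, not the real class):

```python
class MarkClassDefinition:
    def __init__(self, markClass, anchor, glyphs):
        self.markClass, self.anchor, self.glyphs = markClass, anchor, glyphs

    def asFea(self, indent=""):
        # Ignore the indent argument: the enclosing block already indents the
        # statements it contains, so applying it here doubled the indentation.
        return "markClass {} {} @{};".format(self.glyphs, self.anchor, self.markClass)
```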
This is a followup to commit 94633e9f46975c356ec3a2d21ed30768c2fa0cd5,
where I mistakenly made it return an Enum. But parse_coverage_() is not
used only for ENUMs, and in many (most?) places returning an Enum is
wrong: you have a list of separate items that has to remain separate.
ValueRecord had a makeString() method that takes an optional “vertical”
argument, but no code outside the tests sets this argument. Renamed it
to asFea() and dropped the “vertical” argument, so that it is consistent
with the rest of the feaLib.ast classes.
PROCESS_MARKS followed by a group name is used for the markAttachmentType
lookup flag, while PROCESS_MARKS followed by MARK_GLYPH_SET is used for
useMarkFilteringSet. The code parsed both correctly but did not
distinguish between the two in the generated AST, as it should, since
they compile to different lookup flags.
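A hedged sketch of the distinction (class and attribute names are
assumptions, not the actual AST):

```python
class LookupDefinition:
    def __init__(self, name, mark_attachment=None, mark_filtering_set=None):
        self.name = name
        # PROCESS_MARKS <group name>           -> markAttachmentType lookup flag
        self.mark_attachment = mark_attachment
        # PROCESS_MARKS MARK_GLYPH_SET <group> -> useMarkFilteringSet lookup flag
        self.mark_filtering_set = mark_filtering_set

# The parser accepted both spellings but previously stored them the same way,
# so the compiler could not tell which lookup flag to emit.
```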
* Fix ast.GroupDefinition.glyphSet() by using ast.GlyphName,
ast.GroupName and ast.Range in Parser.parse_coverage_(), and making it return
ast.Enum.
* Add ast.Enum.__len__() to fix the calculation of max_src and max_dest
in Parser.parse_substitution_(). I’m not sure I understand the logic
of this many-to-many check; will double check later.
* Update the test suite to reflect this. Had to add ast.Enum.__eq__() to
make it less painful, and __hash__() as otherwise ast.Enum couldn’t be
used as a key in dicts (not sure this is a good idea either, will
double check later). A rough sketch of these additions follows this list.
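A rough sketch of those dunder additions (simplified; the real ast.Enum
holds more state):

```python
class Enum:
    def __init__(self, enum):
        self.enum = tuple(enum)  # the enumerated items, kept in order

    def __len__(self):
        # Lets parse_substitution_() compute max_src/max_dest with len().
        return len(self.enum)

    def __eq__(self, other):
        return isinstance(other, Enum) and self.enum == other.enum

    def __hash__(self):
        # Defining __eq__ removes the default hash, so __hash__ has to be
        # added back for Enum to stay usable as a dict key in the tests.
        return hash(self.enum)
```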
* a charstring is not guaranteed to end in an operator, so the final bytes 11 and 14 can be part of an encoded numeric value; remove 'return' or 'endchar' at the decompiled program level instead of at the bytecode level (see the sketch after these bullets)
* move non-CFF2 test+error to elif clause of earlier isCFF2 test
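For the charstring change, a minimal sketch of the program-level approach
(function name is hypothetical, not the T2CharString API):

```python
def strip_trailing_operator(program):
    """Drop a trailing 'return'/'endchar' from a decompiled charstring program.

    Working on the decompiled program is safe: in the raw bytecode the final
    bytes 11 or 14 may simply be part of an encoded number, not an operator.
    """
    if program and program[-1] in ("return", "endchar"):
        return program[:-1]
    return program
```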
Fixes the remaining issue from #1451
The parser was still trying to read the next token even when the current
token was END, but I think it should just stop reading there. When
reading from the TSIV table there can be null bytes at the end, which
would cause an exception in the lexer.
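Conceptually the fix looks like this (a simplified sketch with made-up
token handling, not the actual parser):

```python
END = "END"

def parse_statements(lexer):
    statements = []
    for token_type, token in lexer:
        if token_type == END:
            break  # stop here; don't read past END into trailing NUL padding
        statements.append((token_type, token))
    return statements
```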
When checking for duplicate anchors, the component number should be
taken into account, since the same anchor names can be used for different
components, e.g. the components of a ligature.
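A hedged sketch of the keyed check (the data layout is an assumption for
illustration):

```python
def find_duplicate_anchors(anchors):
    """anchors: iterable of (anchorName, componentIndex) pairs.

    The same anchor name on different ligature components is fine; only a
    repeat within the same component counts as a duplicate.
    """
    seen, duplicates = set(), []
    for name, component in anchors:
        key = (name, component)
        if key in seen:
            duplicates.append(key)
        seen.add(key)
    return duplicates
```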