When the VarStore.optimize() algorithm was improved with the addition of a priority queue, in which pairs of encodings are sorted by decreasing gain from merging one into the other (specifically in this commit: https://github.com/fonttools/fonttools/commit/47ec18f788), the pre-sorting step of the todo list (done before the queue itself was populated) was kept to make sure the algorithm produces stable results no matter the order of the inputs.
The optimizer's output was itself sorted, again for stability, but using a different key function, width_sort_key(), from the one used on the input todo list, gain_sort_key().
The rationale for a distinct gain_sort_key(), whereby encodings get sorted by maximum theoretical gain (initially decreasing, while reverse=True was set, then increasing after reverse was somehow dropped), is a remnant of the previous algorithm and no longer applies now that a priority queue is used, which sorts by the actual gains from merging specific pairs of encodings.
I propose we keep some pre-sorting to ensure stability (in case the priority queue initially contains multiple pairs with the same gain), but use the same width_sort_key() that is already used at the end.
Note this doesn't change the overall level of compression achieved by the optimizer, but makes the algorithm a bit less complicated, and easier to match in our alternative Rust implementation.
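To make this concrete, here's a minimal Python sketch of the proposed order of operations (the Encoding class and the gain computation below are illustrative stand-ins, not varStore's real internals):

```python
import heapq

class Encoding:
    """Minimal stand-in for varStore's _Encoding, just for this sketch."""

    def __init__(self, chars, width):
        self.chars, self.width = chars, width

    def width_sort_key(self):
        return self.width, self.chars

def gain_from_merging(a, b):
    # Placeholder for the real computation of bytes saved by merging
    # encoding a into encoding b.
    return min(a.width, b.width)

encodings = [Encoding(0b011, 2), Encoding(0b101, 2), Encoding(0b111, 3)]

# Pre-sort with the same width-based key used for the final output, so
# equal-gain pairs enter the priority queue in a stable order that does
# not depend on the order of the inputs.
todo = sorted(encodings, key=Encoding.width_sort_key)

queue = []
for i, this in enumerate(todo):
    for j, other in enumerate(todo[:i]):
        gain = gain_from_merging(this, other)
        if gain > 0:
            # heapq is a min-heap: negate the gain so the largest gain
            # pops first; (i, j) break remaining ties deterministically.
            heapq.heappush(queue, (-gain, i, j))
```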
In some cases we were seeing different output from iup depending on
whether or not we were running Cython code.
I've tracked this particular issue down to the line changed in this
diff, and that change does (locally, for me, on one machine with one
architecture and one compiler) seem to suppress the problem.
However... it feels pretty bad??
I'm not sure how motivated I am to try and generate a proper minimal
test case and try to get this fixed upstream. I guess I'm... medium
motivated? But at the very least it would be nice to figure out a more
robust way to prevent this optimization from happening, and at the very
_very_ least it would be nice to figure out a way to test this.
The solution I was hoping for was some way to write some actual
hand-written C so we could have finer-grained control over what's going
on, and use that just for this one little bit of arithmetic, but I
didn't see an easy way to do that.
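For the testing angle, the best I can think of is a golden-output comparison along these lines (a sketch with made-up input data; whether the module exposes a COMPILED flag is probed defensively here):

```python
from fontTools.varLib import iup

# Made-up input: per-point deltas, point coordinates, contour end indices.
deltas = [(0.0, 0.0), (35.5, -12.5), (0.0, 0.0), (35.5, -12.5)]
coords = [(0, 0), (100, 200), (200, 200), (300, 0)]
ends = [3]

result = iup.iup_delta_optimize(deltas, coords, ends, tolerance=0.5)

# In a real test, `expected` would be golden output captured once from
# the pure-Python build; the compiled build should then reproduce it
# bit for bit.
# assert result == expected
print("COMPILED =", getattr(iup, "COMPILED", None), "->", result)
```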
Closes #3634
To produce inferred deltas that will be correct given OpenType's gvar
semantics, fontTools' IUP optimisation module checks the equality of
some points. However, this happens before the points are rounded,
whereas the point comparison that happens at runtime will occur after
the points are rounded (as is necessary to serialise glyf), which leads
to diverging semantics and so diverging and incorrect implied deltas.
This leads to significant visual artefacts, e.g. where large deltas that
should be inferred based on previous values are instead interpreted as 0
at runtime.
I suspect this has gone undetected as the subsetter normally works with
rounded points; in the rarer case that partial VF instancing is
occurring with a different default position, however, varLib.instancer
will calculate and apply the relevant deltas to the font's original
coordinates to effect the new default position, which leads to unrounded
points in memory. This commit ensures that we round directly before
optimising (but still after calculating `glyf` metrics, for backward
compatibility).
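Schematically, the new ordering looks like this (a sketch; the wrapper function is illustrative, while otRound and iup_delta_optimize are existing fontTools helpers):

```python
from fontTools.misc.roundTools import otRound
from fontTools.varLib.iup import iup_delta_optimize

def optimize_deltas(deltas, coords, ends, tolerance=0.5):
    # Round the (possibly fractional) coordinates first, so that the
    # point-equality checks inside IUP optimisation see the same values
    # that will later be serialised into glyf and compared at runtime.
    rounded = [(otRound(x), otRound(y)) for x, y in coords]
    return iup_delta_optimize(deltas, rounded, ends, tolerance=tolerance)
```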
All limits are tuples now when not None. The old logic was
broken; for the following command:
$ fonttools varLib.instancer AdobeVFPrototype.otf CNTR=50:80 wght=900
it was saving the output with the name suffix `-instance`, whereas
it's clearly a partial instantiation. This fixes that.
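For illustration, a hypothetical sketch of the fixed suffix decision, assuming non-None limits normalized to (min, max) tuples and `-partial` as the suffix for partial instantiations:

```python
def output_suffix(axis_limits):
    # Full instance only if every axis is pinned, i.e. every non-None
    # limit is a degenerate (min, max) tuple with min == max.
    full = all(
        lim is not None and lim[0] == lim[1]
        for lim in axis_limits.values()
    )
    return "-instance" if full else "-partial"

# CNTR=50:80 leaves a range open, so this is a partial instantiation:
assert output_suffix({"CNTR": (50, 80), "wght": (900, 900)}) == "-partial"
```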