Merge branch 'fonttools:main' into ttf2otf

This commit is contained in:
ftCLI 2024-08-12 08:46:27 +02:00 committed by GitHub
commit 8aebeeb7d7
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
111 changed files with 8277 additions and 3617 deletions

View File

@@ -88,7 +88,7 @@ jobs:
- uses: actions/checkout@v4
with:
submodules: true
- uses: docker/setup-qemu-action@v3.0.0
- uses: docker/setup-qemu-action@v3.1.0
with:
platforms: all
- name: Install dependencies
@@ -118,7 +118,7 @@ jobs:
# so that all artifacts are downloaded in the same directory specified by 'path'
merge-multiple: true
path: dist
- uses: pypa/gh-action-pypi-publish@v1.8.14
- uses: pypa/gh-action-pypi-publish@v1.9.0
with:
user: __token__
password: ${{ secrets.PYPI_PASSWORD }}

View File

@@ -8,7 +8,7 @@ If you are unfamiliar with that, check out [opensource.guide](https://opensource
We use Github's Issue Tracker to report, discuss and track bugs, map out future improvements, set priorities, and self-assign issues.
If you find a bug, have an idea for a new feature, then please [create a new issue](https://github.com/fonttools/fonttools/issues) and we'll be happy to work with you on it!
If you have a question or want to discuss usage from an end-user perspective, there is a mailing list at [groups.google.com/d/forum/fonttools](https://groups.google.com/d/forum/fonttools) mailing list.
If you have a question or want to discuss usage from an end-user perspective, check out the [Discussions](https://github.com/fonttools/fonttools/discussions).
If you would like to speak to someone directly, you can also email the project lead, Behdad Esfahbod, privately at <behdad@behdad.org>

View File

@@ -1,4 +1,4 @@
sphinx==7.2.6
sphinx==7.4.3
sphinx_rtd_theme==2.0.0
reportlab==4.1.0
reportlab==4.2.2
freetype-py==2.4.0

View File

@@ -5,15 +5,14 @@
:align: center
fontTools Docs
==============
fontTools Documentation
=======================
About
-----
fontTools is a family of libraries and utilities for manipulating fonts in Python.
The project has an `MIT open-source license <https://github.com/fonttools/fonttools/blob/main/LICENSE>`_. Among other things this means you can use it free of charge.
The project is licensed under the `MIT open-source license <https://github.com/fonttools/fonttools/blob/main/LICENSE>`_, allowing free usage.
Installation
------------
@@ -22,94 +21,71 @@ Installation
fontTools requires `Python <http://www.python.org/download/>`_ 3.8 or later.
The package is listed in the Python Package Index (PyPI), so you can install it with `pip <https://pip.pypa.io/>`_::
To install fontTools, use `pip <https://pip.pypa.io/>`_:
pip install fonttools
See the Optional Requirements section below for details about module-specific dependencies that must be installed in select cases.
Utilities
---------
fontTools installs four command-line utilities:
fontTools includes the following command-line utilities:
- ``pyftmerge``, a tool for merging fonts; see :py:mod:`fontTools.merge`
- ``pyftsubset``, a tool for subsetting fonts; see :py:mod:`fontTools.subset`
- ``ttx``, a tool for converting between OpenType binary fonts (OTF) and an XML representation (TTX); see :py:mod:`fontTools.ttx`
- ``fonttools``, a "meta-tool" for accessing other components of the fontTools family.
- ``pyftmerge``: Tool for merging fonts; see :py:mod:`fontTools.merge`
- ``pyftsubset``: Tool for subsetting fonts; see :py:mod:`fontTools.subset`
- ``ttx``: Tool for converting between OTF and XML representation; see :py:mod:`fontTools.ttx`
- ``fonttools``: Meta-tool for accessing other fontTools components.
This last utility takes a subcommand, which could be one of:
For ``fonttools``, you can use subcommands like:
- ``cffLib.width``: Calculate optimum defaultWidthX/nominalWidthX values
- ``cu2qu``: Convert a UFO font from cubic to quadratic curves
- ``feaLib``: Add features from a feature file (.fea) into a OTF font
- ``help``: Show this help
- ``merge``: Merge multiple fonts into one
- ``mtiLib``: Convert a FontDame OTL file to TTX XML
- ``subset``: OpenType font subsetter and optimizer
- ``ttLib.woff2``: Compress and decompress WOFF2 fonts
- ``ttx``: Convert OpenType fonts to XML and back
- ``varLib``: Build a variable font from a designspace file and masters
- ``varLib.instancer``: Partially instantiate a variable font.
- ``varLib.interpolatable``: Test for interpolatability issues between fonts
- ``varLib.interpolate_layout``: Interpolate GDEF/GPOS/GSUB tables for a point on a designspace
- ``varLib.models``: Normalize locations on a given designspace
- ``varLib.mutator``: Instantiate a variation font
- ``varLib.varStore``: Optimize a font's GDEF variation store
- ``varLib.instancer``: Partially instantiate a variable font
- ``voltLib.voltToFea``: Convert MS VOLT to AFDKO feature files.
Libraries
---------
The main library you will want to access when using fontTools for font
engineering is likely to be :py:mod:`fontTools.ttLib.ttFont`, which is the module
for handling TrueType/OpenType fonts. However, there are many other
libraries in the fontTools suite:
The main library for font engineering is :py:mod:`fontTools.ttLib.ttFont`, which handles TrueType/OpenType fonts. Other libraries include:
- :py:mod:`fontTools.afmLib`: Module for reading and writing AFM files
- :py:mod:`fontTools.agl`: Access to the Adobe Glyph List
- :py:mod:`fontTools.cffLib`: Read/write tools for Adobe CFF fonts
- :py:mod:`fontTools.colorLib`: Module for handling colors in CPAL/COLR fonts
- :py:mod:`fontTools.config`: Configure fontTools
- :py:mod:`fontTools.cu2qu`: Module for cubic to quadratic conversion
- :py:mod:`fontTools.afmLib`: Read and write AFM files
- :py:mod:`fontTools.agl`: Access the Adobe Glyph List
- :py:mod:`fontTools.cffLib`: Tools for Adobe CFF fonts
- :py:mod:`fontTools.colorLib`: Handle colors in CPAL/COLR fonts
- :py:mod:`fontTools.cu2qu`: Convert cubic to quadratic curves
- :py:mod:`fontTools.designspaceLib`: Read and write designspace files
- :py:mod:`fontTools.encodings`: Support for font-related character encodings
- :py:mod:`fontTools.feaLib`: Read and read AFDKO feature files
- :py:mod:`fontTools.encodings`: Support for font-related encodings
- :py:mod:`fontTools.feaLib`: Read and write AFDKO feature files
- :py:mod:`fontTools.fontBuilder`: Construct TTF/OTF fonts from scratch
- :py:mod:`fontTools.merge`: Tools for merging font files
- :py:mod:`fontTools.pens`: Various classes for manipulating glyph outlines
- :py:mod:`fontTools.subset`: OpenType font subsetting and optimization
- :py:mod:`fontTools.svgLib.path`: Library for drawing SVG paths onto glyphs
- :py:mod:`fontTools.t1Lib`: Tools for PostScript Type 1 fonts (Python2 only)
- :py:mod:`fontTools.tfmLib`: Module for reading TFM files
- :py:mod:`fontTools.ttLib`: Module for reading/writing OpenType and Truetype fonts
- :py:mod:`fontTools.ttx`: Module for converting between OTF and XML representation
- :py:mod:`fontTools.ufoLib`: Module for reading and writing UFO files
- :py:mod:`fontTools.unicodedata`: Convert between Unicode and OpenType script information
- :py:mod:`fontTools.varLib`: Module for dealing with 'gvar'-style font variations
- :py:mod:`fontTools.voltLib`: Module for dealing with Visual OpenType Layout Tool (VOLT) files
A selection of sample Python programs using these libraries can be found in the `Snippets directory <https://github.com/fonttools/fonttools/blob/main/Snippets/>`_ of the fontTools repository.
- :py:mod:`fontTools.svgLib.path`: Draw SVG paths onto glyphs
- :py:mod:`fontTools.ttLib`: Read/write OpenType and TrueType fonts
- :py:mod:`fontTools.ttx`: Convert between OTF and XML representation
- :py:mod:`fontTools.ufoLib`: Read and write UFO files
- :py:mod:`fontTools.unicodedata`: Convert between Unicode and OpenType script info
- :py:mod:`fontTools.varLib`: Deal with 'gvar'-style font variations
- :py:mod:`fontTools.voltLib`: Deal with Visual OpenType Layout Tool (VOLT) files
Optional Dependencies
---------------------
The fontTools package currently has no (required) external dependencies
besides the modules included in the Python Standard Library.
However, a few extra dependencies are required to unlock optional features
in some of the library modules. See the :doc:`optional requirements <./optional>`
page for more information.
fontTools has no external dependencies besides the Python Standard Library. Some optional features require additional modules; see the :doc:`optional requirements </optional>` page for details.
Developer information
Developer Information
---------------------
Information for developers can be found :doc:`here <./developer>`.
For developer resources, refer to the :doc:`developer information </developer>`.
License
-------
`MIT license <https://github.com/fonttools/fonttools/blob/main/LICENSE>`_. See the full text of the license for details.
fontTools is licensed under the `MIT license <https://github.com/fonttools/fonttools/blob/main/LICENSE>`_. Refer to the full text of the license for details.
Table of Contents
-----------------
@@ -144,7 +120,6 @@ Table of Contents
varLib/index
voltLib/index
.. |Travis Build Status| image:: https://travis-ci.org/fonttools/fonttools.svg
:target: https://travis-ci.org/fonttools/fonttools
.. |Appveyor Build status| image:: https://ci.appveyor.com/api/projects/status/0f7fmee9as744sl7/branch/master?svg=true

View File

@@ -107,3 +107,11 @@ STAT Table Builder
.. currentmodule:: fontTools.otlLib.builder
.. autofunction:: buildStatTable
------------------
MATH Table Builder
------------------
.. currentmodule:: fontTools.otlLib.builder
.. autofunction:: buildMathTable

View File

@@ -39,10 +39,10 @@ The following tables are currently supported::
FFTM, Feat, GDEF, GMAP, GPKG, GPOS, GSUB, Glat, Gloc, HVAR, JSTF,
LTSH, MATH, META, MVAR, OS/2, SING, STAT, SVG, Silf, Sill, TSI0,
TSI1, TSI2, TSI3, TSI5, TSIB, TSIC, TSID, TSIJ, TSIP, TSIS, TSIV,
TTFA, VDMX, VORG, VVAR, ankr, avar, bsln, cidg, cmap, cvar, cvt,
feat, fpgm, fvar, gasp, gcid, glyf, gvar, hdmx, head, hhea, hmtx,
kern, lcar, loca, ltag, maxp, meta, mort, morx, name, opbd, post,
prep, prop, sbix, trak, vhea and vmtx
TTFA, VARC, VDMX, VORG, VVAR, ankr, avar, bsln, cidg, cmap, cvar,
cvt, feat, fpgm, fvar, gasp, gcid, glyf, gvar, hdmx, head, hhea,
hmtx, kern, lcar, loca, ltag, maxp, meta, mort, morx, name, opbd,
post, prep, prop, sbix, trak, vhea and vmtx
.. end table list

View File

@@ -3,6 +3,6 @@ from fontTools.misc.loggingTools import configLogger
log = logging.getLogger(__name__)
version = __version__ = "4.51.1.dev0"
version = __version__ = "4.53.2.dev0"
__all__ = ["version", "log", "configLogger"]

View File

@@ -0,0 +1,187 @@
"""CFF2 to CFF converter."""
from fontTools.ttLib import TTFont, newTable
from fontTools.misc.cliTools import makeOutputFileName
from fontTools.cffLib import (
TopDictIndex,
buildOrder,
buildDefaults,
topDictOperators,
privateDictOperators,
)
from .width import optimizeWidths
from collections import defaultdict
import logging
__all__ = ["convertCFF2ToCFF", "main"]
log = logging.getLogger("fontTools.cffLib")
def _convertCFF2ToCFF(cff, otFont):
"""Converts this object from CFF2 format to CFF format. This conversion
is done 'in-place'. The conversion cannot be reversed.
The CFF2 font cannot be variable. (TODO Accept those and convert to the
default instance?)
This assumes a decompiled CFF table. (i.e. that the object has been
filled via :meth:`decompile` and e.g. not loaded from XML.)"""
cff.major = 1
topDictData = TopDictIndex(None, isCFF2=True)
for item in cff.topDictIndex:
# Iterate over the index so that every entry is decompiled
topDictData.append(item)
cff.topDictIndex = topDictData
topDict = topDictData[0]
if hasattr(topDict, "VarStore"):
raise ValueError("Variable CFF2 font cannot be converted to CFF format.")
opOrder = buildOrder(topDictOperators)
topDict.order = opOrder
for key in topDict.rawDict.keys():
if key not in opOrder:
del topDict.rawDict[key]
if hasattr(topDict, key):
delattr(topDict, key)
fdArray = topDict.FDArray
charStrings = topDict.CharStrings
defaults = buildDefaults(privateDictOperators)
order = buildOrder(privateDictOperators)
for fd in fdArray:
fd.setCFF2(False)
privateDict = fd.Private
privateDict.order = order
for key in order:
if key not in privateDict.rawDict and key in defaults:
privateDict.rawDict[key] = defaults[key]
for key in privateDict.rawDict.keys():
if key not in order:
del privateDict.rawDict[key]
if hasattr(privateDict, key):
delattr(privateDict, key)
for cs in charStrings.values():
cs.decompile()
cs.program.append("endchar")
for subrSets in [cff.GlobalSubrs] + [
getattr(fd.Private, "Subrs", []) for fd in fdArray
]:
for cs in subrSets:
cs.program.append("return")
# Add (optimal) width to CharStrings that need it.
widths = defaultdict(list)
metrics = otFont["hmtx"].metrics
for glyphName in charStrings.keys():
cs, fdIndex = charStrings.getItemAndSelector(glyphName)
if fdIndex is None:
fdIndex = 0
widths[fdIndex].append(metrics[glyphName][0])
for fdIndex, widthList in widths.items():
bestDefault, bestNominal = optimizeWidths(widthList)
private = fdArray[fdIndex].Private
private.defaultWidthX = bestDefault
private.nominalWidthX = bestNominal
for glyphName in charStrings.keys():
cs, fdIndex = charStrings.getItemAndSelector(glyphName)
if fdIndex is None:
fdIndex = 0
private = fdArray[fdIndex].Private
width = metrics[glyphName][0]
if width != private.defaultWidthX:
cs.program.insert(0, width - private.nominalWidthX)
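As an aside, the two loops above implement the standard CFF width-encoding rule. A minimal, pure-Python sketch (the helper name is hypothetical, not part of the fontTools API): a glyph whose advance width equals the Private dict's defaultWidthX stores no width at all, while any other glyph prepends the delta against nominalWidthX to its charstring program.

```python
def encode_width(program, width, default_width_x, nominal_width_x):
    """Return a copy of `program` with the CFF width prepended if needed."""
    if width == default_width_x:
        # Width is implied by defaultWidthX; nothing is stored.
        return list(program)
    # Otherwise the charstring starts with (width - nominalWidthX).
    return [width - nominal_width_x] + list(program)

# With defaultWidthX=500 and nominalWidthX=600, a 500-unit glyph stores
# no width, while a 620-unit glyph stores the delta 20.
print(encode_width(["endchar"], 500, 500, 600))  # ['endchar']
print(encode_width(["endchar"], 620, 500, 600))  # [20, 'endchar']
```

This is why `optimizeWidths` is worth running first: picking defaultWidthX and nominalWidthX well minimizes how many glyphs need an explicit delta and how many bytes each delta costs.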
def convertCFF2ToCFF(font, *, updatePostTable=True):
cff = font["CFF2"].cff
_convertCFF2ToCFF(cff, font)
del font["CFF2"]
table = font["CFF "] = newTable("CFF ")
table.cff = cff
if updatePostTable and "post" in font:
# The only 'post' table version supported for fonts with a CFF table
# is 3.0 (0x00030000), not 2.0 (0x00020000).
post = font["post"]
if post.formatType == 2.0:
post.formatType = 3.0
def main(args=None):
"""Convert CFF2 OTF font to CFF OTF font"""
if args is None:
import sys
args = sys.argv[1:]
import argparse
parser = argparse.ArgumentParser(
"fonttools cffLib.CFF2ToCFF",
description="Downgrade a CFF2 font to CFF.",
)
parser.add_argument(
"input", metavar="INPUT.ttf", help="Input OTF file with CFF2 table."
)
parser.add_argument(
"-o",
"--output",
metavar="OUTPUT.ttf",
default=None,
help="Output OTF file (default: INPUT-CFF.ttf).",
)
parser.add_argument(
"--no-recalc-timestamp",
dest="recalc_timestamp",
action="store_false",
help="Don't set the output font's timestamp to the current time.",
)
loggingGroup = parser.add_mutually_exclusive_group(required=False)
loggingGroup.add_argument(
"-v", "--verbose", action="store_true", help="Run more verbosely."
)
loggingGroup.add_argument(
"-q", "--quiet", action="store_true", help="Turn verbosity off."
)
options = parser.parse_args(args)
from fontTools import configLogger
configLogger(
level=("DEBUG" if options.verbose else "ERROR" if options.quiet else "INFO")
)
import os
infile = options.input
if not os.path.isfile(infile):
parser.error("No such file '{}'".format(infile))
outfile = (
makeOutputFileName(infile, overWrite=True, suffix="-CFF")
if not options.output
else options.output
)
font = TTFont(infile, recalcTimestamp=options.recalc_timestamp, recalcBBoxes=False)
convertCFF2ToCFF(font)
log.info(
"Saving %s",
outfile,
)
font.save(outfile)
if __name__ == "__main__":
import sys
sys.exit(main(sys.argv[1:]))

View File

@@ -0,0 +1,303 @@
"""CFF to CFF2 converter."""
from fontTools.ttLib import TTFont, newTable
from fontTools.misc.cliTools import makeOutputFileName
from fontTools.misc.psCharStrings import T2WidthExtractor
from fontTools.cffLib import (
TopDictIndex,
FDArrayIndex,
FontDict,
buildOrder,
topDictOperators,
privateDictOperators,
topDictOperators2,
privateDictOperators2,
)
from io import BytesIO
import logging
__all__ = ["convertCFFToCFF2", "main"]
log = logging.getLogger("fontTools.cffLib")
class _NominalWidthUsedError(Exception):
def __add__(self, other):
raise self
def __radd__(self, other):
raise self
def _convertCFFToCFF2(cff, otFont):
"""Converts this object from CFF format to CFF2 format. This conversion
is done 'in-place'. The conversion cannot be reversed.
This assumes a decompiled CFF table. (i.e. that the object has been
filled via :meth:`decompile` and e.g. not loaded from XML.)"""
# Clean up T2CharStrings
topDict = cff.topDictIndex[0]
fdArray = topDict.FDArray if hasattr(topDict, "FDArray") else None
charStrings = topDict.CharStrings
globalSubrs = cff.GlobalSubrs
localSubrs = (
[getattr(fd.Private, "Subrs", []) for fd in fdArray]
if fdArray
else (
[topDict.Private.Subrs]
if hasattr(topDict, "Private") and hasattr(topDict.Private, "Subrs")
else []
)
)
for glyphName in charStrings.keys():
cs, fdIndex = charStrings.getItemAndSelector(glyphName)
cs.decompile()
# Clean up subroutines first
for subrs in [globalSubrs] + localSubrs:
for subr in subrs:
program = subr.program
i = j = len(program)
try:
i = program.index("return")
except ValueError:
pass
try:
j = program.index("endchar")
except ValueError:
pass
program[min(i, j) :] = []
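The slice assignment above truncates each subroutine at its first terminator, since CFF2 charstrings have no `return`/`endchar` operators. A standalone sketch of the same logic (hypothetical helper, mirroring the in-place version):

```python
def truncate_subr(program):
    """Drop everything from the first 'return' or 'endchar' operator on."""
    i = j = len(program)
    if "return" in program:
        i = program.index("return")
    if "endchar" in program:
        j = program.index("endchar")
    # Whichever terminator comes first wins; no terminator leaves the
    # program untouched.
    return program[: min(i, j)]

print(truncate_subr([1, 2, "rlineto", "return"]))  # [1, 2, 'rlineto']
print(truncate_subr(["endchar", "return"]))        # []
```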
# Clean up glyph charstrings
removeUnusedSubrs = False
nominalWidthXError = _NominalWidthUsedError()
for glyphName in charStrings.keys():
cs, fdIndex = charStrings.getItemAndSelector(glyphName)
program = cs.program
thisLocalSubrs = (
localSubrs[fdIndex]
if fdIndex
else (
getattr(topDict.Private, "Subrs", [])
if hasattr(topDict, "Private")
else []
)
)
# Intentionally use a custom type for nominalWidthX, so that any
# CharString that has an explicit width encoded will raise and hand
# control back to us.
extractor = T2WidthExtractor(
thisLocalSubrs,
globalSubrs,
nominalWidthXError,
0,
)
try:
extractor.execute(cs)
except _NominalWidthUsedError:
# Program has explicit width. We want to drop it, but can't
# just pop the first number since it may be a subroutine call.
# Instead, when seeing that, we embed the subroutine and recurse.
# If this ever happened, we later prune unused subroutines.
while program[1] in ["callsubr", "callgsubr"]:
removeUnusedSubrs = True
subrNumber = program.pop(0)
op = program.pop(0)
bias = extractor.localBias if op == "callsubr" else extractor.globalBias
subrNumber += bias
subrSet = thisLocalSubrs if op == "callsubr" else globalSubrs
subrProgram = subrSet[subrNumber].program
program[:0] = subrProgram
# Now pop the actual width
program.pop(0)
if program and program[-1] == "endchar":
program.pop()
if removeUnusedSubrs:
cff.remove_unused_subroutines()
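The `bias` added to `subrNumber` above comes from the CFF charstring format: subroutine operands are stored biased so that small indices fit the shortest number encodings. fontTools exposes this rule as `calcSubrBias` in `fontTools.misc.psCharStrings`; a self-contained sketch of the same spec-defined computation:

```python
def calc_subr_bias(subrs_count):
    """CFF/Type 2 subroutine bias, per the charstring specification."""
    if subrs_count < 1240:
        return 107
    elif subrs_count < 33900:
        return 1131
    else:
        return 32768

# The stored operand plus the bias gives the real subroutine index.
print(calc_subr_bias(100))    # 107
print(calc_subr_bias(5000))   # 1131
print(calc_subr_bias(40000))  # 32768
```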
# Upconvert TopDict
cff.major = 2
cff2GetGlyphOrder = cff.otFont.getGlyphOrder
topDictData = TopDictIndex(None, cff2GetGlyphOrder)
for item in cff.topDictIndex:
# Iterate over the index so that every entry is decompiled
topDictData.append(item)
cff.topDictIndex = topDictData
topDict = topDictData[0]
if hasattr(topDict, "Private"):
privateDict = topDict.Private
else:
privateDict = None
opOrder = buildOrder(topDictOperators2)
topDict.order = opOrder
topDict.cff2GetGlyphOrder = cff2GetGlyphOrder
if not hasattr(topDict, "FDArray"):
fdArray = topDict.FDArray = FDArrayIndex()
fdArray.strings = None
fdArray.GlobalSubrs = topDict.GlobalSubrs
topDict.GlobalSubrs.fdArray = fdArray
charStrings = topDict.CharStrings
if charStrings.charStringsAreIndexed:
charStrings.charStringsIndex.fdArray = fdArray
else:
charStrings.fdArray = fdArray
fontDict = FontDict()
fontDict.setCFF2(True)
fdArray.append(fontDict)
fontDict.Private = privateDict
privateOpOrder = buildOrder(privateDictOperators2)
if privateDict is not None:
for entry in privateDictOperators:
key = entry[1]
if key not in privateOpOrder:
if key in privateDict.rawDict:
# print "Removing private dict", key
del privateDict.rawDict[key]
if hasattr(privateDict, key):
delattr(privateDict, key)
# print "Removing privateDict attr", key
else:
# clean up the PrivateDicts in the fdArray
fdArray = topDict.FDArray
privateOpOrder = buildOrder(privateDictOperators2)
for fontDict in fdArray:
fontDict.setCFF2(True)
for key in list(fontDict.rawDict.keys()):
if key not in fontDict.order:
del fontDict.rawDict[key]
if hasattr(fontDict, key):
delattr(fontDict, key)
privateDict = fontDict.Private
for entry in privateDictOperators:
key = entry[1]
if key not in privateOpOrder:
if key in list(privateDict.rawDict.keys()):
# print "Removing private dict", key
del privateDict.rawDict[key]
if hasattr(privateDict, key):
delattr(privateDict, key)
# print "Removing privateDict attr", key
# Now delete the deprecated topDict operators from CFF 1.0
for entry in topDictOperators:
key = entry[1]
# We seem to need to keep the charset operator for now,
# or we fail to compile with some fonts, like AdditionFont.otf.
# I don't know which kind of CFF font those are. But keeping
# charset seems to work. It will be removed when we save and
# read the font again.
#
# AdditionFont.otf has <Encoding name="StandardEncoding"/>.
if key == "charset":
continue
if key not in opOrder:
if key in topDict.rawDict:
del topDict.rawDict[key]
if hasattr(topDict, key):
delattr(topDict, key)
# TODO(behdad): What does the following comment even mean? Both CFF and CFF2
# use the same T2Charstring class. I *think* what it means is that the CharStrings
# were loaded for CFF1, and we need to reload them for CFF2 to set varstore, etc
# on them. At least that's what I understand. It's probably safe to remove this
# and just set vstore where needed.
#
# See comment above about charset as well.
# At this point, the Subrs and Charstrings are all still T2Charstring class
# easiest to fix this by compiling, then decompiling again
file = BytesIO()
cff.compile(file, otFont, isCFF2=True)
file.seek(0)
cff.decompile(file, otFont, isCFF2=True)
def convertCFFToCFF2(font):
cff = font["CFF "].cff
del font["CFF "]
_convertCFFToCFF2(cff, font)
table = font["CFF2"] = newTable("CFF2")
table.cff = cff
def main(args=None):
"""Convert CFF OTF font to CFF2 OTF font"""
if args is None:
import sys
args = sys.argv[1:]
import argparse
parser = argparse.ArgumentParser(
"fonttools cffLib.CFFToCFF2",
description="Upgrade a CFF font to CFF2.",
)
parser.add_argument(
"input", metavar="INPUT.ttf", help="Input OTF file with CFF table."
)
parser.add_argument(
"-o",
"--output",
metavar="OUTPUT.ttf",
default=None,
help="Output instance OTF file (default: INPUT-CFF2.ttf).",
)
parser.add_argument(
"--no-recalc-timestamp",
dest="recalc_timestamp",
action="store_false",
help="Don't set the output font's timestamp to the current time.",
)
loggingGroup = parser.add_mutually_exclusive_group(required=False)
loggingGroup.add_argument(
"-v", "--verbose", action="store_true", help="Run more verbosely."
)
loggingGroup.add_argument(
"-q", "--quiet", action="store_true", help="Turn verbosity off."
)
options = parser.parse_args(args)
from fontTools import configLogger
configLogger(
level=("DEBUG" if options.verbose else "ERROR" if options.quiet else "INFO")
)
import os
infile = options.input
if not os.path.isfile(infile):
parser.error("No such file '{}'".format(infile))
outfile = (
makeOutputFileName(infile, overWrite=True, suffix="-CFF2")
if not options.output
else options.output
)
font = TTFont(infile, recalcTimestamp=options.recalc_timestamp, recalcBBoxes=False)
convertCFFToCFF2(font)
log.info(
"Saving %s",
outfile,
)
font.save(outfile)
if __name__ == "__main__":
import sys
sys.exit(main(sys.argv[1:]))

View File

@@ -45,96 +45,6 @@ maxStackLimit = 513
# maxstack operator has been deprecated. max stack is now always 513.
class StopHintCountEvent(Exception):
pass
class _DesubroutinizingT2Decompiler(psCharStrings.SimpleT2Decompiler):
stop_hintcount_ops = (
"op_hintmask",
"op_cntrmask",
"op_rmoveto",
"op_hmoveto",
"op_vmoveto",
)
def __init__(self, localSubrs, globalSubrs, private=None):
psCharStrings.SimpleT2Decompiler.__init__(
self, localSubrs, globalSubrs, private
)
def execute(self, charString):
self.need_hintcount = True # until proven otherwise
for op_name in self.stop_hintcount_ops:
setattr(self, op_name, self.stop_hint_count)
if hasattr(charString, "_desubroutinized"):
# If a charstring has already been desubroutinized, we will still
# need to execute it if we need to count hints in order to
# compute the byte length for mask arguments, and haven't finished
# counting hint pairs.
if self.need_hintcount and self.callingStack:
try:
psCharStrings.SimpleT2Decompiler.execute(self, charString)
except StopHintCountEvent:
del self.callingStack[-1]
return
charString._patches = []
psCharStrings.SimpleT2Decompiler.execute(self, charString)
desubroutinized = charString.program[:]
for idx, expansion in reversed(charString._patches):
assert idx >= 2
assert desubroutinized[idx - 1] in [
"callsubr",
"callgsubr",
], desubroutinized[idx - 1]
assert type(desubroutinized[idx - 2]) == int
if expansion[-1] == "return":
expansion = expansion[:-1]
desubroutinized[idx - 2 : idx] = expansion
if not self.private.in_cff2:
if "endchar" in desubroutinized:
# Cut off after first endchar
desubroutinized = desubroutinized[
: desubroutinized.index("endchar") + 1
]
else:
if not len(desubroutinized) or desubroutinized[-1] != "return":
desubroutinized.append("return")
charString._desubroutinized = desubroutinized
del charString._patches
def op_callsubr(self, index):
subr = self.localSubrs[self.operandStack[-1] + self.localBias]
psCharStrings.SimpleT2Decompiler.op_callsubr(self, index)
self.processSubr(index, subr)
def op_callgsubr(self, index):
subr = self.globalSubrs[self.operandStack[-1] + self.globalBias]
psCharStrings.SimpleT2Decompiler.op_callgsubr(self, index)
self.processSubr(index, subr)
def stop_hint_count(self, *args):
self.need_hintcount = False
for op_name in self.stop_hintcount_ops:
setattr(self, op_name, None)
cs = self.callingStack[-1]
if hasattr(cs, "_desubroutinized"):
raise StopHintCountEvent()
def op_hintmask(self, index):
psCharStrings.SimpleT2Decompiler.op_hintmask(self, index)
if self.need_hintcount:
self.stop_hint_count()
def processSubr(self, index, subr):
cs = self.callingStack[-1]
if not hasattr(cs, "_desubroutinized"):
cs._patches.append((index, subr._desubroutinized))
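The `_patches` list collected by `processSubr` is applied in `execute` via the reversed loop over `charString._patches`. A standalone sketch of that splice (hypothetical helper; assumes each expansion already has its trailing `return` stripped): each patch records the program index just past a `callsubr`/`callgsubr`, and splicing in reverse order keeps the earlier indices valid.

```python
def apply_patches(program, patches):
    """Replace each (subr number, call operator) pair with its expansion.

    `patches` is a list of (idx, expansion) where idx points just past
    the call operator, so the two tokens at idx-2 and idx-1 are the
    biased subroutine number and the call operator itself.
    """
    out = program[:]
    for idx, expansion in reversed(patches):
        out[idx - 2 : idx] = expansion
    return out

# One call to subroutine "3", whose body draws a line:
print(apply_patches([3, "callsubr", "endchar"],
                    [(2, [10, 20, "rlineto"])]))
# [10, 20, 'rlineto', 'endchar']
```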
class CFFFontSet(object):
"""A CFF font "file" can contain more than one font, although this is
extremely rare (and not allowed within OpenType fonts).
@@ -389,115 +299,29 @@ class CFFFontSet(object):
self.minor = int(attrs["value"])
def convertCFFToCFF2(self, otFont):
"""Converts this object from CFF format to CFF2 format. This conversion
is done 'in-place'. The conversion cannot be reversed.
from .CFFToCFF2 import _convertCFFToCFF2
This assumes a decompiled CFF table. (i.e. that the object has been
filled via :meth:`decompile`.)"""
self.major = 2
cff2GetGlyphOrder = self.otFont.getGlyphOrder
topDictData = TopDictIndex(None, cff2GetGlyphOrder)
topDictData.items = self.topDictIndex.items
self.topDictIndex = topDictData
topDict = topDictData[0]
if hasattr(topDict, "Private"):
privateDict = topDict.Private
else:
privateDict = None
opOrder = buildOrder(topDictOperators2)
topDict.order = opOrder
topDict.cff2GetGlyphOrder = cff2GetGlyphOrder
for entry in topDictOperators:
key = entry[1]
if key not in opOrder:
if key in topDict.rawDict:
del topDict.rawDict[key]
if hasattr(topDict, key):
delattr(topDict, key)
_convertCFFToCFF2(self, otFont)
if not hasattr(topDict, "FDArray"):
fdArray = topDict.FDArray = FDArrayIndex()
fdArray.strings = None
fdArray.GlobalSubrs = topDict.GlobalSubrs
topDict.GlobalSubrs.fdArray = fdArray
charStrings = topDict.CharStrings
if charStrings.charStringsAreIndexed:
charStrings.charStringsIndex.fdArray = fdArray
else:
charStrings.fdArray = fdArray
fontDict = FontDict()
fontDict.setCFF2(True)
fdArray.append(fontDict)
fontDict.Private = privateDict
privateOpOrder = buildOrder(privateDictOperators2)
for entry in privateDictOperators:
key = entry[1]
if key not in privateOpOrder:
if key in privateDict.rawDict:
# print "Removing private dict", key
del privateDict.rawDict[key]
if hasattr(privateDict, key):
delattr(privateDict, key)
# print "Removing privateDict attr", key
else:
# clean up the PrivateDicts in the fdArray
fdArray = topDict.FDArray
privateOpOrder = buildOrder(privateDictOperators2)
for fontDict in fdArray:
fontDict.setCFF2(True)
for key in fontDict.rawDict.keys():
if key not in fontDict.order:
del fontDict.rawDict[key]
if hasattr(fontDict, key):
delattr(fontDict, key)
def convertCFF2ToCFF(self, otFont):
from .CFF2ToCFF import _convertCFF2ToCFF
privateDict = fontDict.Private
for entry in privateDictOperators:
key = entry[1]
if key not in privateOpOrder:
if key in privateDict.rawDict:
# print "Removing private dict", key
del privateDict.rawDict[key]
if hasattr(privateDict, key):
delattr(privateDict, key)
# print "Removing privateDict attr", key
# At this point, the Subrs and Charstrings are all still T2Charstring class
# easiest to fix this by compiling, then decompiling again
file = BytesIO()
self.compile(file, otFont, isCFF2=True)
file.seek(0)
self.decompile(file, otFont, isCFF2=True)
_convertCFF2ToCFF(self, otFont)
def desubroutinize(self):
for fontName in self.fontNames:
font = self[fontName]
cs = font.CharStrings
for g in font.charset:
c, _ = cs.getItemAndSelector(g)
c.decompile()
subrs = getattr(c.private, "Subrs", [])
decompiler = _DesubroutinizingT2Decompiler(
subrs, c.globalSubrs, c.private
)
decompiler.execute(c)
c.program = c._desubroutinized
del c._desubroutinized
# Delete all the local subrs
if hasattr(font, "FDArray"):
for fd in font.FDArray:
pd = fd.Private
if hasattr(pd, "Subrs"):
del pd.Subrs
if "Subrs" in pd.rawDict:
del pd.rawDict["Subrs"]
else:
pd = font.Private
if hasattr(pd, "Subrs"):
del pd.Subrs
if "Subrs" in pd.rawDict:
del pd.rawDict["Subrs"]
# as well as the global subrs
self.GlobalSubrs.clear()
from .transforms import desubroutinize
desubroutinize(self)
def remove_hints(self):
from .transforms import remove_hints
remove_hints(self)
def remove_unused_subroutines(self):
from .transforms import remove_unused_subroutines
remove_unused_subroutines(self)
class CFFWriter(object):
@@ -764,8 +588,8 @@ class Index(object):
compilerClass = IndexCompiler
def __init__(self, file=None, isCFF2=None):
assert (isCFF2 is None) == (file is None)
self.items = []
self.offsets = offsets = []
name = self.__class__.__name__
if file is None:
return
@@ -782,7 +606,6 @@ class Index(object):
offSize = readCard8(file)
log.log(DEBUG, " index count: %s offSize: %s", count, offSize)
assert offSize <= 4, "offSize too large: %s" % offSize
self.offsets = offsets = []
pad = b"\0" * (4 - offSize)
for index in range(count + 1):
chunk = file.read(offSize)
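The `pad` trick in this hunk (`pad = b"\0" * (4 - offSize)`) is how an INDEX offset of any declared width is decoded: each offset occupies `offSize` bytes (1 to 4), and left-padding with zero bytes lets every width be unpacked as one 4-byte big-endian integer. A minimal sketch of that decoding step:

```python
import struct

def read_offset(chunk, off_size):
    """Decode one CFF INDEX offset stored in `off_size` bytes."""
    pad = b"\0" * (4 - off_size)
    # Zero-extend to 4 bytes, then unpack as unsigned big-endian.
    return struct.unpack(">L", pad + chunk)[0]

print(read_offset(b"\x01\x02", 2))          # 258
print(read_offset(b"\xff", 1))              # 255
print(read_offset(b"\x00\x00\x01\x00", 4))  # 256
```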
@@ -960,7 +783,6 @@ class TopDictIndex(Index):
compilerClass = TopDictIndexCompiler
def __init__(self, file=None, cff2GetGlyphOrder=None, topSize=0, isCFF2=None):
assert (isCFF2 is None) == (file is None)
self.cff2GetGlyphOrder = cff2GetGlyphOrder
if file is not None and isCFF2:
self._isCFF2 = isCFF2
@@ -1050,6 +872,7 @@ class VarStoreData(object):
reader = OTTableReader(self.data, globalState)
self.otVarStore = ot.VarStore()
self.otVarStore.decompile(reader, self.font)
self.data = None
return self
def compile(self):
@@ -1647,7 +1470,7 @@ class CharsetConverter(SimpleConverter):
else: # offset == 0 -> no charset data.
if isCID or "CharStrings" not in parent.rawDict:
# We get here only when processing fontDicts from the FDArray of
# CFF-CID fonts. Only the real topDict references the chrset.
# CFF-CID fonts. Only the real topDict references the charset.
assert value == 0
charset = None
elif value == 0:
@@ -2860,9 +2683,11 @@ class PrivateDict(BaseDict):
# Provide dummy values. This avoids needing to provide
# an isCFF2 state in a lot of places.
self.nominalWidthX = self.defaultWidthX = None
self._isCFF2 = True
else:
self.defaults = buildDefaults(privateDictOperators)
self.order = buildOrder(privateDictOperators)
self._isCFF2 = False
@property
def in_cff2(self):

View File

@@ -43,10 +43,8 @@ def programToCommands(program, getNumRegions=None):
hintmask/cntrmask argument, as well as stray arguments at the end of the
program (🤷).
'getNumRegions' may be None, or a callable object. It must return the
number of regions. 'getNumRegions' takes a single argument, vsindex. If
the vsindex argument is None, getNumRegions returns the default number
of regions for the charstring, else it returns the numRegions for
the vsindex.
number of regions. 'getNumRegions' takes a single argument, vsindex. It
returns the numRegions for the vsindex.
The Charstring may or may not start with a width value. If the first
non-blend operator has an odd number of arguments, then the first argument is
a width, and is popped off. This is complicated with blend operators, as
@@ -61,7 +59,7 @@ def programToCommands(program, getNumRegions=None):
"""
seenWidthOp = False
vsIndex = None
vsIndex = 0
lenBlendStack = 0
lastBlendIndex = 0
commands = []
@@ -813,7 +811,7 @@ if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(
"fonttools cffLib.specialer",
"fonttools cffLib.specializer",
description="CFF CharString generalizer/specializer",
)
parser.add_argument("program", metavar="command", nargs="*", help="Commands.")

View File

@@ -0,0 +1,483 @@
from fontTools.misc.psCharStrings import (
SimpleT2Decompiler,
T2WidthExtractor,
calcSubrBias,
)
def _uniq_sort(l):
return sorted(set(l))
class StopHintCountEvent(Exception):
pass
class _DesubroutinizingT2Decompiler(SimpleT2Decompiler):
stop_hintcount_ops = (
"op_hintmask",
"op_cntrmask",
"op_rmoveto",
"op_hmoveto",
"op_vmoveto",
)
def __init__(self, localSubrs, globalSubrs, private=None):
SimpleT2Decompiler.__init__(self, localSubrs, globalSubrs, private)
def execute(self, charString):
self.need_hintcount = True # until proven otherwise
for op_name in self.stop_hintcount_ops:
setattr(self, op_name, self.stop_hint_count)
if hasattr(charString, "_desubroutinized"):
# If a charstring has already been desubroutinized, we will still
# need to execute it if we need to count hints in order to
# compute the byte length for mask arguments, and haven't finished
# counting hints pairs.
if self.need_hintcount and self.callingStack:
try:
SimpleT2Decompiler.execute(self, charString)
except StopHintCountEvent:
del self.callingStack[-1]
return
charString._patches = []
SimpleT2Decompiler.execute(self, charString)
desubroutinized = charString.program[:]
for idx, expansion in reversed(charString._patches):
assert idx >= 2
assert desubroutinized[idx - 1] in [
"callsubr",
"callgsubr",
], desubroutinized[idx - 1]
assert type(desubroutinized[idx - 2]) == int
if expansion[-1] == "return":
expansion = expansion[:-1]
desubroutinized[idx - 2 : idx] = expansion
if not self.private.in_cff2:
if "endchar" in desubroutinized:
# Cut off after first endchar
desubroutinized = desubroutinized[
: desubroutinized.index("endchar") + 1
]
charString._desubroutinized = desubroutinized
del charString._patches
def op_callsubr(self, index):
subr = self.localSubrs[self.operandStack[-1] + self.localBias]
SimpleT2Decompiler.op_callsubr(self, index)
self.processSubr(index, subr)
def op_callgsubr(self, index):
subr = self.globalSubrs[self.operandStack[-1] + self.globalBias]
SimpleT2Decompiler.op_callgsubr(self, index)
self.processSubr(index, subr)
def stop_hint_count(self, *args):
self.need_hintcount = False
for op_name in self.stop_hintcount_ops:
setattr(self, op_name, None)
cs = self.callingStack[-1]
if hasattr(cs, "_desubroutinized"):
raise StopHintCountEvent()
def op_hintmask(self, index):
SimpleT2Decompiler.op_hintmask(self, index)
if self.need_hintcount:
self.stop_hint_count()
def processSubr(self, index, subr):
cs = self.callingStack[-1]
if not hasattr(cs, "_desubroutinized"):
cs._patches.append((index, subr._desubroutinized))
def desubroutinize(cff):
for fontName in cff.fontNames:
font = cff[fontName]
cs = font.CharStrings
for c in cs.values():
c.decompile()
subrs = getattr(c.private, "Subrs", [])
decompiler = _DesubroutinizingT2Decompiler(subrs, c.globalSubrs, c.private)
decompiler.execute(c)
c.program = c._desubroutinized
del c._desubroutinized
# Delete all the local subrs
if hasattr(font, "FDArray"):
for fd in font.FDArray:
pd = fd.Private
if hasattr(pd, "Subrs"):
del pd.Subrs
if "Subrs" in pd.rawDict:
del pd.rawDict["Subrs"]
else:
pd = font.Private
if hasattr(pd, "Subrs"):
del pd.Subrs
if "Subrs" in pd.rawDict:
del pd.rawDict["Subrs"]
# as well as the global subrs
cff.GlobalSubrs.clear()
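The decompiler above records call sites as patches and splices subroutine bodies in afterwards; the net effect can be sketched with a naive recursive inliner (illustrative only — it handles `callsubr` but not `callgsubr`, hint counting, or `endchar` truncation):

```python
def inline_calls(program, subrs, bias):
    # Replace each biased "callsubr" with the subroutine body,
    # dropping the trailing "return" so the program stays flat.
    out = []
    for tok in program:
        if tok == "callsubr":
            idx = out.pop() + bias  # the operand precedes the operator
            body = inline_calls(subrs[idx], subrs, bias)
            if body and body[-1] == "return":
                body = body[:-1]
            out.extend(body)
        else:
            out.append(tok)
    return out

subrs = [[5, 6, "rlineto", "return"]]
prog = [1, 2, "rmoveto", -107, "callsubr", "endchar"]
print(inline_calls(prog, subrs, bias=107))
# [1, 2, 'rmoveto', 5, 6, 'rlineto', 'endchar']
```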
class _MarkingT2Decompiler(SimpleT2Decompiler):
def __init__(self, localSubrs, globalSubrs, private):
SimpleT2Decompiler.__init__(self, localSubrs, globalSubrs, private)
for subrs in [localSubrs, globalSubrs]:
if subrs and not hasattr(subrs, "_used"):
subrs._used = set()
def op_callsubr(self, index):
self.localSubrs._used.add(self.operandStack[-1] + self.localBias)
SimpleT2Decompiler.op_callsubr(self, index)
def op_callgsubr(self, index):
self.globalSubrs._used.add(self.operandStack[-1] + self.globalBias)
SimpleT2Decompiler.op_callgsubr(self, index)
class _DehintingT2Decompiler(T2WidthExtractor):
class Hints(object):
def __init__(self):
# Whether calling this charstring produces any hint stems
# Note that if a charstring starts with hintmask, it will
# have has_hint set to True, because it *might* produce an
# implicit vstem if called under certain conditions.
self.has_hint = False
# Index to start at to drop all hints
self.last_hint = 0
# Index up to which we know more hints are possible.
# Only relevant if status is 0 or 1.
self.last_checked = 0
# The status means:
# 0: after dropping hints, this charstring is empty
# 1: after dropping hints, there may be more hints
# continuing after this, or there might be
# other things. Not clear yet.
# 2: no more hints possible after this charstring
self.status = 0
# Has hintmask instructions; not recursive
self.has_hintmask = False
# List of indices of calls to empty subroutines to remove.
self.deletions = []
pass
def __init__(
self, css, localSubrs, globalSubrs, nominalWidthX, defaultWidthX, private=None
):
self._css = css
T2WidthExtractor.__init__(
self, localSubrs, globalSubrs, nominalWidthX, defaultWidthX
)
self.private = private
def execute(self, charString):
old_hints = charString._hints if hasattr(charString, "_hints") else None
charString._hints = self.Hints()
T2WidthExtractor.execute(self, charString)
hints = charString._hints
if hints.has_hint or hints.has_hintmask:
self._css.add(charString)
if hints.status != 2:
# Check from last_check, make sure we didn't have any operators.
for i in range(hints.last_checked, len(charString.program) - 1):
if isinstance(charString.program[i], str):
hints.status = 2
break
else:
hints.status = 1 # There's *something* here
hints.last_checked = len(charString.program)
if old_hints:
assert hints.__dict__ == old_hints.__dict__
def op_callsubr(self, index):
subr = self.localSubrs[self.operandStack[-1] + self.localBias]
T2WidthExtractor.op_callsubr(self, index)
self.processSubr(index, subr)
def op_callgsubr(self, index):
subr = self.globalSubrs[self.operandStack[-1] + self.globalBias]
T2WidthExtractor.op_callgsubr(self, index)
self.processSubr(index, subr)
def op_hstem(self, index):
T2WidthExtractor.op_hstem(self, index)
self.processHint(index)
def op_vstem(self, index):
T2WidthExtractor.op_vstem(self, index)
self.processHint(index)
def op_hstemhm(self, index):
T2WidthExtractor.op_hstemhm(self, index)
self.processHint(index)
def op_vstemhm(self, index):
T2WidthExtractor.op_vstemhm(self, index)
self.processHint(index)
def op_hintmask(self, index):
rv = T2WidthExtractor.op_hintmask(self, index)
self.processHintmask(index)
return rv
def op_cntrmask(self, index):
rv = T2WidthExtractor.op_cntrmask(self, index)
self.processHintmask(index)
return rv
def processHintmask(self, index):
cs = self.callingStack[-1]
hints = cs._hints
hints.has_hintmask = True
if hints.status != 2:
# Check from last_check, see if we may be an implicit vstem
for i in range(hints.last_checked, index - 1):
if isinstance(cs.program[i], str):
hints.status = 2
break
else:
# We are an implicit vstem
hints.has_hint = True
hints.last_hint = index + 1
hints.status = 0
hints.last_checked = index + 1
def processHint(self, index):
cs = self.callingStack[-1]
hints = cs._hints
hints.has_hint = True
hints.last_hint = index
hints.last_checked = index
def processSubr(self, index, subr):
cs = self.callingStack[-1]
hints = cs._hints
subr_hints = subr._hints
# Check from last_check, make sure we didn't have
# any operators.
if hints.status != 2:
for i in range(hints.last_checked, index - 1):
if isinstance(cs.program[i], str):
hints.status = 2
break
hints.last_checked = index
if hints.status != 2:
if subr_hints.has_hint:
hints.has_hint = True
# Decide where to chop off from
if subr_hints.status == 0:
hints.last_hint = index
else:
hints.last_hint = index - 2 # Leave the subr call in
elif subr_hints.status == 0:
hints.deletions.append(index)
hints.status = max(hints.status, subr_hints.status)
def _cs_subset_subroutines(charstring, subrs, gsubrs):
p = charstring.program
for i in range(1, len(p)):
if p[i] == "callsubr":
assert isinstance(p[i - 1], int)
p[i - 1] = subrs._used.index(p[i - 1] + subrs._old_bias) - subrs._new_bias
elif p[i] == "callgsubr":
assert isinstance(p[i - 1], int)
p[i - 1] = (
gsubrs._used.index(p[i - 1] + gsubrs._old_bias) - gsubrs._new_bias
)
def _cs_drop_hints(charstring):
hints = charstring._hints
if hints.deletions:
p = charstring.program
for idx in reversed(hints.deletions):
del p[idx - 2 : idx]
if hints.has_hint:
assert not hints.deletions or hints.last_hint <= hints.deletions[0]
charstring.program = charstring.program[hints.last_hint :]
if not charstring.program:
# TODO CFF2 no need for endchar.
charstring.program.append("endchar")
if hasattr(charstring, "width"):
# Insert width back if needed
if charstring.width != charstring.private.defaultWidthX:
# For CFF2 charstrings, this should never happen
assert (
charstring.private.defaultWidthX is not None
), "CFF2 CharStrings must not have an initial width value"
charstring.program.insert(
0, charstring.width - charstring.private.nominalWidthX
)
if hints.has_hintmask:
i = 0
p = charstring.program
while i < len(p):
if p[i] in ["hintmask", "cntrmask"]:
assert i + 1 <= len(p)
del p[i : i + 2]
continue
i += 1
assert len(charstring.program)
del charstring._hints
def remove_hints(cff, *, removeUnusedSubrs: bool = True):
for fontname in cff.keys():
font = cff[fontname]
cs = font.CharStrings
# This can be tricky, but doesn't have to. What we do is:
#
# - Run all used glyph charstrings and recurse into subroutines,
# - For each charstring (including subroutines), if it has any
# of the hint stem operators, we mark it as such.
# Upon returning, for each charstring we note all the
# subroutine calls it makes that (recursively) contain a stem,
# - Dropping hinting then consists of the following two ops:
# * Drop the piece of the program in each charstring before the
# last call to a stem op or a stem-calling subroutine,
# * Drop all hintmask operations.
# - It's trickier... A hintmask right after hints and a few numbers
# will act as an implicit vstemhm. As such, we track whether
# we have seen any non-hint operators so far and do the right
# thing, recursively... Good luck understanding that :(
css = set()
for c in cs.values():
c.decompile()
subrs = getattr(c.private, "Subrs", [])
decompiler = _DehintingT2Decompiler(
css,
subrs,
c.globalSubrs,
c.private.nominalWidthX,
c.private.defaultWidthX,
c.private,
)
decompiler.execute(c)
c.width = decompiler.width
for charstring in css:
_cs_drop_hints(charstring)
del css
# Drop font-wide hinting values
all_privs = []
if hasattr(font, "FDArray"):
all_privs.extend(fd.Private for fd in font.FDArray)
else:
all_privs.append(font.Private)
for priv in all_privs:
for k in [
"BlueValues",
"OtherBlues",
"FamilyBlues",
"FamilyOtherBlues",
"BlueScale",
"BlueShift",
"BlueFuzz",
"StemSnapH",
"StemSnapV",
"StdHW",
"StdVW",
"ForceBold",
"LanguageGroup",
"ExpansionFactor",
]:
if hasattr(priv, k):
setattr(priv, k, None)
if removeUnusedSubrs:
remove_unused_subroutines(cff)
def _pd_delete_empty_subrs(private_dict):
if hasattr(private_dict, "Subrs") and not private_dict.Subrs:
if "Subrs" in private_dict.rawDict:
del private_dict.rawDict["Subrs"]
del private_dict.Subrs
def remove_unused_subroutines(cff):
for fontname in cff.keys():
font = cff[fontname]
cs = font.CharStrings
# Renumber subroutines to remove unused ones
# Mark all used subroutines
for c in cs.values():
subrs = getattr(c.private, "Subrs", [])
decompiler = _MarkingT2Decompiler(subrs, c.globalSubrs, c.private)
decompiler.execute(c)
all_subrs = [font.GlobalSubrs]
if hasattr(font, "FDArray"):
all_subrs.extend(
fd.Private.Subrs
for fd in font.FDArray
if hasattr(fd.Private, "Subrs") and fd.Private.Subrs
)
elif hasattr(font.Private, "Subrs") and font.Private.Subrs:
all_subrs.append(font.Private.Subrs)
subrs = set(subrs) # Remove duplicates
# Prepare
for subrs in all_subrs:
if not hasattr(subrs, "_used"):
subrs._used = set()
subrs._used = _uniq_sort(subrs._used)
subrs._old_bias = calcSubrBias(subrs)
subrs._new_bias = calcSubrBias(subrs._used)
# Renumber glyph charstrings
for c in cs.values():
subrs = getattr(c.private, "Subrs", None)
_cs_subset_subroutines(c, subrs, font.GlobalSubrs)
# Renumber subroutines themselves
for subrs in all_subrs:
if subrs == font.GlobalSubrs:
if not hasattr(font, "FDArray") and hasattr(font.Private, "Subrs"):
local_subrs = font.Private.Subrs
else:
local_subrs = None
else:
local_subrs = subrs
subrs.items = [subrs.items[i] for i in subrs._used]
if hasattr(subrs, "file"):
del subrs.file
if hasattr(subrs, "offsets"):
del subrs.offsets
for subr in subrs.items:
_cs_subset_subroutines(subr, local_subrs, font.GlobalSubrs)
# Delete local SubrsIndex if empty
if hasattr(font, "FDArray"):
for fd in font.FDArray:
_pd_delete_empty_subrs(fd.Private)
else:
_pd_delete_empty_subrs(font.Private)
# Cleanup
for subrs in all_subrs:
del subrs._used, subrs._old_bias, subrs._new_bias
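The renumbering above depends on the subroutine bias rule that `calcSubrBias` implements; per the CFF specification, the bias is a function of the subroutine count, restated here as a standalone sketch:

```python
def calc_subr_bias(subrs):
    # CFF spec, Local/Global Subrs INDEXes: subroutine operands are
    # biased so that small subroutine numbers encode in one byte.
    count = len(subrs)
    if count < 1240:
        return 107
    elif count < 33900:
        return 1131
    else:
        return 32768

print(calc_subr_bias(range(10)))     # 107
print(calc_subr_bias(range(2000)))   # 1131
print(calc_subr_bias(range(40000)))  # 32768
```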

View File

@@ -13,6 +13,9 @@ from operator import add
from functools import reduce
__all__ = ["optimizeWidths", "main"]
class missingdict(dict):
def __init__(self, missing_func):
self.missing_func = missing_func

View File

@@ -1,5 +1,5 @@
import sys
from .cli import main
from .cli import _main as main
if __name__ == "__main__":

View File

@@ -45,7 +45,6 @@ def run_benchmark(module, function, setup_suffix="", repeat=5, number=1000):
def main():
"""Benchmark the cu2qu algorithm performance."""
run_benchmark("cu2qu", "curve_to_quadratic")
run_benchmark("cu2qu", "curves_to_quadratic")

View File

@@ -64,7 +64,7 @@ def _copytree(input_path, output_path):
shutil.copytree(input_path, output_path)
def main(args=None):
def _main(args=None):
"""Convert a UFO font from cubic to quadratic curves"""
parser = argparse.ArgumentParser(prog="cu2qu")
parser.add_argument("--version", action="version", version=fontTools.__version__)

View File

@@ -880,8 +880,13 @@ class Builder(object):
# l.lookup_index will be None when a lookup is not needed
# for the table under construction. For example, substitution
# rules will have no lookup_index while building GPOS tables.
# We also deduplicate lookup indices, as they only get applied once
# within a given feature:
# https://github.com/fonttools/fonttools/issues/2946
lookup_indices = tuple(
[l.lookup_index for l in lookups if l.lookup_index is not None]
dict.fromkeys(
l.lookup_index for l in lookups if l.lookup_index is not None
)
)
size_feature = tag == "GPOS" and feature_tag == "size"
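The `dict.fromkeys` construction above is the standard order-preserving deduplication idiom — a `set` would lose the original lookup order, which matters for layout tables. For example:

```python
# Deduplicate while preserving first-seen order; dict keys keep
# insertion order (guaranteed since Python 3.7).
lookup_indices = [3, 1, 3, 2, 1]
deduped = tuple(dict.fromkeys(lookup_indices))
print(deduped)  # (3, 1, 2)
```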
@@ -1281,10 +1286,7 @@ class Builder(object):
self, location, prefix, glyph, suffix, replacements, forceChain=False
):
if prefix or suffix or forceChain:
chain = self.get_lookup_(location, ChainContextSubstBuilder)
sub = self.get_chained_lookup_(location, MultipleSubstBuilder)
sub.mapping[glyph] = replacements
chain.rules.append(ChainContextualRule(prefix, [{glyph}], suffix, [sub]))
self.add_multi_subst_chained_(location, prefix, glyph, suffix, replacements)
return
lookup = self.get_lookup_(location, MultipleSubstBuilder)
if glyph in lookup.mapping:
@@ -1364,7 +1366,7 @@ class Builder(object):
# https://github.com/fonttools/fonttools/issues/512
# https://github.com/fonttools/fonttools/issues/2150
chain = self.get_lookup_(location, ChainContextSubstBuilder)
sub = chain.find_chainable_single_subst(mapping)
sub = chain.find_chainable_subst(mapping, SingleSubstBuilder)
if sub is None:
sub = self.get_chained_lookup_(location, SingleSubstBuilder)
sub.mapping.update(mapping)
@@ -1372,6 +1374,19 @@ class Builder(object):
ChainContextualRule(prefix, [list(mapping.keys())], suffix, [sub])
)
def add_multi_subst_chained_(self, location, prefix, glyph, suffix, replacements):
if not all(prefix) or not all(suffix):
raise FeatureLibError(
"Empty glyph class in contextual substitution", location
)
# https://github.com/fonttools/fonttools/issues/3551
chain = self.get_lookup_(location, ChainContextSubstBuilder)
sub = chain.find_chainable_subst({glyph: replacements}, MultipleSubstBuilder)
if sub is None:
sub = self.get_chained_lookup_(location, MultipleSubstBuilder)
sub.mapping[glyph] = replacements
chain.rules.append(ChainContextualRule(prefix, [{glyph}], suffix, [sub]))
# GSUB 8
def add_reverse_chain_single_subst(self, location, old_prefix, old_suffix, mapping):
if not mapping:

View File

@@ -269,7 +269,7 @@ class IncludingLexer(object):
fileobj, closing = file_or_path, False
else:
filename, closing = file_or_path, True
fileobj = open(filename, "r", encoding="utf-8")
fileobj = open(filename, "r", encoding="utf-8-sig")
data = fileobj.read()
filename = getattr(fileobj, "name", None)
if closing:

View File

@@ -75,10 +75,11 @@ class VariableScalar:
return self.values[key]
def value_at_location(self, location, model_cache=None, avar=None):
loc = location
loc = Location(location)
if loc in self.values.keys():
return self.values[loc]
values = list(self.values.values())
loc = dict(self._normalized_location(loc))
return self.model(model_cache, avar).interpolateFromMasters(loc, values)
def model(self, model_cache=None, avar=None):

View File

@@ -656,11 +656,7 @@ class FontBuilder(object):
if validateGlyphFormat and self.font["head"].glyphDataFormat == 0:
for name, g in glyphs.items():
if g.isVarComposite():
raise ValueError(
f"Glyph {name!r} is a variable composite, but glyphDataFormat=0"
)
elif g.numberOfContours > 0 and any(f & flagCubic for f in g.flags):
if g.numberOfContours > 0 and any(f & flagCubic for f in g.flags):
raise ValueError(
f"Glyph {name!r} has cubic Bezier outlines, but glyphDataFormat=0; "
"either convert to quadratics with cu2qu or set glyphDataFormat=1."

View File

@@ -20,7 +20,8 @@ def main():
continue
try:
description = imports.main.__doc__
if description:
# Cython modules seem to return "main()" as the docstring
if description and description != "main()":
pkg = pkg.replace("fontTools.", "").replace(".__main__", "")
# show the docstring's first line only
descriptions[pkg] = description.splitlines()[0]

View File

@@ -27,18 +27,18 @@ class Merger(object):
This class merges multiple files into a single OpenType font, taking into
account complexities such as OpenType layout (``GSUB``/``GPOS``) tables and
cross-font metrics (e.g. ``hhea.ascent`` is set to the maximum value across
all the fonts).
cross-font metrics (for example ``hhea.ascent`` is set to the maximum value
across all the fonts).
If multiple glyphs map to the same Unicode value, and the glyphs are considered
sufficiently different (that is, they differ in any of paths, widths, or
height), then subsequent glyphs are renamed and a lookup in the ``locl``
feature will be created to disambiguate them. For example, if the arguments
are an Arabic font and a Latin font and both contain a set of parentheses,
the Latin glyphs will be renamed to ``parenleft#1`` and ``parenright#1``,
the Latin glyphs will be renamed to ``parenleft.1`` and ``parenright.1``,
and a lookup will be inserted into the ``locl`` feature (creating it if
necessary) under the ``latn`` script to substitute ``parenleft`` with
``parenleft#1`` etc.
``parenleft.1`` etc.
Restrictions:

View File

@@ -225,7 +225,7 @@ def merge(self, m, tables):
g.removeHinting()
# Expand composite glyphs to load their
# composite glyph names.
if g.isComposite() or g.isVarComposite():
if g.isComposite():
g.expand(table)
return DefaultTable.merge(self, m, tables)
@@ -294,6 +294,8 @@ def merge(self, m, tables):
extractor.execute(c)
width = extractor.width
if width is not defaultWidthXToken:
# The following will be wrong if the width is added
# by a subroutine. Ouch!
c.program.pop(0)
else:
width = defaultWidthX

View File

@@ -18,6 +18,9 @@ except (AttributeError, ImportError):
COMPILED = False
EPSILON = 1e-9
Intersection = namedtuple("Intersection", ["pt", "t1", "t2"])
@@ -92,7 +95,7 @@ def _split_cubic_into_two(p0, p1, p2, p3):
def _calcCubicArcLengthCRecurse(mult, p0, p1, p2, p3):
arch = abs(p0 - p3)
box = abs(p0 - p1) + abs(p1 - p2) + abs(p2 - p3)
if arch * mult >= box:
if arch * mult + EPSILON >= box:
return (arch + box) * 0.5
else:
one, two = _split_cubic_into_two(p0, p1, p2, p3)

View File

@@ -0,0 +1,12 @@
from itertools import *
# Python 3.12:
if "batched" not in globals():
# https://docs.python.org/3/library/itertools.html#itertools.batched
def batched(iterable, n):
# batched('ABCDEFG', 3) --> ABC DEF G
if n < 1:
raise ValueError("n must be at least one")
it = iter(iterable)
while batch := tuple(islice(it, n)):
yield batch
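The backport above behaves like Python 3.12's `itertools.batched`, yielding successive tuples of at most `n` items; restated standalone:

```python
from itertools import islice

def batched(iterable, n):
    # batched('ABCDEFG', 3) --> ABC DEF G
    if n < 1:
        raise ValueError("n must be at least one")
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch

print(list(batched("ABCDEFG", 3)))
# [('A', 'B', 'C'), ('D', 'E', 'F'), ('G',)]
```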

View File

@@ -0,0 +1,42 @@
from collections import UserDict, UserList
__all__ = ["LazyDict", "LazyList"]
class LazyDict(UserDict):
def __init__(self, data):
super().__init__()
self.data = data
def __getitem__(self, k):
v = self.data[k]
if callable(v):
v = v(k)
self.data[k] = v
return v
class LazyList(UserList):
def __getitem__(self, k):
if isinstance(k, slice):
indices = range(*k.indices(len(self)))
return [self[i] for i in indices]
v = self.data[k]
if callable(v):
v = v(k)
self.data[k] = v
return v
def __add__(self, other):
if isinstance(other, LazyList):
other = list(other)
elif isinstance(other, list):
pass
else:
return NotImplemented
return list(self) + other
def __radd__(self, other):
if not isinstance(other, list):
return NotImplemented
return other + list(self)
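The lazy containers above store callables and replace each one with its result on first access, so expensive loads run at most once per key; a minimal standalone demonstration of the `LazyDict` behavior:

```python
from collections import UserDict

class LazyDict(UserDict):
    def __getitem__(self, k):
        v = self.data[k]
        if callable(v):
            v = v(k)          # resolve the thunk...
            self.data[k] = v  # ...and memoize the result
        return v

calls = []

def load(key):
    calls.append(key)
    return key.upper()

d = LazyDict({"a": load})
assert d["a"] == "A"
assert d["a"] == "A"
assert calls == ["a"]  # the loader ran only once
```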

View File

@@ -275,6 +275,24 @@ def encodeFloat(f):
s = s[1:]
elif s[:3] == "-0.":
s = "-" + s[2:]
elif s.endswith("000"):
significantDigits = s.rstrip("0")
s = "%sE%d" % (significantDigits, len(s) - len(significantDigits))
else:
dotIndex = s.find(".")
eIndex = s.find("E")
if dotIndex != -1 and eIndex != -1:
integerPart = s[:dotIndex]
fractionalPart = s[dotIndex + 1 : eIndex]
exponent = int(s[eIndex + 1 :])
newExponent = exponent - len(fractionalPart)
if newExponent == 1:
s = "%s%s0" % (integerPart, fractionalPart)
else:
s = "%s%sE%d" % (integerPart, fractionalPart, newExponent)
if s.startswith((".0", "-.0")):
sign, s = s.split(".", 1)
s = "%s%sE-%d" % (sign, s.lstrip("0"), len(s))
nibbles = []
while s:
c = s[0]
@@ -286,6 +304,8 @@ def encodeFloat(f):
c = "E-"
elif c2 == "+":
s = s[1:]
if s.startswith("0"):
s = s[1:]
nibbles.append(realNibblesDict[c])
nibbles.append(0xF)
if len(nibbles) % 2:

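The new `elif s.endswith("000")` branch compacts trailing zeros into an exponent so the nibble encoding stays short; the string transformation in isolation (an illustrative sketch, not the full encoder):

```python
def compact_trailing_zeros(s):
    # "25000000" -> "25E6": strip trailing zeros and record how many
    # were removed as a decimal exponent suffix.
    if s.endswith("000"):
        significant = s.rstrip("0")
        s = "%sE%d" % (significant, len(s) - len(significant))
    return s

print(compact_trailing_zeros("25000000"))  # 25E6
print(compact_trailing_zeros("123"))       # 123
```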
View File

@@ -64,7 +64,10 @@ def pack(fmt, obj):
elements = []
if not isinstance(obj, dict):
obj = obj.__dict__
for name in names:
string_index = formatstring
if formatstring.startswith(">"):
string_index = formatstring[1:]
for ix, name in enumerate(names.keys()):
value = obj[name]
if name in fixes:
# fixed point conversion
@@ -72,6 +75,13 @@ def pack(fmt, obj):
elif isinstance(value, str):
value = tobytes(value)
elements.append(value)
# Check it fits
try:
struct.pack(names[name], value)
except Exception as e:
raise ValueError(
"Value %s does not fit in format %s for %s" % (value, names[name], name)
) from e
data = struct.pack(*(formatstring,) + tuple(elements))
return data
@@ -87,7 +97,7 @@ def unpack(fmt, data, obj=None):
d = obj.__dict__
elements = struct.unpack(formatstring, data)
for i in range(len(names)):
name = names[i]
name = list(names.keys())[i]
value = elements[i]
if name in fixes:
# fixed point conversion
@@ -141,7 +151,7 @@ def getformat(fmt, keep_pad_byte=False):
except KeyError:
lines = re.split("[\n;]", fmt)
formatstring = ""
names = []
names = {}
fixes = {}
for line in lines:
if _emptyRE.match(line):
@@ -158,7 +168,7 @@ def getformat(fmt, keep_pad_byte=False):
name = m.group(1)
formatchar = m.group(2)
if keep_pad_byte or formatchar != "x":
names.append(name)
names[name] = formatchar
if m.group(3):
# fixed point
before = int(m.group(3))
@@ -167,9 +177,10 @@
if bits not in [8, 16, 32]:
raise Error("fixed point must be 8, 16 or 32 bits long")
formatchar = _fixedpointmappings[bits]
names[name] = formatchar
assert m.group(5) == "F"
fixes[name] = after
formatstring = formatstring + formatchar
formatstring += formatchar
_formatcache[fmt] = formatstring, names, fixes
return formatstring, names, fixes
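The per-field fit check added to `pack` relies on `struct.pack` raising when a value is out of range for its format character; the mechanism in isolation, using only the stdlib:

```python
import struct

def fits(value, formatchar):
    # struct.pack raises struct.error (or OverflowError) when the
    # value does not fit the format, e.g. 256 in an unsigned byte.
    try:
        struct.pack(formatchar, value)
    except (struct.error, OverflowError):
        return False
    return True

assert fits(255, "B")
assert not fits(256, "B")   # unsigned byte overflows
assert not fits(-1, "H")    # unsigned short rejects negatives
```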

View File

@@ -76,16 +76,16 @@ class GreenPen(BasePen):
self.value = 0
def _moveTo(self, p0):
self.__startPoint = p0
self._startPoint = p0
def _closePath(self):
p0 = self._getCurrentPoint()
if p0 != self.__startPoint:
self._lineTo(self.__startPoint)
if p0 != self._startPoint:
self._lineTo(self._startPoint)
def _endPath(self):
p0 = self._getCurrentPoint()
if p0 != self.__startPoint:
if p0 != self._startPoint:
# Green theorem is not defined on open contours.
raise NotImplementedError
@@ -145,19 +145,18 @@ class %s(BasePen):
print(
"""
def _moveTo(self, p0):
self.__startPoint = p0
self._startPoint = p0
def _closePath(self):
p0 = self._getCurrentPoint()
if p0 != self.__startPoint:
self._lineTo(self.__startPoint)
if p0 != self._startPoint:
self._lineTo(self._startPoint)
def _endPath(self):
p0 = self._getCurrentPoint()
if p0 != self.__startPoint:
# Green theorem is not defined on open contours.
if p0 != self._startPoint:
raise OpenContourError(
"Green theorem is not defined on open contours."
"Glyph statistics is not defined on open contours."
)
""",
end="",

View File

@@ -422,6 +422,19 @@ class DecomposedTransform:
tCenterX: float = 0
tCenterY: float = 0
def __bool__(self):
return (
self.translateX != 0
or self.translateY != 0
or self.rotation != 0
or self.scaleX != 1
or self.scaleY != 1
or self.skewX != 0
or self.skewY != 0
or self.tCenterX != 0
or self.tCenterY != 0
)
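With the added `__bool__`, an identity transform is falsy and any deviation from identity is truthy; a self-contained sketch of the class (fields mirrored from the dataclass above):

```python
from dataclasses import dataclass

@dataclass
class DecomposedTransform:
    translateX: float = 0
    translateY: float = 0
    rotation: float = 0
    scaleX: float = 1
    scaleY: float = 1
    skewX: float = 0
    skewY: float = 0
    tCenterX: float = 0
    tCenterY: float = 0

    def __bool__(self):
        # Truthy iff the transform differs from the identity.
        return (
            self.translateX != 0 or self.translateY != 0
            or self.rotation != 0
            or self.scaleX != 1 or self.scaleY != 1
            or self.skewX != 0 or self.skewY != 0
            or self.tCenterX != 0 or self.tCenterY != 0
        )

assert not DecomposedTransform()           # identity is falsy
assert DecomposedTransform(rotation=90.0)  # any deviation is truthy
```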
@classmethod
def fromTransform(self, transform):
# Adapted from an answer on

View File

@@ -61,7 +61,8 @@ class Visitor(object):
if _visitors is None:
break
m = celf._visitors.get(typ, None)
for base in typ.mro():
m = celf._visitors.get(base, None)
if m is not None:
return m

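Walking `typ.mro()` lets a handler registered for a base class also serve its subclasses; a minimal sketch of that dispatch pattern (the `register` helper here is hypothetical, not the fontTools API):

```python
class Visitor:
    _visitors = {}

    @classmethod
    def register(cls, typ):
        def wrapper(fn):
            cls._visitors[typ] = fn
            return fn
        return wrapper

    def visit(self, obj):
        # Walk the MRO so a handler registered for a base class
        # is found for instances of its subclasses.
        for base in type(obj).mro():
            m = self._visitors.get(base)
            if m is not None:
                return m(self, obj)
        return None

class Base: ...
class Derived(Base): ...

@Visitor.register(Base)
def visit_base(self, obj):
    return "base handler"

v = Visitor()
assert v.visit(Derived()) == "base handler"  # found via MRO fallback
```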
View File

@@ -544,6 +544,10 @@ class ChainContextualBuilder(LookupBuilder):
f"{classRuleAttr}Count",
getattr(setForThisRule, f"{classRuleAttr}Count") + 1,
)
for i, classSet in enumerate(classSets):
if not getattr(classSet, classRuleAttr):
# class sets can be null so replace nop sets with None
classSets[i] = None
setattr(st, self.ruleSetAttr_(format=2, chaining=chaining), classSets)
setattr(
st, self.ruleSetAttr_(format=2, chaining=chaining) + "Count", len(classSets)
@@ -781,14 +785,14 @@ class ChainContextSubstBuilder(ChainContextualBuilder):
)
return result
def find_chainable_single_subst(self, mapping):
"""Helper for add_single_subst_chained_()"""
def find_chainable_subst(self, mapping, builder_class):
"""Helper for add_{single,multi}_subst_chained_()"""
res = None
for rule in self.rules[::-1]:
if rule.is_subtable_break:
return res
for sub in rule.lookups:
if isinstance(sub, SingleSubstBuilder) and not any(
if isinstance(sub, builder_class) and not any(
g in mapping and mapping[g] != sub.mapping[g] for g in sub.mapping
):
res = sub

View File

@@ -92,5 +92,5 @@ def maxCtxContextualRule(maxCtx, st, chain):
if not chain:
return max(maxCtx, st.GlyphCount)
elif chain == "Reverse":
return max(maxCtx, st.GlyphCount + st.LookAheadGlyphCount)
return max(maxCtx, 1 + st.LookAheadGlyphCount)
return max(maxCtx, st.InputGlyphCount + st.LookAheadGlyphCount)

View File

@@ -15,6 +15,7 @@ __all__ = ["MomentsPen"]
class MomentsPen(BasePen):
def __init__(self, glyphset=None):
BasePen.__init__(self, glyphset)
@@ -26,17 +27,17 @@ class MomentsPen(BasePen):
self.momentYY = 0
def _moveTo(self, p0):
self.__startPoint = p0
self._startPoint = p0
def _closePath(self):
p0 = self._getCurrentPoint()
if p0 != self.__startPoint:
self._lineTo(self.__startPoint)
if p0 != self._startPoint:
self._lineTo(self._startPoint)
def _endPath(self):
p0 = self._getCurrentPoint()
if p0 != self.__startPoint:
raise OpenContourError("Glyph statistics not defined on open contours.")
if p0 != self._startPoint:
raise OpenContourError("Glyph statistics is not defined on open contours.")
@cython.locals(r0=cython.double)
@cython.locals(r1=cython.double)

View File

@@ -123,7 +123,7 @@ class StatisticsControlPen(StatisticsBase, BasePen):
def _endPath(self):
p0 = self._getCurrentPoint()
if p0 != self.__startPoint:
if p0 != self._startPoint:
raise OpenContourError("Glyph statistics not defined on open contours.")
def _update(self):

View File

@@ -1,6 +1,6 @@
import sys
from .cli import main
from .cli import _main as main
if __name__ == "__main__":

View File

@@ -48,7 +48,6 @@ def run_benchmark(module, function, setup_suffix="", repeat=25, number=1):
def main():
"""Benchmark the qu2cu algorithm performance."""
run_benchmark("qu2cu", "quadratic_to_curves")

View File

@@ -42,7 +42,7 @@ def _font_to_cubic(input_path, output_path=None, **kwargs):
font.save(output_path)
def main(args=None):
def _main(args=None):
"""Convert an OpenType font from quadratic to cubic curves"""
parser = argparse.ArgumentParser(prog="qu2cu")
parser.add_argument("--version", action="version", version=fontTools.__version__)

View File

@@ -14,7 +14,7 @@ from fontTools.misc.cliTools import makeOutputFileName
from fontTools.subset.util import _add_method, _uniq_sort
from fontTools.subset.cff import *
from fontTools.subset.svg import *
from fontTools.varLib import varStore # for subset_varidxes
from fontTools.varLib import varStore, multiVarStore # For monkey-patching
from fontTools.ttLib.tables._n_a_m_e import NameRecordVisitor
import sys
import struct
@@ -2630,6 +2630,88 @@ def closure_glyphs(self, s):
s.glyphs.update(variants)
@_add_method(ttLib.getTableClass("VARC"))
def subset_glyphs(self, s):
indices = self.table.Coverage.subset(s.glyphs)
self.table.VarCompositeGlyphs.VarCompositeGlyph = _list_subset(
self.table.VarCompositeGlyphs.VarCompositeGlyph, indices
)
return bool(self.table.VarCompositeGlyphs.VarCompositeGlyph)
@_add_method(ttLib.getTableClass("VARC"))
def closure_glyphs(self, s):
if self.table.VarCompositeGlyphs is None:
return
glyphMap = {glyphName: i for i, glyphName in enumerate(self.table.Coverage.glyphs)}
glyphRecords = self.table.VarCompositeGlyphs.VarCompositeGlyph
glyphs = s.glyphs
covered = set()
new = set(glyphs)
while new:
oldNew = new
new = set()
for glyphName in oldNew:
if glyphName in covered:
continue
idx = glyphMap.get(glyphName)
if idx is None:
continue
glyph = glyphRecords[idx]
for comp in glyph.components:
name = comp.glyphName
glyphs.add(name)
if name not in covered:
new.add(name)
@_add_method(ttLib.getTableClass("VARC"))
def prune_post_subset(self, font, options):
table = self.table
store = table.MultiVarStore
if store is not None:
usedVarIdxes = set()
table.collect_varidxes(usedVarIdxes)
varidx_map = store.subset_varidxes(usedVarIdxes)
table.remap_varidxes(varidx_map)
axisIndicesList = table.AxisIndicesList.Item
if axisIndicesList is not None:
usedIndices = set()
for glyph in table.VarCompositeGlyphs.VarCompositeGlyph:
for comp in glyph.components:
if comp.axisIndicesIndex is not None:
usedIndices.add(comp.axisIndicesIndex)
usedIndices = sorted(usedIndices)
table.AxisIndicesList.Item = _list_subset(axisIndicesList, usedIndices)
mapping = {old: new for new, old in enumerate(usedIndices)}
for glyph in table.VarCompositeGlyphs.VarCompositeGlyph:
for comp in glyph.components:
if comp.axisIndicesIndex is not None:
comp.axisIndicesIndex = mapping[comp.axisIndicesIndex]
conditionList = table.ConditionList
if conditionList is not None:
conditionTables = conditionList.ConditionTable
usedIndices = set()
for glyph in table.VarCompositeGlyphs.VarCompositeGlyph:
for comp in glyph.components:
if comp.conditionIndex is not None:
usedIndices.add(comp.conditionIndex)
usedIndices = sorted(usedIndices)
conditionList.ConditionTable = _list_subset(conditionTables, usedIndices)
mapping = {old: new for new, old in enumerate(usedIndices)}
for glyph in table.VarCompositeGlyphs.VarCompositeGlyph:
for comp in glyph.components:
if comp.conditionIndex is not None:
comp.conditionIndex = mapping[comp.conditionIndex]
return True
@_add_method(ttLib.getTableClass("MATH"))
def closure_glyphs(self, s):
if self.table.MathVariants:
@@ -2913,7 +2995,8 @@ def prune_post_subset(self, font, options):
visitor = NameRecordVisitor()
visitor.visit(font)
nameIDs = set(options.name_IDs) | visitor.seen
if "*" not in options.name_IDs:
if "*" in options.name_IDs:
nameIDs |= {n.nameID for n in self.names if n.nameID < 256}
self.names = [n for n in self.names if n.nameID in nameIDs]
if not options.name_legacy:
# TODO(behdad) Sometimes (eg Apple Color Emoji) there's only a macroman
@@ -3297,20 +3380,6 @@ class Subsetter(object):
self.glyphs.add(font.getGlyphName(i))
log.info("Added first four glyphs to subset")
if self.options.layout_closure and "GSUB" in font:
with timer("close glyph list over 'GSUB'"):
log.info(
"Closing glyph list over 'GSUB': %d glyphs before", len(self.glyphs)
)
log.glyphs(self.glyphs, font=font)
font["GSUB"].closure_glyphs(self)
self.glyphs.intersection_update(realGlyphs)
log.info(
"Closed glyph list over 'GSUB': %d glyphs after", len(self.glyphs)
)
log.glyphs(self.glyphs, font=font)
self.glyphs_gsubed = frozenset(self.glyphs)
if "MATH" in font:
with timer("close glyph list over 'MATH'"):
log.info(
@@ -3325,6 +3394,20 @@ class Subsetter(object):
log.glyphs(self.glyphs, font=font)
self.glyphs_mathed = frozenset(self.glyphs)
if self.options.layout_closure and "GSUB" in font:
with timer("close glyph list over 'GSUB'"):
log.info(
"Closing glyph list over 'GSUB': %d glyphs before", len(self.glyphs)
)
log.glyphs(self.glyphs, font=font)
font["GSUB"].closure_glyphs(self)
self.glyphs.intersection_update(realGlyphs)
log.info(
"Closed glyph list over 'GSUB': %d glyphs after", len(self.glyphs)
)
log.glyphs(self.glyphs, font=font)
self.glyphs_gsubed = frozenset(self.glyphs)
for table in ("COLR", "bsln"):
if table in font:
with timer("close glyph list over '%s'" % table):
@@ -3344,6 +3427,20 @@ class Subsetter(object):
log.glyphs(self.glyphs, font=font)
setattr(self, f"glyphs_{table.lower()}ed", frozenset(self.glyphs))
if "VARC" in font:
with timer("close glyph list over 'VARC'"):
log.info(
"Closing glyph list over 'VARC': %d glyphs before", len(self.glyphs)
)
log.glyphs(self.glyphs, font=font)
font["VARC"].closure_glyphs(self)
self.glyphs.intersection_update(realGlyphs)
log.info(
"Closed glyph list over 'VARC': %d glyphs after", len(self.glyphs)
)
log.glyphs(self.glyphs, font=font)
self.glyphs_glyfed = frozenset(self.glyphs)
if "glyf" in font:
with timer("close glyph list over 'glyf'"):
log.info(

View File

@@ -132,227 +132,6 @@ def subset_glyphs(self, s):
return True # any(cff[fontname].numGlyphs for fontname in cff.keys())
@_add_method(psCharStrings.T2CharString)
def subset_subroutines(self, subrs, gsubrs):
p = self.program
for i in range(1, len(p)):
if p[i] == "callsubr":
assert isinstance(p[i - 1], int)
p[i - 1] = subrs._used.index(p[i - 1] + subrs._old_bias) - subrs._new_bias
elif p[i] == "callgsubr":
assert isinstance(p[i - 1], int)
p[i - 1] = (
gsubrs._used.index(p[i - 1] + gsubrs._old_bias) - gsubrs._new_bias
)
@_add_method(psCharStrings.T2CharString)
def drop_hints(self):
hints = self._hints
if hints.deletions:
p = self.program
for idx in reversed(hints.deletions):
del p[idx - 2 : idx]
if hints.has_hint:
assert not hints.deletions or hints.last_hint <= hints.deletions[0]
self.program = self.program[hints.last_hint :]
if not self.program:
# TODO CFF2 no need for endchar.
self.program.append("endchar")
if hasattr(self, "width"):
# Insert width back if needed
if self.width != self.private.defaultWidthX:
# For CFF2 charstrings, this should never happen
assert (
self.private.defaultWidthX is not None
), "CFF2 CharStrings must not have an initial width value"
self.program.insert(0, self.width - self.private.nominalWidthX)
if hints.has_hintmask:
i = 0
p = self.program
while i < len(p):
if p[i] in ["hintmask", "cntrmask"]:
assert i + 1 <= len(p)
del p[i : i + 2]
continue
i += 1
assert len(self.program)
del self._hints
class _MarkingT2Decompiler(psCharStrings.SimpleT2Decompiler):
def __init__(self, localSubrs, globalSubrs, private):
psCharStrings.SimpleT2Decompiler.__init__(
self, localSubrs, globalSubrs, private
)
for subrs in [localSubrs, globalSubrs]:
if subrs and not hasattr(subrs, "_used"):
subrs._used = set()
def op_callsubr(self, index):
self.localSubrs._used.add(self.operandStack[-1] + self.localBias)
psCharStrings.SimpleT2Decompiler.op_callsubr(self, index)
def op_callgsubr(self, index):
self.globalSubrs._used.add(self.operandStack[-1] + self.globalBias)
psCharStrings.SimpleT2Decompiler.op_callgsubr(self, index)
class _DehintingT2Decompiler(psCharStrings.T2WidthExtractor):
class Hints(object):
def __init__(self):
# Whether calling this charstring produces any hint stems
# Note that if a charstring starts with hintmask, it will
# have has_hint set to True, because it *might* produce an
# implicit vstem if called under certain conditions.
self.has_hint = False
# Index to start at to drop all hints
self.last_hint = 0
# Index up to which we know more hints are possible.
# Only relevant if status is 0 or 1.
self.last_checked = 0
# The status means:
# 0: after dropping hints, this charstring is empty
# 1: after dropping hints, there may be more hints
# continuing after this, or there might be
# other things. Not clear yet.
# 2: no more hints possible after this charstring
self.status = 0
# Has hintmask instructions; not recursive
self.has_hintmask = False
# List of indices of calls to empty subroutines to remove.
self.deletions = []
pass
def __init__(
self, css, localSubrs, globalSubrs, nominalWidthX, defaultWidthX, private=None
):
self._css = css
psCharStrings.T2WidthExtractor.__init__(
self, localSubrs, globalSubrs, nominalWidthX, defaultWidthX
)
self.private = private
def execute(self, charString):
old_hints = charString._hints if hasattr(charString, "_hints") else None
charString._hints = self.Hints()
psCharStrings.T2WidthExtractor.execute(self, charString)
hints = charString._hints
if hints.has_hint or hints.has_hintmask:
self._css.add(charString)
if hints.status != 2:
# Check from last_check, make sure we didn't have any operators.
for i in range(hints.last_checked, len(charString.program) - 1):
if isinstance(charString.program[i], str):
hints.status = 2
break
else:
hints.status = 1 # There's *something* here
hints.last_checked = len(charString.program)
if old_hints:
assert hints.__dict__ == old_hints.__dict__
def op_callsubr(self, index):
subr = self.localSubrs[self.operandStack[-1] + self.localBias]
psCharStrings.T2WidthExtractor.op_callsubr(self, index)
self.processSubr(index, subr)
def op_callgsubr(self, index):
subr = self.globalSubrs[self.operandStack[-1] + self.globalBias]
psCharStrings.T2WidthExtractor.op_callgsubr(self, index)
self.processSubr(index, subr)
def op_hstem(self, index):
psCharStrings.T2WidthExtractor.op_hstem(self, index)
self.processHint(index)
def op_vstem(self, index):
psCharStrings.T2WidthExtractor.op_vstem(self, index)
self.processHint(index)
def op_hstemhm(self, index):
psCharStrings.T2WidthExtractor.op_hstemhm(self, index)
self.processHint(index)
def op_vstemhm(self, index):
psCharStrings.T2WidthExtractor.op_vstemhm(self, index)
self.processHint(index)
def op_hintmask(self, index):
rv = psCharStrings.T2WidthExtractor.op_hintmask(self, index)
self.processHintmask(index)
return rv
def op_cntrmask(self, index):
rv = psCharStrings.T2WidthExtractor.op_cntrmask(self, index)
self.processHintmask(index)
return rv
def processHintmask(self, index):
cs = self.callingStack[-1]
hints = cs._hints
hints.has_hintmask = True
if hints.status != 2:
# Check from last_check, see if we may be an implicit vstem
for i in range(hints.last_checked, index - 1):
if isinstance(cs.program[i], str):
hints.status = 2
break
else:
# We are an implicit vstem
hints.has_hint = True
hints.last_hint = index + 1
hints.status = 0
hints.last_checked = index + 1
def processHint(self, index):
cs = self.callingStack[-1]
hints = cs._hints
hints.has_hint = True
hints.last_hint = index
hints.last_checked = index
def processSubr(self, index, subr):
cs = self.callingStack[-1]
hints = cs._hints
subr_hints = subr._hints
# Check from last_check, make sure we didn't have
# any operators.
if hints.status != 2:
for i in range(hints.last_checked, index - 1):
if isinstance(cs.program[i], str):
hints.status = 2
break
hints.last_checked = index
if hints.status != 2:
if subr_hints.has_hint:
hints.has_hint = True
# Decide where to chop off from
if subr_hints.status == 0:
hints.last_hint = index
else:
hints.last_hint = index - 2 # Leave the subr call in
elif subr_hints.status == 0:
hints.deletions.append(index)
hints.status = max(hints.status, subr_hints.status)
@_add_method(ttLib.getTableClass("CFF "))
def prune_post_subset(self, ttfFont, options):
cff = self.cff
@@ -381,13 +160,6 @@ def prune_post_subset(self, ttfFont, options):
return True
def _delete_empty_subrs(private_dict):
if hasattr(private_dict, "Subrs") and not private_dict.Subrs:
if "Subrs" in private_dict.rawDict:
del private_dict.rawDict["Subrs"]
del private_dict.Subrs
@deprecateFunction(
"use 'CFFFontSet.desubroutinize()' instead", category=DeprecationWarning
)
@@ -396,141 +168,17 @@ def desubroutinize(self):
self.cff.desubroutinize()
@deprecateFunction(
"use 'CFFFontSet.remove_hints()' instead", category=DeprecationWarning
)
@_add_method(ttLib.getTableClass("CFF "))
def remove_hints(self):
cff = self.cff
for fontname in cff.keys():
font = cff[fontname]
cs = font.CharStrings
# This can be tricky, but doesn't have to be. What we do is:
#
# - Run all used glyph charstrings and recurse into subroutines,
# - For each charstring (including subroutines), if it has any
# of the hint stem operators, we mark it as such.
# Upon returning, for each charstring we note all the
# subroutine calls it makes that (recursively) contain a stem,
# - Dropping hinting then consists of the following two ops:
# * Drop the piece of the program in each charstring before the
# last call to a stem op or a stem-calling subroutine,
# * Drop all hintmask operations.
# - It's trickier... A hintmask right after hints and a few numbers
# will act as an implicit vstemhm. As such, we track whether
# we have seen any non-hint operators so far and do the right
# thing, recursively... Good luck understanding that :(
css = set()
for g in font.charset:
c, _ = cs.getItemAndSelector(g)
c.decompile()
subrs = getattr(c.private, "Subrs", [])
decompiler = _DehintingT2Decompiler(
css,
subrs,
c.globalSubrs,
c.private.nominalWidthX,
c.private.defaultWidthX,
c.private,
)
decompiler.execute(c)
c.width = decompiler.width
for charstring in css:
charstring.drop_hints()
del css
# Drop font-wide hinting values
all_privs = []
if hasattr(font, "FDArray"):
all_privs.extend(fd.Private for fd in font.FDArray)
else:
all_privs.append(font.Private)
for priv in all_privs:
for k in [
"BlueValues",
"OtherBlues",
"FamilyBlues",
"FamilyOtherBlues",
"BlueScale",
"BlueShift",
"BlueFuzz",
"StemSnapH",
"StemSnapV",
"StdHW",
"StdVW",
"ForceBold",
"LanguageGroup",
"ExpansionFactor",
]:
if hasattr(priv, k):
setattr(priv, k, None)
self.remove_unused_subroutines()
self.cff.remove_hints()
@deprecateFunction(
"use 'CFFFontSet.remove_unused_subroutines' instead", category=DeprecationWarning
)
@_add_method(ttLib.getTableClass("CFF "))
def remove_unused_subroutines(self):
cff = self.cff
for fontname in cff.keys():
font = cff[fontname]
cs = font.CharStrings
# Renumber subroutines to remove unused ones
# Mark all used subroutines
for g in font.charset:
c, _ = cs.getItemAndSelector(g)
subrs = getattr(c.private, "Subrs", [])
decompiler = _MarkingT2Decompiler(subrs, c.globalSubrs, c.private)
decompiler.execute(c)
all_subrs = [font.GlobalSubrs]
if hasattr(font, "FDArray"):
all_subrs.extend(
fd.Private.Subrs
for fd in font.FDArray
if hasattr(fd.Private, "Subrs") and fd.Private.Subrs
)
elif hasattr(font.Private, "Subrs") and font.Private.Subrs:
all_subrs.append(font.Private.Subrs)
subrs = set(subrs) # Remove duplicates
# Prepare
for subrs in all_subrs:
if not hasattr(subrs, "_used"):
subrs._used = set()
subrs._used = _uniq_sort(subrs._used)
subrs._old_bias = psCharStrings.calcSubrBias(subrs)
subrs._new_bias = psCharStrings.calcSubrBias(subrs._used)
# Renumber glyph charstrings
for g in font.charset:
c, _ = cs.getItemAndSelector(g)
subrs = getattr(c.private, "Subrs", None)
c.subset_subroutines(subrs, font.GlobalSubrs)
# Renumber subroutines themselves
for subrs in all_subrs:
if subrs == font.GlobalSubrs:
if not hasattr(font, "FDArray") and hasattr(font.Private, "Subrs"):
local_subrs = font.Private.Subrs
else:
local_subrs = None
else:
local_subrs = subrs
subrs.items = [subrs.items[i] for i in subrs._used]
if hasattr(subrs, "file"):
del subrs.file
if hasattr(subrs, "offsets"):
del subrs.offsets
for subr in subrs.items:
subr.subset_subroutines(local_subrs, font.GlobalSubrs)
# Delete local SubrsIndex if empty
if hasattr(font, "FDArray"):
for fd in font.FDArray:
_delete_empty_subrs(fd.Private)
else:
_delete_empty_subrs(font.Private)
# Cleanup
for subrs in all_subrs:
del subrs._used, subrs._old_bias, subrs._new_bias
self.cff.remove_unused_subroutines()
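The `_old_bias`/`_new_bias` bookkeeping above relies on the CFF subroutine bias, which is a function of the subroutine count alone. A minimal sketch of the rule that `psCharStrings.calcSubrBias` applies (an illustrative re-implementation, per the Type 2 charstring format, not the fontTools function itself):

```python
def calc_subr_bias(subrs):
    # Type 2 charstrings store subroutine numbers biased; the bias is
    # chosen from the size of the subroutine INDEX.
    count = len(subrs)
    if count < 1240:
        return 107
    if count < 33900:
        return 1131
    return 32768
```

Because the bias can change when unused subroutines are dropped, renumbering has to subtract the old bias and add the new one, which is what `subset_subroutines` does above.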

View File

@@ -77,7 +77,7 @@ def main(args=None):
outFile = options.output
lazy = options.lazy
flavor = options.flavor
tables = options.table if options.table is not None else []
tables = options.table if options.table is not None else ["*"]
fonts = []
for f in options.font:
@@ -88,6 +88,7 @@ def main(args=None):
collection = TTCollection(f, lazy=lazy)
fonts.extend(collection.fonts)
if lazy is False:
for font in fonts:
for table in tables if "*" not in tables else font.keys():
font[table] # Decompiles

View File

@@ -7,11 +7,14 @@ import itertools
import logging
from typing import Callable, Iterable, Optional, Mapping
from fontTools.misc.roundTools import otRound
from fontTools.cffLib import CFFFontSet
from fontTools.ttLib import ttFont
from fontTools.ttLib.tables import _g_l_y_f
from fontTools.ttLib.tables import _h_m_t_x
from fontTools.misc.psCharStrings import T2CharString
from fontTools.misc.roundTools import otRound, noRound
from fontTools.pens.ttGlyphPen import TTGlyphPen
from fontTools.pens.t2CharStringPen import T2CharStringPen
import pathops
@@ -81,6 +84,14 @@ def ttfGlyphFromSkPath(path: pathops.Path) -> _g_l_y_f.Glyph:
return glyph
def _charString_from_SkPath(
path: pathops.Path, charString: T2CharString
) -> T2CharString:
t2Pen = T2CharStringPen(width=charString.width, glyphSet=None)
path.draw(t2Pen)
return t2Pen.getCharString(charString.private, charString.globalSubrs)
def _round_path(
path: pathops.Path, round: Callable[[float], float] = otRound
) -> pathops.Path:
@@ -90,7 +101,12 @@ def _round_path(
return rounded_path
def _simplify(path: pathops.Path, debugGlyphName: str) -> pathops.Path:
def _simplify(
path: pathops.Path,
debugGlyphName: str,
*,
round: Callable[[float], float] = otRound,
) -> pathops.Path:
# skia-pathops has a bug where it sometimes fails to simplify paths when there
# are float coordinates and control points are very close to one another.
# Rounding coordinates to integers works around the bug.
@@ -105,7 +121,7 @@ def _simplify(path: pathops.Path, debugGlyphName: str) -> pathops.Path:
except pathops.PathOpsError:
pass
path = _round_path(path)
path = _round_path(path, round=round)
try:
path = pathops.simplify(path, clockwise=path.clockwise)
log.debug(
@@ -124,6 +140,10 @@ def _simplify(path: pathops.Path, debugGlyphName: str) -> pathops.Path:
raise AssertionError("Unreachable")
def _same_path(path1: pathops.Path, path2: pathops.Path) -> bool:
return {tuple(c) for c in path1.contours} == {tuple(c) for c in path2.contours}
def removeTTGlyphOverlaps(
glyphName: str,
glyphSet: _TTGlyphMapping,
@@ -144,7 +164,7 @@ def removeTTGlyphOverlaps(
path2 = _simplify(path, glyphName)
# replace TTGlyph if simplified path is different (ignoring contour order)
if {tuple(c) for c in path.contours} != {tuple(c) for c in path2.contours}:
if not _same_path(path, path2):
glyfTable[glyphName] = glyph = ttfGlyphFromSkPath(path2)
# simplified glyph is always unhinted
assert not glyph.program
@@ -159,42 +179,16 @@ def removeTTGlyphOverlaps(
return False
def removeOverlaps(
def _remove_glyf_overlaps(
*,
font: ttFont.TTFont,
glyphNames: Optional[Iterable[str]] = None,
removeHinting: bool = True,
ignoreErrors=False,
glyphNames: Iterable[str],
glyphSet: _TTGlyphMapping,
removeHinting: bool,
ignoreErrors: bool,
) -> None:
"""Simplify glyphs in TTFont by merging overlapping contours.
Overlapping components are first decomposed to simple contours, then merged.
Currently this only works with TrueType fonts with 'glyf' table.
Raises NotImplementedError if 'glyf' table is absent.
Note that removing overlaps invalidates the hinting. By default we drop hinting
from all glyphs whether or not overlaps are removed from a given one, as it would
look weird if only some glyphs are left (un)hinted.
Args:
font: input TTFont object, modified in place.
glyphNames: optional iterable of glyph names (str) to remove overlaps from.
By default, all glyphs in the font are processed.
removeHinting (bool): set to False to keep hinting for unmodified glyphs.
ignoreErrors (bool): set to True to ignore errors while removing overlaps,
thus keeping the tricky glyphs unchanged (fonttools/fonttools#2363).
"""
try:
glyfTable = font["glyf"]
except KeyError:
raise NotImplementedError("removeOverlaps currently only works with TTFs")
hmtxTable = font["hmtx"]
# wraps the underlying glyf Glyphs, takes care of interfacing with drawing pens
glyphSet = font.getGlyphSet()
if glyphNames is None:
glyphNames = font.getGlyphOrder()
# process all simple glyphs first, then composites with increasing component depth,
# so that by the time we test for component intersections the respective base glyphs
@@ -225,25 +219,170 @@ def removeOverlaps(
log.debug("Removed overlaps for %s glyphs:\n%s", len(modified), " ".join(modified))
def main(args=None):
import sys
def _remove_charstring_overlaps(
*,
glyphName: str,
glyphSet: _TTGlyphMapping,
cffFontSet: CFFFontSet,
) -> bool:
path = skPathFromGlyph(glyphName, glyphSet)
if args is None:
args = sys.argv[1:]
# remove overlaps
path2 = _simplify(path, glyphName, round=noRound)
if len(args) < 2:
print(
f"usage: fonttools ttLib.removeOverlaps INPUT.ttf OUTPUT.ttf [GLYPHS ...]"
# replace TTGlyph if simplified path is different (ignoring contour order)
if not _same_path(path, path2):
charStrings = cffFontSet[0].CharStrings
charStrings[glyphName] = _charString_from_SkPath(path2, charStrings[glyphName])
return True
return False
def _remove_cff_overlaps(
*,
font: ttFont.TTFont,
glyphNames: Iterable[str],
glyphSet: _TTGlyphMapping,
removeHinting: bool,
ignoreErrors: bool,
removeUnusedSubroutines: bool = True,
) -> None:
cffFontSet = font["CFF "].cff
modified = set()
for glyphName in glyphNames:
try:
if _remove_charstring_overlaps(
glyphName=glyphName,
glyphSet=glyphSet,
cffFontSet=cffFontSet,
):
modified.add(glyphName)
except RemoveOverlapsError:
if not ignoreErrors:
raise
log.error("Failed to remove overlaps for '%s'", glyphName)
if not modified:
log.debug("No overlaps found in the specified CFF glyphs")
return
if removeHinting:
cffFontSet.remove_hints()
if removeUnusedSubroutines:
cffFontSet.remove_unused_subroutines()
log.debug("Removed overlaps for %s glyphs:\n%s", len(modified), " ".join(modified))
def removeOverlaps(
font: ttFont.TTFont,
glyphNames: Optional[Iterable[str]] = None,
removeHinting: bool = True,
ignoreErrors: bool = False,
*,
removeUnusedSubroutines: bool = True,
) -> None:
"""Simplify glyphs in TTFont by merging overlapping contours.
Overlapping components are first decomposed to simple contours, then merged.
Currently this only works for fonts with 'glyf' or 'CFF ' tables.
Raises NotImplementedError if 'glyf' or 'CFF ' tables are absent.
Note that removing overlaps invalidates the hinting. By default we drop hinting
from all glyphs whether or not overlaps are removed from a given one, as it would
look weird if only some glyphs are left (un)hinted.
Args:
font: input TTFont object, modified in place.
glyphNames: optional iterable of glyph names (str) to remove overlaps from.
By default, all glyphs in the font are processed.
removeHinting (bool): set to False to keep hinting for unmodified glyphs.
ignoreErrors (bool): set to True to ignore errors while removing overlaps,
thus keeping the tricky glyphs unchanged (fonttools/fonttools#2363).
removeUnusedSubroutines (bool): set to False to keep unused subroutines
in CFF table after removing overlaps. Default is to remove them if
any glyphs are modified.
"""
if "glyf" not in font and "CFF " not in font:
raise NotImplementedError(
"No outline data found in the font: missing 'glyf' or 'CFF ' table"
)
sys.exit(1)
src = args[0]
dst = args[1]
glyphNames = args[2:] or None
if glyphNames is None:
glyphNames = font.getGlyphOrder()
with ttFont.TTFont(src) as f:
removeOverlaps(f, glyphNames)
f.save(dst)
# Wraps the underlying glyphs, takes care of interfacing with drawing pens
glyphSet = font.getGlyphSet()
if "glyf" in font:
_remove_glyf_overlaps(
font=font,
glyphNames=glyphNames,
glyphSet=glyphSet,
removeHinting=removeHinting,
ignoreErrors=ignoreErrors,
)
if "CFF " in font:
_remove_cff_overlaps(
font=font,
glyphNames=glyphNames,
glyphSet=glyphSet,
removeHinting=removeHinting,
ignoreErrors=ignoreErrors,
removeUnusedSubroutines=removeUnusedSubroutines,
)
def main(args=None):
"""Simplify glyphs in TTFont by merging overlapping contours."""
import argparse
parser = argparse.ArgumentParser(
"fonttools ttLib.removeOverlaps", description=__doc__
)
parser.add_argument("input", metavar="INPUT.ttf", help="Input font file")
parser.add_argument("output", metavar="OUTPUT.ttf", help="Output font file")
parser.add_argument(
"glyphs",
metavar="GLYPHS",
nargs="*",
help="Optional list of glyph names to remove overlaps from",
)
parser.add_argument(
"--keep-hinting",
action="store_true",
help="Keep hinting for unmodified glyphs, default is to drop hinting",
)
parser.add_argument(
"--ignore-errors",
action="store_true",
help="ignore errors while removing overlaps, "
"thus keeping the tricky glyphs unchanged",
)
parser.add_argument(
"--keep-unused-subroutines",
action="store_true",
help="Keep unused subroutines in CFF table after removing overlaps, "
"default is to remove them if any glyphs are modified",
)
args = parser.parse_args(args)
with ttFont.TTFont(args.input) as font:
removeOverlaps(
font=font,
glyphNames=args.glyphs or None,
removeHinting=not args.keep_hinting,
ignoreErrors=args.ignore_errors,
removeUnusedSubroutines=not args.keep_unused_subroutines,
)
font.save(args.output)
if __name__ == "__main__":

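`_same_path` above treats an outline as an unordered set of contours, so a mere reordering of contours does not count as a modification. A standalone sketch, assuming a path is any iterable of contours (sequences of point tuples):

```python
def same_outline(path1, path2):
    # Compare two outlines as unordered sets of contours so that
    # contour order is ignored, mirroring _same_path above.
    return {tuple(c) for c in path1} == {tuple(c) for c in path2}
```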
View File

@@ -10,8 +10,10 @@ import fontTools.ttLib.tables.otTables as otTables
from fontTools.cffLib import VarStoreData
import fontTools.cffLib.specializer as cffSpecializer
from fontTools.varLib import builder # for VarData.calculateNumShorts
from fontTools.varLib.multiVarStore import OnlineMultiVarStoreBuilder
from fontTools.misc.vector import Vector
from fontTools.misc.fixedTools import otRound
from fontTools.ttLib.tables._g_l_y_f import VarComponentFlags
from fontTools.misc.iterTools import batched
__all__ = ["scale_upem", "ScalerVisitor"]
@@ -123,13 +125,6 @@ def visit(visitor, obj, attr, glyphs):
component.y = visitor.scale(component.y)
continue
if g.isVarComposite():
for component in g.components:
for attr in ("translateX", "translateY", "tCenterX", "tCenterY"):
v = getattr(component.transform, attr)
setattr(component.transform, attr, visitor.scale(v))
continue
if hasattr(g, "coordinates"):
coordinates = g.coordinates
for i, (x, y) in enumerate(coordinates):
@@ -138,57 +133,105 @@ def visit(visitor, obj, attr, variations):
@ScalerVisitor.register_attr(ttLib.getTableClass("gvar"), "variations")
def visit(visitor, obj, attr, variations):
# VarComposites are a pain to handle :-(
glyfTable = visitor.font["glyf"]
for glyphName, varlist in variations.items():
glyph = glyfTable[glyphName]
isVarComposite = glyph.isVarComposite()
for var in varlist:
coordinates = var.coordinates
if not isVarComposite:
for i, xy in enumerate(coordinates):
if xy is None:
continue
coordinates[i] = visitor.scale(xy[0]), visitor.scale(xy[1])
continue
# VarComposite glyph
@ScalerVisitor.register_attr(ttLib.getTableClass("VARC"), "table")
def visit(visitor, obj, attr, varc):
# VarComposite variations are a pain
fvar = visitor.font["fvar"]
fvarAxes = [a.axisTag for a in fvar.axes]
store = varc.MultiVarStore
storeBuilder = OnlineMultiVarStoreBuilder(fvarAxes)
for g in varc.VarCompositeGlyphs.VarCompositeGlyph:
for component in g.components:
t = component.transform
t.translateX = visitor.scale(t.translateX)
t.translateY = visitor.scale(t.translateY)
t.tCenterX = visitor.scale(t.tCenterX)
t.tCenterY = visitor.scale(t.tCenterY)
if component.axisValuesVarIndex != otTables.NO_VARIATION_INDEX:
varIdx = component.axisValuesVarIndex
# TODO Move this code duplicated below to MultiVarStore.__getitem__,
# or a getDeltasAndSupports().
if varIdx != otTables.NO_VARIATION_INDEX:
major = varIdx >> 16
minor = varIdx & 0xFFFF
varData = store.MultiVarData[major]
vec = varData.Item[minor]
storeBuilder.setSupports(store.get_supports(major, fvar.axes))
if vec:
m = len(vec) // varData.VarRegionCount
vec = list(batched(vec, m))
vec = [Vector(v) for v in vec]
component.axisValuesVarIndex = storeBuilder.storeDeltas(vec)
else:
component.axisValuesVarIndex = otTables.NO_VARIATION_INDEX
if component.transformVarIndex != otTables.NO_VARIATION_INDEX:
varIdx = component.transformVarIndex
if varIdx != otTables.NO_VARIATION_INDEX:
major = varIdx >> 16
minor = varIdx & 0xFFFF
vec = varData.Item[varIdx & 0xFFFF]
major = varIdx >> 16
minor = varIdx & 0xFFFF
varData = store.MultiVarData[major]
vec = varData.Item[minor]
storeBuilder.setSupports(store.get_supports(major, fvar.axes))
if vec:
m = len(vec) // varData.VarRegionCount
flags = component.flags
vec = list(batched(vec, m))
newVec = []
for v in vec:
v = list(v)
i = 0
for component in glyph.components:
if component.flags & VarComponentFlags.AXES_HAVE_VARIATION:
i += len(component.location)
if component.flags & (
VarComponentFlags.HAVE_TRANSLATE_X
| VarComponentFlags.HAVE_TRANSLATE_Y
):
xy = coordinates[i]
coordinates[i] = visitor.scale(xy[0]), visitor.scale(xy[1])
## Scale translate & tCenter
if flags & otTables.VarComponentFlags.HAVE_TRANSLATE_X:
v[i] = visitor.scale(v[i])
i += 1
if component.flags & VarComponentFlags.HAVE_ROTATION:
if flags & otTables.VarComponentFlags.HAVE_TRANSLATE_Y:
v[i] = visitor.scale(v[i])
i += 1
if component.flags & (
VarComponentFlags.HAVE_SCALE_X | VarComponentFlags.HAVE_SCALE_Y
):
if flags & otTables.VarComponentFlags.HAVE_ROTATION:
i += 1
if component.flags & (
VarComponentFlags.HAVE_SKEW_X | VarComponentFlags.HAVE_SKEW_Y
):
if flags & otTables.VarComponentFlags.HAVE_SCALE_X:
i += 1
if component.flags & (
VarComponentFlags.HAVE_TCENTER_X | VarComponentFlags.HAVE_TCENTER_Y
):
xy = coordinates[i]
coordinates[i] = visitor.scale(xy[0]), visitor.scale(xy[1])
if flags & otTables.VarComponentFlags.HAVE_SCALE_Y:
i += 1
if flags & otTables.VarComponentFlags.HAVE_SKEW_X:
i += 1
if flags & otTables.VarComponentFlags.HAVE_SKEW_Y:
i += 1
if flags & otTables.VarComponentFlags.HAVE_TCENTER_X:
v[i] = visitor.scale(v[i])
i += 1
if flags & otTables.VarComponentFlags.HAVE_TCENTER_Y:
v[i] = visitor.scale(v[i])
i += 1
# Phantom points
assert i + 4 == len(coordinates)
for i in range(i, len(coordinates)):
xy = coordinates[i]
coordinates[i] = visitor.scale(xy[0]), visitor.scale(xy[1])
newVec.append(Vector(v))
vec = newVec
component.transformVarIndex = storeBuilder.storeDeltas(vec)
else:
component.transformVarIndex = otTables.NO_VARIATION_INDEX
varc.MultiVarStore = storeBuilder.finish()
@ScalerVisitor.register_attr(ttLib.getTableClass("kern"), "kernTables")

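The repeated `varIdx >> 16` / `varIdx & 0xFFFF` pattern in the VARC scaling code above splits a packed variation index into its outer (VarData/major) and inner (item/minor) halves. A small sketch (the sentinel constant is assumed to match `otTables.NO_VARIATION_INDEX`):

```python
NO_VARIATION_INDEX = 0xFFFFFFFF  # assumed value of otTables.NO_VARIATION_INDEX

def split_var_index(var_idx):
    # High 16 bits select the MultiVarData subtable, low 16 bits select
    # the item within it.
    assert var_idx != NO_VARIATION_INDEX
    return var_idx >> 16, var_idx & 0xFFFF
```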
View File

@@ -298,9 +298,9 @@ class BitmapSizeTable(object):
# cares about in terms of XML creation.
def _getXMLMetricNames(self):
dataNames = sstruct.getformat(bitmapSizeTableFormatPart1)[1]
dataNames = dataNames + sstruct.getformat(bitmapSizeTableFormatPart2)[1]
dataNames = {**dataNames, **sstruct.getformat(bitmapSizeTableFormatPart2)[1]}
# Skip the first 3 data names because they are byte offsets and counts.
return dataNames[3:]
return list(dataNames.keys())[3:]
def toXML(self, writer, ttFont):
writer.begintag("bitmapSizeTable")

View File

@@ -22,6 +22,8 @@ PRIVATE_POINT_NUMBERS = 0x2000
DELTAS_ARE_ZERO = 0x80
DELTAS_ARE_WORDS = 0x40
DELTAS_ARE_LONGS = 0xC0
DELTAS_SIZE_MASK = 0xC0
DELTA_RUN_COUNT_MASK = 0x3F
POINTS_ARE_WORDS = 0x80
@@ -366,8 +368,10 @@ class TupleVariation(object):
pos = TupleVariation.encodeDeltaRunAsZeroes_(deltas, pos, bytearr)
elif -128 <= value <= 127:
pos = TupleVariation.encodeDeltaRunAsBytes_(deltas, pos, bytearr)
else:
elif -32768 <= value <= 32767:
pos = TupleVariation.encodeDeltaRunAsWords_(deltas, pos, bytearr)
else:
pos = TupleVariation.encodeDeltaRunAsLongs_(deltas, pos, bytearr)
return bytearr
@staticmethod
@@ -420,6 +424,7 @@ class TupleVariation(object):
numDeltas = len(deltas)
while pos < numDeltas:
value = deltas[pos]
# Within a word-encoded run of deltas, it is easiest
# to start a new run (with a different encoding)
# whenever we encounter a zero value. For example,
@@ -442,6 +447,10 @@ class TupleVariation(object):
and (-128 <= deltas[pos + 1] <= 127)
):
break
if not (-32768 <= value <= 32767):
break
pos += 1
runLength = pos - offset
while runLength >= 64:
@@ -461,18 +470,47 @@ class TupleVariation(object):
return pos
@staticmethod
def decompileDeltas_(numDeltas, data, offset):
def encodeDeltaRunAsLongs_(deltas, offset, bytearr):
pos = offset
numDeltas = len(deltas)
while pos < numDeltas:
value = deltas[pos]
if -32768 <= value <= 32767:
break
pos += 1
runLength = pos - offset
while runLength >= 64:
bytearr.append(DELTAS_ARE_LONGS | 63)
a = array.array("i", deltas[offset : offset + 64])
if sys.byteorder != "big":
a.byteswap()
bytearr.extend(a)
offset += 64
runLength -= 64
if runLength:
bytearr.append(DELTAS_ARE_LONGS | (runLength - 1))
a = array.array("i", deltas[offset:pos])
if sys.byteorder != "big":
a.byteswap()
bytearr.extend(a)
return pos
@staticmethod
def decompileDeltas_(numDeltas, data, offset=0):
"""(numDeltas, data, offset) --> ([delta, delta, ...], newOffset)"""
result = []
pos = offset
while len(result) < numDeltas:
while len(result) < numDeltas if numDeltas is not None else pos < len(data):
runHeader = data[pos]
pos += 1
numDeltasInRun = (runHeader & DELTA_RUN_COUNT_MASK) + 1
if (runHeader & DELTAS_ARE_ZERO) != 0:
if (runHeader & DELTAS_SIZE_MASK) == DELTAS_ARE_ZERO:
result.extend([0] * numDeltasInRun)
else:
if (runHeader & DELTAS_ARE_WORDS) != 0:
if (runHeader & DELTAS_SIZE_MASK) == DELTAS_ARE_LONGS:
deltas = array.array("i")
deltasSize = numDeltasInRun * 4
elif (runHeader & DELTAS_SIZE_MASK) == DELTAS_ARE_WORDS:
deltas = array.array("h")
deltasSize = numDeltasInRun * 2
else:
@@ -481,10 +519,10 @@ class TupleVariation(object):
deltas.frombytes(data[pos : pos + deltasSize])
if sys.byteorder != "big":
deltas.byteswap()
assert len(deltas) == numDeltasInRun
assert len(deltas) == numDeltasInRun, (len(deltas), numDeltasInRun)
pos += deltasSize
result.extend(deltas)
assert len(result) == numDeltas
assert numDeltas is None or len(result) == numDeltas
return (result, pos)
@staticmethod

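For reference, the run headers handled by `decompileDeltas_` can be decoded in a few lines of standalone Python. This is an illustrative re-implementation, assuming big-endian payloads, and it includes the 32-bit "long" runs that this diff adds:

```python
import struct

DELTAS_ARE_ZERO = 0x80
DELTAS_ARE_WORDS = 0x40
DELTAS_ARE_LONGS = 0xC0
DELTAS_SIZE_MASK = 0xC0
DELTA_RUN_COUNT_MASK = 0x3F

def decode_deltas(data):
    # Decode a packed delta stream: each run starts with a header byte
    # whose top two bits select the element size and whose low six bits
    # hold the run length minus one.
    result, pos = [], 0
    while pos < len(data):
        header = data[pos]
        pos += 1
        count = (header & DELTA_RUN_COUNT_MASK) + 1
        size = header & DELTAS_SIZE_MASK
        if size == DELTAS_ARE_ZERO:
            result.extend([0] * count)  # zero run carries no payload
        elif size == DELTAS_ARE_LONGS:
            result.extend(struct.unpack(f">{count}i", data[pos : pos + 4 * count]))
            pos += 4 * count
        elif size == DELTAS_ARE_WORDS:
            result.extend(struct.unpack(f">{count}h", data[pos : pos + 2 * count]))
            pos += 2 * count
        else:  # plain signed bytes
            result.extend(struct.unpack(f"{count}b", data[pos : pos + count]))
            pos += count
    return result
```

Note that `DELTAS_ARE_LONGS` sets both size bits, which is why the code above (and the diff) masks with `DELTAS_SIZE_MASK` instead of testing individual flags.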
View File

@@ -0,0 +1,5 @@
from .otBase import BaseTTXConverter
class table_V_A_R_C_(BaseTTXConverter):
pass

View File

@@ -50,6 +50,7 @@ def _moduleFinderHint():
from . import T_S_I__3
from . import T_S_I__5
from . import T_T_F_A_
from . import V_A_R_C_
from . import V_D_M_X_
from . import V_O_R_G_
from . import V_V_A_R_

View File

@@ -143,7 +143,9 @@ class table__a_v_a_r(BaseTTXConverter):
def renormalizeLocation(self, location, font):
if self.majorVersion not in (1, 2):
majorVersion = getattr(self, "majorVersion", 1)
if majorVersion not in (1, 2):
raise NotImplementedError("Unknown avar table version")
avarSegments = self.segments
@ -154,7 +156,7 @@ class table__a_v_a_r(BaseTTXConverter):
value = piecewiseLinearMap(value, avarMapping)
mappedLocation[axisTag] = value
if self.majorVersion < 2:
if majorVersion < 2:
return mappedLocation
# Version 2
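For context, `piecewiseLinearMap` (imported from `fontTools.varLib.models` in this module) is what bends each coordinate through an avar segment map. A minimal sketch of its behavior, assuming the standard semantics (exact key hits returned directly, translation outside the key range, linear interpolation between keys):

```python
from bisect import bisect_right

def piecewise_linear_map(v, mapping):
    """Sketch of avar-style piecewise linear mapping: `mapping` pairs input
    coordinates with outputs; between keys, interpolate linearly."""
    if not mapping:
        return v
    if v in mapping:
        return mapping[v]
    keys = sorted(mapping)
    lo_key, hi_key = keys[0], keys[-1]
    if v < lo_key:
        return v + mapping[lo_key] - lo_key   # extrapolate by translation
    if v > hi_key:
        return v + mapping[hi_key] - hi_key
    i = bisect_right(keys, v)
    lo, hi = keys[i - 1], keys[i]
    return mapping[lo] + (v - lo) * (mapping[hi] - mapping[lo]) / (hi - lo)

# an axis segment that bends 0.5 -> 0.8 while pinning -1, 0, and 1
segment = {-1.0: -1.0, 0.0: 0.0, 0.5: 0.8, 1.0: 1.0}
assert piecewise_linear_map(0.5, segment) == 0.8
assert abs(piecewise_linear_map(0.25, segment) - 0.4) < 1e-9
```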

View File

@ -424,29 +424,6 @@ class table__g_l_y_f(DefaultTable.DefaultTable):
for c in glyph.components
],
)
elif glyph.isVarComposite():
coords = []
controls = []
for component in glyph.components:
(
componentCoords,
componentControls,
) = component.getCoordinatesAndControls()
coords.extend(componentCoords)
controls.extend(componentControls)
coords = GlyphCoordinates(coords)
controls = _GlyphControls(
numberOfContours=glyph.numberOfContours,
endPts=list(range(len(coords))),
flags=None,
components=[
(c.glyphName, getattr(c, "flags", None)) for c in glyph.components
],
)
else:
coords, endPts, flags = glyph.getCoordinates(self)
coords = coords.copy()
@ -492,10 +469,6 @@ class table__g_l_y_f(DefaultTable.DefaultTable):
for p, comp in zip(coord, glyph.components):
if hasattr(comp, "x"):
comp.x, comp.y = p
elif glyph.isVarComposite():
for comp in glyph.components:
coord = comp.setCoordinates(coord)
assert not coord
elif glyph.numberOfContours == 0:
assert len(coord) == 0
else:
@ -737,8 +710,6 @@ class Glyph(object):
return
if self.isComposite():
self.decompileComponents(data, glyfTable)
elif self.isVarComposite():
self.decompileVarComponents(data, glyfTable)
else:
self.decompileCoordinates(data)
@ -758,8 +729,6 @@ class Glyph(object):
data = sstruct.pack(glyphHeaderFormat, self)
if self.isComposite():
data = data + self.compileComponents(glyfTable)
elif self.isVarComposite():
data = data + self.compileVarComponents(glyfTable)
else:
data = data + self.compileCoordinates()
return data
@ -769,10 +738,6 @@ class Glyph(object):
for compo in self.components:
compo.toXML(writer, ttFont)
haveInstructions = hasattr(self, "program")
elif self.isVarComposite():
for compo in self.components:
compo.toXML(writer, ttFont)
haveInstructions = False
else:
last = 0
for i in range(self.numberOfContours):
@ -842,15 +807,6 @@ class Glyph(object):
component = GlyphComponent()
self.components.append(component)
component.fromXML(name, attrs, content, ttFont)
elif name == "varComponent":
if self.numberOfContours > 0:
raise ttLib.TTLibError("can't mix composites and contours in glyph")
self.numberOfContours = -2
if not hasattr(self, "components"):
self.components = []
component = GlyphVarComponent()
self.components.append(component)
component.fromXML(name, attrs, content, ttFont)
elif name == "instructions":
self.program = ttProgram.Program()
for element in content:
@ -860,7 +816,7 @@ class Glyph(object):
self.program.fromXML(name, attrs, content, ttFont)
def getCompositeMaxpValues(self, glyfTable, maxComponentDepth=1):
assert self.isComposite() or self.isVarComposite()
assert self.isComposite()
nContours = 0
nPoints = 0
initialMaxComponentDepth = maxComponentDepth
@ -904,13 +860,6 @@ class Glyph(object):
len(data),
)
def decompileVarComponents(self, data, glyfTable):
self.components = []
while len(data) >= GlyphVarComponent.MIN_SIZE:
component = GlyphVarComponent()
data = component.decompile(data, glyfTable)
self.components.append(component)
def decompileCoordinates(self, data):
endPtsOfContours = array.array("H")
endPtsOfContours.frombytes(data[: 2 * self.numberOfContours])
@ -1027,9 +976,6 @@ class Glyph(object):
data = data + struct.pack(">h", len(instructions)) + instructions
return data
def compileVarComponents(self, glyfTable):
return b"".join(c.compile(glyfTable) for c in self.components)
def compileCoordinates(self):
assert len(self.coordinates) == len(self.flags)
data = []
@ -1231,13 +1177,6 @@ class Glyph(object):
else:
return self.numberOfContours == -1
def isVarComposite(self):
"""Test whether a glyph has variable components"""
if hasattr(self, "data"):
return struct.unpack(">h", self.data[:2])[0] == -2 if self.data else False
else:
return self.numberOfContours == -2
def getCoordinates(self, glyfTable):
"""Return the coordinates, end points and flags
@ -1308,8 +1247,6 @@ class Glyph(object):
allCoords.extend(coordinates)
allFlags.extend(flags)
return allCoords, allEndPts, allFlags
elif self.isVarComposite():
raise NotImplementedError("use TTGlyphSet to draw VarComposite glyphs")
else:
return GlyphCoordinates(), [], bytearray()
@ -1319,12 +1256,8 @@ class Glyph(object):
This method can be used on simple glyphs (in which case it returns an
empty list) or composite glyphs.
"""
if hasattr(self, "data") and self.isVarComposite():
# TODO(VarComposite) Add implementation without expanding glyph
self.expand(glyfTable)
if not hasattr(self, "data"):
if self.isComposite() or self.isVarComposite():
if self.isComposite():
return [c.glyphName for c in self.components]
else:
return []
@ -1367,8 +1300,6 @@ class Glyph(object):
if self.isComposite():
if hasattr(self, "program"):
del self.program
elif self.isVarComposite():
pass # Doesn't have hinting
else:
self.program = ttProgram.Program()
self.program.fromBytecode([])
@ -1450,13 +1381,6 @@ class Glyph(object):
i += 2 + instructionLen
# Remove padding
data = data[:i]
elif self.isVarComposite():
i = 0
MIN_SIZE = GlyphVarComponent.MIN_SIZE
while len(data[i : i + MIN_SIZE]) >= MIN_SIZE:
size = GlyphVarComponent.getSize(data[i : i + MIN_SIZE])
i += size
data = data[:i]
self.data = data
@ -1942,391 +1866,6 @@ class GlyphComponent(object):
return result if result is NotImplemented else not result
#
# Variable Composite glyphs
# https://github.com/harfbuzz/boring-expansion-spec/blob/main/glyf1.md
#
class VarComponentFlags(IntFlag):
USE_MY_METRICS = 0x0001
AXIS_INDICES_ARE_SHORT = 0x0002
UNIFORM_SCALE = 0x0004
HAVE_TRANSLATE_X = 0x0008
HAVE_TRANSLATE_Y = 0x0010
HAVE_ROTATION = 0x0020
HAVE_SCALE_X = 0x0040
HAVE_SCALE_Y = 0x0080
HAVE_SKEW_X = 0x0100
HAVE_SKEW_Y = 0x0200
HAVE_TCENTER_X = 0x0400
HAVE_TCENTER_Y = 0x0800
GID_IS_24BIT = 0x1000
AXES_HAVE_VARIATION = 0x2000
RESET_UNSPECIFIED_AXES = 0x4000
VarComponentTransformMappingValues = namedtuple(
"VarComponentTransformMappingValues",
["flag", "fractionalBits", "scale", "defaultValue"],
)
VAR_COMPONENT_TRANSFORM_MAPPING = {
"translateX": VarComponentTransformMappingValues(
VarComponentFlags.HAVE_TRANSLATE_X, 0, 1, 0
),
"translateY": VarComponentTransformMappingValues(
VarComponentFlags.HAVE_TRANSLATE_Y, 0, 1, 0
),
"rotation": VarComponentTransformMappingValues(
VarComponentFlags.HAVE_ROTATION, 12, 180, 0
),
"scaleX": VarComponentTransformMappingValues(
VarComponentFlags.HAVE_SCALE_X, 10, 1, 1
),
"scaleY": VarComponentTransformMappingValues(
VarComponentFlags.HAVE_SCALE_Y, 10, 1, 1
),
"skewX": VarComponentTransformMappingValues(
VarComponentFlags.HAVE_SKEW_X, 12, -180, 0
),
"skewY": VarComponentTransformMappingValues(
VarComponentFlags.HAVE_SKEW_Y, 12, 180, 0
),
"tCenterX": VarComponentTransformMappingValues(
VarComponentFlags.HAVE_TCENTER_X, 0, 1, 0
),
"tCenterY": VarComponentTransformMappingValues(
VarComponentFlags.HAVE_TCENTER_Y, 0, 1, 0
),
}
class GlyphVarComponent(object):
MIN_SIZE = 5
def __init__(self):
self.location = {}
self.transform = DecomposedTransform()
@staticmethod
def getSize(data):
size = 5
flags = struct.unpack(">H", data[:2])[0]
numAxes = int(data[2])
if flags & VarComponentFlags.GID_IS_24BIT:
size += 1
size += numAxes
if flags & VarComponentFlags.AXIS_INDICES_ARE_SHORT:
size += 2 * numAxes
else:
axisIndices = array.array("B", data[:numAxes])
size += numAxes
for attr_name, mapping_values in VAR_COMPONENT_TRANSFORM_MAPPING.items():
if flags & mapping_values.flag:
size += 2
return size
def decompile(self, data, glyfTable):
flags = struct.unpack(">H", data[:2])[0]
self.flags = int(flags)
data = data[2:]
numAxes = int(data[0])
data = data[1:]
if flags & VarComponentFlags.GID_IS_24BIT:
glyphID = int(struct.unpack(">L", b"\0" + data[:3])[0])
data = data[3:]
flags ^= VarComponentFlags.GID_IS_24BIT
else:
glyphID = int(struct.unpack(">H", data[:2])[0])
data = data[2:]
self.glyphName = glyfTable.getGlyphName(int(glyphID))
if flags & VarComponentFlags.AXIS_INDICES_ARE_SHORT:
axisIndices = array.array("H", data[: 2 * numAxes])
if sys.byteorder != "big":
axisIndices.byteswap()
data = data[2 * numAxes :]
flags ^= VarComponentFlags.AXIS_INDICES_ARE_SHORT
else:
axisIndices = array.array("B", data[:numAxes])
data = data[numAxes:]
assert len(axisIndices) == numAxes
axisIndices = list(axisIndices)
axisValues = array.array("h", data[: 2 * numAxes])
if sys.byteorder != "big":
axisValues.byteswap()
data = data[2 * numAxes :]
assert len(axisValues) == numAxes
axisValues = [fi2fl(v, 14) for v in axisValues]
self.location = {
glyfTable.axisTags[i]: v for i, v in zip(axisIndices, axisValues)
}
def read_transform_component(data, values):
if flags & values.flag:
return (
data[2:],
fi2fl(struct.unpack(">h", data[:2])[0], values.fractionalBits)
* values.scale,
)
else:
return data, values.defaultValue
for attr_name, mapping_values in VAR_COMPONENT_TRANSFORM_MAPPING.items():
data, value = read_transform_component(data, mapping_values)
setattr(self.transform, attr_name, value)
if flags & VarComponentFlags.UNIFORM_SCALE:
if flags & VarComponentFlags.HAVE_SCALE_X and not (
flags & VarComponentFlags.HAVE_SCALE_Y
):
self.transform.scaleY = self.transform.scaleX
flags |= VarComponentFlags.HAVE_SCALE_Y
flags ^= VarComponentFlags.UNIFORM_SCALE
return data
def compile(self, glyfTable):
data = b""
if not hasattr(self, "flags"):
flags = 0
# Calculate optimal transform component flags
for attr_name, mapping in VAR_COMPONENT_TRANSFORM_MAPPING.items():
value = getattr(self.transform, attr_name)
if fl2fi(value / mapping.scale, mapping.fractionalBits) != fl2fi(
mapping.defaultValue / mapping.scale, mapping.fractionalBits
):
flags |= mapping.flag
else:
flags = self.flags
if (
flags & VarComponentFlags.HAVE_SCALE_X
and flags & VarComponentFlags.HAVE_SCALE_Y
and fl2fi(self.transform.scaleX, 10) == fl2fi(self.transform.scaleY, 10)
):
flags |= VarComponentFlags.UNIFORM_SCALE
flags ^= VarComponentFlags.HAVE_SCALE_Y
numAxes = len(self.location)
data = data + struct.pack(">B", numAxes)
glyphID = glyfTable.getGlyphID(self.glyphName)
if glyphID > 65535:
flags |= VarComponentFlags.GID_IS_24BIT
data = data + struct.pack(">L", glyphID)[1:]
else:
data = data + struct.pack(">H", glyphID)
axisIndices = [glyfTable.axisTags.index(tag) for tag in self.location.keys()]
if all(a <= 255 for a in axisIndices):
axisIndices = array.array("B", axisIndices)
else:
axisIndices = array.array("H", axisIndices)
if sys.byteorder != "big":
axisIndices.byteswap()
flags |= VarComponentFlags.AXIS_INDICES_ARE_SHORT
data = data + bytes(axisIndices)
axisValues = self.location.values()
axisValues = array.array("h", (fl2fi(v, 14) for v in axisValues))
if sys.byteorder != "big":
axisValues.byteswap()
data = data + bytes(axisValues)
def write_transform_component(data, value, values):
if flags & values.flag:
return data + struct.pack(
">h", fl2fi(value / values.scale, values.fractionalBits)
)
else:
return data
for attr_name, mapping_values in VAR_COMPONENT_TRANSFORM_MAPPING.items():
value = getattr(self.transform, attr_name)
data = write_transform_component(data, value, mapping_values)
return struct.pack(">H", flags) + data
def toXML(self, writer, ttFont):
attrs = [("glyphName", self.glyphName)]
if hasattr(self, "flags"):
attrs = attrs + [("flags", hex(self.flags))]
for attr_name, mapping in VAR_COMPONENT_TRANSFORM_MAPPING.items():
v = getattr(self.transform, attr_name)
if v != mapping.defaultValue:
attrs.append((attr_name, fl2str(v, mapping.fractionalBits)))
writer.begintag("varComponent", attrs)
writer.newline()
writer.begintag("location")
writer.newline()
for tag, v in self.location.items():
writer.simpletag("axis", [("tag", tag), ("value", fl2str(v, 14))])
writer.newline()
writer.endtag("location")
writer.newline()
writer.endtag("varComponent")
writer.newline()
def fromXML(self, name, attrs, content, ttFont):
self.glyphName = attrs["glyphName"]
if "flags" in attrs:
self.flags = safeEval(attrs["flags"])
for attr_name, mapping in VAR_COMPONENT_TRANSFORM_MAPPING.items():
if attr_name not in attrs:
continue
v = str2fl(safeEval(attrs[attr_name]), mapping.fractionalBits)
setattr(self.transform, attr_name, v)
for c in content:
if not isinstance(c, tuple):
continue
name, attrs, content = c
if name != "location":
continue
for c in content:
if not isinstance(c, tuple):
continue
name, attrs, content = c
assert name == "axis"
assert not content
self.location[attrs["tag"]] = str2fl(safeEval(attrs["value"]), 14)
def getPointCount(self):
assert hasattr(self, "flags"), "VarComponent with variations must have flags"
count = 0
if self.flags & VarComponentFlags.AXES_HAVE_VARIATION:
count += len(self.location)
if self.flags & (
VarComponentFlags.HAVE_TRANSLATE_X | VarComponentFlags.HAVE_TRANSLATE_Y
):
count += 1
if self.flags & VarComponentFlags.HAVE_ROTATION:
count += 1
if self.flags & (
VarComponentFlags.HAVE_SCALE_X | VarComponentFlags.HAVE_SCALE_Y
):
count += 1
if self.flags & (VarComponentFlags.HAVE_SKEW_X | VarComponentFlags.HAVE_SKEW_Y):
count += 1
if self.flags & (
VarComponentFlags.HAVE_TCENTER_X | VarComponentFlags.HAVE_TCENTER_Y
):
count += 1
return count
def getCoordinatesAndControls(self):
coords = []
controls = []
if self.flags & VarComponentFlags.AXES_HAVE_VARIATION:
for tag, v in self.location.items():
controls.append(tag)
coords.append((fl2fi(v, 14), 0))
if self.flags & (
VarComponentFlags.HAVE_TRANSLATE_X | VarComponentFlags.HAVE_TRANSLATE_Y
):
controls.append("translate")
coords.append((self.transform.translateX, self.transform.translateY))
if self.flags & VarComponentFlags.HAVE_ROTATION:
controls.append("rotation")
coords.append((fl2fi(self.transform.rotation / 180, 12), 0))
if self.flags & (
VarComponentFlags.HAVE_SCALE_X | VarComponentFlags.HAVE_SCALE_Y
):
controls.append("scale")
coords.append(
(fl2fi(self.transform.scaleX, 10), fl2fi(self.transform.scaleY, 10))
)
if self.flags & (VarComponentFlags.HAVE_SKEW_X | VarComponentFlags.HAVE_SKEW_Y):
controls.append("skew")
coords.append(
(
fl2fi(self.transform.skewX / -180, 12),
fl2fi(self.transform.skewY / 180, 12),
)
)
if self.flags & (
VarComponentFlags.HAVE_TCENTER_X | VarComponentFlags.HAVE_TCENTER_Y
):
controls.append("tCenter")
coords.append((self.transform.tCenterX, self.transform.tCenterY))
return coords, controls
def setCoordinates(self, coords):
i = 0
if self.flags & VarComponentFlags.AXES_HAVE_VARIATION:
newLocation = {}
for tag in self.location:
newLocation[tag] = fi2fl(coords[i][0], 14)
i += 1
self.location = newLocation
self.transform = DecomposedTransform()
if self.flags & (
VarComponentFlags.HAVE_TRANSLATE_X | VarComponentFlags.HAVE_TRANSLATE_Y
):
self.transform.translateX, self.transform.translateY = coords[i]
i += 1
if self.flags & VarComponentFlags.HAVE_ROTATION:
self.transform.rotation = fi2fl(coords[i][0], 12) * 180
i += 1
if self.flags & (
VarComponentFlags.HAVE_SCALE_X | VarComponentFlags.HAVE_SCALE_Y
):
self.transform.scaleX, self.transform.scaleY = fi2fl(
coords[i][0], 10
), fi2fl(coords[i][1], 10)
i += 1
if self.flags & (VarComponentFlags.HAVE_SKEW_X | VarComponentFlags.HAVE_SKEW_Y):
self.transform.skewX, self.transform.skewY = (
fi2fl(coords[i][0], 12) * -180,
fi2fl(coords[i][1], 12) * 180,
)
i += 1
if self.flags & (
VarComponentFlags.HAVE_TCENTER_X | VarComponentFlags.HAVE_TCENTER_Y
):
self.transform.tCenterX, self.transform.tCenterY = coords[i]
i += 1
return coords[i:]
def __eq__(self, other):
if type(self) != type(other):
return NotImplemented
return self.__dict__ == other.__dict__
def __ne__(self, other):
result = self.__eq__(other)
return result if result is NotImplemented else not result
class GlyphCoordinates(object):
"""A list of glyph coordinates.

View File

@ -1,7 +1,8 @@
from collections import UserDict, deque
from collections import deque
from functools import partial
from fontTools.misc import sstruct
from fontTools.misc.textTools import safeEval
from fontTools.misc.lazyTools import LazyDict
from . import DefaultTable
import array
import itertools
@ -39,19 +40,6 @@ GVAR_HEADER_FORMAT = """
GVAR_HEADER_SIZE = sstruct.calcsize(GVAR_HEADER_FORMAT)
class _LazyDict(UserDict):
def __init__(self, data):
super().__init__()
self.data = data
def __getitem__(self, k):
v = self.data[k]
if callable(v):
v = v()
self.data[k] = v
return v
class table__g_v_a_r(DefaultTable.DefaultTable):
dependencies = ["fvar", "glyf"]
@ -116,11 +104,6 @@ class table__g_v_a_r(DefaultTable.DefaultTable):
sstruct.unpack(GVAR_HEADER_FORMAT, data[0:GVAR_HEADER_SIZE], self)
assert len(glyphs) == self.glyphCount
assert len(axisTags) == self.axisCount
offsets = self.decompileOffsets_(
data[GVAR_HEADER_SIZE:],
tableFormat=(self.flags & 1),
glyphCount=self.glyphCount,
)
sharedCoords = tv.decompileSharedTuples(
axisTags, self.sharedTupleCount, data, self.offsetToSharedTuples
)
@ -128,20 +111,35 @@ class table__g_v_a_r(DefaultTable.DefaultTable):
offsetToData = self.offsetToGlyphVariationData
glyf = ttFont["glyf"]
def decompileVarGlyph(glyphName, gid):
gvarData = data[
offsetToData + offsets[gid] : offsetToData + offsets[gid + 1]
]
def get_read_item():
reverseGlyphMap = ttFont.getReverseGlyphMap()
tableFormat = self.flags & 1
def read_item(glyphName):
gid = reverseGlyphMap[glyphName]
offsetSize = 2 if tableFormat == 0 else 4
startOffset = GVAR_HEADER_SIZE + offsetSize * gid
endOffset = startOffset + offsetSize * 2
offsets = table__g_v_a_r.decompileOffsets_(
data[startOffset:endOffset],
tableFormat=tableFormat,
glyphCount=1,
)
gvarData = data[offsetToData + offsets[0] : offsetToData + offsets[1]]
if not gvarData:
return []
glyph = glyf[glyphName]
numPointsInGlyph = self.getNumPoints_(glyph)
return decompileGlyph_(numPointsInGlyph, sharedCoords, axisTags, gvarData)
return decompileGlyph_(
numPointsInGlyph, sharedCoords, axisTags, gvarData
)
for gid in range(self.glyphCount):
glyphName = glyphs[gid]
variations[glyphName] = partial(decompileVarGlyph, glyphName, gid)
self.variations = _LazyDict(variations)
return read_item
read_item = get_read_item()
l = LazyDict({glyphs[gid]: read_item for gid in range(self.glyphCount)})
self.variations = l
if ttFont.lazy is False: # Be lazy for None and True
self.ensureDecompiled()
@ -245,11 +243,6 @@ class table__g_v_a_r(DefaultTable.DefaultTable):
if glyph.isComposite():
return len(glyph.components) + NUM_PHANTOM_POINTS
elif glyph.isVarComposite():
count = 0
for component in glyph.components:
count += component.getPointCount()
return count + NUM_PHANTOM_POINTS
else:
# Empty glyphs (eg. space, nonmarkingreturn) have no "coordinates" attribute.
return len(getattr(glyph, "coordinates", [])) + NUM_PHANTOM_POINTS

View File

@ -21,10 +21,7 @@ class table__l_o_c_a(DefaultTable.DefaultTable):
if sys.byteorder != "big":
locations.byteswap()
if not longFormat:
l = array.array("I")
for i in range(len(locations)):
l.append(locations[i] * 2)
locations = l
locations = array.array("I", (2 * l for l in locations))
if len(locations) < (ttFont["maxp"].numGlyphs + 1):
log.warning(
"corrupt 'loca' table, or wrong numGlyphs in 'maxp': %d %d",

View File

@ -127,7 +127,7 @@ class table__m_a_x_p(DefaultTable.DefaultTable):
formatstring, names, fixes = sstruct.getformat(maxpFormat_0_5)
if self.tableVersion != 0x00005000:
formatstring, names_1_0, fixes = sstruct.getformat(maxpFormat_1_0_add)
names = names + names_1_0
names = {**names, **names_1_0}
for name in names:
value = getattr(self, name)
if name == "tableVersion":

View File

@ -1146,7 +1146,10 @@ class BaseTable(object):
except KeyError:
raise # XXX on KeyError, raise nice error
value = conv.xmlRead(attrs, content, font)
if conv.repeat:
# Some manually-written tables have a conv.repeat of ""
# to represent lists. Hence comparing to None here to
# allow those lists to be read correctly from XML.
if conv.repeat is not None:
seq = getattr(self, conv.name, None)
if seq is None:
seq = []

View File

@ -6,8 +6,10 @@ from fontTools.misc.fixedTools import (
ensureVersionIsLong as fi2ve,
versionToFixed as ve2fi,
)
from fontTools.ttLib.tables.TupleVariation import TupleVariation
from fontTools.misc.roundTools import nearestMultipleShortestRepr, otRound
from fontTools.misc.textTools import bytesjoin, tobytes, tostr, pad, safeEval
from fontTools.misc.lazyTools import LazyList
from fontTools.ttLib import getSearchRange
from .otBase import (
CountReference,
@ -18,6 +20,7 @@ from .otBase import (
)
from .otTables import (
lookupTypes,
VarCompositeGlyph,
AATStateTable,
AATState,
AATAction,
@ -29,8 +32,9 @@ from .otTables import (
CompositeMode as _CompositeMode,
NO_VARIATION_INDEX,
)
from itertools import zip_longest
from itertools import zip_longest, accumulate
from functools import partial
from types import SimpleNamespace
import re
import struct
from typing import Optional
@ -78,7 +82,7 @@ def buildConverters(tableSpec, tableNamespace):
conv = converterClass(name, repeat, aux, description=descr)
if conv.tableClass:
# A "template" such as OffsetTo(AType) knowss the table class already
# A "template" such as OffsetTo(AType) knows the table class already
tableClass = conv.tableClass
elif tp in ("MortChain", "MortSubtable", "MorxChain"):
tableClass = tableNamespace.get(tp)
@ -105,46 +109,6 @@ def buildConverters(tableSpec, tableNamespace):
return converters, convertersByName
class _MissingItem(tuple):
__slots__ = ()
try:
from collections import UserList
except ImportError:
from UserList import UserList
class _LazyList(UserList):
def __getslice__(self, i, j):
return self.__getitem__(slice(i, j))
def __getitem__(self, k):
if isinstance(k, slice):
indices = range(*k.indices(len(self)))
return [self[i] for i in indices]
item = self.data[k]
if isinstance(item, _MissingItem):
self.reader.seek(self.pos + item[0] * self.recordSize)
item = self.conv.read(self.reader, self.font, {})
self.data[k] = item
return item
def __add__(self, other):
if isinstance(other, _LazyList):
other = list(other)
elif isinstance(other, list):
pass
else:
return NotImplemented
return list(self) + other
def __radd__(self, other):
if not isinstance(other, list):
return NotImplemented
return other + list(self)
class BaseConverter(object):
"""Base class for converter objects. Apart from the constructor, this
is an abstract class."""
@ -176,6 +140,7 @@ class BaseConverter(object):
"AxisCount",
"BaseGlyphRecordCount",
"LayerRecordCount",
"AxisIndicesList",
]
self.description = description
@ -192,14 +157,21 @@ class BaseConverter(object):
l.append(self.read(reader, font, tableDict))
return l
else:
l = _LazyList()
l.reader = reader.copy()
l.pos = l.reader.pos
l.font = font
l.conv = self
l.recordSize = recordSize
l.extend(_MissingItem([i]) for i in range(count))
def get_read_item():
reader_copy = reader.copy()
pos = reader.pos
def read_item(i):
reader_copy.seek(pos + i * recordSize)
return self.read(reader_copy, font, {})
return read_item
read_item = get_read_item()
l = LazyList(read_item for i in range(count))
reader.advance(count * recordSize)
return l
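The closure pattern above, a positioned reader copy captured by an index-aware thunk, is easiest to see in isolation. Below is a minimal stand-in for `LazyList` (the real one lives in `fontTools.misc.lazyTools`; this sketch only assumes it calls a stored callable with the item index on first access and caches the result):

```python
from collections import UserList

class LazyList(UserList):
    """Items that are callables get called with their index on first access;
    the result is cached back into the list."""
    def __getitem__(self, k):
        if isinstance(k, slice):
            return [self[i] for i in range(*k.indices(len(self)))]
        item = self.data[k]
        if callable(item):
            item = item(k)
            self.data[k] = item
        return item

# Hypothetical record source standing in for a positioned reader copy.
RECORDS = [b"alpha", b"beta", b"gamma"]
loaded = []

def read_item(i):
    loaded.append(i)            # track which items were materialized
    return RECORDS[i].decode()

lazy = LazyList(read_item for _ in range(len(RECORDS)))
assert loaded == []             # nothing read at construction time
assert lazy[1] == "beta"        # only index 1 materialized
assert loaded == [1]
```

The gvar change in this same diff uses `LazyDict` the same way, keyed by glyph name instead of index.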
def getRecordSize(self, reader):
@ -1833,6 +1805,169 @@ class VarDataValue(BaseConverter):
return safeEval(attrs["value"])
class TupleValues:
def read(self, data, font):
return TupleVariation.decompileDeltas_(None, data)[0]
def write(self, writer, font, tableDict, values, repeatIndex=None):
return bytes(TupleVariation.compileDeltaValues_(values))
def xmlRead(self, attrs, content, font):
return safeEval(attrs["value"])
def xmlWrite(self, xmlWriter, font, value, name, attrs):
xmlWriter.simpletag(name, attrs + [("value", value)])
xmlWriter.newline()
class CFF2Index(BaseConverter):
def __init__(
self,
name,
repeat,
aux,
tableClass=None,
*,
itemClass=None,
itemConverterClass=None,
description="",
):
BaseConverter.__init__(
self, name, repeat, aux, tableClass, description=description
)
self._itemClass = itemClass
self._converter = (
itemConverterClass() if itemConverterClass is not None else None
)
def read(self, reader, font, tableDict):
count = reader.readULong()
if count == 0:
return []
offSize = reader.readUInt8()
def getReadArray(reader, offSize):
return {
1: reader.readUInt8Array,
2: reader.readUShortArray,
3: reader.readUInt24Array,
4: reader.readULongArray,
}[offSize]
readArray = getReadArray(reader, offSize)
lazy = font.lazy is not False and count > 8
if not lazy:
offsets = readArray(count + 1)
items = []
lastOffset = offsets.pop(0)
reader.readData(lastOffset - 1) # In case first offset is not 1
for offset in offsets:
assert lastOffset <= offset
item = reader.readData(offset - lastOffset)
if self._itemClass is not None:
obj = self._itemClass()
obj.decompile(item, font, reader.localState)
item = obj
elif self._converter is not None:
item = self._converter.read(item, font)
items.append(item)
lastOffset = offset
return items
else:
def get_read_item():
reader_copy = reader.copy()
offset_pos = reader.pos
data_pos = offset_pos + (count + 1) * offSize - 1
readArray = getReadArray(reader_copy, offSize)
def read_item(i):
reader_copy.seek(offset_pos + i * offSize)
offsets = readArray(2)
reader_copy.seek(data_pos + offsets[0])
item = reader_copy.readData(offsets[1] - offsets[0])
if self._itemClass is not None:
obj = self._itemClass()
obj.decompile(item, font, reader_copy.localState)
item = obj
elif self._converter is not None:
item = self._converter.read(item, font)
return item
return read_item
read_item = get_read_item()
l = LazyList([read_item] * count)
# TODO: Advance reader
return l
def write(self, writer, font, tableDict, values, repeatIndex=None):
items = values
writer.writeULong(len(items))
if not len(items):
return
if self._itemClass is not None:
items = [item.compile(font) for item in items]
elif self._converter is not None:
items = [
self._converter.write(writer, font, tableDict, item, i)
for i, item in enumerate(items)
]
offsets = [len(item) for item in items]
offsets = list(accumulate(offsets, initial=1))
lastOffset = offsets[-1]
offSize = (
1
if lastOffset < 0x100
else 2 if lastOffset < 0x10000 else 3 if lastOffset < 0x1000000 else 4
)
writer.writeUInt8(offSize)
writeArray = {
1: writer.writeUInt8Array,
2: writer.writeUShortArray,
3: writer.writeUInt24Array,
4: writer.writeULongArray,
}[offSize]
writeArray(offsets)
for item in items:
writer.writeData(item)
def xmlRead(self, attrs, content, font):
if self._itemClass is not None:
obj = self._itemClass()
obj.fromXML(None, attrs, content, font)
return obj
elif self._converter is not None:
return self._converter.xmlRead(attrs, content, font)
else:
raise NotImplementedError()
def xmlWrite(self, xmlWriter, font, value, name, attrs):
if self._itemClass is not None:
for i, item in enumerate(value):
item.toXML(xmlWriter, font, [("index", i)], name)
elif self._converter is not None:
for i, item in enumerate(value):
self._converter.xmlWrite(
xmlWriter, font, item, name, attrs + [("index", i)]
)
else:
raise NotImplementedError()
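The INDEX layout this converter reads (uint32 count, uint8 offSize, count+1 offsets that are 1-based into the data region, then the concatenated item data) can be sketched as a tiny parser. This is a simplified stand-in for illustration, not the converter's actual code path:

```python
import struct

def parse_index(data):
    """Minimal CFF2-style INDEX parser: return the list of raw item blobs."""
    (count,) = struct.unpack(">L", data[:4])
    if count == 0:
        return []
    offSize = data[4]
    offsets = []
    pos = 5
    for _ in range(count + 1):
        offsets.append(int.from_bytes(data[pos : pos + offSize], "big"))
        pos += offSize
    # offsets are 1-based into the data region that starts at `pos`
    items = []
    for start, end in zip(offsets, offsets[1:]):
        items.append(data[pos + start - 1 : pos + end - 1])
    return items
```

For example, an index of two items `b"ab"` and `b"cde"` with one-byte offsets is `count=2, offSize=1, offsets 1,3,6, data "abcde"`.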
class LookupFlag(UShort):
def xmlWrite(self, xmlWriter, font, value, name, attrs):
xmlWriter.simpletag(name, attrs + [("value", value)])
@ -1910,6 +2045,8 @@ converterMapping = {
"ExtendMode": ExtendMode,
"CompositeMode": CompositeMode,
"STATFlags": STATFlags,
"TupleList": partial(CFF2Index, itemConverterClass=TupleValues),
"VarCompositeGlyphList": partial(CFF2Index, itemClass=VarCompositeGlyph),
# AAT
"CIDGlyphMap": CIDGlyphMap,
"GlyphCIDMap": GlyphCIDMap,

View File

@ -3168,6 +3168,25 @@ otData = [
),
],
),
(
"ConditionList",
[
(
"uint32",
"ConditionCount",
None,
None,
"Number of condition tables in the ConditionTable array",
),
(
"LOffset",
"ConditionTable",
"ConditionCount",
0,
"Array of offsets to condition tables, from the beginning of the ConditionList table.",
),
],
),
(
"ConditionSet",
[
@ -3183,7 +3202,7 @@ otData = [
"ConditionTable",
"ConditionCount",
0,
"Array of condition tables.",
"Array of offsets to condition tables, from the beginning of the ConditionSet table.",
),
],
),
@ -3214,6 +3233,79 @@ otData = [
),
],
),
(
"ConditionTableFormat2",
[
("uint16", "Format", None, None, "Format, = 2"),
(
"int16",
"DefaultValue",
None,
None,
"Value at default instance.",
),
(
"uint32",
"VarIdx",
None,
None,
"Variation index to vary the value based on current designspace location.",
),
],
),
(
"ConditionTableFormat3",
[
("uint16", "Format", None, None, "Format, = 3"),
(
"uint8",
"ConditionCount",
None,
None,
"Number of condition tables for this conjunction (AND) expression.",
),
(
"Offset24",
"ConditionTable",
"ConditionCount",
0,
"Array of condition tables for this conjunction (AND) expression.",
),
],
),
(
"ConditionTableFormat4",
[
("uint16", "Format", None, None, "Format, = 4"),
(
"uint8",
"ConditionCount",
None,
None,
"Number of condition tables for this disjunction (OR) expression.",
),
(
"Offset24",
"ConditionTable",
"ConditionCount",
0,
"Array of condition tables for this disjunction (OR) expression.",
),
],
),
(
"ConditionTableFormat5",
[
("uint16", "Format", None, None, "Format, = 5"),
(
"Offset24",
"ConditionTable",
None,
None,
"Condition to negate.",
),
],
),
(
"FeatureTableSubstitution",
[
@ -3322,6 +3414,78 @@ otData = [
("VarIdxMapValue", "mapping", "", 0, "Array of compressed data"),
],
),
# MultiVariationStore
(
"SparseVarRegionAxis",
[
("uint16", "AxisIndex", None, None, ""),
("F2Dot14", "StartCoord", None, None, ""),
("F2Dot14", "PeakCoord", None, None, ""),
("F2Dot14", "EndCoord", None, None, ""),
],
),
(
"SparseVarRegion",
[
("uint16", "SparseRegionCount", None, None, ""),
("struct", "SparseVarRegionAxis", "SparseRegionCount", 0, ""),
],
),
(
"SparseVarRegionList",
[
("uint16", "RegionCount", None, None, ""),
("LOffsetTo(SparseVarRegion)", "Region", "RegionCount", 0, ""),
],
),
(
"MultiVarData",
[
("uint8", "Format", None, None, "Set to 1."),
("uint16", "VarRegionCount", None, None, ""),
("uint16", "VarRegionIndex", "VarRegionCount", 0, ""),
("TupleList", "Item", "", 0, ""),
],
),
(
"MultiVarStore",
[
("uint16", "Format", None, None, "Set to 1."),
("LOffset", "SparseVarRegionList", None, None, ""),
("uint16", "MultiVarDataCount", None, None, ""),
("LOffset", "MultiVarData", "MultiVarDataCount", 0, ""),
],
),
# VariableComposites
(
"VARC",
[
(
"Version",
"Version",
None,
None,
"Version of the VARC table; initially = 0x00010000",
),
("LOffset", "Coverage", None, None, ""),
("LOffset", "MultiVarStore", None, None, "(may be NULL)"),
("LOffset", "ConditionList", None, None, "(may be NULL)"),
("LOffset", "AxisIndicesList", None, None, "(may be NULL)"),
("LOffset", "VarCompositeGlyphs", None, None, ""),
],
),
(
"AxisIndicesList",
[
("TupleList", "Item", "", 0, ""),
],
),
(
"VarCompositeGlyphs",
[
("VarCompositeGlyphList", "VarCompositeGlyph", "", None, ""),
],
),
# Glyph advance variations
(
"HVAR",

View File

@ -11,11 +11,13 @@ from functools import reduce
from math import radians
import itertools
from collections import defaultdict, namedtuple
from fontTools.ttLib.tables.TupleVariation import TupleVariation
from fontTools.ttLib.tables.otTraverse import dfs_base_table
from fontTools.misc.arrayTools import quantizeRect
from fontTools.misc.roundTools import otRound
from fontTools.misc.transform import Transform, Identity
from fontTools.misc.transform import Transform, Identity, DecomposedTransform
from fontTools.misc.textTools import bytesjoin, pad, safeEval
from fontTools.misc.vector import Vector
from fontTools.pens.boundsPen import ControlBoundsPen
from fontTools.pens.transformPen import TransformPen
from .otBase import (
@ -25,9 +27,18 @@ from .otBase import (
CountReference,
getFormatSwitchingBaseTableClass,
)
from fontTools.misc.fixedTools import (
fixedToFloat as fi2fl,
floatToFixed as fl2fi,
floatToFixedToStr as fl2str,
strToFixedToFloat as str2fl,
)
from fontTools.feaLib.lookupDebugInfo import LookupDebugInfo, LOOKUP_DEBUG_INFO_KEY
import logging
import struct
import array
import sys
from enum import IntFlag
from typing import TYPE_CHECKING, Iterator, List, Optional, Set
if TYPE_CHECKING:
@ -37,6 +48,389 @@ if TYPE_CHECKING:
log = logging.getLogger(__name__)
class VarComponentFlags(IntFlag):
RESET_UNSPECIFIED_AXES = 1 << 0
HAVE_AXES = 1 << 1
AXIS_VALUES_HAVE_VARIATION = 1 << 2
TRANSFORM_HAS_VARIATION = 1 << 3
HAVE_TRANSLATE_X = 1 << 4
HAVE_TRANSLATE_Y = 1 << 5
HAVE_ROTATION = 1 << 6
HAVE_CONDITION = 1 << 7
HAVE_SCALE_X = 1 << 8
HAVE_SCALE_Y = 1 << 9
HAVE_TCENTER_X = 1 << 10
HAVE_TCENTER_Y = 1 << 11
GID_IS_24BIT = 1 << 12
HAVE_SKEW_X = 1 << 13
HAVE_SKEW_Y = 1 << 14
RESERVED_MASK = (1 << 32) - (1 << 15)
VarTransformMappingValues = namedtuple(
"VarTransformMappingValues",
["flag", "fractionalBits", "scale", "defaultValue"],
)
VAR_TRANSFORM_MAPPING = {
"translateX": VarTransformMappingValues(
VarComponentFlags.HAVE_TRANSLATE_X, 0, 1, 0
),
"translateY": VarTransformMappingValues(
VarComponentFlags.HAVE_TRANSLATE_Y, 0, 1, 0
),
"rotation": VarTransformMappingValues(VarComponentFlags.HAVE_ROTATION, 12, 180, 0),
"scaleX": VarTransformMappingValues(VarComponentFlags.HAVE_SCALE_X, 10, 1, 1),
"scaleY": VarTransformMappingValues(VarComponentFlags.HAVE_SCALE_Y, 10, 1, 1),
"skewX": VarTransformMappingValues(VarComponentFlags.HAVE_SKEW_X, 12, -180, 0),
"skewY": VarTransformMappingValues(VarComponentFlags.HAVE_SKEW_Y, 12, 180, 0),
"tCenterX": VarTransformMappingValues(VarComponentFlags.HAVE_TCENTER_X, 0, 1, 0),
"tCenterY": VarTransformMappingValues(VarComponentFlags.HAVE_TCENTER_Y, 0, 1, 0),
}
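The mapping above pairs each transform attribute with a flag bit, a fractional-bit count, and a scale: a component is stored as a signed 16-bit fixed-point encoding of `value / scale`. A minimal standalone sketch of that packing (the helper names here are illustrative, not fontTools API):

```python
# Sketch of the fixed-point packing used for transform components.
# encode/decode mirror fl2fi/fi2fl with the per-attribute scale applied;
# the function names are hypothetical, for illustration only.

def encode_component(value, fractional_bits, scale):
    """Divide out the scale, then convert to fixed with the given fraction bits."""
    return round(value / scale * (1 << fractional_bits))

def decode_component(fixed, fractional_bits, scale):
    """Convert back from fixed-point and reapply the scale."""
    return fixed / (1 << fractional_bits) * scale

# rotation uses 12 fractional bits and a scale of 180,
# so 90 degrees -> 0.5 -> 2048 in the stored int16
assert encode_component(90, 12, 180) == 2048
assert decode_component(2048, 12, 180) == 90.0
```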
# Probably should be somewhere in fontTools.misc
_packer = {
1: lambda v: struct.pack(">B", v),
2: lambda v: struct.pack(">H", v),
3: lambda v: struct.pack(">L", v)[1:],
4: lambda v: struct.pack(">L", v),
}
_unpacker = {
1: lambda v: struct.unpack(">B", v)[0],
2: lambda v: struct.unpack(">H", v)[0],
3: lambda v: struct.unpack(">L", b"\0" + v)[0],
4: lambda v: struct.unpack(">L", v)[0],
}
def _read_uint32var(data, i):
"""Read a variable-length number from data starting at index i.
Return the number and the next index.
"""
b0 = data[i]
if b0 < 0x80:
return b0, i + 1
elif b0 < 0xC0:
return (b0 - 0x80) << 8 | data[i + 1], i + 2
elif b0 < 0xE0:
return (b0 - 0xC0) << 16 | data[i + 1] << 8 | data[i + 2], i + 3
elif b0 < 0xF0:
return (b0 - 0xE0) << 24 | data[i + 1] << 16 | data[i + 2] << 8 | data[
i + 3
], i + 4
else:
return (b0 - 0xF0) << 32 | data[i + 1] << 24 | data[i + 2] << 16 | data[
i + 3
] << 8 | data[i + 4], i + 5
def _write_uint32var(v):
"""Write a variable-length number.
Return the data.
"""
if v < 0x80:
return struct.pack(">B", v)
elif v < 0x4000:
return struct.pack(">H", (v | 0x8000))
elif v < 0x200000:
return struct.pack(">L", (v | 0xC00000))[1:]
elif v < 0x10000000:
return struct.pack(">L", (v | 0xE0000000))
else:
return struct.pack(">B", 0xF0) + struct.pack(">L", v)
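`_read_uint32var` and `_write_uint32var` implement a one- to five-byte variable-length integer in which the high bits of the first byte select the form. The round trip can be exercised in isolation with standalone copies of the two helpers (renamed here so as not to imply the private fontTools names):

```python
import struct

# Standalone restatement of the variable-length uint encoding above:
# the first byte's high bits pick a 1- to 5-byte form.

def write_uint32var(v):
    if v < 0x80:
        return struct.pack(">B", v)
    elif v < 0x4000:
        return struct.pack(">H", v | 0x8000)
    elif v < 0x200000:
        return struct.pack(">L", v | 0xC00000)[1:]
    elif v < 0x10000000:
        return struct.pack(">L", v | 0xE0000000)
    else:
        return struct.pack(">B", 0xF0) + struct.pack(">L", v)

def read_uint32var(data, i):
    b0 = data[i]
    if b0 < 0x80:
        return b0, i + 1
    elif b0 < 0xC0:
        return (b0 - 0x80) << 8 | data[i + 1], i + 2
    elif b0 < 0xE0:
        return (b0 - 0xC0) << 16 | data[i + 1] << 8 | data[i + 2], i + 3
    elif b0 < 0xF0:
        v = (b0 - 0xE0) << 24 | data[i + 1] << 16 | data[i + 2] << 8 | data[i + 3]
        return v, i + 4
    else:
        v = ((b0 - 0xF0) << 32 | data[i + 1] << 24 | data[i + 2] << 16
             | data[i + 3] << 8 | data[i + 4])
        return v, i + 5

# Each boundary value round-trips and uses the expected number of bytes.
for value, size in [(0x7F, 1), (0x80, 2), (0x3FFF, 2), (0x4000, 3),
                    (0x1FFFFF, 3), (0x200000, 4), (0xFFFFFFF, 4),
                    (0x10000000, 5), (0xFFFFFFFF, 5)]:
    encoded = write_uint32var(value)
    assert len(encoded) == size
    decoded, nxt = read_uint32var(encoded, 0)
    assert decoded == value and nxt == size
```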
class VarComponent:
def __init__(self):
self.populateDefaults()
def populateDefaults(self, propagator=None):
self.flags = 0
self.glyphName = None
self.conditionIndex = None
self.axisIndicesIndex = None
self.axisValues = ()
self.axisValuesVarIndex = NO_VARIATION_INDEX
self.transformVarIndex = NO_VARIATION_INDEX
self.transform = DecomposedTransform()
def decompile(self, data, font, localState):
i = 0
self.flags, i = _read_uint32var(data, i)
flags = self.flags
gidSize = 3 if flags & VarComponentFlags.GID_IS_24BIT else 2
glyphID = _unpacker[gidSize](data[i : i + gidSize])
i += gidSize
self.glyphName = font.glyphOrder[glyphID]
if flags & VarComponentFlags.HAVE_CONDITION:
self.conditionIndex, i = _read_uint32var(data, i)
if flags & VarComponentFlags.HAVE_AXES:
self.axisIndicesIndex, i = _read_uint32var(data, i)
else:
self.axisIndicesIndex = None
if self.axisIndicesIndex is None:
numAxes = 0
else:
axisIndices = localState["AxisIndicesList"].Item[self.axisIndicesIndex]
numAxes = len(axisIndices)
if flags & VarComponentFlags.HAVE_AXES:
axisValues, i = TupleVariation.decompileDeltas_(numAxes, data, i)
self.axisValues = tuple(fi2fl(v, 14) for v in axisValues)
else:
self.axisValues = ()
assert len(self.axisValues) == numAxes
if flags & VarComponentFlags.AXIS_VALUES_HAVE_VARIATION:
self.axisValuesVarIndex, i = _read_uint32var(data, i)
else:
self.axisValuesVarIndex = NO_VARIATION_INDEX
if flags & VarComponentFlags.TRANSFORM_HAS_VARIATION:
self.transformVarIndex, i = _read_uint32var(data, i)
else:
self.transformVarIndex = NO_VARIATION_INDEX
self.transform = DecomposedTransform()
def read_transform_component(values):
nonlocal i
if flags & values.flag:
v = (
fi2fl(
struct.unpack(">h", data[i : i + 2])[0], values.fractionalBits
)
* values.scale
)
i += 2
return v
else:
return values.defaultValue
for attr_name, mapping_values in VAR_TRANSFORM_MAPPING.items():
value = read_transform_component(mapping_values)
setattr(self.transform, attr_name, value)
if not (flags & VarComponentFlags.HAVE_SCALE_Y):
self.transform.scaleY = self.transform.scaleX
n = flags & VarComponentFlags.RESERVED_MASK
while n:
_, i = _read_uint32var(data, i)
n &= n - 1
return data[i:]
def compile(self, font):
data = []
flags = self.flags
glyphID = font.getGlyphID(self.glyphName)
if glyphID > 65535:
flags |= VarComponentFlags.GID_IS_24BIT
data.append(_packer[3](glyphID))
else:
flags &= ~VarComponentFlags.GID_IS_24BIT
data.append(_packer[2](glyphID))
if self.conditionIndex is not None:
flags |= VarComponentFlags.HAVE_CONDITION
data.append(_write_uint32var(self.conditionIndex))
numAxes = len(self.axisValues)
if numAxes:
flags |= VarComponentFlags.HAVE_AXES
data.append(_write_uint32var(self.axisIndicesIndex))
data.append(
TupleVariation.compileDeltaValues_(
[fl2fi(v, 14) for v in self.axisValues]
)
)
else:
flags &= ~VarComponentFlags.HAVE_AXES
if self.axisValuesVarIndex != NO_VARIATION_INDEX:
flags |= VarComponentFlags.AXIS_VALUES_HAVE_VARIATION
data.append(_write_uint32var(self.axisValuesVarIndex))
else:
flags &= ~VarComponentFlags.AXIS_VALUES_HAVE_VARIATION
if self.transformVarIndex != NO_VARIATION_INDEX:
flags |= VarComponentFlags.TRANSFORM_HAS_VARIATION
data.append(_write_uint32var(self.transformVarIndex))
else:
flags &= ~VarComponentFlags.TRANSFORM_HAS_VARIATION
def write_transform_component(value, values):
if flags & values.flag:
return struct.pack(
">h", fl2fi(value / values.scale, values.fractionalBits)
)
else:
return b""
for attr_name, mapping_values in VAR_TRANSFORM_MAPPING.items():
value = getattr(self.transform, attr_name)
data.append(write_transform_component(value, mapping_values))
return _write_uint32var(flags) + bytesjoin(data)
def toXML(self, writer, ttFont, attrs):
writer.begintag("VarComponent", attrs)
writer.newline()
def write(name, value, attrs=()):
if value is not None:
writer.simpletag(name, (("value", value),) + attrs)
writer.newline()
write("glyphName", self.glyphName)
if self.conditionIndex is not None:
write("conditionIndex", self.conditionIndex)
if self.axisIndicesIndex is not None:
write("axisIndicesIndex", self.axisIndicesIndex)
if (
self.axisIndicesIndex is not None
or self.flags & VarComponentFlags.RESET_UNSPECIFIED_AXES
):
if self.flags & VarComponentFlags.RESET_UNSPECIFIED_AXES:
attrs = (("resetUnspecifiedAxes", 1),)
else:
attrs = ()
write("axisValues", [float(fl2str(v, 14)) for v in self.axisValues], attrs)
if self.axisValuesVarIndex != NO_VARIATION_INDEX:
write("axisValuesVarIndex", self.axisValuesVarIndex)
if self.transformVarIndex != NO_VARIATION_INDEX:
write("transformVarIndex", self.transformVarIndex)
# Only write transform components that are specified in the
# flags, even if they are the default value.
for attr_name, mapping in VAR_TRANSFORM_MAPPING.items():
if not (self.flags & mapping.flag):
continue
v = getattr(self.transform, attr_name)
write(attr_name, fl2str(v, mapping.fractionalBits))
writer.endtag("VarComponent")
writer.newline()
def fromXML(self, name, attrs, content, ttFont):
content = [c for c in content if isinstance(c, tuple)]
self.populateDefaults()
for name, attrs, content in content:
assert not content
v = attrs["value"]
if name == "glyphName":
self.glyphName = v
elif name == "conditionIndex":
self.conditionIndex = safeEval(v)
elif name == "axisIndicesIndex":
self.axisIndicesIndex = safeEval(v)
elif name == "axisValues":
self.axisValues = tuple(str2fl(v, 14) for v in safeEval(v))
if safeEval(attrs.get("resetUnspecifiedAxes", "0")):
self.flags |= VarComponentFlags.RESET_UNSPECIFIED_AXES
elif name == "axisValuesVarIndex":
self.axisValuesVarIndex = safeEval(v)
elif name == "transformVarIndex":
self.transformVarIndex = safeEval(v)
elif name in VAR_TRANSFORM_MAPPING:
setattr(
self.transform,
name,
safeEval(v),
)
self.flags |= VAR_TRANSFORM_MAPPING[name].flag
else:
assert False, name
def applyTransformDeltas(self, deltas):
i = 0
def read_transform_component_delta(values):
nonlocal i
if self.flags & values.flag:
v = fi2fl(deltas[i], values.fractionalBits) * values.scale
i += 1
return v
else:
return 0
for attr_name, mapping_values in VAR_TRANSFORM_MAPPING.items():
value = read_transform_component_delta(mapping_values)
setattr(
self.transform, attr_name, getattr(self.transform, attr_name) + value
)
if not (self.flags & VarComponentFlags.HAVE_SCALE_Y):
self.transform.scaleY = self.transform.scaleX
assert i == len(deltas), (i, len(deltas))
def __eq__(self, other):
if type(self) != type(other):
return NotImplemented
return self.__dict__ == other.__dict__
def __ne__(self, other):
result = self.__eq__(other)
return result if result is NotImplemented else not result
class VarCompositeGlyph:
def __init__(self, components=None):
self.components = components if components is not None else []
def decompile(self, data, font, localState):
self.components = []
while data:
component = VarComponent()
data = component.decompile(data, font, localState)
self.components.append(component)
def compile(self, font):
data = []
for component in self.components:
data.append(component.compile(font))
return bytesjoin(data)
def toXML(self, xmlWriter, font, attrs, name):
xmlWriter.begintag("VarCompositeGlyph", attrs)
xmlWriter.newline()
for i, component in enumerate(self.components):
component.toXML(xmlWriter, font, [("index", i)])
xmlWriter.endtag("VarCompositeGlyph")
xmlWriter.newline()
def fromXML(self, name, attrs, content, font):
content = [c for c in content if isinstance(c, tuple)]
for name, attrs, content in content:
assert name == "VarComponent"
component = VarComponent()
component.fromXML(name, attrs, content, font)
self.components.append(component)
class AATStateTable(object):
def __init__(self):
self.GlyphClasses = {} # GlyphID --> GlyphClass


@@ -4,7 +4,12 @@ from fontTools.misc.configTools import AbstractConfig
from fontTools.misc.textTools import Tag, byteord, tostr
from fontTools.misc.loggingTools import deprecateArgument
from fontTools.ttLib import TTLibError
from fontTools.ttLib.ttGlyphSet import _TTGlyph, _TTGlyphSetCFF, _TTGlyphSetGlyf
from fontTools.ttLib.ttGlyphSet import (
_TTGlyph,
_TTGlyphSetCFF,
_TTGlyphSetGlyf,
_TTGlyphSetVARC,
)
from fontTools.ttLib.sfnt import SFNTReader, SFNTWriter
from io import BytesIO, StringIO, UnsupportedOperation
import os
@@ -537,7 +542,7 @@ class TTFont(object):
#
# Not enough names found in the 'post' table.
# Can happen when 'post' format 1 is improperly used on a font that
# has more than 258 glyphs (the lenght of 'standardGlyphOrder').
# has more than 258 glyphs (the length of 'standardGlyphOrder').
#
log.warning(
"Not enough names found in the 'post' table, generating them from cmap instead"
@@ -764,12 +769,16 @@ class TTFont(object):
location = None
if location and not normalized:
location = self.normalizeLocation(location)
glyphSet = None
if ("CFF " in self or "CFF2" in self) and (preferCFF or "glyf" not in self):
return _TTGlyphSetCFF(self, location)
glyphSet = _TTGlyphSetCFF(self, location)
elif "glyf" in self:
return _TTGlyphSetGlyf(self, location, recalcBounds=recalcBounds)
glyphSet = _TTGlyphSetGlyf(self, location, recalcBounds=recalcBounds)
else:
raise TTLibError("Font contains no outlines")
if "VARC" in self:
glyphSet = _TTGlyphSetVARC(self, location, glyphSet)
return glyphSet
def normalizeLocation(self, location):
"""Normalize a ``location`` from the font's defined axes space (also


@@ -3,11 +3,12 @@
from abc import ABC, abstractmethod
from collections.abc import Mapping
from contextlib import contextmanager
from copy import copy
from copy import copy, deepcopy
from types import SimpleNamespace
from fontTools.misc.fixedTools import otRound
from fontTools.misc.vector import Vector
from fontTools.misc.fixedTools import otRound, fixedToFloat as fi2fl
from fontTools.misc.loggingTools import deprecateFunction
from fontTools.misc.transform import Transform
from fontTools.misc.transform import Transform, DecomposedTransform
from fontTools.pens.transformPen import TransformPen, TransformPointPen
from fontTools.pens.recordingPen import (
DecomposingRecordingPen,
@@ -103,6 +104,16 @@ class _TTGlyphSetGlyf(_TTGlyphSet):
return _TTGlyphGlyf(self, glyphName, recalcBounds=self.recalcBounds)
class _TTGlyphSetGlyf(_TTGlyphSet):
def __init__(self, font, location, recalcBounds=True):
self.glyfTable = font["glyf"]
super().__init__(font, location, self.glyfTable, recalcBounds=recalcBounds)
self.gvarTable = font.get("gvar")
def __getitem__(self, glyphName):
return _TTGlyphGlyf(self, glyphName, recalcBounds=self.recalcBounds)
class _TTGlyphSetCFF(_TTGlyphSet):
def __init__(self, font, location):
tableTag = "CFF2" if "CFF2" in font else "CFF "
@@ -123,6 +134,19 @@ class _TTGlyphSetCFF(_TTGlyphSet):
return _TTGlyphCFF(self, glyphName)
class _TTGlyphSetVARC(_TTGlyphSet):
def __init__(self, font, location, glyphSet):
self.glyphSet = glyphSet
super().__init__(font, location, glyphSet)
self.varcTable = font["VARC"].table
def __getitem__(self, glyphName):
varc = self.varcTable
if glyphName not in varc.Coverage.glyphs:
return self.glyphSet[glyphName]
return _TTGlyphVARC(self, glyphName)
class _TTGlyph(ABC):
"""Glyph object that supports the Pen protocol, meaning that it has
.draw() and .drawPoints() methods that take a pen object as their only
@@ -178,10 +202,6 @@ class _TTGlyphGlyf(_TTGlyph):
if depth:
offset = 0 # Offset should only apply at top-level
if glyph.isVarComposite():
self._drawVarComposite(glyph, pen, False)
return
glyph.draw(pen, self.glyphSet.glyfTable, offset)
def drawPoints(self, pen):
@@ -194,35 +214,8 @@
if depth:
offset = 0 # Offset should only apply at top-level
if glyph.isVarComposite():
self._drawVarComposite(glyph, pen, True)
return
glyph.drawPoints(pen, self.glyphSet.glyfTable, offset)
def _drawVarComposite(self, glyph, pen, isPointPen):
from fontTools.ttLib.tables._g_l_y_f import (
VarComponentFlags,
VAR_COMPONENT_TRANSFORM_MAPPING,
)
for comp in glyph.components:
with self.glyphSet.pushLocation(
comp.location, comp.flags & VarComponentFlags.RESET_UNSPECIFIED_AXES
):
try:
pen.addVarComponent(
comp.glyphName, comp.transform, self.glyphSet.rawLocation
)
except AttributeError:
t = comp.transform.toTransform()
if isPointPen:
tPen = TransformPointPen(pen, t)
self.glyphSet[comp.glyphName].drawPoints(tPen)
else:
tPen = TransformPen(pen, t)
self.glyphSet[comp.glyphName].draw(tPen)
def _getGlyphAndOffset(self):
if self.glyphSet.location and self.glyphSet.gvarTable is not None:
glyph = self._getGlyphInstance()
@@ -283,6 +276,128 @@ class _TTGlyphCFF(_TTGlyph):
self.glyphSet.charStrings[self.name].draw(pen, self.glyphSet.blender)
def _evaluateCondition(condition, fvarAxes, location, instancer):
if condition.Format == 1:
# ConditionAxisRange
axisIndex = condition.AxisIndex
axisTag = fvarAxes[axisIndex].axisTag
axisValue = location.get(axisTag, 0)
minValue = condition.FilterRangeMinValue
maxValue = condition.FilterRangeMaxValue
return minValue <= axisValue <= maxValue
elif condition.Format == 2:
# ConditionValue
value = condition.DefaultValue
value += instancer[condition.VarIdx][0]
return value > 0
elif condition.Format == 3:
# ConditionAnd
for subcondition in condition.ConditionTable:
if not _evaluateCondition(subcondition, fvarAxes, location, instancer):
return False
return True
elif condition.Format == 4:
# ConditionOr
for subcondition in condition.ConditionTable:
if _evaluateCondition(subcondition, fvarAxes, location, instancer):
return True
return False
elif condition.Format == 5:
# ConditionNegate
return not _evaluateCondition(
condition.conditionTable, fvarAxes, location, instancer
)
else:
return False # Unknown condition format
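A simplified standalone mirror of the evaluator above, covering the axis-range (Format 1), AND (Format 3), OR (Format 4), and negate (Format 5) cases; Format 2 is omitted here because it needs a live VarStore instancer (an assumption made to keep the sketch self-contained):

```python
from types import SimpleNamespace as NS

def evaluate(cond, axis_tags, location):
    # Simplified mirror of _evaluateCondition: Format 1 is an axis-range
    # test against the normalized location; 3/4 combine sub-conditions
    # with AND/OR; 5 negates. Unknown formats never match.
    if cond.Format == 1:
        value = location.get(axis_tags[cond.AxisIndex], 0)
        return cond.FilterRangeMinValue <= value <= cond.FilterRangeMaxValue
    elif cond.Format == 3:
        return all(evaluate(c, axis_tags, location) for c in cond.ConditionTable)
    elif cond.Format == 4:
        return any(evaluate(c, axis_tags, location) for c in cond.ConditionTable)
    elif cond.Format == 5:
        return not evaluate(cond.conditionTable, axis_tags, location)
    return False

axis_tags = ["wght", "wdth"]
bold = NS(Format=1, AxisIndex=0, FilterRangeMinValue=0.5, FilterRangeMaxValue=1.0)
narrow = NS(Format=1, AxisIndex=1, FilterRangeMinValue=-1.0, FilterRangeMaxValue=-0.2)
both = NS(Format=3, ConditionTable=[bold, narrow])

assert evaluate(both, axis_tags, {"wght": 0.8, "wdth": -0.5})
assert not evaluate(both, axis_tags, {"wght": 0.8})  # wdth defaults to 0
```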
class _TTGlyphVARC(_TTGlyph):
def _draw(self, pen, isPointPen):
"""Draw the glyph onto ``pen``. See fontTools.pens.basePen for details
how that works.
"""
from fontTools.ttLib.tables.otTables import (
VarComponentFlags,
NO_VARIATION_INDEX,
)
glyphSet = self.glyphSet
varc = glyphSet.varcTable
idx = varc.Coverage.glyphs.index(self.name)
glyph = varc.VarCompositeGlyphs.VarCompositeGlyph[idx]
from fontTools.varLib.multiVarStore import MultiVarStoreInstancer
from fontTools.varLib.varStore import VarStoreInstancer
fvarAxes = glyphSet.font["fvar"].axes
instancer = MultiVarStoreInstancer(
varc.MultiVarStore, fvarAxes, self.glyphSet.location
)
for comp in glyph.components:
if comp.flags & VarComponentFlags.HAVE_CONDITION:
condition = varc.ConditionList.ConditionTable[comp.conditionIndex]
if not _evaluateCondition(
condition, fvarAxes, self.glyphSet.location, instancer
):
continue
location = {}
if comp.axisIndicesIndex is not None:
axisIndices = varc.AxisIndicesList.Item[comp.axisIndicesIndex]
axisValues = Vector(comp.axisValues)
if comp.axisValuesVarIndex != NO_VARIATION_INDEX:
axisValues += fi2fl(instancer[comp.axisValuesVarIndex], 14)
assert len(axisIndices) == len(axisValues), (
len(axisIndices),
len(axisValues),
)
location = {
fvarAxes[i].axisTag: v for i, v in zip(axisIndices, axisValues)
}
if comp.transformVarIndex != NO_VARIATION_INDEX:
deltas = instancer[comp.transformVarIndex]
comp = deepcopy(comp)
comp.applyTransformDeltas(deltas)
transform = comp.transform
reset = comp.flags & VarComponentFlags.RESET_UNSPECIFIED_AXES
with self.glyphSet.glyphSet.pushLocation(location, reset):
with self.glyphSet.pushLocation(location, reset):
shouldDecompose = self.name == comp.glyphName
if not shouldDecompose:
try:
pen.addVarComponent(
comp.glyphName, transform, self.glyphSet.rawLocation
)
except AttributeError:
shouldDecompose = True
if shouldDecompose:
t = transform.toTransform()
compGlyphSet = (
self.glyphSet
if comp.glyphName != self.name
else glyphSet.glyphSet
)
g = compGlyphSet[comp.glyphName]
if isPointPen:
tPen = TransformPointPen(pen, t)
g.drawPoints(tPen)
else:
tPen = TransformPen(pen, t)
g.draw(tPen)
def draw(self, pen):
self._draw(pen, False)
def drawPoints(self, pen):
self._draw(pen, True)
def _setCoordinates(glyph, coord, glyfTable, *, recalcBounds=True):
# Handle phantom points for (left, right, top, bottom) positions.
assert len(coord) >= 4
@@ -300,11 +415,6 @@ def _setCoordinates(glyph, coord, glyfTable, *, recalcBounds=True):
for p, comp in zip(coord, glyph.components):
if hasattr(comp, "x"):
comp.x, comp.y = p
elif glyph.isVarComposite():
glyph.components = [copy(comp) for comp in glyph.components] # Shallow copy
for comp in glyph.components:
coord = comp.setCoordinates(coord)
assert not coord
elif glyph.numberOfContours == 0:
assert len(coord) == 0
else:


@@ -1017,8 +1017,6 @@ class WOFF2GlyfTable(getTableClass("glyf")):
return
elif glyph.isComposite():
self._encodeComponents(glyph)
elif glyph.isVarComposite():
raise NotImplementedError
else:
self._encodeCoordinates(glyph)
self._encodeOverlapSimpleFlag(glyph, glyphID)


@@ -375,7 +375,7 @@ def guessFileType(fileName):
def parseOptions(args):
rawOptions, files = getopt.getopt(
rawOptions, files = getopt.gnu_getopt(
args,
"ld:o:fvqht:x:sgim:z:baey:",
[


@@ -845,9 +845,10 @@ def _add_CFF2(varFont, model, master_fonts):
glyphOrder = varFont.getGlyphOrder()
if "CFF2" not in varFont:
from .cff import convertCFFtoCFF2
from fontTools.cffLib.CFFToCFF2 import convertCFFToCFF2
convertCFFToCFF2(varFont)
convertCFFtoCFF2(varFont)
ordered_fonts_list = model.reorderMasters(master_fonts, model.reverseMapping)
# re-ordering the master list simplifies building the CFF2 data item lists.
merge_region_fonts(varFont, model, ordered_fonts_list, glyphOrder)


@@ -10,6 +10,13 @@ def buildVarRegionAxis(axisSupport):
return self
def buildSparseVarRegionAxis(axisIndex, axisSupport):
self = ot.SparseVarRegionAxis()
self.AxisIndex = axisIndex
self.StartCoord, self.PeakCoord, self.EndCoord = [float(v) for v in axisSupport]
return self
def buildVarRegion(support, axisTags):
assert all(tag in axisTags for tag in support.keys()), (
"Unknown axis tag found.",
@@ -23,6 +30,24 @@ def buildVarRegion(support, axisTags):
return self
def buildSparseVarRegion(support, axisTags):
assert all(tag in axisTags for tag in support.keys()), (
"Unknown axis tag found.",
support,
axisTags,
)
self = ot.SparseVarRegion()
self.SparseVarRegionAxis = []
for i, tag in enumerate(axisTags):
if tag not in support:
continue
self.SparseVarRegionAxis.append(
buildSparseVarRegionAxis(i, support.get(tag, (0, 0, 0)))
)
self.SparseRegionCount = len(self.SparseVarRegionAxis)
return self
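`buildSparseVarRegion` emits a record only for axes that actually occur in the support, tagging each with its index into the full axis list. The selection reduces to this plain-data sketch (hypothetical helper, not the fontTools API):

```python
def sparse_region(support, axis_tags):
    # Keep only axes present in the support, paired with their fvar index,
    # as (axisIndex, (start, peak, end)) tuples.
    return [(i, support[tag]) for i, tag in enumerate(axis_tags) if tag in support]

axis_tags = ["wght", "wdth", "opsz"]
support = {"opsz": (0.0, 1.0, 1.0)}

# Only the one axis in the support gets a record, with its global index.
assert sparse_region(support, axis_tags) == [(2, (0.0, 1.0, 1.0))]
```

The payoff is the same as in the table format: a region touching one axis of a many-axis font stores one record instead of one per axis.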
def buildVarRegionList(supports, axisTags):
self = ot.VarRegionList()
self.RegionAxisCount = len(axisTags)
@@ -33,6 +58,16 @@ def buildVarRegionList(supports, axisTags):
return self
def buildSparseVarRegionList(supports, axisTags):
self = ot.SparseVarRegionList()
self.RegionAxisCount = len(axisTags)
self.Region = []
for support in supports:
self.Region.append(buildSparseVarRegion(support, axisTags))
self.RegionCount = len(self.Region)
return self
def _reorderItem(lst, mapping):
return [lst[i] for i in mapping]
@@ -130,6 +165,29 @@ def buildVarStore(varRegionList, varDataList):
return self
def buildMultiVarData(varRegionIndices, items):
self = ot.MultiVarData()
self.Format = 1
self.VarRegionIndex = list(varRegionIndices)
regionCount = self.VarRegionCount = len(self.VarRegionIndex)
records = self.Item = []
if items:
for item in items:
assert len(item) == regionCount
records.append(list(item))
self.ItemCount = len(self.Item)
return self
def buildMultiVarStore(varRegionList, multiVarDataList):
self = ot.MultiVarStore()
self.Format = 1
self.SparseVarRegionList = varRegionList
self.MultiVarData = list(multiVarDataList)
self.MultiVarDataCount = len(self.MultiVarData)
return self
# Variation helpers


@@ -16,6 +16,7 @@ from fontTools.cffLib.specializer import specializeCommands, commandsToProgram
from fontTools.ttLib import newTable
from fontTools import varLib
from fontTools.varLib.models import allEqual
from fontTools.misc.loggingTools import deprecateFunction
from fontTools.misc.roundTools import roundFunc
from fontTools.misc.psCharStrings import T2CharString, T2OutlineExtractor
from fontTools.pens.t2CharStringPen import T2CharStringPen
@@ -49,93 +50,11 @@ def addCFFVarStore(varFont, varModel, varDataList, masterSupports):
fontDict.Private.vstore = topDict.VarStore
def lib_convertCFFToCFF2(cff, otFont):
# This assumes a decompiled CFF table.
cff2GetGlyphOrder = cff.otFont.getGlyphOrder
topDictData = TopDictIndex(None, cff2GetGlyphOrder, None)
topDictData.items = cff.topDictIndex.items
cff.topDictIndex = topDictData
topDict = topDictData[0]
if hasattr(topDict, "Private"):
privateDict = topDict.Private
else:
privateDict = None
opOrder = buildOrder(topDictOperators2)
topDict.order = opOrder
topDict.cff2GetGlyphOrder = cff2GetGlyphOrder
if not hasattr(topDict, "FDArray"):
fdArray = topDict.FDArray = FDArrayIndex()
fdArray.strings = None
fdArray.GlobalSubrs = topDict.GlobalSubrs
topDict.GlobalSubrs.fdArray = fdArray
charStrings = topDict.CharStrings
if charStrings.charStringsAreIndexed:
charStrings.charStringsIndex.fdArray = fdArray
else:
charStrings.fdArray = fdArray
fontDict = FontDict()
fontDict.setCFF2(True)
fdArray.append(fontDict)
fontDict.Private = privateDict
privateOpOrder = buildOrder(privateDictOperators2)
if privateDict is not None:
for entry in privateDictOperators:
key = entry[1]
if key not in privateOpOrder:
if key in privateDict.rawDict:
# print "Removing private dict", key
del privateDict.rawDict[key]
if hasattr(privateDict, key):
delattr(privateDict, key)
# print "Removing privateDict attr", key
else:
# clean up the PrivateDicts in the fdArray
fdArray = topDict.FDArray
privateOpOrder = buildOrder(privateDictOperators2)
for fontDict in fdArray:
fontDict.setCFF2(True)
for key in list(fontDict.rawDict.keys()):
if key not in fontDict.order:
del fontDict.rawDict[key]
if hasattr(fontDict, key):
delattr(fontDict, key)
privateDict = fontDict.Private
for entry in privateDictOperators:
key = entry[1]
if key not in privateOpOrder:
if key in privateDict.rawDict:
# print "Removing private dict", key
del privateDict.rawDict[key]
if hasattr(privateDict, key):
delattr(privateDict, key)
# print "Removing privateDict attr", key
# Now delete up the deprecated topDict operators from CFF 1.0
for entry in topDictOperators:
key = entry[1]
if key not in opOrder:
if key in topDict.rawDict:
del topDict.rawDict[key]
if hasattr(topDict, key):
delattr(topDict, key)
# At this point, the Subrs and Charstrings are all still T2Charstring class
# easiest to fix this by compiling, then decompiling again
cff.major = 2
file = BytesIO()
cff.compile(file, otFont, isCFF2=True)
file.seek(0)
cff.decompile(file, otFont, isCFF2=True)
@deprecateFunction("Use fontTools.cffLib.CFFToCFF2.convertCFFToCFF2 instead.")
def convertCFFtoCFF2(varFont):
# Convert base font to a single master CFF2 font.
cffTable = varFont["CFF "]
lib_convertCFFToCFF2(cffTable.cff, varFont)
newCFF2 = newTable("CFF2")
newCFF2.cff = cffTable.cff
varFont["CFF2"] = newCFF2
del varFont["CFF "]
from fontTools.cffLib.CFFToCFF2 import convertCFFToCFF2
return convertCFFToCFF2(varFont)
def conv_to_int(num):


@@ -89,7 +89,7 @@ from fontTools.misc.fixedTools import (
otRound,
)
from fontTools.varLib.models import normalizeValue, piecewiseLinearMap
from fontTools.ttLib import TTFont
from fontTools.ttLib import TTFont, newTable
from fontTools.ttLib.tables.TupleVariation import TupleVariation
from fontTools.ttLib.tables import _g_l_y_f
from fontTools import varLib
@@ -97,6 +97,13 @@ from fontTools import varLib
# we import the `subset` module because we use the `prune_lookups` method on the GSUB
# table class, and that method is only defined dynamically upon importing `subset`
from fontTools import subset # noqa: F401
from fontTools.cffLib import privateDictOperators2
from fontTools.cffLib.specializer import (
programToCommands,
commandsToProgram,
specializeCommands,
generalizeCommands,
)
from fontTools.varLib import builder
from fontTools.varLib.mvar import MVAR_ENTRIES
from fontTools.varLib.merger import MutatorMerger
@@ -104,6 +111,7 @@ from fontTools.varLib.instancer import names
from .featureVars import instantiateFeatureVariations
from fontTools.misc.cliTools import makeOutputFileName
from fontTools.varLib.instancer import solver
from fontTools.ttLib.tables.otTables import VarComponentFlags
import collections
import dataclasses
from contextlib import contextmanager
@@ -458,6 +466,42 @@ class OverlapMode(IntEnum):
REMOVE_AND_IGNORE_ERRORS = 3
def instantiateVARC(varfont, axisLimits):
log.info("Instantiating VARC tables")
# TODO(behdad) My confidence in this function is rather low;
# It needs more testing. Especially with partial-instancing,
# I don't think it currently works.
varc = varfont["VARC"].table
fvarAxes = varfont["fvar"].axes if "fvar" in varfont else []
location = axisLimits.pinnedLocation()
axisMap = [i for i, axis in enumerate(fvarAxes) if axis.axisTag not in location]
reverseAxisMap = {i: j for j, i in enumerate(axisMap)}
if varc.AxisIndicesList:
axisIndicesList = varc.AxisIndicesList.Item
for i, axisIndices in enumerate(axisIndicesList):
if any(fvarAxes[j].axisTag in axisLimits for j in axisIndices):
raise NotImplementedError(
"Instancing across VarComponent axes is not supported."
)
axisIndicesList[i] = [reverseAxisMap[j] for j in axisIndices]
store = varc.MultiVarStore
if store:
for region in store.SparseVarRegionList.Region:
newRegionAxis = []
for regionRecord in region.SparseVarRegionAxis:
tag = fvarAxes[regionRecord.AxisIndex].axisTag
if tag in axisLimits:
raise NotImplementedError(
"Instancing across VarComponent axes is not supported."
)
regionRecord.AxisIndex = reverseAxisMap[regionRecord.AxisIndex]
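The axis remapping in `instantiateVARC` can be illustrated on plain data: after pinning an axis, the surviving fvar indices are listed in order, and `reverseAxisMap` sends each old index to its new position (the helper name here is hypothetical):

```python
def remap_axes(axis_tags, pinned):
    # axis_map: surviving old indices, in order.
    # reverse: old index -> new index after the pinned axes are removed.
    axis_map = [i for i, tag in enumerate(axis_tags) if tag not in pinned]
    reverse = {old: new for new, old in enumerate(axis_map)}
    return axis_map, reverse

# Pinning 'wdth' out of ['wght', 'wdth', 'opsz']: 'opsz' moves from index 2 to 1.
axis_map, reverse = remap_axes(["wght", "wdth", "opsz"], {"wdth"})
assert axis_map == [0, 2]
assert reverse == {0: 0, 2: 1}
```

Every stored axis index, both in the AxisIndicesList and in the sparse region records, is then rewritten through this map so it stays valid against the reduced fvar.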
def instantiateTupleVariationStore(
variations, axisLimits, origCoords=None, endPts=None
):
@@ -566,6 +610,259 @@ def changeTupleVariationAxisLimit(var, axisTag, axisLimit):
return out
def instantiateCFF2(
varfont,
axisLimits,
*,
round=round,
specialize=True,
generalize=False,
downgrade=False,
):
# The algorithm here is rather simple:
#
# Take all blend operations and store their deltas in the (otherwise empty)
# CFF2 VarStore. Then, instantiate the VarStore with the given axis limits,
# and read back the new deltas. This is done for both the CharStrings and
# the Private dicts.
#
# Then prune unused things and possibly drop the VarStore if it's empty.
# In which case, downgrade to CFF table if requested.
log.info("Instantiating CFF2 table")
fvarAxes = varfont["fvar"].axes
cff = varfont["CFF2"].cff
topDict = cff.topDictIndex[0]
varStore = topDict.VarStore.otVarStore
if not varStore:
if downgrade:
from fontTools.cffLib.CFF2ToCFF import convertCFF2ToCFF
convertCFF2ToCFF(varfont)
return
cff.desubroutinize()
def getNumRegions(vsindex):
return varStore.VarData[vsindex if vsindex is not None else 0].VarRegionCount
charStrings = topDict.CharStrings.values()
# Gather all unique private dicts
uniquePrivateDicts = set()
privateDicts = []
for fd in topDict.FDArray:
if fd.Private not in uniquePrivateDicts:
uniquePrivateDicts.add(fd.Private)
privateDicts.append(fd.Private)
allCommands = []
for cs in charStrings:
assert cs.private.vstore.otVarStore is varStore # Or in many places!!
commands = programToCommands(cs.program, getNumRegions=getNumRegions)
if generalize:
commands = generalizeCommands(commands)
if specialize:
commands = specializeCommands(commands, generalizeFirst=not generalize)
allCommands.append(commands)
def storeBlendsToVarStore(arg):
if not isinstance(arg, list):
return
if any(isinstance(subarg, list) for subarg in arg[:-1]):
raise NotImplementedError("Nested blend lists not supported (yet)")
count = arg[-1]
assert (len(arg) - 1) % count == 0
nRegions = (len(arg) - 1) // count - 1
assert nRegions == getNumRegions(vsindex)
for i in range(count, len(arg) - 1, nRegions):
deltas = arg[i : i + nRegions]
assert len(deltas) == nRegions
varData = varStore.VarData[vsindex]
varData.Item.append(deltas)
varData.ItemCount += 1
def fetchBlendsFromVarStore(arg):
if not isinstance(arg, list):
return [arg]
if any(isinstance(subarg, list) for subarg in arg[:-1]):
raise NotImplementedError("Nested blend lists not supported (yet)")
count = arg[-1]
assert (len(arg) - 1) % count == 0
numRegions = getNumRegions(vsindex)
newDefaults = []
newDeltas = []
for i in range(count):
defaultValue = arg[i]
major = vsindex
minor = varDataCursor[major]
varDataCursor[major] += 1
varIdx = (major << 16) + minor
defaultValue += round(defaultDeltas[varIdx])
newDefaults.append(defaultValue)
varData = varStore.VarData[major]
deltas = varData.Item[minor]
assert len(deltas) == numRegions
newDeltas.extend(deltas)
if not numRegions:
return newDefaults # No deltas, just return the defaults
return [newDefaults + newDeltas + [count]]
# Check VarData's are empty
for varData in varStore.VarData:
assert varData.Item == []
assert varData.ItemCount == 0
# Add charstring blend lists to VarStore so we can instantiate them
for commands in allCommands:
vsindex = 0
for command in commands:
if command[0] == "vsindex":
vsindex = command[1][0]
continue
for arg in command[1]:
storeBlendsToVarStore(arg)
# Add private blend lists to VarStore so we can instantiate values
vsindex = 0
for opcode, name, arg_type, default, converter in privateDictOperators2:
if arg_type not in ("number", "delta", "array"):
continue
vsindex = 0
for private in privateDicts:
if not hasattr(private, name):
continue
values = getattr(private, name)
if name == "vsindex":
vsindex = values[0]
continue
if arg_type == "number":
values = [values]
for value in values:
if not isinstance(value, list):
continue
assert len(value) % (getNumRegions(vsindex) + 1) == 0
count = len(value) // (getNumRegions(vsindex) + 1)
storeBlendsToVarStore(value + [count])
# Instantiate VarStore
defaultDeltas = instantiateItemVariationStore(varStore, fvarAxes, axisLimits)
# Read back new charstring blends from the instantiated VarStore
varDataCursor = [0] * len(varStore.VarData)
for commands in allCommands:
vsindex = 0
for command in commands:
if command[0] == "vsindex":
vsindex = command[1][0]
continue
newArgs = []
for arg in command[1]:
newArgs.extend(fetchBlendsFromVarStore(arg))
command[1][:] = newArgs
# Read back new private blends from the instantiated VarStore
for opcode, name, arg_type, default, converter in privateDictOperators2:
if arg_type not in ("number", "delta", "array"):
continue
for private in privateDicts:
if not hasattr(private, name):
continue
values = getattr(private, name)
if arg_type == "number":
values = [values]
newValues = []
for value in values:
if not isinstance(value, list):
newValues.append(value)
continue
value.append(1)
value = fetchBlendsFromVarStore(value)
newValues.extend(v[:-1] if isinstance(v, list) else v for v in value)
if arg_type == "number":
newValues = newValues[0]
setattr(private, name, newValues)
# Empty out the VarStore
for i, varData in enumerate(varStore.VarData):
assert varDataCursor[i] == varData.ItemCount, (
varDataCursor[i],
varData.ItemCount,
)
varData.Item = []
varData.ItemCount = 0
# Remove vsindex commands that are no longer needed, collect those that are.
usedVsindex = set()
for commands in allCommands:
if any(isinstance(arg, list) for command in commands for arg in command[1]):
vsindex = 0
for command in commands:
if command[0] == "vsindex":
vsindex = command[1][0]
continue
if any(isinstance(arg, list) for arg in command[1]):
usedVsindex.add(vsindex)
else:
commands[:] = [command for command in commands if command[0] != "vsindex"]
# Remove unused VarData and update vsindex values
vsindexMapping = {v: i for i, v in enumerate(sorted(usedVsindex))}
varStore.VarData = [
varData for i, varData in enumerate(varStore.VarData) if i in usedVsindex
]
varStore.VarDataCount = len(varStore.VarData)
for commands in allCommands:
for command in commands:
if command[0] == "vsindex":
command[1][0] = vsindexMapping[command[1][0]]
# Remove initial vsindex commands that are implied
for commands in allCommands:
if commands and commands[0] == ("vsindex", [0]):
commands.pop(0)
# Ship the charstrings!
for cs, commands in zip(charStrings, allCommands):
cs.program = commandsToProgram(commands)
# Remove empty VarStore
if not varStore.VarData:
if "VarStore" in topDict.rawDict:
del topDict.rawDict["VarStore"]
del topDict.VarStore
del topDict.CharStrings.varStore
for private in privateDicts:
del private.vstore
if downgrade:
from fontTools.cffLib.CFF2ToCFF import convertCFF2ToCFF
convertCFF2ToCFF(varfont)
def _instantiateGvarGlyph(
glyphname, glyf, gvar, hMetrics, vMetrics, axisLimits, optimize=True
):
@ -583,23 +880,6 @@ def _instantiateGvarGlyph(
if defaultDeltas:
coordinates += _g_l_y_f.GlyphCoordinates(defaultDeltas)
glyph = glyf[glyphname]
if glyph.isVarComposite():
for component in glyph.components:
newLocation = {}
for tag, loc in component.location.items():
if tag not in axisLimits:
newLocation[tag] = loc
continue
if component.flags & _g_l_y_f.VarComponentFlags.AXES_HAVE_VARIATION:
raise NotImplementedError(
"Instancing across VarComposite axes with variation is not supported."
)
limits = axisLimits[tag]
loc = limits.renormalizeValue(loc, extrapolate=False)
newLocation[tag] = loc
component.location = newLocation
# _setCoordinates also sets the hmtx/vmtx advance widths and sidebearings from
# the four phantom points and glyph bounding boxes.
# We call it unconditionally even if a glyph has no variations or no deltas are
@ -650,7 +930,7 @@ def instantiateGvar(varfont, axisLimits, optimize=True):
key=lambda name: (
(
glyf[name].getCompositeMaxpValues(glyf).maxComponentDepth
if glyf[name].isComposite() or glyf[name].isVarComposite()
if glyf[name].isComposite()
else 0
),
name,
@ -765,10 +1045,16 @@ def _remapVarIdxMap(table, attrName, varIndexMapping, glyphOrder):
# TODO(anthrotype) Add support for HVAR/VVAR in CFF2
def _instantiateVHVAR(varfont, axisLimits, tableFields):
def _instantiateVHVAR(varfont, axisLimits, tableFields, *, round=round):
location = axisLimits.pinnedLocation()
tableTag = tableFields.tableTag
fvarAxes = varfont["fvar"].axes
log.info("Instantiating %s table", tableTag)
vhvar = varfont[tableTag].table
varStore = vhvar.VarStore
if "glyf" in varfont:
# Deltas from gvar table have already been applied to the hmtx/vmtx. For full
# instances (i.e. all axes pinned), we can simply drop HVAR/VVAR and return
if set(location).issuperset(axis.axisTag for axis in fvarAxes):
@ -776,11 +1062,40 @@ def _instantiateVHVAR(varfont, axisLimits, tableFields):
del varfont[tableTag]
return
log.info("Instantiating %s table", tableTag)
vhvar = varfont[tableTag].table
varStore = vhvar.VarStore
# since deltas were already applied, the return value here is ignored
instantiateItemVariationStore(varStore, fvarAxes, axisLimits)
defaultDeltas = instantiateItemVariationStore(varStore, fvarAxes, axisLimits)
if "glyf" not in varfont:
# CFF2 fonts need hmtx/vmtx updated here. For glyf fonts, the instantiateGvar
# function already updated the hmtx/vmtx from phantom points. Maybe remove
# that and do it here for both CFF2 and glyf fonts?
#
# In particular, if a font has glyf but no gvar, the hmtx/vmtx will not have been
# updated by instantiateGvar; though one could call that a faulty font.
metricsTag = "vmtx" if tableTag == "VVAR" else "hmtx"
if metricsTag in varfont:
advMapping = getattr(vhvar, tableFields.advMapping)
metricsTable = varfont[metricsTag]
metrics = metricsTable.metrics
for glyphName, (advanceWidth, sb) in metrics.items():
if advMapping:
varIdx = advMapping.mapping[glyphName]
else:
varIdx = varfont.getGlyphID(glyphName)
metrics[glyphName] = (advanceWidth + round(defaultDeltas[varIdx]), sb)
if (
tableTag == "VVAR"
and getattr(vhvar, tableFields.vOrigMapping) is not None
):
log.warning(
"VORG table not yet updated to reflect changes in VVAR table"
)
# For full instances (i.e. all axes pinned), we can simply drop HVAR/VVAR and return
if set(location).issuperset(axis.axisTag for axis in fvarAxes):
log.info("Dropping %s table", tableTag)
del varfont[tableTag]
return
if varStore.VarRegionList.Region:
# Only re-optimize VarStore if the HVAR/VVAR already uses indirect AdvWidthMap
@ -923,6 +1238,8 @@ def instantiateItemVariationStore(itemVarStore, fvarAxes, axisLimits):
newItemVarStore = tupleVarStore.asItemVarStore()
itemVarStore.VarRegionList = newItemVarStore.VarRegionList
if not hasattr(itemVarStore, "VarDataCount"): # Happens fromXML
itemVarStore.VarDataCount = len(newItemVarStore.VarData)
assert itemVarStore.VarDataCount == newItemVarStore.VarDataCount
itemVarStore.VarData = newItemVarStore.VarData
@ -1019,7 +1336,11 @@ def _isValidAvarSegmentMap(axisTag, segmentMap):
def instantiateAvar(varfont, axisLimits):
# 'axisLimits' dict must contain user-space (non-normalized) coordinates.
segments = varfont["avar"].segments
avar = varfont["avar"]
if getattr(avar, "majorVersion", 1) >= 2 and avar.table.VarStore:
raise NotImplementedError("avar table with VarStore is not supported")
segments = avar.segments
# drop table if we instantiate all the axes
pinnedAxes = set(axisLimits.pinnedLocation())
@ -1080,7 +1401,7 @@ def instantiateAvar(varfont, axisLimits):
newSegments[axisTag] = newMapping
else:
newSegments[axisTag] = mapping
varfont["avar"].segments = newSegments
avar.segments = newSegments
def isInstanceWithinAxisRanges(location, axisRanges):
@ -1218,9 +1539,6 @@ def sanityCheckVariableTables(varfont):
if "gvar" in varfont:
if "glyf" not in varfont:
raise ValueError("Can't have gvar without glyf")
# TODO(anthrotype) Remove once we do support partial instancing CFF2
if "CFF2" in varfont:
raise NotImplementedError("Instancing CFF2 variable fonts is not supported yet")
def instantiateVariableFont(
@ -1230,6 +1548,8 @@ def instantiateVariableFont(
optimize=True,
overlap=OverlapMode.KEEP_AND_SET_FLAGS,
updateFontNames=False,
*,
downgradeCFF2=False,
):
"""Instantiate variable font, either fully or partially.
@ -1239,7 +1559,6 @@ def instantiateVariableFont(
Args:
varfont: a TTFont instance, which must contain at least an 'fvar' table.
Note that variable fonts with 'CFF2' table are not supported yet.
axisLimits: a dict keyed by axis tags (str) containing the coordinates (float)
along one or more axes where the desired instance will be located.
If the value is `None`, the default coordinate as per 'fvar' table for
@ -1269,6 +1588,11 @@ def instantiateVariableFont(
in the head and OS/2 table will be updated so they conform to the R/I/B/BI
model. If the STAT table is missing or an Axis Value table is missing for
a given axis coordinate, a ValueError will be raised.
downgradeCFF2 (bool): if True, downgrade the CFF2 table to a CFF table when
possible, i.e. when fully instancing all axes. This is useful for compatibility
with older software that does not support CFF2. Defaults to False. Note that
this operation also removes overlaps within glyph shapes, as CFF does not
support overlaps but CFF2 does.
"""
# 'overlap' used to be bool and is now enum; for backward compat keep accepting bool
overlap = OverlapMode(int(overlap))
@ -1293,6 +1617,12 @@ def instantiateVariableFont(
log.info("Updating name table")
names.updateNameTable(varfont, axisLimits)
if "VARC" in varfont:
instantiateVARC(varfont, normalizedLimits)
if "CFF2" in varfont:
instantiateCFF2(varfont, normalizedLimits, downgrade=downgradeCFF2)
if "gvar" in varfont:
instantiateGvar(varfont, normalizedLimits, optimize=optimize)
@ -1484,6 +1814,11 @@ def parseArgs(args):
help="Update the instantiated font's `name` table. Input font must have "
"a STAT table with Axis Value Tables",
)
parser.add_argument(
"--downgrade-cff2",
action="store_true",
help="If all axes are pinned, downgrade CFF2 to CFF table format",
)
parser.add_argument(
"--no-recalc-timestamp",
dest="recalc_timestamp",
@ -1545,7 +1880,9 @@ def main(args=None):
)
isFullInstance = {
axisTag for axisTag, limit in axisLimits.items() if not isinstance(limit, tuple)
axisTag
for axisTag, limit in axisLimits.items()
if limit is None or limit[0] == limit[2]
}.issuperset(axis.axisTag for axis in varfont["fvar"].axes)
instantiateVariableFont(
@ -1555,6 +1892,7 @@ def main(args=None):
optimize=options.optimize,
overlap=options.overlap,
updateFontNames=options.update_name_table,
downgradeCFF2=options.downgrade_cff2,
)
suffix = "-instance" if isFullInstance else "-partial"
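The flat blend-operand layout that storeBlendsToVarStore and fetchBlendsFromVarStore shuttle in and out of the VarStore can be sketched as follows. This is a minimal illustration of the list shape only; the helper name is hypothetical, not part of fontTools:

```python
def blend_to_list(defaults, deltas_per_value, num_regions):
    """Flatten a blend into the list form used by the charstring code above:
    [default_0 .. default_{n-1}, deltas_0 .. deltas_{n-1}, n]
    where each deltas_i holds num_regions per-region delta values."""
    n = len(defaults)
    flat = list(defaults)
    for deltas in deltas_per_value:
        assert len(deltas) == num_regions
        flat.extend(deltas)
    flat.append(n)
    return flat

# Two blended values over two variation regions:
# value 0: default 100, deltas (1, 2); value 1: default 200, deltas (3, 4)
blend = blend_to_list([100, 200], [[1, 2], [3, 4]], 2)
assert blend == [100, 200, 1, 2, 3, 4, 2]
# Consistent with the invariant asserted in fetchBlendsFromVarStore:
assert (len(blend) - 1) % blend[-1] == 0
```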

View File

@ -924,13 +924,13 @@ def main(args=None):
last_master_idxs = None
master_idxs = (
(p["master_idx"])
(p["master_idx"],)
if "master_idx" in p
else (p["master_1_idx"], p["master_2_idx"])
)
if master_idxs != last_master_idxs:
master_names = (
(p["master"])
(p["master"],)
if "master" in p
else (p["master_1"], p["master_2"])
)
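The `(p["master_idx"],)` fix above addresses a classic Python pitfall: parentheses alone do not make a tuple, the trailing comma does. A quick illustration:

```python
a = ("master_idx")   # parentheses are just grouping: this is a plain string
b = ("master_idx",)  # the trailing comma makes it a one-element tuple

assert isinstance(a, str)
assert isinstance(b, tuple) and len(b) == 1
# Without the comma, comparing against a real tuple can never match:
assert a != ("master_idx",)
assert b == ("master_idx",)
```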

View File

@ -143,6 +143,9 @@ def min_cost_perfect_bipartite_matching_scipy(G):
n = len(G)
rows, cols = linear_sum_assignment(G)
assert (rows == list(range(n))).all()
# Convert numpy array and integer to Python types,
# to ensure that this is JSON-serializable.
cols = list(int(e) for e in cols)
return list(cols), matching_cost(G, cols)

View File

@ -49,7 +49,9 @@ def test_starting_point(glyph0, glyph1, ix, tolerance, matching):
meanY = vector[2]
stddevX = vector[3] * 0.5
stddevY = vector[4] * 0.5
correlation = vector[5] / abs(vector[0])
correlation = vector[5]
if correlation:
correlation /= abs(vector[0])
# https://cookierobotics.com/007/
a = stddevX * stddevX # VarianceX

View File

@ -75,7 +75,7 @@ def normalizeValue(v, triple, extrapolate=False):
return (v - default) / (upper - default)
def normalizeLocation(location, axes, extrapolate=False):
def normalizeLocation(location, axes, extrapolate=False, *, validate=False):
"""Normalizes location based on axis min/default/max values from axes.
>>> axes = {"wght": (100, 400, 900)}
@ -114,6 +114,10 @@ def normalizeLocation(location, axes, extrapolate=False):
>>> normalizeLocation({"wght": 1001}, axes)
{'wght': 0.0}
"""
if validate:
assert set(location.keys()) <= set(axes.keys()), set(location.keys()) - set(
axes.keys()
)
out = {}
for tag, triple in axes.items():
v = location.get(tag, triple[1])
@ -453,7 +457,10 @@ class VariationModel(object):
self.deltaWeights.append(deltaWeight)
def getDeltas(self, masterValues, *, round=noRound):
assert len(masterValues) == len(self.deltaWeights)
assert len(masterValues) == len(self.deltaWeights), (
len(masterValues),
len(self.deltaWeights),
)
mapping = self.reverseMapping
out = []
for i, weights in enumerate(self.deltaWeights):
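The normalization performed by `normalizeValue`/`normalizeLocation` maps a user-space axis value onto [-1, 1] relative to the axis (min, default, max) triple. A standalone sketch of that formula (clamping only, without the extrapolation path the real function also supports):

```python
def normalize_value(v, triple):
    """Map a user-space axis value to [-1, 1] given (min, default, max)."""
    lower, default, upper = triple
    v = max(lower, min(upper, v))  # clamp; no extrapolation in this sketch
    if v == default:
        return 0.0
    if v < default:
        return (v - default) / (default - lower)
    return (v - default) / (upper - default)

axes = {"wght": (100, 400, 900)}
assert normalize_value(400, axes["wght"]) == 0.0
assert normalize_value(100, axes["wght"]) == -1.0
assert normalize_value(900, axes["wght"]) == 1.0
assert normalize_value(650, axes["wght"]) == 0.5
# Out-of-range values are clamped, matching the doctest above:
assert normalize_value(1001, axes["wght"]) == 1.0
```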

View File

@ -0,0 +1,253 @@
from fontTools.misc.roundTools import noRound, otRound
from fontTools.misc.intTools import bit_count
from fontTools.misc.vector import Vector
from fontTools.ttLib.tables import otTables as ot
from fontTools.varLib.models import supportScalar
import fontTools.varLib.varStore # For monkey-patching
from fontTools.varLib.builder import (
buildVarRegionList,
buildSparseVarRegionList,
buildSparseVarRegion,
buildMultiVarStore,
buildMultiVarData,
)
from fontTools.misc.iterTools import batched
from functools import partial
from collections import defaultdict
from heapq import heappush, heappop
NO_VARIATION_INDEX = ot.NO_VARIATION_INDEX
ot.MultiVarStore.NO_VARIATION_INDEX = NO_VARIATION_INDEX
def _getLocationKey(loc):
return tuple(sorted(loc.items(), key=lambda kv: kv[0]))
class OnlineMultiVarStoreBuilder(object):
def __init__(self, axisTags):
self._axisTags = axisTags
self._regionMap = {}
self._regionList = buildSparseVarRegionList([], axisTags)
self._store = buildMultiVarStore(self._regionList, [])
self._data = None
self._model = None
self._supports = None
self._varDataIndices = {}
self._varDataCaches = {}
self._cache = None
def setModel(self, model):
self.setSupports(model.supports)
self._model = model
def setSupports(self, supports):
self._model = None
self._supports = list(supports)
if not self._supports[0]:
del self._supports[0] # Drop base master support
self._cache = None
self._data = None
def finish(self, optimize=True):
self._regionList.RegionCount = len(self._regionList.Region)
self._store.MultiVarDataCount = len(self._store.MultiVarData)
return self._store
def _add_MultiVarData(self):
regionMap = self._regionMap
regionList = self._regionList
regions = self._supports
regionIndices = []
for region in regions:
key = _getLocationKey(region)
idx = regionMap.get(key)
if idx is None:
varRegion = buildSparseVarRegion(region, self._axisTags)
idx = regionMap[key] = len(regionList.Region)
regionList.Region.append(varRegion)
regionIndices.append(idx)
# Check if we have one already...
key = tuple(regionIndices)
varDataIdx = self._varDataIndices.get(key)
if varDataIdx is not None:
self._outer = varDataIdx
self._data = self._store.MultiVarData[varDataIdx]
self._cache = self._varDataCaches[key]
if len(self._data.Item) == 0xFFFF:
# This is full. Need new one.
varDataIdx = None
if varDataIdx is None:
self._data = buildMultiVarData(regionIndices, [])
self._outer = len(self._store.MultiVarData)
self._store.MultiVarData.append(self._data)
self._varDataIndices[key] = self._outer
if key not in self._varDataCaches:
self._varDataCaches[key] = {}
self._cache = self._varDataCaches[key]
def storeMasters(self, master_values, *, round=round):
deltas = self._model.getDeltas(master_values, round=round)
base = deltas.pop(0)
return base, self.storeDeltas(deltas, round=noRound)
def storeDeltas(self, deltas, *, round=round):
deltas = tuple(round(d) for d in deltas)
if not any(deltas):
return NO_VARIATION_INDEX
deltas_tuple = tuple(tuple(d) for d in deltas)
if not self._data:
self._add_MultiVarData()
varIdx = self._cache.get(deltas_tuple)
if varIdx is not None:
return varIdx
inner = len(self._data.Item)
if inner == 0xFFFF:
# Full array. Start new one.
self._add_MultiVarData()
return self.storeDeltas(deltas, round=noRound)
self._data.addItem(deltas, round=noRound)
varIdx = (self._outer << 16) + inner
self._cache[deltas_tuple] = varIdx
return varIdx
def MultiVarData_addItem(self, deltas, *, round=round):
deltas = tuple(round(d) for d in deltas)
assert len(deltas) == self.VarRegionCount
values = []
for d in deltas:
values.extend(d)
self.Item.append(values)
self.ItemCount = len(self.Item)
ot.MultiVarData.addItem = MultiVarData_addItem
def SparseVarRegion_get_support(self, fvar_axes):
return {
fvar_axes[reg.AxisIndex].axisTag: (reg.StartCoord, reg.PeakCoord, reg.EndCoord)
for reg in self.SparseVarRegionAxis
}
ot.SparseVarRegion.get_support = SparseVarRegion_get_support
def MultiVarStore___bool__(self):
return bool(self.MultiVarData)
ot.MultiVarStore.__bool__ = MultiVarStore___bool__
class MultiVarStoreInstancer(object):
def __init__(self, multivarstore, fvar_axes, location={}):
self.fvar_axes = fvar_axes
assert multivarstore is None or multivarstore.Format == 1
self._varData = multivarstore.MultiVarData if multivarstore else []
self._regions = (
multivarstore.SparseVarRegionList.Region if multivarstore else []
)
self.setLocation(location)
def setLocation(self, location):
self.location = dict(location)
self._clearCaches()
def _clearCaches(self):
self._scalars = {}
def _getScalar(self, regionIdx):
scalar = self._scalars.get(regionIdx)
if scalar is None:
support = self._regions[regionIdx].get_support(self.fvar_axes)
scalar = supportScalar(self.location, support)
self._scalars[regionIdx] = scalar
return scalar
@staticmethod
def interpolateFromDeltasAndScalars(deltas, scalars):
if not deltas:
return Vector([])
assert len(deltas) % len(scalars) == 0, (len(deltas), len(scalars))
m = len(deltas) // len(scalars)
delta = Vector([0] * m)
for d, s in zip(batched(deltas, m), scalars):
if not s:
continue
delta += Vector(d) * s
return delta
def __getitem__(self, varidx):
major, minor = varidx >> 16, varidx & 0xFFFF
if varidx == NO_VARIATION_INDEX:
return Vector([])
varData = self._varData
scalars = [self._getScalar(ri) for ri in varData[major].VarRegionIndex]
deltas = varData[major].Item[minor]
return self.interpolateFromDeltasAndScalars(deltas, scalars)
def interpolateFromDeltas(self, varDataIndex, deltas):
varData = self._varData
scalars = [self._getScalar(ri) for ri in varData[varDataIndex].VarRegionIndex]
return self.interpolateFromDeltasAndScalars(deltas, scalars)
def MultiVarStore_subset_varidxes(self, varIdxes):
return ot.VarStore.subset_varidxes(self, varIdxes, VarData="MultiVarData")
def MultiVarStore_prune_regions(self):
return ot.VarStore.prune_regions(
self, VarData="MultiVarData", VarRegionList="SparseVarRegionList"
)
ot.MultiVarStore.prune_regions = MultiVarStore_prune_regions
ot.MultiVarStore.subset_varidxes = MultiVarStore_subset_varidxes
def MultiVarStore_get_supports(self, major, fvarAxes):
supports = []
varData = self.MultiVarData[major]
for regionIdx in varData.VarRegionIndex:
region = self.SparseVarRegionList.Region[regionIdx]
support = region.get_support(fvarAxes)
supports.append(support)
return supports
ot.MultiVarStore.get_supports = MultiVarStore_get_supports
def VARC_collect_varidxes(self, varidxes):
for glyph in self.VarCompositeGlyphs.VarCompositeGlyph:
for component in glyph.components:
varidxes.add(component.axisValuesVarIndex)
varidxes.add(component.transformVarIndex)
def VARC_remap_varidxes(self, varidxes_map):
for glyph in self.VarCompositeGlyphs.VarCompositeGlyph:
for component in glyph.components:
component.axisValuesVarIndex = varidxes_map[component.axisValuesVarIndex]
component.transformVarIndex = varidxes_map[component.transformVarIndex]
ot.VARC.collect_varidxes = VARC_collect_varidxes
ot.VARC.remap_varidxes = VARC_remap_varidxes
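`interpolateFromDeltasAndScalars` above sums per-region delta vectors weighted by their region scalars over a flat delta array. A minimal pure-Python equivalent, using plain lists instead of `Vector`:

```python
def interpolate_deltas(deltas, scalars):
    """deltas is a flat list of len(scalars) * m values; each consecutive
    group of m values is one region's delta vector, weighted by its scalar."""
    assert len(deltas) % len(scalars) == 0
    m = len(deltas) // len(scalars)
    result = [0.0] * m
    for i, scalar in enumerate(scalars):
        if not scalar:
            continue  # skip regions that don't apply at this location
        for j in range(m):
            result[j] += deltas[i * m + j] * scalar
    return result

# Two regions, two values per item:
# region 0 contributes (1, 2) * 0.5, region 1 contributes (3, 4) * 1.0
assert interpolate_deltas([1, 2, 3, 4], [0.5, 1.0]) == [3.5, 5.0]
```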

View File

@ -201,7 +201,7 @@ def instantiateVariableFont(varfont, location, inplace=False, overlap=True):
key=lambda name: (
(
glyf[name].getCompositeMaxpValues(glyf).maxComponentDepth
if glyf[name].isComposite() or glyf[name].isVarComposite()
if glyf[name].isComposite()
else 0
),
name,

View File

@ -32,7 +32,7 @@ class OnlineVarStoreBuilder(object):
self._supports = None
self._varDataIndices = {}
self._varDataCaches = {}
self._cache = {}
self._cache = None
def setModel(self, model):
self.setSupports(model.supports)
@ -43,7 +43,7 @@ class OnlineVarStoreBuilder(object):
self._supports = list(supports)
if not self._supports[0]:
del self._supports[0] # Drop base master support
self._cache = {}
self._cache = None
self._data = None
def finish(self, optimize=True):
@ -54,7 +54,7 @@ class OnlineVarStoreBuilder(object):
data.calculateNumShorts(optimize=optimize)
return self._store
def _add_VarData(self):
def _add_VarData(self, num_items=1):
regionMap = self._regionMap
regionList = self._regionList
@ -76,7 +76,7 @@ class OnlineVarStoreBuilder(object):
self._outer = varDataIdx
self._data = self._store.VarData[varDataIdx]
self._cache = self._varDataCaches[key]
if len(self._data.Item) == 0xFFFF:
if len(self._data.Item) + num_items > 0xFFFF:
# This is full. Need new one.
varDataIdx = None
@ -94,6 +94,14 @@ class OnlineVarStoreBuilder(object):
base = deltas.pop(0)
return base, self.storeDeltas(deltas, round=noRound)
def storeMastersMany(self, master_values_list, *, round=round):
deltas_list = [
self._model.getDeltas(master_values, round=round)
for master_values in master_values_list
]
base_list = [deltas.pop(0) for deltas in deltas_list]
return base_list, self.storeDeltasMany(deltas_list, round=noRound)
def storeDeltas(self, deltas, *, round=round):
deltas = [round(d) for d in deltas]
if len(deltas) == len(self._supports) + 1:
@ -102,23 +110,51 @@ class OnlineVarStoreBuilder(object):
assert len(deltas) == len(self._supports)
deltas = tuple(deltas)
if not self._data:
self._add_VarData()
varIdx = self._cache.get(deltas)
if varIdx is not None:
return varIdx
if not self._data:
self._add_VarData()
inner = len(self._data.Item)
if inner == 0xFFFF:
# Full array. Start new one.
self._add_VarData()
return self.storeDeltas(deltas)
return self.storeDeltas(deltas, round=noRound)
self._data.addItem(deltas, round=noRound)
varIdx = (self._outer << 16) + inner
self._cache[deltas] = varIdx
return varIdx
def storeDeltasMany(self, deltas_list, *, round=round):
deltas_list = [[round(d) for d in deltas] for deltas in deltas_list]
deltas_list = tuple(tuple(deltas) for deltas in deltas_list)
if not self._data:
self._add_VarData(len(deltas_list))
varIdx = self._cache.get(deltas_list)
if varIdx is not None:
return varIdx
inner = len(self._data.Item)
if inner + len(deltas_list) > 0xFFFF:
# Full array. Start new one.
self._add_VarData(len(deltas_list))
return self.storeDeltasMany(deltas_list, round=noRound)
for i, deltas in enumerate(deltas_list):
self._data.addItem(deltas, round=noRound)
varIdx = (self._outer << 16) + inner + i
self._cache[deltas] = varIdx
varIdx = (self._outer << 16) + inner
self._cache[deltas_list] = varIdx
return varIdx
def VarData_addItem(self, deltas, *, round=round):
deltas = [round(d) for d in deltas]
@ -210,26 +246,29 @@ class VarStoreInstancer(object):
def VarStore_subset_varidxes(
self, varIdxes, optimize=True, retainFirstMap=False, advIdxes=set()
self,
varIdxes,
optimize=True,
retainFirstMap=False,
advIdxes=set(),
*,
VarData="VarData",
):
# Sort out used varIdxes by major/minor.
used = {}
used = defaultdict(set)
for varIdx in varIdxes:
if varIdx == NO_VARIATION_INDEX:
continue
major = varIdx >> 16
minor = varIdx & 0xFFFF
d = used.get(major)
if d is None:
d = used[major] = set()
d.add(minor)
used[major].add(minor)
del varIdxes
#
# Subset VarData
#
varData = self.VarData
varData = getattr(self, VarData)
newVarData = []
varDataMap = {NO_VARIATION_INDEX: NO_VARIATION_INDEX}
for major, data in enumerate(varData):
@ -260,10 +299,11 @@ def VarStore_subset_varidxes(
data.Item = newItems
data.ItemCount = len(data.Item)
if VarData == "VarData":
data.calculateNumShorts(optimize=optimize)
self.VarData = newVarData
self.VarDataCount = len(self.VarData)
setattr(self, VarData, newVarData)
setattr(self, VarData + "Count", len(newVarData))
self.prune_regions()
@ -273,7 +313,7 @@ def VarStore_subset_varidxes(
ot.VarStore.subset_varidxes = VarStore_subset_varidxes
def VarStore_prune_regions(self):
def VarStore_prune_regions(self, *, VarData="VarData", VarRegionList="VarRegionList"):
"""Remove unused VarRegions."""
#
# Subset VarRegionList
@ -281,10 +321,10 @@ def VarStore_prune_regions(self):
# Collect.
usedRegions = set()
for data in self.VarData:
for data in getattr(self, VarData):
usedRegions.update(data.VarRegionIndex)
# Subset.
regionList = self.VarRegionList
regionList = getattr(self, VarRegionList)
regions = regionList.Region
newRegions = []
regionMap = {}
@ -294,7 +334,7 @@ def VarStore_prune_regions(self):
regionList.Region = newRegions
regionList.RegionCount = len(regionList.Region)
# Map.
for data in self.VarData:
for data in getattr(self, VarData):
data.VarRegionIndex = [regionMap[i] for i in data.VarRegionIndex]
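The `(outer << 16) + inner` arithmetic used throughout the VarStore builders packs a two-level variation index (VarData index plus item index) into a single 32-bit varIdx. A standalone sketch of the packing:

```python
NO_VARIATION_INDEX = 0xFFFFFFFF  # sentinel meaning "no variation data"

def pack_var_idx(outer, inner):
    """Pack a VarData index (outer) and item index (inner) into one varIdx."""
    assert 0 <= outer <= 0xFFFF and 0 <= inner <= 0xFFFF
    return (outer << 16) + inner

def unpack_var_idx(var_idx):
    return var_idx >> 16, var_idx & 0xFFFF

assert unpack_var_idx(pack_var_idx(3, 7)) == (3, 7)
# 0xFFFF items per VarData is the limit that triggers
# "Full array. Start new one." in the builders above.
assert pack_var_idx(0, 0xFFFF) == 0xFFFF
```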

View File

@ -31,7 +31,7 @@ tables.sort()
with open(os.path.join(tablesDir, "__init__.py"), "w") as file:
file.write(
'''
'''\
# DON'T EDIT! This file is generated by MetaTools/buildTableList.py.
def _moduleFinderHint():
"""Dummy function to let modulefinder know what tables may be
@ -43,12 +43,15 @@ def _moduleFinderHint():
)
for module in modules:
file.write("\tfrom . import %s\n" % module)
file.write(" from . import %s\n" % module)
file.write("\n")
file.write(
"""
if __name__ == "__main__":
import doctest, sys
sys.exit(doctest.testmod().failed)
"""
)

View File

@ -1,3 +1,68 @@
4.53.1 (released 2024-07-05)
----------------------------
- [feaLib] Improve the sharing of inline chained lookups (#3559)
- [otlLib] Correct the calculation of OS/2.usMaxContext with reversed chaining contextual single substitutions (#3569)
- [misc.visitor] Visitors search the inheritance chain of objects they are visiting (#3581)
4.53.0 (released 2024-05-31)
----------------------------
- [ttLib.removeOverlaps] Support CFF table to aid in downconverting CFF2 fonts (#3528)
- [avar] Fix crash when accessing not-yet-existing attribute (#3550)
- [docs] Add buildMathTable to otlLib.builder documentation (#3540)
- [feaLib] Allow UTF-8 with BOM when reading features (#3495)
- [SVGPathPen] Revert rounding coordinates to two decimal places by default (#3543)
- [varLib.instancer] Refix output filename decision-making (#3545, #3544, #3548)
4.52.4 (released 2024-05-27)
----------------------------
- [varLib.cff] Restore and deprecate convertCFFtoCFF2 that was removed in 4.52.0
release as it is used by downstream projects (#3535).
4.52.3 (released 2024-05-27)
----------------------------
- Fixed a small syntax error in the reStructuredText-formatted NEWS.rst file
which caused the upload to PyPI to fail for 4.52.2. No other code changes.
4.52.2 (released 2024-05-27)
----------------------------
- [varLib.interpolatable] Ensure that scipy/numpy output is JSON-serializable
(#3522, #3526).
- [housekeeping] Regenerate table lists, to fix pyinstaller packaging of the new
``VARC`` table (#3531, #3529).
- [cffLib] Make CFFToCFF2 and CFF2ToCFF more robust (#3521, #3525).
4.52.1 (released 2024-05-24)
----------------------------
- Fixed a small syntax error in the reStructuredText-formatted NEWS.rst file
which caused the upload to PyPI to fail for 4.52.0. No other code changes.
4.52.0 (released 2024-05-24)
----------------------------
- Added support for the new ``VARC`` (Variable Composite) table that is being
proposed to OpenType spec (#3395). For more info:
https://github.com/harfbuzz/boring-expansion-spec/blob/main/VARC.md
- [ttLib.__main__] Fixed decompiling all tables (90fed08).
- [feaLib] Don't reference the same lookup index multiple times within the same
feature record, it is only applied once anyway (#3520).
- [cffLib] Moved methods to desubroutinize, remove hints and unused subroutines
from subset module to cffLib (#3517).
- [varLib.instancer] Added support for partial-instancing CFF2 tables! Also, added
method to down-convert from CFF2 to CFF 1.0, and CLI entry points to convert
CFF<->CFF2 (#3506).
- [subset] Prune unused user name IDs even with --name-IDs='*' (#3410).
- [ttx] use GNU-style getopt to intermix options and positional arguments (#3509).
- [feaLib.variableScalar] Fixed ``value_at_location()`` method (#3491)
- [psCharStrings] Shorten output of ``encodeFloat`` (#3492).
- [bezierTools] Fix infinite-recursion in ``calcCubicArcLength`` (#3502).
- [avar2] Implement ``avar2`` support in ``TTFont.getGlyphSet()`` (#3473).
4.51.0 (released 2024-04-05)
----------------------------

View File

@ -232,7 +232,8 @@ How to make a new release
2) Use semantic versioning to decide whether the new release will be a 'major',
'minor' or 'patch' release. It's usually one of the latter two, depending on
whether new backward compatible APIs were added, or simply some bugs were fixed.
3) Run ``python setup.py release`` command from the tip of the ``main`` branch.
3) From inside a venv, first do ``pip install -r dev-requirements.txt``, then run
the ``python setup.py release`` command from the tip of the ``main`` branch.
By default this bumps the third or 'patch' digit only, unless you pass ``--major``
or ``--minor`` to bump respectively the first or second digit.
This bumps the package version string, extracts the changes since the latest

View File

@ -5,6 +5,7 @@ import copy
import os
import sys
import unittest
from io import BytesIO
class CffLibTest(DataFilesHandler):
@ -119,5 +120,17 @@ class CffLibTest(DataFilesHandler):
self.assertEqual(len(glyphOrder), len(set(glyphOrder)))
class CFFToCFF2Test(DataFilesHandler):
def test_conversion(self):
font_path = self.getpath("CFFToCFF2-1.otf")
font = TTFont(font_path)
from fontTools.cffLib.CFFToCFF2 import convertCFFToCFF2
convertCFFToCFF2(font)
f = BytesIO()
font.save(f)
if __name__ == "__main__":
sys.exit(unittest.main())

Binary file not shown.

View File

@ -6,7 +6,7 @@ import py
ufoLib2 = pytest.importorskip("ufoLib2")
from fontTools.cu2qu.ufo import CURVE_TYPE_LIB_KEY
from fontTools.cu2qu.cli import main
from fontTools.cu2qu.cli import _main as main
DATADIR = os.path.join(os.path.dirname(__file__), "data")

View File

@ -48,6 +48,7 @@ def makeTTFont():
grave acute dieresis macron circumflex cedilla umlaut ogonek caron
damma hamza sukun kasratan lam_meem_jeem noon.final noon.initial
by feature lookup sub table uni0327 uni0328 e.fina
idotbelow idotless iogonek acutecomb brevecomb ogonekcomb dotbelowcomb
""".split()
glyphs.extend("cid{:05d}".format(cid) for cid in range(800, 1001 + 1))
font = TTFont()
@ -81,7 +82,8 @@ class BuilderTest(unittest.TestCase):
MultipleLookupsPerGlyph MultipleLookupsPerGlyph2 GSUB_6_formats
GSUB_5_formats delete_glyph STAT_test STAT_test_elidedFallbackNameID
variable_scalar_valuerecord variable_scalar_anchor variable_conditionset
variable_mark_anchor
variable_mark_anchor duplicate_lookup_reference
contextual_inline_multi_sub_format_2
""".split()
VARFONT_AXES = [

View File

@ -0,0 +1,17 @@
# reduced from the ccmp feature in Oswald
feature ccmp {
lookup ccmp_Other_1 {
@CombiningTopAccents = [acutecomb brevecomb];
@CombiningNonTopAccents = [dotbelowcomb ogonekcomb];
lookupflag UseMarkFilteringSet @CombiningTopAccents;
# we should only generate two lookups; one contextual and one multiple sub,
# containing 'sub idotbelow by idotless dotbelowcomb' and
# 'sub iogonek by idotless ogonekcomb'
sub idotbelow' @CombiningTopAccents by idotless dotbelowcomb;
sub iogonek' @CombiningTopAccents by idotless ogonekcomb;
sub idotbelow' @CombiningNonTopAccents @CombiningTopAccents by idotless dotbelowcomb;
sub iogonek' @CombiningNonTopAccents @CombiningTopAccents by idotless ogonekcomb;
} ccmp_Other_1;
} ccmp;

View File

@ -0,0 +1,135 @@
<?xml version="1.0" encoding="UTF-8"?>
<ttFont sfntVersion="\x00\x01\x00\x00" ttLibVersion="4.53">
<GDEF>
<Version value="0x00010002"/>
<MarkGlyphSetsDef>
<MarkSetTableFormat value="1"/>
<!-- MarkSetCount=1 -->
<Coverage index="0">
<Glyph value="acutecomb"/>
<Glyph value="brevecomb"/>
</Coverage>
</MarkGlyphSetsDef>
</GDEF>
<GSUB>
<Version value="0x00010000"/>
<ScriptList>
<!-- ScriptCount=1 -->
<ScriptRecord index="0">
<ScriptTag value="DFLT"/>
<Script>
<DefaultLangSys>
<ReqFeatureIndex value="65535"/>
<!-- FeatureCount=1 -->
<FeatureIndex index="0" value="0"/>
</DefaultLangSys>
<!-- LangSysCount=0 -->
</Script>
</ScriptRecord>
</ScriptList>
<FeatureList>
<!-- FeatureCount=1 -->
<FeatureRecord index="0">
<FeatureTag value="ccmp"/>
<Feature>
<!-- LookupCount=1 -->
<LookupListIndex index="0" value="0"/>
</Feature>
</FeatureRecord>
</FeatureList>
<LookupList>
<!-- LookupCount=2 -->
<Lookup index="0">
<LookupType value="6"/>
<LookupFlag value="16"/><!-- useMarkFilteringSet -->
<!-- SubTableCount=1 -->
<ChainContextSubst index="0" Format="2">
<Coverage>
<Glyph value="idotbelow"/>
<Glyph value="iogonek"/>
</Coverage>
<BacktrackClassDef>
</BacktrackClassDef>
<InputClassDef>
<ClassDef glyph="idotbelow" class="1"/>
<ClassDef glyph="iogonek" class="2"/>
</InputClassDef>
<LookAheadClassDef>
<ClassDef glyph="acutecomb" class="1"/>
<ClassDef glyph="brevecomb" class="1"/>
<ClassDef glyph="dotbelowcomb" class="2"/>
<ClassDef glyph="ogonekcomb" class="2"/>
</LookAheadClassDef>
<!-- ChainSubClassSetCount=3 -->
<ChainSubClassSet index="0" empty="1"/>
<ChainSubClassSet index="1">
<!-- ChainSubClassRuleCount=2 -->
<ChainSubClassRule index="0">
<!-- BacktrackGlyphCount=0 -->
<!-- InputGlyphCount=1 -->
<!-- LookAheadGlyphCount=1 -->
<LookAhead index="0" value="1"/>
<!-- SubstCount=1 -->
<SubstLookupRecord index="0">
<SequenceIndex value="0"/>
<LookupListIndex value="1"/>
</SubstLookupRecord>
</ChainSubClassRule>
<ChainSubClassRule index="1">
<!-- BacktrackGlyphCount=0 -->
<!-- InputGlyphCount=1 -->
<!-- LookAheadGlyphCount=2 -->
<LookAhead index="0" value="2"/>
<LookAhead index="1" value="1"/>
<!-- SubstCount=1 -->
<SubstLookupRecord index="0">
<SequenceIndex value="0"/>
<LookupListIndex value="1"/>
</SubstLookupRecord>
</ChainSubClassRule>
</ChainSubClassSet>
<ChainSubClassSet index="2">
<!-- ChainSubClassRuleCount=2 -->
<ChainSubClassRule index="0">
<!-- BacktrackGlyphCount=0 -->
<!-- InputGlyphCount=1 -->
<!-- LookAheadGlyphCount=1 -->
<LookAhead index="0" value="1"/>
<!-- SubstCount=1 -->
<SubstLookupRecord index="0">
<SequenceIndex value="0"/>
<LookupListIndex value="1"/>
</SubstLookupRecord>
</ChainSubClassRule>
<ChainSubClassRule index="1">
<!-- BacktrackGlyphCount=0 -->
<!-- InputGlyphCount=1 -->
<!-- LookAheadGlyphCount=2 -->
<LookAhead index="0" value="2"/>
<LookAhead index="1" value="1"/>
<!-- SubstCount=1 -->
<SubstLookupRecord index="0">
<SequenceIndex value="0"/>
<LookupListIndex value="1"/>
</SubstLookupRecord>
</ChainSubClassRule>
</ChainSubClassSet>
</ChainContextSubst>
<MarkFilteringSet value="0"/>
</Lookup>
<Lookup index="1">
<LookupType value="2"/>
<LookupFlag value="16"/><!-- useMarkFilteringSet -->
<!-- SubTableCount=1 -->
<MultipleSubst index="0">
<Substitution in="idotbelow" out="idotless,dotbelowcomb"/>
<Substitution in="iogonek" out="idotless,ogonekcomb"/>
</MultipleSubst>
<MarkFilteringSet value="0"/>
</Lookup>
</LookupList>
</GSUB>
</ttFont>

View File

@ -0,0 +1,18 @@
# https://github.com/fonttools/fonttools/issues/2946
languagesystem DFLT dflt;
languagesystem latn dflt;
feature test {
lookup alt1 {
sub a by A;
} alt1;
lookup alt2 {
sub b by B;
} alt2;
script latn;
lookup alt1;
} test;
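A hedged sketch of what this fixture exercises: re-referencing a named lookup after a `script` statement must not duplicate it in the feature's lookup list. The tiny glyph order is an assumption for illustration only.

```python
from fontTools.fontBuilder import FontBuilder
from fontTools.feaLib.builder import addOpenTypeFeaturesFromString

fb = FontBuilder(unitsPerEm=1000)
fb.setupGlyphOrder([".notdef", "a", "b", "A", "B"])  # hypothetical minimal set

features = """
languagesystem DFLT dflt;
languagesystem latn dflt;
feature test {
    lookup alt1 { sub a by A; } alt1;
    lookup alt2 { sub b by B; } alt2;
    script latn;
    lookup alt1;
} test;
"""
addOpenTypeFeaturesFromString(fb.font, features)

gsub = fb.font["GSUB"].table
# Per the expected TTX, there are two lookups and the single 'test' feature
# references each of them exactly once, despite the extra 'lookup alt1;'.
lookup_indices = gsub.FeatureList.FeatureRecord[0].Feature.LookupListIndex
print(gsub.LookupList.LookupCount, lookup_indices)
```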

View File

@ -0,0 +1,63 @@
<?xml version="1.0" encoding="UTF-8"?>
<ttFont>
<GSUB>
<Version value="0x00010000"/>
<ScriptList>
<!-- ScriptCount=2 -->
<ScriptRecord index="0">
<ScriptTag value="DFLT"/>
<Script>
<DefaultLangSys>
<ReqFeatureIndex value="65535"/>
<!-- FeatureCount=1 -->
<FeatureIndex index="0" value="0"/>
</DefaultLangSys>
<!-- LangSysCount=0 -->
</Script>
</ScriptRecord>
<ScriptRecord index="1">
<ScriptTag value="latn"/>
<Script>
<DefaultLangSys>
<ReqFeatureIndex value="65535"/>
<!-- FeatureCount=1 -->
<FeatureIndex index="0" value="0"/>
</DefaultLangSys>
<!-- LangSysCount=0 -->
</Script>
</ScriptRecord>
</ScriptList>
<FeatureList>
<!-- FeatureCount=1 -->
<FeatureRecord index="0">
<FeatureTag value="test"/>
<Feature>
<!-- LookupCount=2 -->
<LookupListIndex index="0" value="0"/>
<LookupListIndex index="1" value="1"/>
</Feature>
</FeatureRecord>
</FeatureList>
<LookupList>
<!-- LookupCount=2 -->
<Lookup index="0">
<LookupType value="1"/>
<LookupFlag value="0"/>
<!-- SubTableCount=1 -->
<SingleSubst index="0">
<Substitution in="a" out="A"/>
</SingleSubst>
</Lookup>
<Lookup index="1">
<LookupType value="1"/>
<LookupFlag value="0"/>
<!-- SubTableCount=1 -->
<SingleSubst index="0">
<Substitution in="b" out="B"/>
</SingleSubst>
</Lookup>
</LookupList>
</GSUB>
</ttFont>

View File

@ -330,9 +330,9 @@ def test_build_cff_to_cff2(tmpdir):
}
fb.setupCFF("TestFont", {}, charStrings, {})
from fontTools.varLib.cff import convertCFFtoCFF2
from fontTools.cffLib.CFFToCFF2 import convertCFFToCFF2
convertCFFtoCFF2(fb.font)
convertCFFToCFF2(fb.font)
def test_setupNameTable_no_mac():

View File

@ -3,6 +3,7 @@ from fontTools.misc.bezierTools import (
calcQuadraticBounds,
calcQuadraticArcLength,
calcCubicBounds,
calcCubicArcLength,
curveLineIntersections,
curveCurveIntersections,
segmentPointAtT,
@ -192,6 +193,35 @@ def test_calcQuadraticArcLength():
) == pytest.approx(127.9225)
@pytest.mark.parametrize(
"segment, expectedLength",
[
(
# https://github.com/fonttools/fonttools/issues/3502
((377, 469), (377, 468), (377, 472), (377, 472)), # off by one unit
3.32098765445,
),
(
# https://github.com/fonttools/fonttools/issues/3502
((242, 402), (242, 403), (242, 399), (242, 399)), # off by one unit
3.32098765445,
),
(
# https://github.com/fonttools/fonttools/issues/3514
(
(626.9918761593156, 1000.0),
(639.133178223544, 1000.0),
(650.1152019577394, 1000.0),
(626.9918761593156, 1000.0),
), # infinite recursion with Cython
27.06159516422008,
),
],
)
def test_calcCubicArcLength(segment, expectedLength):
assert calcCubicArcLength(*segment) == pytest.approx(expectedLength)
def test_intersections_linelike():
seg1 = [(0.0, 0.0), (0.0, 0.25), (0.0, 0.75), (0.0, 1.0)]
seg2 = [(0.0, 0.5), (0.25, 0.5), (0.75, 0.5), (1.0, 0.5)]

View File

@ -87,9 +87,22 @@ class T2CharStringTest(unittest.TestCase):
(1.0, "1e 1f"), # 1
(-1.0, "1e e1 ff"), # -1
(98765.37e2, "1e 98 76 53 7f"), # 9876537
(1234567890.0, "1e 1a 23 45 67 9b 09 ff"), # 1234567890
(9.876537e-4, "1e a0 00 98 76 53 7f"), # 9.876537e-24
(9.876537e4, "1e 98 76 5a 37 ff"), # 9.876537e+24
(1234567890.0, "1e 12 34 56 79 b2 ff"), # 12345679E2
(9.876537e-4, "1e 98 76 53 7c 10 ff"), # 9876537E-10
(9.876537e4, "1e 98 76 5a 37 ff"), # 98765.37
(1000.0, "1e 1b 3f"), # 1E3
(-1000.0, "1e e1 b3 ff"), # -1E3
(1e8, "1e 1b 8f"), # 1E8
(1e-5, "1e 1c 5f"), # 1E-5
(1.2e8, "1e 12 b7 ff"), # 12E7
(1.2345e-5, "1e 12 34 5c 9f"), # 12345E-9
(9.0987654e8, "1e 90 98 76 54 0f"), # 909876540
(0.1, "1e a1 ff"), # .1
(-0.1, "1e ea 1f"), # -.1
(0.01, "1e 1c 2f"), # 1e-2
(-0.01, "1e e1 c2 ff"), # -1e-2
(0.0123, "1e 12 3c 4f"), # 123e-4
(-0.0123, "1e e1 23 c4 ff"), # -123e-4
]
for sample in testNums:

View File

@ -24,6 +24,10 @@ class B:
self.a = A()
class C(B):
pass
class TestVisitor(Visitor):
def __init__(self):
self.value = []
@ -71,3 +75,9 @@ class VisitorTest(object):
visitor.defaultStop = True
visitor.visit(b)
assert visitor.value == ["B", "B a"]
def test_visitor_inheritance(self):
b = C() # Should behave just like a B()
visitor = TestVisitor()
visitor.visit(b)
assert visitor.value == ["B", "B a", "A", 1, 2, 3, 5, 7, "e", E.E2, 10]

View File

@ -39,8 +39,8 @@ def test_max_ctx_calc_features():
rsub a' by b;
rsub a b' by c;
rsub a b' c by A;
rsub [a b] [a b c]' [a b] by B;
rsub [a b] c' by A;
rsub [a b] c' [a b] by B;
lookup GSUB_EXT;
} sub1;

View File

@ -3,7 +3,7 @@ import os
import pytest
import py
from fontTools.qu2cu.cli import main
from fontTools.qu2cu.cli import _main as main
from fontTools.ttLib import TTFont

View File

@ -428,14 +428,14 @@ class SubsetTest:
def test_varComposite(self):
fontpath = self.getpath("..", "..", "ttLib", "data", "varc-ac00-ac01.ttf")
origfont = TTFont(fontpath)
assert len(origfont.getGlyphOrder()) == 6
assert len(origfont.getGlyphOrder()) == 11
subsetpath = self.temp_path(".ttf")
subset.main([fontpath, "--unicodes=ac00", "--output-file=%s" % subsetpath])
subsetfont = TTFont(subsetpath)
assert len(subsetfont.getGlyphOrder()) == 4
assert len(subsetfont.getGlyphOrder()) == 6
subset.main([fontpath, "--unicodes=ac01", "--output-file=%s" % subsetpath])
subsetfont = TTFont(subsetpath)
assert len(subsetfont.getGlyphOrder()) == 5
assert len(subsetfont.getGlyphOrder()) == 8
def test_timing_publishes_parts(self):
fontpath = self.compile_font(self.getpath("TestTTF-Regular.ttx"), ".ttf")
@ -1915,10 +1915,6 @@ def test_subset_recalc_xAvgCharWidth(ttf_path):
assert xAvgCharWidth_after == subset_font["OS/2"].xAvgCharWidth
if __name__ == "__main__":
sys.exit(unittest.main())
def test_subset_prune_gdef_markglyphsetsdef():
# GDEF_MarkGlyphSetsDef
fb = FontBuilder(unitsPerEm=1000, isTTF=True)
@ -2023,3 +2019,57 @@ def test_subset_prune_gdef_markglyphsetsdef():
assert lookups[1].MarkFilteringSet == None
marksets = font["GDEF"].table.MarkGlyphSetsDef.Coverage
assert marksets[0].glyphs == ["acutecomb"]
def test_prune_user_name_IDs_with_keep_all(ttf_path):
font = TTFont(ttf_path)
keepNameIDs = {n.nameID for n in font["name"].names}
for i in range(10):
font["name"].addName(f"Test{i}")
options = subset.Options()
options.name_IDs = ["*"]
options.name_legacy = True
options.name_languages = ["*"]
subsetter = subset.Subsetter(options)
subsetter.populate(unicodes=font.getBestCmap().keys())
subsetter.subset(font)
nameIDs = {n.nameID for n in font["name"].names}
assert not any(n > 255 for n in nameIDs)
assert nameIDs == keepNameIDs
def test_prune_unused_user_name_IDs_with_keep_all(ttf_path):
font = TTFont(ttf_path)
keepNameIDs = {n.nameID for n in font["name"].names}
for i in range(10):
font["name"].addName(f"Test{i}")
nameID = font["name"].addName("Test STAT")
keepNameIDs.add(nameID)
font["STAT"] = newTable("STAT")
font["STAT"].table = ot.STAT()
font["STAT"].table.ElidedFallbackNameID = nameID
options = subset.Options()
options.name_IDs = ["*"]
options.name_legacy = True
options.name_languages = ["*"]
subsetter = subset.Subsetter(options)
subsetter.populate(unicodes=font.getBestCmap().keys())
subsetter.subset(font)
nameIDs = {n.nameID for n in font["name"].names}
assert nameIDs == keepNameIDs
if __name__ == "__main__":
sys.exit(unittest.main())

Binary file not shown.

File diff suppressed because it is too large

Binary file not shown.

Binary file not shown.

View File

@ -1,5 +1,6 @@
from fontTools.ttLib import TTFont
from fontTools.ttLib.scaleUpem import scale_upem
from io import BytesIO
import difflib
import os
import shutil
@ -70,6 +71,12 @@ class ScaleUpemTest(unittest.TestCase):
scale_upem(font, 500)
# Save / load to ensure calculated values are correct
# XXX This wasn't needed before. So needs investigation.
iobytes = BytesIO()
font.save(iobytes)
# Just saving is enough to fix the numbers. Sigh...
expected_ttx_path = self.get_path("varc-ac00-ac01-500upem.ttx")
self.expect_ttx(font, expected_ttx_path, tables)

View File

@ -0,0 +1,87 @@
from fontTools.ttLib import TTFont
from io import StringIO, BytesIO
import pytest
import os
import unittest
CURR_DIR = os.path.abspath(os.path.dirname(os.path.realpath(__file__)))
DATA_DIR = os.path.join(CURR_DIR, "data")
class VarCompositeTest(unittest.TestCase):
def test_basic(self):
font_path = os.path.join(DATA_DIR, "..", "..", "data", "varc-ac00-ac01.ttf")
font = TTFont(font_path)
varc = font["VARC"]
assert varc.table.Coverage.glyphs == [
"uniAC00",
"uniAC01",
"glyph00003",
"glyph00005",
"glyph00007",
"glyph00008",
"glyph00009",
]
font_path = os.path.join(DATA_DIR, "..", "..", "data", "varc-6868.ttf")
font = TTFont(font_path)
varc = font["VARC"]
assert varc.table.Coverage.glyphs == [
"uni6868",
"glyph00002",
"glyph00005",
"glyph00007",
]
def test_roundtrip(self):
font_path = os.path.join(DATA_DIR, "..", "..", "data", "varc-ac00-ac01.ttf")
font = TTFont(font_path)
tables = [
table_tag
for table_tag in font.keys()
if table_tag not in {"head", "maxp", "hhea"}
]
xml = StringIO()
font.saveXML(xml)
xml1 = StringIO()
font.saveXML(xml1, tables=tables)
xml.seek(0)
font = TTFont()
font.importXML(xml)
ttf = BytesIO()
font.save(ttf)
ttf.seek(0)
font = TTFont(ttf)
xml2 = StringIO()
font.saveXML(xml2, tables=tables)
assert xml1.getvalue() == xml2.getvalue()
font_path = os.path.join(DATA_DIR, "..", "..", "data", "varc-6868.ttf")
font = TTFont(font_path)
tables = [
table_tag
for table_tag in font.keys()
if table_tag not in {"head", "maxp", "hhea", "name", "fvar"}
]
xml = StringIO()
font.saveXML(xml)
xml1 = StringIO()
font.saveXML(xml1, tables=tables)
xml.seek(0)
font = TTFont()
font.importXML(xml)
ttf = BytesIO()
font.save(ttf)
ttf.seek(0)
font = TTFont(ttf)
xml2 = StringIO()
font.saveXML(xml2, tables=tables)
assert xml1.getvalue() == xml2.getvalue()
if __name__ == "__main__":
import sys
sys.exit(unittest.main())

View File

@ -719,65 +719,6 @@ class GlyphComponentTest:
assert (comp.firstPt, comp.secondPt) == (1, 2)
assert not hasattr(comp, "transform")
def test_trim_varComposite_glyph(self):
font_path = os.path.join(DATA_DIR, "..", "..", "data", "varc-ac00-ac01.ttf")
font = TTFont(font_path)
glyf = font["glyf"]
glyf.glyphs["uniAC00"].trim()
glyf.glyphs["uniAC01"].trim()
font_path = os.path.join(DATA_DIR, "..", "..", "data", "varc-6868.ttf")
font = TTFont(font_path)
glyf = font["glyf"]
glyf.glyphs["uni6868"].trim()
def test_varComposite_basic(self):
font_path = os.path.join(DATA_DIR, "..", "..", "data", "varc-ac00-ac01.ttf")
font = TTFont(font_path)
tables = [
table_tag
for table_tag in font.keys()
if table_tag not in {"head", "maxp", "hhea"}
]
xml = StringIO()
font.saveXML(xml)
xml1 = StringIO()
font.saveXML(xml1, tables=tables)
xml.seek(0)
font = TTFont()
font.importXML(xml)
ttf = BytesIO()
font.save(ttf)
ttf.seek(0)
font = TTFont(ttf)
xml2 = StringIO()
font.saveXML(xml2, tables=tables)
assert xml1.getvalue() == xml2.getvalue()
font_path = os.path.join(DATA_DIR, "..", "..", "data", "varc-6868.ttf")
font = TTFont(font_path)
tables = [
table_tag
for table_tag in font.keys()
if table_tag not in {"head", "maxp", "hhea", "name", "fvar"}
]
xml = StringIO()
font.saveXML(xml)
xml1 = StringIO()
font.saveXML(xml1, tables=tables)
xml.seek(0)
font = TTFont()
font.importXML(xml)
ttf = BytesIO()
font.save(ttf)
ttf.seek(0)
font = TTFont(ttf)
xml2 = StringIO()
font.saveXML(xml2, tables=tables)
assert xml1.getvalue() == xml2.getvalue()
class GlyphCubicTest:
def test_roundtrip(self):

Binary file not shown.

View File

@ -427,9 +427,12 @@ class AATLookupTest(unittest.TestCase):
)
from fontTools.misc.lazyTools import LazyList
class LazyListTest(unittest.TestCase):
def test_slice(self):
ll = otConverters._LazyList([10, 11, 12, 13])
ll = LazyList([10, 11, 12, 13])
sl = ll[:]
self.assertIsNot(sl, ll)
@ -438,26 +441,9 @@ class LazyListTest(unittest.TestCase):
self.assertEqual([11, 12], ll[1:3])
def test_getitem(self):
count = 2
reader = OTTableReader(b"\x00\xFE\xFF\x00\x00\x00", offset=1)
converter = otConverters.UInt8("UInt8", 0, None, None)
recordSize = converter.staticSize
l = otConverters._LazyList()
l.reader = reader
l.pos = l.reader.pos
l.font = None
l.conv = converter
l.recordSize = recordSize
l.extend(otConverters._MissingItem([i]) for i in range(count))
reader.advance(count * recordSize)
self.assertEqual(l[0], 254)
self.assertEqual(l[1], 255)
def test_add_both_LazyList(self):
ll1 = otConverters._LazyList([1])
ll2 = otConverters._LazyList([2])
ll1 = LazyList([1])
ll2 = LazyList([2])
l3 = ll1 + ll2
@ -465,7 +451,7 @@ class LazyListTest(unittest.TestCase):
self.assertEqual([1, 2], l3)
def test_add_LazyList_and_list(self):
ll1 = otConverters._LazyList([1])
ll1 = LazyList([1])
l2 = [2]
l3 = ll1 + l2
@ -475,13 +461,13 @@ class LazyListTest(unittest.TestCase):
def test_add_not_implemented(self):
with self.assertRaises(TypeError):
otConverters._LazyList() + 0
LazyList() + 0
with self.assertRaises(TypeError):
otConverters._LazyList() + tuple()
LazyList() + tuple()
def test_radd_list_and_LazyList(self):
l1 = [1]
ll2 = otConverters._LazyList([2])
ll2 = LazyList([2])
l3 = l1 + ll2
@ -490,9 +476,9 @@ class LazyListTest(unittest.TestCase):
def test_radd_not_implemented(self):
with self.assertRaises(TypeError):
0 + otConverters._LazyList()
0 + LazyList()
with self.assertRaises(TypeError):
tuple() + otConverters._LazyList()
tuple() + LazyList()
if __name__ == "__main__":

View File

@ -227,33 +227,57 @@ class TTGlyphSetTest(object):
"addVarComponent",
(
"glyph00003",
DecomposedTransform(460.0, 676.0, 0, 1, 1, 0, 0, 0, 0),
{
"0000": 0.84661865234375,
"0001": 0.98944091796875,
"0002": 0.47283935546875,
"0003": 0.446533203125,
},
DecomposedTransform(
translateX=0,
translateY=0,
rotation=0,
scaleX=1,
scaleY=1,
skewX=0,
skewY=0,
tCenterX=0,
tCenterY=0,
),
{},
),
),
(
"addVarComponent",
(
"glyph00004",
DecomposedTransform(932.0, 382.0, 0, 1, 1, 0, 0, 0, 0),
{
"0000": 0.93359375,
"0001": 0.916015625,
"0002": 0.523193359375,
"0003": 0.32806396484375,
"0004": 0.85089111328125,
},
"glyph00005",
DecomposedTransform(
translateX=0,
translateY=0,
rotation=0,
scaleX=1,
scaleY=1,
skewX=0,
skewY=0,
tCenterX=0,
tCenterY=0,
),
{},
),
),
]
assert actual == expected, (actual, expected)
def test_glyphset_varComposite_conditional(self):
font = TTFont(self.getpath("varc-ac01-conditional.ttf"))
glyphset = font.getGlyphSet()
pen = RecordingPen()
glyph = glyphset["uniAC01"]
glyph.draw(pen)
assert len(pen.value) == 2
glyphset = font.getGlyphSet(location={"wght": 800})
pen = RecordingPen()
glyph = glyphset["uniAC01"]
glyph.draw(pen)
assert len(pen.value) == 3
def test_glyphset_varComposite1(self):
font = TTFont(self.getpath("varc-ac00-ac01.ttf"))
glyphset = font.getGlyphSet(location={"wght": 600})
@ -265,77 +289,24 @@ class TTGlyphSetTest(object):
actual = pen.value
expected = [
("moveTo", ((432, 678),)),
("lineTo", ((432, 620),)),
(
"qCurveTo",
(
(419, 620),
(374, 621),
(324, 619),
(275, 618),
(237, 617),
(228, 616),
),
),
("qCurveTo", ((218, 616), (188, 612), (160, 605), (149, 601))),
("qCurveTo", ((127, 611), (83, 639), (67, 654))),
("qCurveTo", ((64, 657), (63, 662), (64, 666))),
("lineTo", ((72, 678),)),
("qCurveTo", ((93, 674), (144, 672), (164, 672))),
(
"qCurveTo",
(
(173, 672),
(213, 672),
(266, 673),
(323, 674),
(377, 675),
(421, 678),
(432, 678),
),
),
("moveTo", ((82, 108),)),
("qCurveTo", ((188, 138), (350, 240), (461, 384), (518, 567), (518, 678))),
("lineTo", ((518, 732),)),
("lineTo", ((74, 732),)),
("lineTo", ((74, 630),)),
("lineTo", ((456, 630),)),
("lineTo", ((403, 660),)),
("qCurveTo", ((403, 575), (358, 431), (267, 314), (128, 225), (34, 194))),
("closePath", ()),
("moveTo", ((525, 619),)),
("lineTo", ((412, 620),)),
("lineTo", ((429, 678),)),
("lineTo", ((466, 697),)),
("qCurveTo", ((470, 698), (482, 698), (486, 697))),
("qCurveTo", ((494, 693), (515, 682), (536, 670), (541, 667))),
("qCurveTo", ((545, 663), (545, 656), (543, 652))),
("lineTo", ((525, 619),)),
("moveTo", ((702, 385),)),
("lineTo", ((897, 385),)),
("lineTo", ((897, 485),)),
("lineTo", ((702, 485),)),
("closePath", ()),
("moveTo", ((63, 118),)),
("lineTo", ((47, 135),)),
("qCurveTo", ((42, 141), (48, 146))),
("qCurveTo", ((135, 213), (278, 373), (383, 541), (412, 620))),
("lineTo", ((471, 642),)),
("lineTo", ((525, 619),)),
("qCurveTo", ((496, 529), (365, 342), (183, 179), (75, 121))),
("qCurveTo", ((72, 119), (65, 118), (63, 118))),
("closePath", ()),
("moveTo", ((925, 372),)),
("lineTo", ((739, 368),)),
("lineTo", ((739, 427),)),
("lineTo", ((822, 430),)),
("lineTo", ((854, 451),)),
("qCurveTo", ((878, 453), (930, 449), (944, 445))),
("qCurveTo", ((961, 441), (962, 426))),
("qCurveTo", ((964, 411), (956, 386), (951, 381))),
("qCurveTo", ((947, 376), (931, 372), (925, 372))),
("closePath", ()),
("moveTo", ((729, -113),)),
("lineTo", ((674, -113),)),
("qCurveTo", ((671, -98), (669, -42), (666, 22), (665, 83), (665, 102))),
("lineTo", ((665, 763),)),
("qCurveTo", ((654, 780), (608, 810), (582, 820))),
("lineTo", ((593, 850),)),
("qCurveTo", ((594, 852), (599, 856), (607, 856))),
("qCurveTo", ((628, 855), (684, 846), (736, 834), (752, 827))),
("qCurveTo", ((766, 818), (766, 802))),
("lineTo", ((762, 745),)),
("lineTo", ((762, 134),)),
("qCurveTo", ((762, 107), (757, 43), (749, -25), (737, -87), (729, -113))),
("moveTo", ((641, -92),)),
("lineTo", ((752, -92),)),
("lineTo", ((752, 813),)),
("lineTo", ((641, 813),)),
("closePath", ()),
]
@ -530,7 +501,7 @@ class TTGlyphSetTest(object):
"qCurveTo",
(
(919, 41),
(854, 67),
(854, 68),
(790, 98),
(729, 134),
(671, 173),
@ -542,7 +513,7 @@ class TTGlyphSetTest(object):
("lineTo", ((522, 286),)),
("qCurveTo", ((511, 267), (498, 235), (493, 213), (492, 206))),
("lineTo", ((515, 209),)),
("qCurveTo", ((569, 146), (695, 44), (835, -32), (913, -57))),
("qCurveTo", ((569, 146), (695, 45), (835, -32), (913, -57))),
("closePath", ()),
("moveTo", ((474, 274),)),
("lineTo", ((452, 284),)),

View File

@ -1018,6 +1018,15 @@ def test_main_ttx_compile_stdin_to_stdout(tmp_path):
assert outpath.is_file()
def test_main_gnu_style_opts_and_args_intermixed(tmpdir):
# https://github.com/fonttools/fonttools/issues/3507
inpath = os.path.join("Tests", "ttx", "data", "TestTTF.ttf")
outpath = tmpdir.join("TestTTF.ttx")
args = ["-t", "cmap", inpath, "-o", str(outpath)]
ttx.main(args)
assert outpath.check(file=True)
def test_roundtrip_DSIG_split_at_XML_parse_buffer_size(tmp_path):
inpath = Path("Tests").joinpath(
"ttx", "data", "roundtrip_DSIG_split_at_XML_parse_buffer_size.ttx"

Some files were not shown because too many files have changed in this diff