* Add nameTable.findMultilingualName(), to find an existing multilingual name
* Make addMultilingualName() reuse existing nameIDs where possible, by calling findMultilingualName() before allocating a new nameID (see the sketch below)
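A minimal sketch of the intended usage, assuming the fontTools name table API; the font path and the sample strings are made up:

```python
from fontTools.ttLib import TTFont

font = TTFont("MyFont.ttf")  # hypothetical input path
name = font["name"]

# Language tag -> string, as accepted by the multilingual name helpers.
names = {"en": "Bold", "de": "Fett"}

# findMultilingualName() returns the nameID of an existing set of records
# matching all the given strings, or None if no such set exists.
existing = name.findMultilingualName(names)

# addMultilingualName() now asks findMultilingualName() first, so requesting
# the same strings again reuses the existing nameID instead of minting a new one.
nameID = name.addMultilingualName(names, ttFont=font)
assert existing is None or existing == nameID
```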
Before Python 2.7.7, the struct.pack() and unpack() functions
required a native string as their format argument. For example, passing
a unicode string as the struct pack/unpack format would raise:
TypeError: Struct() argument 1 must be string, not unicode.
This error occurs when we use `from __future__ import unicode_literals`.
This problem was fixed in Python 2.7.7; since then, struct also
accepts unicode format strings.
Since Python 3's struct accepts either bytes or unicode format strings,
we use bytes here so that the code works with both 2 and 3 (see the example below).
Also see http://python-future.org/stdlib_incompatibilities.html#struct-pack
Fixes https://github.com/fonttools/fonttools/issues/993
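A minimal example of the pattern, showing why a bytes format string works under `from __future__ import unicode_literals` on Python 2 as well as on Python 3:

```python
from __future__ import unicode_literals
import struct

# With unicode_literals in effect, a plain ">HH" literal is unicode and would
# raise TypeError on Python < 2.7.7; a bytes literal is accepted everywhere.
packed = struct.pack(b">HH", 3, 1)
platformID, platEncID = struct.unpack(b">HH", packed)
```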
Previously (only on Python 2), the following line silently coerced the input to Unicode using the default 'ascii' codec whenever the input string was a `bytes` string (the same as `str` in PY2):
namerecord.string = string.encode(namerecord.getEncoding())
On Python 3, the line above would crash, since bytes objects don't have an `encode` method (and rightly so).
`setName` now accepts both Unicode and bytes strings, but will issue a warning if the latter are used.
If the referenced name record already exists, the string is modified in place.
If it doesn't exist, a new record is created with the given IDs and string.
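A short sketch of calling setName() (the font path is a made-up example): passing a Unicode string either updates the matching record in place or creates it, while passing bytes still works but triggers the warning mentioned above:

```python
from fontTools.ttLib import TTFont

font = TTFont("MyFont.ttf")  # hypothetical input path
name = font["name"]

# nameID=2 (subfamily), Windows platform, Unicode BMP encoding, English (US).
name.setName("Extra Bold", 2, 3, 1, 0x409)

# Bytes input is still accepted for backward compatibility, but warns.
name.setName(b"Extra Bold", 2, 3, 1, 0x409)
```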
Useful for generating XML comments when tables refer to name IDs.
For example, XML for a named instance in an ‘fvar’ table is easier
to read when it says <!-- Bold --> in addition to nameID="258".
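For illustration, such a comment can be produced with the name table's getDebugName() helper; which helper the XML writer actually uses is an assumption here, and the font path and nameID are made up:

```python
from fontTools.ttLib import TTFont

font = TTFont("MyVariableFont.ttf")  # hypothetical input path
debug_name = font["name"].getDebugName(258)  # e.g. "Bold" for a named instance
if debug_name is not None:
    print("<!-- %s -->" % debug_name)
```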
As seen in some Jython environments, where this was raised:
encodings.CodecRegistryError: incompatible codecs in module "encodings.utf_16_be" (__pyclasspath__/encodings/utf_16_be.py)
Part of https://github.com/behdad/fonttools/issues/236
Now we fall back to ASCII for unknown encodings. Not sure whether this is a bad idea.
The main user-visible difference is that if there is ASCII-only text in an unknown
encoding, we still "decode" it and use unicode="True" instead of unicode="False".
Or is assuming that any unsupported encoding is ASCII-compatible too intrusive?
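A minimal sketch of the fallback idea (not the actual fontTools code), using a hypothetical decode_name_bytes() helper: try the record's own codec first and fall back to ASCII when the codec is unknown or unavailable, as in the Jython case above:

```python
def decode_name_bytes(data, encoding):
    """Sketch only: decode name record bytes, falling back to ASCII."""
    try:
        return data.decode(encoding)
    except LookupError:
        # Unknown or unavailable codec: assume the bytes are ASCII-compatible.
        return data.decode("ascii", errors="replace")
```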
This will make it impossible to process fonts with super-broken
name tables. But we do not handle arbitrarily broken fonts anyway,
so this is arguably better than silently ignoring junk content.
Resolves a review comment in #235.
In the unit test, replace calls to the deprecated unittest.assertEquals()
with calls to unittest.assertEqual().
When dumping the Macintosh Skia font to XML, the TTX tool crashed
with a Python exception. After this change, the font can be dumped
without errors. The crash was caused by a name record with platformID=1
(Macintosh) and platEncID=2 (Traditional Chinese).
https://github.com/behdad/fonttools/issues/54
There is a new attribute named unicode that controls whether the
text in the XML entry is to be interpreted as Unicode or as the
target encoding.
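An illustrative (made-up) TTX name record showing the attribute; with unicode="False" the text is instead interpreted in the record's own target encoding:

```xml
<!-- Illustrative only; IDs and text are made up. -->
<namerecord nameID="1" platformID="1" platEncID="0" langID="0x0" unicode="True">
  Skia
</namerecord>
```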