We Need Know-how!
It’s strongly suggested that you first read our Experiment One for the backstory of what follows:
The following was originally posted to sirbacon.org under the account FB Decipherer; it is my response to the first reply I received:
What a joy that someone has both read my post and has grokked it to the max!
You raise a key point: in some cases the difference between the ‘a’ and ‘b’ forms of a biformed letter is so minuscule that classification cannot be reliably performed by inspection (that is, via unaided human eyeballs). But consider an arbitrary digital representation of one letter in an alphabet, an image of, say, 30 x 30 pixels. If any one pixel position were uniform across every impression of a letter form, it could easily be read in place and used as one bit of persistent binary data. That’s all the Biliteral cipher method requires.
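To make the idea concrete, here is a minimal NumPy sketch of reading one bit from one pixel position. The image, the pixel coordinates, and the mapping of pixel to bit are all hypothetical stand-ins, not real First Folio data:

```python
import numpy as np

# Hypothetical 30x30 binary glyph image whose 'a' and 'b' variants are
# assumed to differ only at one known pixel position (here row 12, col 7).
glyph = np.zeros((30, 30), dtype=np.uint8)
glyph[12, 7] = 255  # the 'b'-form variant sets this pixel; the 'a' form does not

# Reading that single pixel in place yields one bit of Biliteral cipher data.
bit = 1 if glyph[12, 7] > 0 else 0
print(bit)  # 1 -> 'b' form
```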
That, of course, is not what we have with historic, high-resolution digital facsimiles, such as my favored Bodleian Library downloads of the First Folio. But I think of this: an image-editing program such as Photoshop allows superimposing two images, moving them into place manually for registration, then taking the union and the intersection of the pair, each of which yields a third image.
Computer Vision programming libraries such as OpenCV have extensive facilities for doing all of that, in highly automated fashion under program control. The difference image of the two biformed variants of any letter could then be isolated as a third, much smaller, image. In the difficult case it might be a very thin crescent of some kind, but it would be detectable as a non-zero number of pixels, and it would be uniform across a run of pages printed from the same set of alphabetic cast-lead slugs (the proper word for them, I believe, is “sorts”).
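The union, intersection, and difference operations above can be sketched in a few lines. This uses synthetic NumPy arrays in place of real facsimile crops so it is self-contained; on real images the OpenCV calls named in the comments (`cv2.bitwise_or`, `cv2.bitwise_and`, `cv2.absdiff`, `cv2.countNonZero`) do the same work:

```python
import numpy as np

# Two synthetic 30x30 monochrome glyphs standing in for the 'a' and 'b'
# variants of one letter (real inputs would be registered facsimile crops).
a = np.zeros((30, 30), dtype=np.uint8)
b = np.zeros((30, 30), dtype=np.uint8)
a[5:25, 10:14] = 255             # vertical stroke shared by both forms
b[5:25, 10:14] = 255
b[5:9, 14:20] = 255              # extra flourish present only on the 'b' form

union = np.maximum(a, b)         # cv2.bitwise_or(a, b) on real images
intersection = np.minimum(a, b)  # cv2.bitwise_and(a, b)
difference = np.abs(a.astype(int) - b.astype(int)).astype(np.uint8)  # cv2.absdiff(a, b)

# The two variants are distinguishable iff the difference image is non-empty.
print(int(np.count_nonzero(difference)))  # cv2.countNonZero(difference) -> 24
```

Here the “thin crescent” is just the 24-pixel flourish region; any non-zero count signals that the two forms differ.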
I did a little exploratory coding some months back with OpenCV routines for working with SIFT (Scale-Invariant Feature Transform). I believe its merit is that it can distill, or abstract, an entire digital image down into a much simpler data structure, a sort of digital signature, one particularly well suited to comparison with other images for discerning differences between them. And it works well even with very complex full-color images.
Thus the requirement here is vastly simpler: we are always working in monochrome, the images are always strictly two-dimensional, and for either italic or roman type there is always a fixed set of 96 glyphs to match against (24 letters in the Elizabethan alphabet, two biformed forms each, in both upper and lower case: 24 x 2 x 2). Not to mention that there is no background image data to filter out, and any background noise in the originals can simply be erased.
A fantastic free resource is the Bodleian Library’s set of Text Encoding Initiative (TEI) XML files, one for each of the 36 First Folio plays, each containing the complete text (with comprehensive metadata) in machine-readable form. Presumably it is a very reliable information source: the names of the Oxford University team of proofreaders are included in the metadata of the files.
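Pulling the machine-readable text out of such a file takes only the standard library. The fragment below is illustrative, not an excerpt from the actual Bodleian files, though the TEI namespace and the use of `<sp>`/`<l>` elements for speeches and verse lines follow the TEI convention:

```python
import xml.etree.ElementTree as ET

# Tiny TEI-style fragment (illustrative only; the real Bodleian files are
# far richer, with full metadata in a <teiHeader>).
sample = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body><div type="scene">
    <sp><speaker>Ham.</speaker>
      <l>To be, or not to be, that is the Question:</l>
    </sp>
  </div></body></text>
</TEI>"""

ns = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(sample)

# Collect every verse line; iterating its characters gives the expected
# letter sequence to check the facsimile images against.
lines = [l.text for l in root.findall(".//tei:l", ns)]
print(lines[0])
```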
Because of the above, as our Python code iterates through each letter of each line, we can know in advance what the next letter in sequence “should” be. We then only have to test whether it is an ‘a’ or a ‘b’ variant. If we can extract the bounding-box coordinates of each letter in turn, we can take its SIFT signature and compare it against the corresponding entry in a previously saved list of reference SIFT values for each letter (calculated in advance from somewhere, somehow).
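That per-letter test might look like the following sketch. Everything here is hypothetical: the reference table, the `classify` helper, and the use of plain Euclidean distance between 128-dimensional descriptors are my stand-ins for whatever the real pipeline would use:

```python
import numpy as np

# Hypothetical reference table: (letter, variant) -> 128-dim SIFT-like
# descriptor. Random vectors stand in for real precomputed signatures.
rng = np.random.default_rng(0)
reference = {
    ("e", "a"): rng.random(128),
    ("e", "b"): rng.random(128),
}

def classify(descriptor, expected_letter):
    """Return 'a' or 'b': whichever reference signature is nearer."""
    da = np.linalg.norm(descriptor - reference[(expected_letter, "a")])
    db = np.linalg.norm(descriptor - reference[(expected_letter, "b")])
    return "a" if da <= db else "b"

# The TEI text tells us the letter is an 'e'; a descriptor lying close to
# the 'b' reference should therefore classify as the 'b' variant.
observed = reference[("e", "b")] + 0.01
print(classify(observed, "e"))  # 'b'
```

Knowing the expected letter in advance is what keeps this cheap: each image is compared against only two references, never all 96.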
But where do these crucial reference values come from? I am thinking of an application of bootstrapping: the initial reference values might be very rough to begin with, but they are refined after each classification, with on-the-fly learning from a progressively larger store of already-classified images.
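One simple way to realize that refinement, offered purely as a sketch, is to keep each reference signature as a running mean that absorbs every newly classified descriptor (the class and its names are hypothetical, and a toy 4-dimensional vector stands in for a 128-dimensional SIFT descriptor):

```python
import numpy as np

class Reference:
    """A reference signature refined by each confident classification."""

    def __init__(self, initial):
        # The rough starting value counts as the first observation.
        self.mean = np.asarray(initial, dtype=float)
        self.count = 1

    def update(self, descriptor):
        # Incremental mean: m_n = m_{n-1} + (x - m_{n-1}) / n
        self.count += 1
        self.mean += (descriptor - self.mean) / self.count

ref = Reference(np.zeros(4))  # toy 4-dim signature, rough initial guess
for d in ([4, 0, 0, 0], [0, 4, 0, 0], [0, 0, 4, 0]):
    ref.update(np.array(d, dtype=float))

print(ref.mean)  # [1. 1. 1. 0.]
```

Each update pulls the reference toward the accumulated evidence, so early rough guesses matter less and less as pages are processed, which is the essence of the bootstrapping idea.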
Some months back I did the initial work on an open project hosted on Roboflow, the cloud-based AI development platform, which I named Elizabethan Biformed Alphabets. I was hoping someone with much more experience than I have with Convolutional Neural Networks would find the concept interesting enough to contribute to the project, but so far no one has come forward.
Would this be an interesting opportunity for you?
Copyright © 2023 New Gorhambury
The Tudor Rose
Respectfully dedicated to A. Phoenix, whoever you are