Saturday, November 27, 2021

A mistake in the VM?

One of the notable things about the VM is that there are no obvious mistakes. The scribe(s) do not appear to have erased, scraped out or overlined any text.

On f105r, however, there are four words written in an odd location, and I will argue that these were accidentally omitted from the end of the first line of the text, but the mistake was caught before the scribe was done and they were written in above the line. These words make up line 10 in the Landini-Stolfi transliteration, and fall between the second paragraph and the third one.


On René Zandbergen's site the description for this page has the following note: "There is a break between the third and fourth paragraph, and it appears as if the end of the third paragraphs was written above it." I think the "break" referred to here is the fact that the ink of the fourth paragraph is slightly fainter than the third, and the letters are neater and smaller, suggesting that paragraphs 1-3 were written in one sitting, and paragraphs 4 and onward were written later.

Observations

We can observe the following things about the physical appearance of these words:
  • They are set lower than the last line of the second paragraph, though there was ample space to place them in line with it, suggesting that they are not intended to be part of paragraph 2.
  • The gallows letters of the first line of the third paragraph are interposed between the oddball words, suggesting that the oddball words were written after the first line of paragraph 3 was completed, and written around the gallows letters.
  • The color and shape of the letters in these words is similar to those in paragraphs 1-3, so not obviously written later or in a different hand.
The words have the following statistical properties:
  • sairy elsewhere only appears as the last word of the first line of a paragraph
  • ore does not appear elsewhere
  • daiindy appears once in Currier A as part of a label and twice in Currier B (including this instance), in neither case at the end of a line
  • ytam appears both in Currier A and B, usually at the end of the line, but not always
Conclusions

The color and style of the letters, together with their placement, suggest that they belong to paragraph 3, and they were written in at roughly the same time that the other lines of paragraph 3 were written. The statistical properties of the words suggest that they belong to the end of a line, but which line?
  • Line 11: This line ends in dyaiin, a word not found elsewhere.
  • Line 12: This line ends in ry, which elsewhere appears only as a line-final word.
  • Line 13: This line ends with ot, which elsewhere does not appear at the end of a line. There is blank space at the end of the paragraph, sufficient to write about two words.
Given that line 12 already ends in a word which is elsewhere only line-final, and line 13 leaves enough blank space that at least two of the oddball words could simply have been written there, the best explanation is that these words belong at the end of line 11, the line immediately below them.

I have seen omissions like this in manuscripts in the past, and the cause is often that the eye skips from one word to a later similar word. In this case, perhaps the scribe's eye skipped from sairy to yaiir, which starts line 12. That opens up two possibilities:
  • If the text was enciphered first on a wax tablet (or something similar) and then copied to the vellum, and the eye-skip occurred during the copying process, then line-breaks on the wax tablet were not the same as the line breaks on the vellum.
  • If the eye-skip occurred during the encipherment process, then the plaintext for sairy could be similar to (or even identical to) the plaintext for yaiir.

Saturday, November 20, 2021

Word Transposition in the Voynich Manuscript

This is a follow-on to my last post, but I don't want to bother recapping the argument from last time, so I'll just start over fresh and go a different direction.

Summary
While a text in an unknown language may look random, there are two "forces" that govern the appearance of words in the text. One of those "forces" is absent in the Voynich Manuscript, and I think that indicates that a word transposition step has taken place.

Argument
The following graph shows the relative likelihood that a word will recur in a text at a given distance after having occurred once.


The graph above shows the likelihood that a given word will appear a second time after it has appeared a first time. For example, in a Latin medieval prose text (red line), if a word appears once then there is an extremely low chance (0.03%) that the next word will be the same word. That rises to an almost 1% chance that it will appear six words later, then slowly drops off to a 0.76% chance that it will appear 30 words later, and a roughly 0.6% chance that it will appear 100 words later.
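The measurement behind a graph like this can be sketched in a few lines of Python. This is a minimal reconstruction of the idea, not the code actually used for the figures:

```python
def recurrence_curve(tokens, max_dist=100):
    """Estimate, for each distance d, the probability that the word at
    position i recurs exactly d positions later (tokens[i + d] == tokens[i])."""
    n = len(tokens)
    curve = {}
    for d in range(1, max_dist + 1):
        hits = sum(tokens[i] == tokens[i + d] for i in range(n - d))
        curve[d] = hits / (n - d)
    return curve

# Toy text with period 13: no adjacent repeats, guaranteed repeats at d = 13.
tokens = ("the cat sat on the mat and the dog sat on the rug " * 50).split()
curve = recurrence_curve(tokens, max_dist=20)
```

Plotting curve values for a real text against distance d gives the shape discussed here: a dip at short distances, a peak, then a slow decline.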

A similar phenomenon can be seen in the early modern French novel Pantagruel (blue line).

This curve could be described as the product of the interaction of two "forces":
  1. Strong repulsive force: Instances of the same word have a very low likelihood of being found in very close vicinity to each other. Perhaps languages are naturally structured in a way that avoids close repetition.
  2. Weak attractive force: Instances of the same word have a higher likelihood of being found in the same broad area of the text as each other. Intuitively it seems like this should not apply to high-frequency words with low semantic content (articles, prepositions, etc.), since there is no reason for these to be grouped together in the same area of a text. Instead, this ought to apply primarily to lower-frequency words with high semantic content, since these words will be tied to the topic of discourse, and will therefore be clustered in areas of the text where the topic relates to their semantic domain. (I should have proved this out, but I didn't.)

Interestingly, Latin syllables (orange line) respond to the same strong repulsive force as words, but not the weak attractive one. This makes sense if the weak attractive force relates to semantic content, because syllables themselves have no semantic content and are therefore not tied to the topic of the text. Instead, with Latin syllables we see a strong tendency for syllables not to repeat in close vicinity to each other, but then the curve just rises to a plateau.

So what do we see in the VM?


Words in the Voynich Manuscript demonstrate the effects of the weak attractive force more or less like Latin words. This suggests they have a semantic component, and there is some kind of topicalization going on. However, the VM shows no evidence of the strong repulsive force. What could cause that?

The strong repulsive force works over a very short distance, generally less than five words. If words were shuffled around so they were separated from their neighbors by a distance of five words or more, then this would conceal the effect of the strong repulsive force.
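One way to see this effect is to shuffle a text within fixed-size windows and watch adjacent repeats appear. The window size and shuffling method below are my own toy assumptions, a stand-in for whatever transposition the VM might actually use:

```python
import random

def adjacent_repeat_rate(tokens):
    """Fraction of adjacent token pairs that are the same word."""
    return sum(a == b for a, b in zip(tokens, tokens[1:])) / (len(tokens) - 1)

def window_shuffle(tokens, window=10, seed=1):
    """Shuffle tokens within consecutive fixed-size windows: a crude model
    of a word-transposition step that separates words from their neighbors."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(tokens), window):
        block = tokens[i:i + window]
        rng.shuffle(block)
        out.extend(block)
    return out

# A text with a strong "repulsive force": the same word never repeats adjacently.
text = ("daiin chol shol chor daiin qokedy " * 100).split()
shuffled = window_shuffle(text)
```

After shuffling, instances of daiin that were two or three words apart can land next to each other, masking the short-range repulsion while leaving the broad clustering of words intact.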

In other words, perhaps there is a transposition step in the VM cipher, operating on words. This could solve a lot of problems.

For example, such a transposition could also explain why the text does not exhibit line-breaking features that make it clear whether it runs right-to-left or left-to-right.

It might also explain why the last lines of some paragraphs (especially in the Currier A sections) have gaps in them. Perhaps these gaps are slots that were simply not filled in by the transposition algorithm.


Indeed, if we suppose that the transposition works on the level of the paragraph, then that could explain why so many paragraphs begin with a word containing an ornate gallows letter. If the transposition algorithm resets for each paragraph, then the reader would need a visual cue to indicate where to start over again.

I can even imagine algorithms that could produce the phenomenon of head words, body words and tail words, though this is a bit more of a stretch since that would mean there is some connection between what a word is and where the transposition algorithm puts it.

Lastly, this could explain why the VM has no punctuation. In manuscripts of this era punctuation was common (though not universal). Since punctuation marks stand between words in connected linear text, if the words are shuffled through some kind of transposition algorithm then it might no longer be clear where to put the punctuation marks.

So...how would one prove or disprove the existence of word transposition in the VM?

Thursday, October 28, 2021

Is there unusual context dependency in the VM?

Back in 2015 Torsten Timm wrote a paper titled How the Voynich Manuscript was Created, in which he argued that the VM was created by a process of copying and altering glyph groups that had already been written. One of the pieces of evidence Timm used in supporting this argument was the measurable fact that words found on one line of the manuscript had a higher likelihood of being found on the three lines immediately above it than being found further away in the text.

In this post I'll dig into a specific problem with Timm's argument, and I will argue that what Timm observed is a natural language feature, but that following his approach reveals another interesting feature of the VM text.

Let's start with Timm's argument. The following graph is taken from his paper, and it shows the likelihood that a word on one line of the VM will be found on a line before it:


Here you see that a word found on any given line has almost a 7% chance of being found elsewhere in the same line (position 0), a roughly 6.5% chance of being found one line higher (position 1), 6% chance of being found two lines higher, and so forth. The further back you go, the lower the likelihood of finding your word repeated, with the curve flattening out at around 4%.
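Timm's measurement can be reconstructed roughly as follows (my own sketch, not his code): for each word token, check whether the same word also occurs k lines earlier, with k = 0 meaning elsewhere on the same line.

```python
def line_recurrence(lines, max_back=2):
    """For each lookback distance k, the chance that a word token on one
    line also occurs k lines earlier; k = 0 means elsewhere on its own line."""
    sets = [set(line) for line in lines]
    probs = {}
    for k in range(max_back + 1):
        hits = total = 0
        for i in range(k, len(lines)):
            for j, w in enumerate(lines[i]):
                total += 1
                if k == 0:
                    hits += w in lines[i][:j] or w in lines[i][j + 1:]
                else:
                    hits += w in sets[i - k]
        probs[k] = hits / total if total else 0.0
    return probs

lines = [line.split() for line in
         ["ol daiin chol", "daiin shol qokedy", "chol chol dar"]]
probs = line_recurrence(lines, max_back=2)
```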

Does the curve in this graph represent a natural feature or an unnatural one? On the face of it, it seems a phenomenon like this ought to be perfectly natural. If we have an herbal, for example, we would expect a word like "verbena" or "pigroot" to be localized to the section of the herbal that discusses it. But would that be sufficient to move the line on the graph?

Timm argued that this is an unnatural feature, and supported this argument by carrying out the same exercise with a Latin text (the Aeneid) and an English text (Dryden's translation of the Aeneid). Here is what he found:


Here you can see that the Voynich manuscript has a curve that is dramatically different from both the Latin and English texts that Timm chose for comparison.

Note that the English line dips down at the far left, a phenomenon which Timm attributed to Dryden's rhyme scheme. More on that below.

Here's the problem: The Aeneid is not normal in terms of its repetitiveness. If you conduct this same exercise with all of the medieval prose and poetry in the LatinISE corpus, you find the following:


As you can see from this graph:
  • The Aeneid is less repetitive than Medieval Latin poetry. This is probably in part due to Vergil's style (maybe he preferred to avoid repetition) and probably also in part due to the fact that Medieval Latin made more use of low-content high-frequency words than Classical Latin.
  • Latin poetry is less repetitive than Latin prose, but this is partly due to a difference in line length. The curve in the graph above resulted from breaking prose texts down into lines of up to 40 characters in length. If I had used an 80-character limit instead, the curve would have peaked at 7.8%. If I had used a 23-character limit, the prose curve would have come close to matching the poetry curve.
  • In both Medieval Latin poetry and prose there is a lower tendency for a word to be found in its own line (position 0) than in the next line (position 1). This same phenomenon appeared in Timm's graph for the "English" line, and he explained it as a product of Dryden's rhyme scheme. However, since there is no rhyme scheme in play in Latin prose, there must be another explanation.
In my opinion, the interesting thing Timm's analysis reveals is actually this: In the VM, a word is more likely to be found again on its own line than to be found on the lines above it. I have an idea of what this could mean, but I don't want to make this post unnecessarily long, so I will dig into it in a separate post.

Tuesday, October 26, 2021

Are there nulls in the VM?

Renaissance cryptographers used nulls to break up repeated sequences of characters and to alter the frequency statistics of a text. Did the author of the VM do anything similar? If so, how could we detect it?

Null characters increase the entropy of a text. The more randomly a null character is employed, the greater the entropy it adds to the text. For example, the Latin text Historia rerum in partibus transmarinis gestarum written by William of Tyre conveys an average of 2.711 bits per character when measured using third-order entropy. If we randomly insert a null character "@" into the text at an average interval of every five characters, the entropy increases to 3.044 bits per character.

In cases like this, it seems like we ought to be able to ferret out the null character by identifying the character which, when removed, causes a significant drop in the entropy of the text.

The procedure we'll use is:

1. Calculate the average bits per character of a text which we think may be a cipher. Call this value Bnull.

2. Make a copy of the cipher text and remove from the copy a character C which we think might be a null.

3. Calculate the average bits per character of the text with C removed. Call this value BC.

4. Calculate the "nullitude" of the character C using NC = BC / Bnull.
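The whole procedure can be sketched as follows, using trigram (third-order) entropy per character as the bits-per-character measure. This is my stand-in; the exact entropy convention used for the figures may differ:

```python
import math
import random
from collections import Counter

def h3(text):
    """Third-order entropy estimate: Shannon entropy of overlapping
    trigrams, divided by 3 to give bits per character."""
    grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
    n = sum(grams.values())
    return -sum(c / n * math.log2(c / n) for c in grams.values()) / 3

def nullitude(text, ch):
    """NC = BC / Bnull: entropy after removing character ch, relative to the
    entropy of the full text. Values well below 1 flag a null candidate."""
    return h3(text.replace(ch, "")) / h3(text)

# Demo: sprinkle a null "@" into a Latin-like sample at roughly 20% of positions.
rng = random.Random(0)
plain = "in principio erat verbum et verbum erat apud deum " * 40
noisy = "".join(c + ("@" if rng.random() < 0.2 else "") for c in plain)
```

Removing the inserted null restores the orderly plaintext and drops the entropy sharply, while removing an ordinary letter like "e" does not.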

First, we should know what the results look like with a text that does not contain nulls. Here are the nullitude values for characters in the Historia when no null character is inserted:

Here we do not see a significant drop below the value of 1, indicating that no single character, when removed from the text, causes a noticeable decrease in entropy. This is what we expected to see.

Now, looking at a copy of the Historia into which a null "@" has been inserted randomly about every five characters, we see a different result:


Here the null character stands out clearly, causing a significant drop in the entropy of the text once it is removed. Again, this is what we expect.

We can apply the same test to the Voynich Manuscript. Here is what we find with Currier A:


The result here is interesting because, at the left side of the chart, we see that removal of the character "e" causes a drop in entropy. It isn't a huge drop, but it does stand out from the rest of the characters. This suggests the possibility that Currier A words like cheol and cheor might be synonyms of the more common words chol and chor.

Though "e" looks slightly nullish in Currier A, we do not see the same phenomenon in Currier B:


In Currier B the distribution is more like the Latin plaintext above, with no particular character looking more like a null than the others.

Friday, October 1, 2021

Head words, body words and tail words

Lines of Voynichese text can be divided into a head (the first word), a tail (the last word) and a body (all the words in the middle).

Words can be classified according to where they tend to fall in the line:

  • Head words tend to be the first word in a line.
    • The first word of a paragraph seems to be its own special kind of head word.
  • Tail words tend to be the last word in a line.
    • Words ending in -m and -g tend to be tail words.
    • Words ending in -aly tend to be tail words.
  • Body words tend to fall in the middle of a line.
  • Free words can appear anywhere.

These categories aren't strict, so head words can sometimes be found in the body of a line, but are almost never found at the end. Similarly, tail words can be found in the body, but almost never at the head. There is a lot more work to be done looking at the relationship between the structure of a word and its classification.
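These tendencies can be tabulated with a simple positional count. This is only a sketch: thresholds for actually calling a word a head word, tail word, body word or free word would still need to be chosen.

```python
from collections import Counter

def position_profile(lines):
    """Tally, per word, how often it appears line-initially (head),
    line-finally (tail) or in between (body). A one-word line counts
    as head here; a real classification would need a tie-breaking rule."""
    profile = {}
    for line in lines:
        for pos, w in enumerate(line):
            slot = ("head" if pos == 0
                    else "tail" if pos == len(line) - 1
                    else "body")
            profile.setdefault(w, Counter())[slot] += 1
    return profile

lines = [["pchedy", "qokain", "otam"],
         ["pchedy", "dar"],
         ["ol", "qokain", "pchedy"]]
profile = position_profile(lines)
```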

This explains why I failed to find a clear direction of text when I was looking at line breaks. I was looking for common pairs that were broken across lines, but since the head and the tail of the line are drawn from statistically different sets of words than the body, those pairs turn out to be very rare in the rest of the text. (Thanks to Nick Pelling for the observation that initial and final letters of Voynichese probably screwed up my test!)

Interestingly, page F81R (where the text is laid out like a poem) follows these head-and-tail tendencies. That is, the words at the heads of the lines tend to be head words elsewhere, and the words at the ends of lines tend to be tail words elsewhere. This suggests that the ragged line lengths on this page are intentional, and adds weight back to the hypothesis that this page contains a poem.

Sunday, September 19, 2021

I was Wrong about F81R
How Line Breaks and Word Breaks Behave in Currier A and B

This is going to be a long and boring post, so here's the summary:
  • Line breaks in the VM do not act like line breaks in a natural text, in that they do not provide evidence of whether the text runs left-to-right or right-to-left.
  • Word breaks in Currier A act like word breaks in a natural text, but in Currier B they do not.
  • Since my analysis of F81R as a poem was based on the assumption that line breaks and word breaks were natural, yet they turn out not to be natural at all, there remains nothing to support the idea that this page contains a poem.
Here are the tests I conducted whose results led me to that conclusion.

1. Direction of Text

Question: Text in the VM is laid out on the page in a way that suggests left-to-right text, but does the content of the text support that? How do we know the layout of the text isn't intentionally misleading?

Test: In a traditional European text, line breaks are governed by the width of the text column, and have an arbitrary relationship to the underlying text. Therefore we should expect that high frequency pairs of words W1 and W2 will occasionally be broken across lines, so W1 will appear on one end of one line and W2 will appear on the other end of the next line. If W1 appears at the right end of one line and W2 appears at the left end of the next line, then the text behaves like a left-to-right text. If they appear on the left and right ends, respectively, then it behaves like a right-to-left text.
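The test can be sketched like this. It is my reconstruction of the counting rules described above; details such as the frequency threshold are assumptions:

```python
from collections import Counter

def direction_evidence(lines, min_count=2):
    """Count frequent word pairs broken across a line boundary. A pair
    (w1, w2) adjacent at least min_count times within lines is 'frequent';
    w1 at the end of one line with w2 at the start of the next is
    left-to-right evidence, and the mirror arrangement is right-to-left."""
    pairs = Counter()
    for line in lines:
        pairs.update(zip(line, line[1:]))
    frequent = {p for p, c in pairs.items() if c >= min_count}
    ltr = rtl = 0
    for prev, cur in zip(lines, lines[1:]):
        if not prev or not cur:
            continue
        if (prev[-1], cur[0]) in frequent:
            ltr += 1
        if (cur[-1], prev[0]) in frequent:
            rtl += 1
    return ltr, rtl

ltr, rtl = direction_evidence([["a", "b"], ["a", "b"], ["c", "a"], ["b", "d"]])
```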

Demonstration: I applied the test to De natura rerum ad Sisebutum regem liber, by Isidorus Hispalensis Episcopus, which is roughly the size of the Currier A section of the VM. The sample text contained 366 distinct pairs of words that were repeated at least twice, for a total of 966 instances of repeated pairs. In 68 cases a pair was found broken across lines in a way that indicated left-to-right text, and in 13 cases it was found broken in a way that indicated right-to-left text.

Conclusion: With more than five times as many left-to-right breaks, the evidence pointed strongly to a left-to-right text, as was expected.

Currier A: I found 491 distinct pairs repeated at least twice, for a total of 1389 instances of repeated pairs. In 32 cases a pair was broken across lines in a way that indicated left-to-right text, and in 47 cases it was found broken in a way that indicated right-to-left text.

Conclusion: The number of left-to-right breaks is not significantly different from the number of right-to-left breaks. This is not obviously a natural text running in either direction.

Currier B: I found 1701 distinct pairs repeated at least twice, for a total of 5313 instances of repeated pairs. In 69 cases a pair was broken across lines in a way that indicated left-to-right text, and in 94 cases it was found broken in a way that indicated right-to-left text.

Conclusion: The number of left-to-right breaks is not significantly different from the number of right-to-left breaks. This is not obviously a natural text running in either direction.

2. Word Breaks

Question: Text in the VM appears to be broken into words by spaces, but do these spaces really act like word breaks within the text?

Test: Word breaks should divide the text into a relatively productive lexicon. A productive lexicon is one that can produce the text in question with a relatively small number of words used at relatively high frequencies. We should find that true word breaks divide the text into a productive lexicon better than any other character in the text.

Treat each character in the text as a potential word-break character and measure the frequency of the most frequent word in the resulting lexicon. Use that frequency as a proxy for the productivity of the lexicon. If the word break character results in the most productive lexicon, then it acts like a true word break.
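A minimal version of this scoring, using the raw count of the most frequent resulting word as the productivity proxy:

```python
from collections import Counter

def productivity_score(text, sep):
    """Split the text on candidate separator `sep` and return the count of
    the most frequent resulting 'word', a proxy for lexicon productivity."""
    words = [w for w in text.split(sep) if w]
    return Counter(words).most_common(1)[0][1] if words else 0

text = "the cat and the dog and the bird"
scores = {ch: productivity_score(text, ch) for ch in set(text)}
best = max(scores, key=scores.get)  # in a natural text, the space should win
```

In this toy English sentence the space character scores 3 (the word "the" appears three times) while every letter scores 1, which is the pattern the Latin demonstration and Currier A show, and Currier B does not.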

Demonstration: I applied the test to De natura rerum ad Sisebutum regem liber. The word break character resulted in a score of 320, while the next best character (s) resulted in a score of 184.

Conclusion: The lexicon created by the word break character is nearly twice as productive as the next best candidate. The word break character acts like a true word break, as expected.

Currier A: The word break character resulted in a score of 512, while the next best character (o) resulted in a score of 266.

Conclusion: The lexicon created by the word break character in Currier A is nearly twice as productive as the next best candidate. The word break character acts like a true word break.

Currier B: The word break character resulted in a score of 499, but the character producing the most productive lexicon was actually 'e', which yielded a score of 514. The character 'a' was third in rank, with a score of 482.

Conclusion: The lexicon created by the word break character in Currier B is not significantly more productive than the lexicon created by other high-frequency characters. In Currier B, the word break character does not act like a word break.

Thursday, September 16, 2021

Currier B aiin and Latin in

In the LatinISE corpus, the word 'in' is the second most frequent word, and it would be surprising if this word was not among the top ten words of any Latin text of significant length. The word 'in' also has the property that it is rarely followed by another high-frequency word. The reason for this is that 'in' is a preposition, and is therefore usually followed by a noun with high semantic content, and those words are generally lower in frequency than function words.

Despite the fact that it is rarely followed by another high-frequency word, 'in' is commonly preceded by another high-frequency word, particularly 'et', 'est' or 'ut'. This can be seen in the frequencies by which the top ten most frequent words appear together:


In Currier B the word 'aiin' has similar properties. It is the fourth most frequent word, and has the property that it is rarely followed by another high-frequency word, but is commonly preceded by one. This can be seen in the frequencies by which the top ten most frequent words appear together:

The word 'in' appears not only in Latin, but also in Tuscan and Spanish, though with somewhat lower frequency. In Dante's Divina Commedia, for example, it is the 11th most common word. I assume the drop in frequency between Latin and Tuscan was due to the loss of case markers on nouns, which would have required a corresponding increase in the number of prepositions (since otherwise distinctions such as in urbe / in urbem were lost).

The situation with Latin and Tuscan could be compared to the situation with Currier B and Currier A. The word 'aiin' also appears in Currier A, though with a lower frequency, being the 17th most common word.
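The preceded/followed asymmetry described above can be measured with a count like this (a sketch, demonstrated on a toy Latin snippet; the top-ten threshold is the one used in the tables above):

```python
from collections import Counter

def neighbor_profile(tokens, word, top_n=10):
    """Count how often `word` is immediately preceded or followed by one
    of the text's top_n most frequent words."""
    top = {w for w, _ in Counter(tokens).most_common(top_n)}
    occurrences = preceded = followed = 0
    for i, w in enumerate(tokens):
        if w != word:
            continue
        occurrences += 1
        if i > 0 and tokens[i - 1] in top:
            preceded += 1
        if i + 1 < len(tokens) and tokens[i + 1] in top:
            followed += 1
    return occurrences, preceded, followed

tokens = "et in urbe et in horto in domo".split()
occurrences, preceded, followed = neighbor_profile(tokens, "in", top_n=2)
```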

Wednesday, September 15, 2021

qokeedy qokeedy

In this post I'll look at similarities between high-frequency qok- words in Currier B and high-frequency qu- words in Latin.

1. Textual Frequency

The prefix qu- is the most frequent two-letter prefix in the LatinISE corpus, and the prefix qok- is the most frequent three-letter prefix in Currier B.

2. Zipf Rank

The most frequent qok- words in Currier B occupy similar Zipf ranks to the most frequent qu- words in Latin (though the Currier B words have a tendency to have lower Zipf ranks).


3. Reduplication

Some of the qu- words in Latin may be reduplicated, as may some of the qok- words in Currier B:


Sunday, September 12, 2021

Lexical vs. Textual Frequency

There are two ways you can look at the frequency of a letter or sequence of letters: One is the frequency of the letter in a text (textual frequency, the way we usually look at it); the other is the frequency of the letter in the lexicon (lexical frequency).

Since there is no reason for the frequency of a word to have any connection to the letters in it, we would expect to find that the relationship between textual frequency and lexical frequency is roughly linear. And, generally speaking, this is the case. However, there are usually a small number of outliers.
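Both frequencies can be computed for, say, word-initial letter pairs with a few lines of Python (a sketch with invented tokens):

```python
from collections import Counter

def prefix_frequencies(tokens, k=2):
    """For each k-letter word-initial prefix, return a pair: textual
    frequency (share of running tokens) and lexical frequency (share of
    distinct words in the lexicon)."""
    textual = Counter(w[:k] for w in tokens if len(w) >= k)
    lexical = Counter(w[:k] for w in set(tokens) if len(w) >= k)
    t_total = sum(textual.values())
    l_total = sum(lexical.values())
    return {p: (textual[p] / t_total, lexical[p] / l_total) for p in textual}

# Toy example: "ch" is carried by one repeated word, "ri" by many rare words.
tokens = ["che", "che", "chiaro", "riva", "ridere", "rima"]
freqs = prefix_frequencies(tokens)
```

Plotting the two values against each other for every prefix gives the roughly linear scatter described above, with outliers like ch- and ri- falling off the line in opposite directions.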

For example, looking at initial pairs of letters in Dante's Divina Commedia, we find two initial pairs that stand out:


The prefix ch- appears in the high-frequency word che (and its contracted form ch'), which drives up its textual frequency relative to its lexical frequency. The prefix ri- is a derivational prefix used to create a relatively large number of words of low frequency, driving up its lexical frequency relative to its textual frequency.

Latin has a different (but etymologically related) outlier:

The prefix qu- in Latin appears in such relatively high-frequency words as qui, quod, quae, quam, quid, quo, quem and quoque, which drives up its textual frequency relative to its lexical frequency.

Currier A and B have different patterns from each other:
Currier A has the high-frequency words daiin and dain, which raise the textual frequency of the prefix da- relative to its lexical frequency.

Currier B has the high-frequency words qokeedy, qokain, qokedy and qokeey, which raise the textual frequency of the prefix qo- relative to its lexical frequency.

In the case of the VM there may be multiple reasons for these outliers. There are probably lexical anomalies in the underlying language, but then the cipher itself could introduce its own odd behaviors through the use of homophones, multi-letter symbols, and so forth.

Tuesday, September 7, 2021

Latin Contractions

My efforts to get a copy of The Curse of the Voynich are themselves apparently cursed. The first time I tried to order this book I was at a vacation rental for a month, and discovered that the postal service would not deliver it because the rental had no mailbox. The second time my order was canceled because the book was out of stock. I am hopeful that my third effort will meet with a better outcome.

While I wait for it to arrive, however, I've been looking at one of Nick Pelling's ideas. Did the Voynich cipher employ contraction and abbreviation as part of its process? If so, it seems like this could explain the relatively low amount of information conveyed by Voynichese words. It would be a lossy compression process similar to the removal of vowels, but perhaps more culturally appropriate to the 15th century.

I looked at the 1901 German translation of Adriano Cappelli's Lexicon abbreviaturarum, and it seems that conventions for contraction and abbreviation evolved over time such that by the 14th or 15th centuries scribes were using a number of methods in conjunction, including the use of a small set of symbols borrowed from Tironian notes. In order to understand these processes better, I took thirty random entries from the lexicon and looked at what the scribes chose to keep from the full written word and what they felt they were able to do away with. In general, I found that words could be divided into three parts:

Prefix: The prefix is made of consecutive letters from the start of the word, including at minimum the first letter. In my samples, the prefix is one character long about 53% of the time, two characters about 23% of the time, and three characters about 6% of the time.

Infix: The infix is made of letters that are generally not consecutive, chosen from the middle of the word. Presumably these are letters that differentiate one contracted word from another. There is roughly a 12% chance that a given letter from the middle of the word will appear in the infix.

Suffix: The suffix is made of consecutive letters from the end of the word, except the -m of accusative endings, which is sometimes dropped. The last letter was included in the suffix about 63% of the time, the second-to-last about 30% of the time, the third-to-last about 6% of the time.
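The three-part pattern can be turned into a toy generator. The probabilities are the rough percentages above, renormalized to sum to one; this illustrates the observed tendencies and is not a reconstruction of Cappelli's actual rules:

```python
import random

def contract(word, rng):
    """Toy scribal contraction: keep a short prefix, a sparse infix,
    and a short suffix (probabilities are rough estimates from the post)."""
    # Prefix length: 1 letter ~53%, 2 ~23%, 3 ~6% (renormalized here).
    r = rng.random()
    plen = 1 if r < 0.65 else 2 if r < 0.93 else 3
    # Suffix length: last letter ~63%, plus second-to-last ~30%, etc.
    r = rng.random()
    slen = 1 if r < 0.64 else 2 if r < 0.94 else 3
    plen = min(plen, len(word))
    slen = min(slen, max(0, len(word) - plen))
    middle = word[plen:len(word) - slen] if slen else word[plen:]
    infix = "".join(c for c in middle if rng.random() < 0.12)  # ~12% keep rate
    suffix = word[len(word) - slen:] if slen else ""
    return word[:plen] + infix + suffix
```

Running this over a Latin word list produces abbreviations that always retain the first letter, which is the property exploited in the conclusion below.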

What is interesting about this, to me, is that the first letter of each word is always retained. That means, if the Voynich cipher employs abbreviations and/or contractions, and the subsequent steps are only forms of substitution (and not, for example, transposition), then it might be possible to crack the first letters of Voynichese words.

It would be hard to know if you had gotten it right, though!

Thursday, August 26, 2021

A Voynich-like Code

I've just read an article titled The Linguistics of the Voynich Manuscript by Claire L. Bowern and Luke Lindemann, which summarizes previous scholarship on the manuscript and concludes that "the character-level metrics show Voynichese to be unusual, while the word- and line-level metrics show it to be regular natural language."

Reading the article reminded me to finish this post, which I started several weeks ago. Here, I'll outline a cipher using ideas from my previous posts, which I believe a late medieval or early Renaissance scholar might plausibly have created, which I think would produce some of the features of the Voynich manuscript.

I'll walk through the cipher steps with an English phrase and a Latin phrase: Can you read these words? Potesne legere haec verba?

Step 1: Remove the vowels from all of the words. This is what causes the words of the cipher text to carry less information than they would in the source language. In this example I'm treating the letter v as a vowel in Latin, but w as a consonant in English, because these are the historical conventions in these languages.

English: cn y rd ths wrds?

Latin: ptsn lgr hc rb?

It is an open question for me whether it is feasible to reverse this step in Latin. In English I know it is feasible if you are familiar with the general content of the text, because a similar approach was used to create mnemonics describing Masonic ceremonies:


Step 2: Encipher each word using a substitution cipher that replaces each letter with a syllable, with special syllables reserved for the last letters in each word, to create the appearance of an inflected language. This is what creates the low second-order character entropy.

In this case, I have created the key using the first and last syllables from polysyllabic words at the beginning of Virgil's Aeneid. I haven't bothered to create a complete key; it only covers the letters needed for this example.


A partial key

Using this key, the two example sentences become:

English: viam tum prono vepria liprocaa?

Latin: favelaam otrogus prirum proma?
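Steps 1 and 2 can be sketched together. The key below is reverse-engineered from the English example (cn → viam, y → tum, rd → prono, wrds → liprocaa) and is otherwise incomplete and hypothetical; letters not in the key pass through unchanged, so "these" will not match the output above:

```python
def devowel(word, vowels="aeiou"):
    """Step 1: remove vowels (treating w as a consonant, per English convention)."""
    return "".join(c for c in word if c not in vowels)

# Partial key inferred from the worked example; everything else is unknown.
KEY = {"c": "vi", "r": "pro", "w": "li", "d": "ca"}
FINAL_KEY = {"n": "am", "d": "no", "y": "tum", "s": "a"}

def encipher_word(word):
    """Step 2: letter-to-syllable substitution, with a separate table for
    the final letter to mimic inflectional endings."""
    if not word:
        return ""
    body = "".join(KEY.get(c, c) for c in word[:-1])
    last = word[-1]
    return body + FINAL_KEY.get(last, KEY.get(last, last))

phrase = "can you read these words"
cipher = " ".join(encipher_word(devowel(w)) for w in phrase.split())
```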

One of the neat things about this cipher approach is that one could hypothetically train oneself to speak the cipher. 

Step 3: Write the cipher in a secret alphabet. This changes very little about the cipher, and might be considered more of a cultural requirement of the era.

To be clear, I don't think the Voynich cipher worked in exactly this way. For example, the frequency of daiin in the Currier A pages is nearly exactly the frequency of t (representing et, ut, te, tu, etc.) in a long devoweled Latin text, but it isn't clear how daiin could be used to encode initial, medial or final t in other longer words. If the underlying language of the VM is Latin, and it is encoded using a system like this, then it is likely that there is some additional complexity in step 2. For example, there might be a set of words (like daiin, chol, chor) that encode single letters, then another set of prefixes and suffixes to encode letters in longer words.

Friday, August 6, 2021

Old Cryptography and Entropy

In my last two posts, I first suggested that Voynich Manuscript 81R might contain a poem in Latin dactylic hexameter, but then I argued that the lines only convey about half of the information necessary to encode such a poem. In this post I'll try to reconcile those two arguments by showing that a late medieval/early Renaissance cipher system could have produced this effect.

The pages of the VM have been carbon-dated to between 1404 and 1438. If the text is not a hoax, and it was written within a century or so of the production of the vellum, then what cryptographic techniques might the author plausibly have known, and how would they impact the total bits per line of an enciphered poem?

According to David Kahn's The Codebreakers, the following methods might have been available to someone in Europe during that period. For most of these, I have created simulations using the Aeneid as a plain text, and measured the effect on bits per line using the formula for Pbc from my last post.

  • Writing backwards (0.2% increase)
  • Substituting dots for vowels (28.5% decrease)
  • Foreign alphabets (little or no change, depending on how well the foreign alphabet maps to the plaintext alphabet)
  • Simple substitution (no change)
  • Writing in consonants only (45.6% - 49% decrease, depending on whether v and j are treated as vowels)
  • Figurate expressions (impractical to test, but likely to increase bits per line)
  • Exotic alphabets (no change, same as simple substitution)
  • Shorthand (impractical to test, but likely to decrease bits per line)
  • Abbreviations (impractical to test, but certain to decrease bits per line)
  • Word substitutions (did not test, but likely to cause moderate increase or decrease to bits per line)
  • Homophones for vowels (increase bits per line, but the exact difference depends on the number of homophones per vowel. With two homophones for each vowel, there was a 19.5% increase)
  • Nulls (increase bits per line, but the exact difference depends on the number of distinct nulls used and the number of nulls inserted per line)
  • Homophones for consonants (increase bits per line, but the exact difference depends on the number of homophones per consonant)
  • Nomenclators (impact depends on the type of nomenclator. I tested with a large nomenclator and got a 44.5% decrease in bits per line)
If 81R contains a poem in Latin dactylic hexameter, then it appears the encoding system caused something like a 47.9% decrease in the number of bits per line. Only two of the encoding methods above have a similar effect:
  • Writing in consonants only
  • Using a large nomenclator
The first of these options is intriguing, because removing the vowels from a Latin text causes a significant number of lexical collisions, especially if v and j are treated as vowels. If this is one of the steps in the Voynich cipher process, then the appearance of repeated sequences like daiin daiin daiin in the VM could result from sequences like ita ut tu, ut vitae tuae, etc.
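The collision effect is easy to demonstrate. Assuming v is simply deleted along with the other vowels, several distinct short Latin words devowel to the same string:

```javascript
// With v treated as a vowel, these Latin words all devowel to 't',
// so a repeated cipher word could conceal varied plaintext.
const devowel = w => w.replace(/[aeiouv]/g, '');
['ita', 'ut', 'tu', 'vitae', 'tuae'].map(devowel);
// → ['t', 't', 't', 't', 't']
```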

That, of course, cannot be the only story here. If the VM is written in a cipher that removes all of the vowels, then it must also be written in a cipher that encodes single Latin consonants as strings of multiple Voynich letters in order to account for the length of Voynichese words. This must also be done in a way that increases the lengths of words without significantly increasing the number of bits per line.

I think this is quite possible to do. In my next post I'll try to demonstrate this with a proof-of-concept cipher that creates a cipher text like the VM from a Latin plaintext.

Monday, July 26, 2021

Entropy in Voynichese

It has often been observed that Voynich characters have relatively low entropy (cf. this discussion on René Zandbergen's site). This is a serious problem for the proposal I made in my last post, where I suggested that page 81R of the Voynich Manuscript might contain a poem in Latin dactylic hexameter.

Suppose you calculate the bits of information conveyed by a character c of a text T using a formula like the following:

Sc = (ln(fT) - ln(fc)) / ln(2)

where

Sc is the number of bits conveyed by the single character c

fT is the number of characters in the text

fc is the number of times the character c appears in the text

Using this formula we find that the lines on 81R carry, on average, 121.4 bits of information. In contrast, lines of the Aeneid carry an average of 156.1 bits of information. This is a real problem, which becomes even more severe if you look at the incremental information conveyed by the second character in a pair. That is, for a character c appearing immediately after a character b:

Pbc = (ln(fb) - ln(fbc)) / ln(2)

where

Pbc is the number of bits conveyed by character c when it appears in the pair bc

fb is the number of times the character b appears in the text

fbc is the number of times the sequence bc appears in the text, which may also be expressed as the number of times that the character c appears immediately after b.

This second approach to measuring information tells us, for example, that the character "u" in a Latin text conveys no additional information when it follows "q". Since the total frequency of "qu" is the same as the total frequency of "q", the numerator is zero, and the total bits likewise are zero.
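Both measures can be sketched directly from the definitions above (the function and variable names are mine):

```javascript
// Count single characters and adjacent pairs in a text, then compute
// Sc = (ln fT - ln fc) / ln 2 and Pbc = (ln fb - ln fbc) / ln 2.
function counts(text) {
  const single = {}, pair = {};
  for (let i = 0; i < text.length; i++) {
    single[text[i]] = (single[text[i]] || 0) + 1;
    if (i > 0) {
      const bc = text[i - 1] + text[i];
      pair[bc] = (pair[bc] || 0) + 1;
    }
  }
  return { single, pair, total: text.length };
}

const Sc  = (k, c)    => Math.log(k.total / k.single[c]) / Math.LN2;
const Pbc = (k, b, c) => Math.log(k.single[b] / k.pair[b + c]) / Math.LN2;

// Every q in this toy text is followed by u, so u after q carries 0 bits.
const k = counts('quisquis quaerit aquam quam quietam');
Pbc(k, 'q', 'u');  // → 0
```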

When you apply this measure to the lines on 81R and the Aeneid, the average amount of information conveyed by the lines of 81R drops to 66.9 bits, while the information conveyed in the average line of the Aeneid drops only to 128.5 bits.

This is a serious challenge to the idea that the plaintext on 81R is a Latin poem in dactylic hexameter, because it suggests that these lines simply don't contain enough information to encode such a poem. In my next post I will look at historically and culturally plausible enciphering schemes that could produce this effect.

Saturday, July 24, 2021

Linguistic Information from Voynich 81R

Page 81R of the Voynich Manuscript has a block of text with an interesting property that is different from other text in the VM. While most lines of text in the VM continue to a page margin or the boundary of an image, the text on 81R is ragged on the right side. In other texts, both printed and manuscript, this type of raggedness can be a property of poetry, wherein the breaks between lines are guided by metrical considerations rather than the need to use space on the page efficiently. Nick Pelling has a post that digs into this page, and he notes that the poem-like layout of this page was observed by Gabriel Landini on the Voynich mailing list in 1996.

So, if 81R contains a poem, then what kind of information could we derive from it? 

Generally speaking, a line of poetry is broken down into feet, and feet have some relationship to syllables, though the exact nature of that relationship varies. In Latin and Greek dactylic hexameter, for example, a foot is made of two poetically long syllables (a spondee) or else a long syllable and two short ones (a dactyl), and there are six feet per line. In iambic pentameter a foot is made of one unstressed and one stressed syllable (an iamb) and there are five feet per line. Other styles of poetry use other definitions of feet and other numbers of feet per line.

Whatever definition there is to a foot and a line, however, there is going to be some natural relationship between the length of the line and the number of syllables in it. For a given language and a given metrical form, that will lead to a certain average number of words per line, with a certain standard deviation.

These values are different for different languages and metrical forms. In the graph below, I have taken multiple 31-line samples from five epic poems and graphed the average number of words per line (x axis) against the standard deviation in number of words per line (y axis).


In the graph above, you can see that each of these epic poems has a different average number of words per line. Chaucer is by far the highest, while the Serbian epic poem Strahinja Banović is at the extreme other end.
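The statistic behind this graph is simple to compute. A sketch (the function name is mine), applied here to the opening lines of the Aeneid:

```javascript
// Mean and (population) standard deviation of words per line.
function wordsPerLineStats(lines) {
  const counts = lines.map(l => l.trim().split(/\s+/).length);
  const mean = counts.reduce((a, b) => a + b, 0) / counts.length;
  const variance =
    counts.reduce((a, c) => a + (c - mean) ** 2, 0) / counts.length;
  return { mean, sd: Math.sqrt(variance) };
}

wordsPerLineStats([
  'Arma virumque cano Troiae qui primus ab oris',
  'Italiam fato profugus Laviniaque venit',
  'litora multum ille et terris iactatus et alto',
]);
// → mean 7, sd √2 ≈ 1.41
```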

If the number of words in a line of Voynich text is equal to the number of words in a line of the underlying plain text, and the text on 81R is a poem, then where does it fall on this graph?

There is not universal agreement on where all of the wordbreaks are on 81R, so we have a range of answers, but it is a relatively narrow range, and the answer is relatively clear. Among the five sample epic poems, the most similar in this respect is the Aeneid. The three red bubbles in the graph below demonstrate the range of values for page 81R, and the blue bubbles are the sample values from the Aeneid.

The next most similar poem is the Anglo-Norman Voyage de Brendan, the right edge of which touches the left edge of the Voynich range:

This suggests the possibility that 81R is written in Latin dactylic hexameter, or else possibly something like the Anglo-Norman octosyllables of Voyage de Brendan.

The argument for Latin dactylic hexameter is strengthened over something like Old French by the fact that there are 31 lines on 81R. Old French poetry (like Middle English) was built on rhyming couplets, and to have an odd number of lines would mean having a line dangling at the end with no rhyme.

Of course, the VM is not a simple substitution cipher, and it's always possible that a Voynichese word does not correspond to a plaintext word, but this is a direction I will hopefully expand on more in my next post.

Wednesday, July 21, 2021

A domain-specific language for representing morphology

Whenever I learn a new language, I instinctively want to model the morphology in code. It's inefficient to write grammars in generic programming languages, though, and that's where I always get stuck.

This month I developed a domain-specific language for representing morphology. The interpreter is written in Javascript, but could easily be rewritten in almost any other language.

A project in this language starts out with a declaration of the types of graphemes used in the language. (It works on the level of graphemes instead of phonemes, but phonemic systems are a subset of graphemic systems, so there is nothing lost by doing it this way.)

Here is an example, which defines vowels (V) and consonants (C) in a system with five vowels, phonemic vowel length, and certain digraphs (such as 'hw', 'qu', 'hl').

  classes: {
    V: '[aeiouáéíóú]',
    C: '[ghn]w|[hnrt]y|qu|h[rl]|[bdfghklmnpqrstvwy]'
  }

The values on the left are identifiers (V, C) and the values on the right are regular expressions.

These identifiers can then be used in transformations like the following, which will append -n to a word ending in a vowel, or -en to a word ending in a consonant.

    append_n: [
      '* V -> * V n',
      '* C -> * C en'
    ]

This transformation is composed of two rules, which are turned into regular expressions like the following:

/(^.*)([aeiouáéíóú])$/
/(^.*)([ghn]w|[hnrt]y|qu|h[rl]|[bdfghklmnpqrstvwy])$/

Each of the rules also has a map describing the way that the parts of the input are transformed into an output, like this:

[1, 2, "n"]
[1, 2, "en"]

A candidate word, such as 'arat', is tested against each regular expression from top to bottom. In this case, it will be matched against the second expression, and the match results will look like this:

["arat", "ara", "t"]

The transformation will then assemble the answer using the mapping [1, 2, "en"]. The numbers in the mapping refer to elements in the zero-indexed match results, so the result will be "ara" + "t" + "en" = "araten".
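Forward application, as described, fits in a few lines. A sketch (the names are mine, not the interpreter's actual API):

```javascript
// Try each rule's regex from top to bottom; on the first match, assemble
// the output from the rule's map (numbers index match groups, strings
// are literal insertions).
function applyForward(rules, word) {
  for (const { re, map } of rules) {
    const m = word.match(re);
    if (m) return map.map(x => (typeof x === 'number' ? m[x] : x)).join('');
  }
  return null;  // no rule matched
}

const appendN = [
  { re: /(^.*)([aeiouáéíóú])$/, map: [1, 2, 'n'] },
  { re: /(^.*)([ghn]w|[hnrt]y|qu|h[rl]|[bdfghklmnpqrstvwy])$/,
    map: [1, 2, 'en'] },
];

applyForward(appendN, 'arat');  // → 'araten'
```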

In addition to preparing regular expressions and mappings for applying transformations, the system also prepares reversing versions. In this case, we have the following reverse expressions and mappings:

/(^.*)([aeiouáéíóú])(n$)/
/(^.*)([ghn]w|[hnrt]y|qu|h[rl]|[bdfghklmnpqrstvwy])(en$)/

[1, 2]
[1, 2]

In reverse application, instead of using only the first rule that matches, the system applies any rule that matches and returns an array of answers. So, if we reverse "araten" then we will match against both rules, and get the answers ["arat", "arate"].
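Reverse application differs only in collecting every match instead of stopping at the first. A sketch under the same assumptions as above:

```javascript
// Every reverse rule that matches contributes a candidate, so the
// result is an array of possible stems (in rule order).
function applyReverse(rules, word) {
  const out = [];
  for (const { re, map } of rules) {
    const m = word.match(re);
    if (m) out.push(map.map(x => m[x]).join(''));
  }
  return out;
}

const reverseN = [
  { re: /(^.*)([aeiouáéíóú])(n$)/, map: [1, 2] },
  { re: /(^.*)([ghn]w|[hnrt]y|qu|h[rl]|[bdfghklmnpqrstvwy])(en$)/,
    map: [1, 2] },
];

applyReverse(reverseN, 'araten');  // → ['arate', 'arat']
```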

The value of reverse application is that we can take inflected words from a text and reverse the inflections to arrive at a set of possible stems.

There is much more to it, of course, because morphology is complex.