My Contributions

Terry Ritter

2006 Jan 1

After more than a decade of full-time work in cryptography, I am sometimes challenged to summarize my work in a form "for mere mortals." While I can give a general overview of what I have done, most of that consists of fairly detailed improvements to existing technology. Seeing each improvement requires at least a general understanding of the technical problem which each improvement solves. Even academic cryptographers may have difficulty perceiving these issues since they do not address common academic problems.




No Ciphers for Sale, Just Technology and Consulting

I offer no current cipher programs, mainly because I am not happy with current computers as a security platform. Nor is the issue just Windows, but any system which is hidden or too complex for individuals to analyze and validate. It is unlikely that any cipher system could operate securely on an insecure computer (at least not to commonly expected cryptographic levels of security). There is also no point in ciphering when the plaintext itself is fundamentally at risk. Moreover, no machine connected to The Net can be considered secure, and it seems doubtful that any firewall could be certified to cryptographic levels of assurance.

Personally, I use one computer on The Net, and a different computer for my work. The work computer is not, and never has been, connected to The Net or even any LAN. Only the work computer can support secure information, and only very limited information is passed back and forth on floppy or CDR. Having two isolated computers is one way to build substantial security, but it is awkward.

My current crypto business consists of promoting my novel technology for license, and, of course, crypto consulting.


What is Cryptography?

Cryptography is the art and science of hiding information while in storage or transit. The basic idea is to transform or encipher data into a different form (ciphertext), so that it seems to lack meaning to those who do not know how to transform it back. Normally, a cipher will support a plethora of different transformations, far too many to try them all, each selected by a different key value. The goal is to create a needle in a haystack, where the correct deciphering is just one of trillions of possible decipherings, all but one wrong.

Understanding cryptography can be difficult because it is fundamentally unlike anything else we design and build. Almost universally, we build something for an effect we can measure, and we can see how well we do. But cryptography "works" only to the extent that it keeps our information secret from opponents we do not know and who will not tell us when we fail. In fact, we can expect opponents to moan about how difficult it would be to break a cipher which they actually can break easily, to get us to use a cipher they can expose. In practice, cryptography is a contest, and the environment is deception and misdirection. That is completely different from what we normally find in science and technology, and calculated deception is something for which few are prepared.

After a decade and more in cryptography, I find myself increasingly distressed by the state of the field and many of those who supposedly represent it. Various examples exist:


Cryptography in the 90's

By far the major issue in cryptography during the past couple of decades has been public key cryptography. With a conventional or secret key cipher, we use some key value to select an arbitrary transformation into ciphertext, and then use exactly the same key value to reverse the transformation. Anyone who enciphers a message also can decipher it.

In a public key cipher, there are two (related) key values: One can be made public so anyone can use that transformation, while the other key is kept secret. So anyone can encipher a message, but only the party holding the hidden private key can expose the enciphered messages.

A public key transformation is very, very slow, so most "public key" ciphers actually are hybrid ciphers. These ciphers use public key technology just to transfer one arbitrary value, which is then used as the key for a much faster secret key cipher.
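The hybrid arrangement can be sketched in a few lines of Python. This is purely a toy illustration under stated assumptions: the textbook RSA parameters (p=61, q=53) and the hash-counter keystream are illustrative stand-ins, far too small and simple for real security.

```python
# Toy hybrid cipher: public key math transfers ONE arbitrary value,
# which then keys a much faster secret key (stream) cipher.
# Textbook RSA with tiny primes: for explanation only, never for security.
import hashlib
import os

N, E, D = 3233, 17, 2753    # textbook RSA: p=61, q=53, e*d = 1 mod phi(n)

def keystream(key: bytes, length: int) -> bytes:
    """A simple keyed sequence: hash of key || counter (an assumption)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def hybrid_encrypt(plaintext: bytes):
    session = int.from_bytes(os.urandom(1), "big") + 1   # random session key < N
    wrapped = pow(session, E, N)                         # public-key transfer
    ks = keystream(session.to_bytes(2, "big"), len(plaintext))
    return wrapped, bytes(p ^ k for p, k in zip(plaintext, ks))

def hybrid_decrypt(wrapped: int, ciphertext: bytes) -> bytes:
    session = pow(wrapped, D, N)                         # private key recovers it
    ks = keystream(session.to_bytes(2, "big"), len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

wrapped, ct = hybrid_encrypt(b"attack at dawn")
print(hybrid_decrypt(wrapped, ct))    # b'attack at dawn'
```

Note that only the short session value goes through the slow public key operation; all of the message data goes through the fast stream operation.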

One problem with public key technology is that, if someone can intercept messages in transit, and also get the far party to use a new key (quite plausible, since public keys supposedly can be freely distributed), the person "in the middle" can expose messages without breaking any cipher at all. Thus, even a public key cipher which was mathematically proven unbreakable (which is probably impossible anyway) could expose information to a man in the middle attack. Only public key ciphers have that weakness, and that is not a weakness I am willing to accept.

Basically I work on automated secret-key ciphers, which, of course, are major components in most public key ciphers anyway. But the well-known state of the art has come a long, long way from its origin in classic or hand ciphering.


Background: Modern Secret-Key Ciphering

Modern automatic ciphering benefits greatly both from the availability of computing machinery, and the resulting growth of information-processing knowledge. Thus:

Consequently, in addition to generalizing the older concepts like:


Old Crypto Problems / New Crypto Mechanisms

I believe I have made a few fundamental advances beyond the state of the art in cryptographic mechanisms. Since these mechanisms were intended to address fundamental problems in practical cryptography, they probably make no sense unless we first understand the problems.

  1. **Stream Cipher Weakness.** A conventional stream cipher is basically a keyed random number generator (RNG) whose output is mixed with plaintext data, typically using exclusive-OR, thus producing ciphertext. But cryptanalysis assumes that a large amount of plaintext and the related ciphertext (known plaintext) is available to the opponent. Under these conditions exclusive-OR has no strength at all, and immediately exposes the RNG sequence. Then, knowing both the RNG design and RNG output, the opponents have a good chance at reconstructing the internal state of the RNG, and that will break the cipher.
    To address the problem of stream cipher weakness, academic cryptography created ever more complex RNG designs. Since each RNG is just some computer state which is combined or mixed in some way to produce a new state value, the obvious idea was to have a multiplicity of RNG's, and mix them together in some way to gain exponential strength. Surprisingly, that almost never works (see: The Story of Combiner Correlation: A Literature Survey).
    My take on the problem was different: I saw the negative outcome as the result of both exclusive-OR weakness and RNG weakness. Since RNG strength was already being addressed, my response was to look at the combining process, to see if that could be improved. At the time (around 1988), it was unclear whether an unbiased, reversible, nonlinear combiner could even exist. Ultimately, I called my answer Dynamic Substitution.
    Dynamic Substitution can be seen as a modified form of Simple Substitution intended to replace the conventional exclusive-OR stream cipher combiner. In Dynamic Substitution, each current plaintext character is substituted through a table and becomes ciphertext as usual, but then the just-used substitution element is _exchanged_ with some other element selected by the RNG sequence; that exchange is the novelty. In this way, the more often a particular transformation is used, the more often it changes.
    Dynamic Substitution is a keyed, nonlinear combiner with state. In addition to simply replacing exclusive-OR, it can be used in a sequence of combinings, or in a selection among different combiners, neither of which makes sense with exclusive-OR. Dynamic Substitution thus also enables various architectures which were previously unhelpful.
    Also see the Dynamic Substitution section on my home page.
  2. **Dynamic Transposition Possibilities.** Once we have a Dynamic Substitution, the obvious question is whether a Dynamic Transposition can exist as well. The answer is: "Yes."
    Dynamic Transposition has two parts:
    1. The construction of a data block with an equal number of 1's and 0's (this can be done fairly easily, for arbitrary data, with relatively low expansion).
    2. The use of keyed bit-shuffling to select any one of all possible bit-permutations.
      Each block of a message is shuffled independently.
      The strength of the technique is in the fact that the ciphertext result could have been produced by a plethora of _different_ shuffling permutations, even given the exact same plaintext. Known plaintext thus does not disclose the keyed shuffling sequence, which is what we are hiding. Dynamic Transposition is thus largely immune to conventional known plaintext attack in a way that exclusive-OR simply cannot be.
      Dynamic Transposition also offers a theoretical clarity which cannot exist in conventional block ciphers: a conventional block cipher cannot begin to key-select from among every possible huge emulated simple substitution table, because there are far, far too many potential tables, making a key which could select any of them far too large to use. In contrast, Dynamic Transposition can produce _any possible_ bit-permutation, given only an RNG with large enough state. There is thus no anxiety about a particular selected set of transformations which may or may not have useful statistical properties for opponents.
      I see Dynamic Transposition as a useful basis for provable strength in practice (although I have no such proof). See the 1991 Cryptologia article Transposition Cipher with Pseudo-Random Shuffling: The Dynamic Transposition Combiner, and the 2001 sci.crypt article clarifying an earlier discussion: Dynamic Transposition Revisited Again.
  3. **Block Cipher Construction.** In my view, most block ciphers have a remarkably _ad hoc_ construction, given their academic origins. In particular, I see the use of rounds as testimony that the desired confusion and diffusion did not occur the first time the operation was run. But simply doing something again and again until the result is somehow good enough is no substitute for doing it right in the first place.
    A conventional block cipher is nothing more than an emulated huge substitution table. The substitution must be computed as needed, because such a table would be far too large to construct and store completely. In my view, what was needed was a construction technique which could connect keyed and randomly-constructed modest-size tables to simulate a larger-size table. If we could do that efficiently, we could repeat it using the emulated tables, to get even larger tables, and thus eventually support huge block sizes. (Huge blocks reduce the probability that the same plaintext block will re-occur, thus perhaps avoiding the need for an initialization vector and thus allowing out-of-sequence ciphering as can occur in network packet processing.)
    The elemental Balanced Block Mixer (BBM) is a two-input, two-output mixing function which is reversible, and in which the mixing can be keyed. The implementation consists of two orthogonal Latin squares; the mixing may consist of keyed, explicit, nonlinear tables, or of simple linear computations when that is acceptable (for example, see: Fenced DES).
    If we put two small blocks or values of data into a BBM, we get two values out, where each output value provably depends in a balanced way upon _both_ of the input values. Moreover, the elemental oLs transformations can be substituted for the arithmetic computations in an FFT or Walsh transformation. By using FFT-like patterns, we can quickly mix the results from many small substitution tables such that each result provably depends upon every table, which is exactly what we need to build an emulated larger table. And, in marked contrast to conventional FFT or Walsh transformations, BBM computations do not expand the data range (see: ciphertext expansion).
    The BBM structure thus allows us to build a large block cipher out of small substitution tables or S-boxes which we then mix together. And, independent of whether or not we key-select the mixing, we can key-select the tables from among all possible tables. (Also see the Balanced Block Mixing area of my home page.)
  4. **Any-Size Message / Fixed-Size Blocks.** One problem with conventional block ciphering is that, inevitably, most messages will consist of some number of whole blocks plus some extra which is not a full block. Since block ciphering can only cipher full blocks, there often will be some ciphertext expansion. There are various ways of handling the end part (including stream ciphering the end data), but these techniques often have consequences which can be problematic in particular common situations.
    One way to avoid the last block problem entirely is to dynamically adjust block size so that the last block is never very short and can be enciphered as a full block. The ability to adjust block size is novel, and I personally originated the term "Variable Size Block Cipher" (VSBC) in 1995 (see: VSBC Newsgroup Discussion); the term was then picked up without attribution for more popular academic use.
    VSBC's have the usual conventional block cipher characteristics, but can cipher blocks of virtually arbitrary size (beyond some minimum) to the byte, as dynamically selected at ciphering time. This occurs in the context of a "constant depth" process, which needs no additional computation per byte, even with huge blocks, so there is no motive to cipher data in small blocks.
    Even when plenty of message remains, VSBC blocks can vary in size to help resist cryptanalysis; by adding random amounts of random data (nulls), simply knowing the original plaintext is not enough to match it with the resulting ciphertext. Thus, VSBC's can avoid known plaintext attacks in ways that go beyond conventional block ciphering strength, so that some of the conventional assumptions of cryptanalysis do not apply. In summary:
    • VSBC's can support huge blocks and so allow a return to ECB-mode (Electronic Code Book) operation, which is frowned upon with DES.
    • VSBC's can support ciphering already-defined fields in existing systems without the need for added Initial Value (IV) storage. While an IV is not necessarily difficult to make or use, it does expand the amount of storage needed. This makes it difficult to cipher wholly within existing data structures.
    • VSBC's also support the ability to hide both the start and end positions of each block. (We send some random data at the start, and again at the end, and use blocks of keyed pseudorandom size in between.) This ability to hide the extent of a block directly confronts one of the things we normally assume an opponent will have in any attack on a block cipher.
    • VSBC's make it easy to expand the block to contain control fields, for dynamic keying, authentication, or both.
      Also see the Variable Size Block Ciphers section on my home page.
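As a concrete illustration of Dynamic Substitution (the first mechanism above), the combiner can be sketched in a few lines of Python. Here `random.Random` merely stands in for a keyed cryptographic RNG, and the byte-wide table and exchange policy are illustrative assumptions; this is a sketch of the idea, not the patented implementation.

```python
# Sketch of a Dynamic Substitution combiner: each plaintext byte is
# substituted through a table, then the just-used table entry is
# exchanged with one chosen by the keyed RNG, so the combining
# transformation changes as it is used.
import random

def make_tables(seed):
    rng = random.Random(seed)          # stand-in for a keyed cryptographic RNG
    table = list(range(256))
    rng.shuffle(table)                 # keyed initial substitution table
    inverse = [0] * 256
    for i, v in enumerate(table):
        inverse[v] = i
    return rng, table, inverse

def encipher(seed, plaintext: bytes) -> bytes:
    rng, table, _ = make_tables(seed)
    out = bytearray()
    for p in plaintext:
        out.append(table[p])                      # substitute as usual
        j = rng.randrange(256)                    # RNG picks an exchange partner
        table[p], table[j] = table[j], table[p]   # just-used entry moves
    return bytes(out)

def decipher(seed, ciphertext: bytes) -> bytes:
    rng, table, inverse = make_tables(seed)
    out = bytearray()
    for c in ciphertext:
        p = inverse[c]                            # invert the substitution
        out.append(p)
        j = rng.randrange(256)                    # mirror the same exchange
        table[p], table[j] = table[j], table[p]
        inverse[table[p]], inverse[table[j]] = p, j
    return bytes(out)
```

Because the just-used entry is immediately exchanged, repeated plaintext characters generally produce different ciphertext characters, and known plaintext no longer directly exposes the RNG sequence.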

Analysis by Project

Analysis is central to what I do, and has been for all of my professional life.

Analysis is understanding in depth, and is far more than just particular choreographed cryptographic attacks. Current attacks are the result of a quarter-century of research, mostly on conventional block ciphers. For my novel cipher designs, when conventional attacks are not appropriate, we simply do not have a similar depth of understanding, and we cannot present what we do not have.

Before Windows, in the days of DOS (the late 80's and early 90's), I did personally design, implement, and offer several Dynamic Substitution cipher systems for sale, including CLOAK2 and Penknife.

  1. CLOAK2 Stream Cipher
    One can see the design of my 1993-1995 DOS-based CLOAK2 secret key stream cipher in The Cloak2 Design.
    The CLOAK2 cipher uses the novel Dynamic Substitution combiner technology which I invented, patented, own, and described in Cryptologia in 1991. It also uses the huge Additive RNG's from my article The Efficient Generation of Cryptographic Confusion Sequences, which also appeared in Cryptologia in 1991.
    In my view, the CLOAK2 security arguments are clearer and more believable than those for modern block ciphers. CLOAK2 does not use, and thus does not rely upon, public key technology, and so is far closer to our normal experience with metal keys and locks and doors.
    CLOAK2 has substantial secret key management methodology often overlooked by new cipher designers. See, for example: CLOAK2 Features: Key Management.
    CLOAK2 strength arguments are addressed in The Cloak2 Design: Strength Arguments. The attacks discussed include: Brute Force on RNG, Brute Force on User Key, Chosen Key, Differential Cryptanalysis, Ciphertext Only, Known Plaintext, Key-Stream Repetition, Birthday Attack on Message Keys, and Chosen Plaintext. This list is more than we usually see in cipher presentations.
    The discussions themselves are simplified by the fact that CLOAK2 is a stream cipher, not a block cipher, and by the use of Dynamic Substitution technology which is widely discussed elsewhere on my pages, and which itself directly opposes the usual stream-cipher attacks.
  2. Balanced Block Mixing Technology
    The development of BBM technology was a classic multi-year R&D project with multiple open reports -- all available online -- describing incremental results. This work started in 1994.
    BBM technology is not just one idea, and certainly not just yet another "cipher." The original approach used the idea of counted balance in the mixing functions. Subsequent articles give other constructions and analysis from different directions. One of those is an actual proof that even a single input bit-change would produce a bit-change on all output ports. That proof thus guarantees that all next-level S-Boxes would be "active." The proof avoids the need for any such measurement or optimization.
    A particularly innovative analytic approach was my 1997 experimental statistical analysis of Boolean function nonlinearity distributions in Mixing constructions. In 1998 I found how to create keyed orthogonal Latin squares and extended the results to nonlinear mixing. Those experiments support the idea that a nonlinearity distribution similar to that of ideal S-Boxes can be created by mixing boxes of half the bit size (and then half again, and so on). In this way we can create huge emulated boxes -- conventional block ciphers -- by mixing small boxes in sufficient levels. In practice, the exact same code can handle legacy 64-bit blocks, modern 256-bit blocks, and 4K-byte large blocks.
  3. Dynamic Transposition
    My latest cipher is a description, not an implementation: Dynamic Transposition Revisited Again, an article which is now almost two years old. It is actually a detailed elaboration of the Dynamic Transposition combiner I also discussed in Cryptologia in 1991. About the last fifth of the article discusses "ATTACKS," and it starts off:

    "Complex and hard-to-refute conventional block-cipher attacks, such as Differential and Linear Cryptanalysis, would seem to be completely irrelevant to ciphers of the Dynamic Transposition type. Where there are no S-boxes, there are no linear or differential relations in S-boxes. Where there is no emulated table, there is no linear or differential relationship in that table. And where there is no Feistel structure, there is no ability to peel off Feistel layers. To have a block cipher where we can make these statements is really quite remarkable."
    That analysis then continues for page after page, in something like 19 more paragraphs.
    Personally, I think Dynamic Transposition is one of the most important ciphers around, for several reasons:

    1. First, it is one of the few open examples of a serious transposition cipher.
    2. Next, it is an example of a block cipher which is not a conventional block cipher; that is, it does not attempt to emulate a huge substitution. It does not have, and does not need, diffusion, avalanche, S-boxes, or rounds.
    3. Third, the design demonstrates that it is possible to have a cipher function which is "complete." That is, the cipher has sufficient internal state to select among every possible permutation, instead of just a virtually infinitesimal pre-selected subset, as in conventional block ciphers.
    4. Fourth, the design is a clear attempt to finesse Shannon perfect secrecy into practical implementation. It thus attempts to leverage actual theoretical impossibilities, instead of simply using the _ad hoc_ constructs of current block ciphers.
    5. Last but not least, it demonstrates a ciphering function which is a permutation, but in which the particular permutation which occurs apparently cannot be distinguished by the opponent, even if the opponent has full known plaintext. This is because a vast number of different permutations will each produce the exact same ciphertext. This is a particularly interesting strength issue.
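The two Dynamic Transposition steps can be sketched in Python. For brevity, the balancing step here just appends each bit's complement, which gives 2x data expansion (the articles describe far lower-expansion balancing), and a single keyed permutation per seed stands in for a keyed RNG run continuously across blocks. A sketch under those assumptions, not the design itself:

```python
# Sketch of Dynamic Transposition: (1) balance the bit population of a
# block, (2) apply a keyed shuffle selecting one of all possible
# bit-permutations. Known plaintext does not pin down the permutation,
# since many different permutations give the same balanced result.
import random

def to_bits(data: bytes):
    return [(b >> i) & 1 for b in data for i in range(8)]

def from_bits(bits):
    return bytes(sum(bit << i for i, bit in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

def keyed_permutation(seed, n):
    rng = random.Random(seed)          # stand-in for a keyed cryptographic RNG
    perm = list(range(n))
    rng.shuffle(perm)                  # any one of the n! bit-permutations
    return perm

def encipher_block(seed, block: bytes) -> bytes:
    bits = to_bits(block)
    bits += [1 - b for b in bits]      # crude balancing: equal 1's and 0's
    perm = keyed_permutation(seed, len(bits))
    return from_bits([bits[p] for p in perm])

def decipher_block(seed, block: bytes) -> bytes:
    shuffled = to_bits(block)
    perm = keyed_permutation(seed, len(shuffled))
    bits = [0] * len(shuffled)
    for i, p in enumerate(perm):
        bits[p] = shuffled[i]          # invert the permutation
    return from_bits(bits[:len(bits) // 2])   # drop the balancing bits
```

Every ciphertext block is bit-balanced by construction, so any balanced block is a plausible result of a vast number of different shufflings of the same plaintext.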

Relationships Discovered

I have also found a couple of interesting combinatoric relationships:

  1. **Augmented Repetitions.** One relationship is related to the birthday paradox. If we are willing to admit a new definition for what I call "augmented" repetitions, very simple and reversible formulas predict the number of augmented doubles, triples, etc., for random sampling from a given population. (See: Estimating Population from Repetitions in Accumulated Random Samples.) The point of this, of course, is to be able to estimate a huge population size fairly accurately from trials with on the order of square-root-of-the-population samples. Population estimation can help to certify hardware really random number generators (RNG's); the values appear to correspond well to entropy computations, and thus confirm those results. Population estimation is extremely general, and can be applied to bit-sequences as well as value-sequences. Population estimates perhaps also can be used to estimate the number of unique keys in small versions of scalable block ciphers.
  2. **Combinatoric Model for Runs-Up/Runs-Down Tests.** I have identified the combinatoric model for runs-up and runs-down RNG tests, and so eliminated a statistical bias which one gets using the approximation in Knuth II on sequences of byte-range values. See: Chi-Square Bias in Runs-Up/Down RNG Tests.
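The square-root estimation idea can be sketched numerically. This simplified version counts every repeated pair (so a triple contributes three pairs, in the spirit of augmented repetitions) and inverts the expected-pairs relation E[pairs] = s(s-1)/2N; the exact reversible formulas are in the article, and the population size and sample count below are arbitrary demonstration values:

```python
# Birthday-style population estimation: draw roughly sqrt(N) samples,
# count repeated pairs, and invert E[pairs] = s*(s-1)/(2*N) for N.
import random
from collections import Counter

def estimate_population(samples):
    s = len(samples)
    # Count all repeated pairs: a value seen c times contributes C(c, 2).
    pairs = sum(c * (c - 1) // 2 for c in Counter(samples).values())
    if pairs == 0:
        return None                    # too few samples to see any repeats
    return s * (s - 1) / (2 * pairs)

rng = random.Random(42)
N = 100_000
samples = [rng.randrange(N) for _ in range(10_000)]   # ~31 * sqrt(N) samples
est = estimate_population(samples)
```

With 10,000 samples from a population of 100,000, roughly 500 repeated pairs appear, and the estimate typically lands within a few percent of the true population, even though only a tenth of the population was ever sampled.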

Supporting Work

  1. Practical Latin Square Combiners
  2. Orthogonal Latin Squares, Nonlinear Balanced Block Mixers, and Mixing Ciphers
  3. Measured Boolean Function Nonlinearity in Mixing Cipher Constructions
  4. Active Boolean Function Nonlinearity Measurement in JavaScript.
  5. Random Noise Sources
  6. Experimental Characterization of Recorded Noise.

General-Use and Reference Pages

  1. Learning About Cryptography
  2. Crypto Glossary
  3. Normal, Chi-Square and Kolmogorov-Smirnov Statistics Functions in JavaScript
  4. Base Conversion, Logs, Powers, Factorials, Permutations and Combinations in JavaScript

Literature Surveys

Basically a reviewed index to the literature on various topics. Now a bit old, but still useful for those who do not know the older literature:

  1. S-Box Design: A Literature Survey
  2. The Story of Combiner Correlation: A Literature Survey (the story of stream ciphers)
  3. Differential Cryptanalysis: A Literature Survey
  4. Linear Cryptanalysis: A Literature Survey
  5. Walsh-Hadamard Transforms: A Literature Survey
  6. Linear Complexity: A Literature Survey
  7. RNG Surveys: A Literature Survey
  8. RNG Implementations: A Literature Survey
  9. Random Electrical Noise: A Literature Survey
  10. Random Number Machines: A Literature Survey
  11. Randomness Tests: A Literature Survey
  12. Latin Squares: A Literature Survey

Peer-Reviewed and Other Journal and Magazine Articles

From my crypto era:

  1. The Great CRC Mystery
  2. Substitution Cipher with Pseudo-Random Shuffling: The Dynamic Substitution Combiner
  3. Transposition Cipher with Pseudo-Random Shuffling: The Dynamic Transposition Combiner
  4. The Politics of Software Patents
  5. The Efficient Generation of Cryptographic Confusion Sequences
  6. Voice and Video Cryptography in a DSP Environment
  7. Estimating Population from Repetitions in Accumulated Random Samples
  8. Cryptography: Is Staying with the Herd Really Best? (Copyright 1999, IEEE.)

Usenet sci.crypt Articles

A few of the better articles from a much larger body of work:

  1. Fenced DES
  2. Chi-Square Bias in Runs-Up/Down RNG Tests
  3. Measured Boolean Function Nonlinearity in Mixing Cipher Constructions
  4. Measuring Boolean Function Nonlinearity by Walsh Transform
  5. Measured Boolean Function Nonlinearity in Variable Size Block Ciphers
  6. Measured Boolean Function Nonlinearity in Feistel Cipher Constructions
  7. Measured Distant Avalanche in Large Block Ciphers
  8. Break This 4-Bit Block Mixing Cipher
  9. Break This 8-Bit Block Mixing Cipher
  10. Practical Latin Square Combiners
  11. Orthogonal Latin Squares, Nonlinear Balanced Block Mixers, and Mixing Ciphers
  12. Random Noise Sources
  13. Experimental Characterization of Recorded Noise
  14. Dynamic Transposition Revisited Again

On Ciphers and Trust

For some time now, I have been involved in trying firstly to _expose_, and secondly to address, some aspects of cryptography which are fundamentally unscientific:

Ciphers appear to differ from all other constructed devices in our society in that we cannot know whether or not they "work." We demand that a cipher not expose our information to an opponent, but we have no way to know whether our cipher actually succeeds. It is unreasonable to hope that our opponents will inform us of their successes. So we have no way to know when we must improve our cipher, or use a completely different one. This fundamental inability to measure "success" has massive implications with respect to cryptographic engineering.

We cannot know our opponents, and so also cannot know their capabilities. Since strength in practice is inherently contextual (the weakness of a cipher being dependent upon the knowledge of particular opponents), the inability to know the context is a fundamental problem in the discussion of practical cryptographic strength. Ideally we would have a strength proof that would apply to even the most capable opponent; but only rarely are current proofs useful in practice.

These issues go to the heart of the scientific investigation of practical cryptography, as opposed to academic cryptography. Although many things about cryptography are measurable and so can be scientific, the overall success of a cipher is not one of those things. (It seems especially ironic that this is the only thing we really need.) Academic work which fails to clearly make the distinction between theory and practice further confuses an already widely-misunderstood situation.

Once we accept that practical cryptography is a uniquely unscientific situation, we can seek to address it. For example, if we assume that some "strong enough" cipher designs exist (although we may not be able to identify them), we may gain an advantage by enciphering multiple times with different ciphers and keys. (For example, see Shannon's proposal: the algebra of secrecy systems.) In addition, using a "stack" of ciphers means that known plaintext information for any particular cipher simply would not be available to an opponent. Since known plaintext is the basis for many if not most attacks, preventing known-plaintext exposure for the individual ciphers is a big advantage.
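A cipher stack is easy to sketch. The individual layers below are toys (a hash-derived XOR stream and a keyed byte substitution, both illustrative assumptions, not real ciphers); the point is the architecture: each layer has its own independent key, deciphering reverses the order, and known plaintext for the whole stack gives known plaintext for no single layer.

```python
# Sketch of multiple encryption: a "stack" of independently keyed
# layers applied in sequence. Layer constructions are toys for clarity.
import hashlib
import random

def xor_layer(key: bytes, data: bytes) -> bytes:
    """Self-inverse stream layer: XOR with a hash-derived keystream."""
    ks, counter = bytearray(), 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, ks))

def sub_layer(key: bytes, data: bytes, invert=False) -> bytes:
    """Keyed byte substitution; invert=True applies the inverse table."""
    table = list(range(256))
    random.Random(key).shuffle(table)
    if invert:
        inv = [0] * 256
        for i, v in enumerate(table):
            inv[v] = i
        table = inv
    return bytes(table[b] for b in data)

def stack_encrypt(keys, plaintext: bytes) -> bytes:
    data = sub_layer(keys[0], plaintext)   # layer 1, key 1
    data = xor_layer(keys[1], data)        # layer 2, key 2
    data = sub_layer(keys[2], data)        # layer 3, key 3
    return data

def stack_decrypt(keys, ciphertext: bytes) -> bytes:
    data = sub_layer(keys[2], ciphertext, invert=True)   # reverse order
    data = xor_layer(keys[1], data)
    data = sub_layer(keys[0], data, invert=True)
    return data
```

In a real multi-ciphering system each layer would be a full cipher, ideally of structurally different types, so that a break of any one layer still leaves the others intact.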

We can carry the war to our opponents by using a continuously expanding set of ciphers, thus requiring opponents to identify, obtain, analyze, and break each cipher we might use. Currently, we all tend to use "standard" ciphers, which thus protect a huge amount of potentially profitable data. The common use of a standard cipher sets up a profitable and fixed target for the opponent, and thus increases the risk for the user. Unlike most fields concerned with danger, cryptography seems remarkably sanguine about deliberately creating a single point of failure situation, where the failure of just one cipher can have catastrophic results.

I have discussed these issues extensively, including 1996 discussions on email cipher weakness (see: Cryptography is War!), extensive 1999 Usenet discussions (see: Fixing Strength Problems in Cipher Use), and a 1999 article published in IEEE Computer (Cryptography: Is Staying with the Herd Really Best?).


Terry Ritter, his current address, and his top page.