
Friday, September 25, 2009

LEC 3 MODERN CRYPTOGRAPHY

1.0 - Introduction:

Cryptography is the science of devising methods that allow information to be sent in a secure form in such a way that the only person able to retrieve this information is the intended recipient.

The basic principle is this: a message being sent is known as plaintext. The message is then coded using a cryptographic algorithm. This process is called encryption (see Fig. 1). An encrypted message is known as ciphertext, and is turned back into plaintext by the process of decryption.

Fig. 1

It must be assumed that any eavesdropper has access to all communications between the sender and the recipient. A method of encryption is only secure if even with this complete access, the eavesdropper is still unable to recover the original plaintext from the ciphertext.

There is a big difference between security and obscurity. If a message is left for somebody in an airport locker, and the details of the airport and the locker number is known only by the intended recipient, then this message is not secure, merely obscure. If however, all potential eavesdroppers know the exact location of the locker, and they still cannot open the locker and access the message, then this message is secure.

In the last few decades cryptographic algorithms, being mathematical by nature, have become sufficiently advanced that they can only be handled by computers. This in effect means that plaintext is binary in form, and can therefore be anything; a picture, a voice, an e-mail or even a video - it makes no difference, a string of binary can represent any of these. This paper discusses all cryptography from a binary standpoint.


2.0- Cryptographic algorithms

The actual mathematical function used to encrypt and decrypt messages is called a cryptographic algorithm or cipher. This is only part of the system used to send and receive secure messages. This will become clearer further on when specific systems are discussed in detail.

2.1 - Restricted algorithms

If, as with most historical ciphers, the security of the message being sent relies on the algorithm itself remaining secret, then that algorithm is known as a restricted algorithm. These have a number of fundamental drawbacks (Ref. 3).
  • The algorithm obviously has to be restricted to only those people that you want to be able to decode your message. Therefore a new algorithm must be invented for every discrete group of users.
  • A large or changing group of users cannot utilise them, as every time one user leaves the group, everyone else must change algorithms.
  • If the algorithm is compromised in any way, a new algorithm must be implemented.

2.2 - Key-based algorithms

Practically all modern cryptographic systems make use of a key. Algorithms that use a key system allow all details of the algorithm to be widely available. This is because all of the security lies in the key. With a key-based algorithm the plaintext is encrypted and decrypted by the algorithm using a certain key, and the resulting ciphertext is dependent on the key, not the algorithm. This means that an eavesdropper can have a complete copy of the algorithm in use, but without the specific key used to encrypt a message it is useless.

2.2.1 - Symmetric Algorithms

Symmetric algorithms have one key that is used both to encrypt and decrypt the message, hence their name. In order for the recipient to decrypt the message they need to have an identical copy of the key. This presents one major problem; unless the recipient can meet the sender in person and obtain a key that way, then the key itself must be transmitted to the recipient and is thus susceptible to eavesdropping.

There are two types of symmetric algorithms. Stream ciphers operate on plaintext one bit at a time. Block ciphers operate on groups of bits called blocks which are generally 64 bits long.

Two symmetric algorithms, both block ciphers, are considered in this paper - the Data Encryption Standard (DES) and the International Data Encryption Algorithm (IDEA).

2.2.2 - Public-Key Algorithms

Public-key algorithms are asymmetric, that is to say the key that is used to encrypt the message is different to the key used to decrypt the message. The encryption key, known as the public key, is used to encrypt a message, but the message can only be decoded by the person who holds the decryption key, known as the private key.

This type of algorithm has a number of advantages over traditional symmetric ciphers; it means that the recipient can make their public key widely available - anyone wanting to send them a message uses the algorithm and the recipient’s public key to do so. An eavesdropper may have both the algorithm and the public key, but will still not be able to decrypt the message. Only the recipient, with their private key can decrypt the message.

A disadvantage of public-key algorithms is that they are more computationally intensive than symmetric algorithms, and therefore encryption and decryption take longer. This may not be significant for a short text message, but certainly is for long messages or audio/video.

This paper describes two public-key algorithms: the RSA algorithm and the Pretty Good Privacy (PGP) hybrid algorithm.

2.3 - One Time Pads:

The one-time pad was invented by Major Joseph Mauborgne and Gilbert Vernam in 1917, and is an unconditionally secure (i.e. unbreakable) algorithm. The theory behind a one-time pad is simple. The pad is a non-repeating random string of letters. Each letter on the pad is used once only to encrypt one corresponding plaintext character, and after use the pad must never be re-used. As long as the pad remains secure, so is the message. This is because a random key added to a non-random message produces completely random ciphertext, and no amount of analysis or computation can alter that. If both copies of the pad are destroyed then the original message will never be recovered.

There are two major drawbacks. Firstly, it is extremely hard to generate truly random numbers, and a pad that has even a couple of non-random properties is theoretically breakable. Secondly, because the pad can never be reused no matter how large it is, the length of the pad must be the same as the length of the message - fine for text, but virtually impossible for video.
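The letter-based pad described above translates directly to binary: XOR the message with an equally long random pad. A minimal Python sketch follows; note that `secrets` draws on the operating system's randomness source, which here stands in for the truly random pad the scheme demands:

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """Generate a random pad as long as the message and XOR the two together."""
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    """XORing with the same pad recovers the message; the pad is then destroyed."""
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

message = b"ATTACK AT DAWN"
ciphertext, pad = otp_encrypt(message)
assert otp_decrypt(ciphertext, pad) == message
assert len(pad) == len(message)   # the pad can never be shorter than the message
```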

2.4 - Steganography

Steganography is not actually a method of encrypting messages, but of hiding them within something else so that they pass undetected. Traditionally this was achieved with invisible ink, microfilm or taking the first letter from each word of a message. It is now achieved by hiding the message within a graphics or sound file. For instance, in a 256-greyscale image, if the least significant bit of each byte is replaced with a bit from the message then the result will be indistinguishable to the human eye (Ref. 1). An eavesdropper will not even realise a message is being sent. This is not cryptography however, and although it would fool a human, a computer would be able to detect this very quickly and reproduce the original message.
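A minimal Python sketch of the least-significant-bit technique, operating on a plain list of greyscale pixel values (a real implementation would of course read the pixels from an image file):

```python
def to_bits(data: bytes):
    """Unpack bytes into individual bits, most significant bit first."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def embed(pixels, bits):
    """Hide one message bit in the least significant bit of each pixel;
    each pixel value changes by at most 1, invisible among 256 grey levels."""
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract(pixels, n_bits):
    """Read the hidden bits back out of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = list(range(100, 124))     # a toy 24-pixel greyscale 'image'
secret = to_bits(b"Hi!")           # 3 bytes = 24 bits
stego = embed(pixels, secret)
assert extract(stego, len(secret)) == secret
assert all(abs(a - b) <= 1 for a, b in zip(pixels, stego))
```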

2.5 - Cryptanalysis

Cryptanalysis is the science (or black art!) of recovering the plaintext of a message from the ciphertext without access to the key. In cryptanalysis, it is always assumed that the cryptanalyst has full access to the algorithm. An attempted cryptanalysis is known as an attack, of which there are four major types:
  • Ciphertext-only: The only information the cryptanalyst has to work with is the ciphertext of various messages all encrypted with the same algorithm.
  • Known-plaintext: In this scenario, the cryptanalyst has access not only to the ciphertext of various messages, but also the corresponding plaintext as well.
  • Chosen-plaintext: The cryptanalyst has access to the same information as in a Known-plaintext attack, but this time may choose the plaintext that gets encrypted. This attack is more powerful, as specific plaintext blocks can be chosen that may yield more information about the key. An Adaptive-chosen- plaintext attack is merely one where the cryptanalyst may repeatedly encrypt plaintext, thereby modifying the input based on the results of a previous encryption.
  • Chosen-ciphertext: The cryptanalyst is able to repeatedly choose ciphertext to be decrypted, and has access to the resulting plaintext. From this they can try to deduce the key.
(Ref. 3)

2.6 - Algorithm security

There is only one totally secure algorithm, the one-time pad . All other algorithms can be broken given infinite time and resources. Modern cryptography relies on making it computationally unfeasible to break an algorithm; this means that whilst it is theoretically possible, the time-scale and resources involved make it completely unrealistic.

If an algorithm is presumed to be perfect, then the only method of breaking it relies on trying every possible key combination until the resulting ciphertext makes sense. This type of attack is called a brute-force attack. The field of parallel computing is perfectly suited to the task of brute force attacks, as every processor can be given a number of possible keys to try, and they do not need to interact with each other at all except to announce the result. A technique that is becoming increasingly popular is parallel processing using thousands of individual computers connected to the Internet. This is known as distributed computing.

However strong or weak the algorithm used to encrypt it, a message can be thought of as secure if the time and/or resources needed to recover the plaintext greatly exceed the benefits bestowed by having the contents. This could be because the cost involved is greater than the financial value of the message, or simply that by the time the plaintext is recovered the contents will be outdated.


3.0 - Operations used by algorithms

Although the methods of encryption/decryption have changed dramatically since the advent of computers, there are still only two basic operations that can be carried out on a piece of plaintext - substitution and transposition. The only real difference is that whereas before these were carried out with the alphabet, nowadays they are carried out on binary bits.

3.1 - Substitution

Substitution operations replace bits in the plaintext with other bits decided upon by the algorithm, to produce ciphertext. This substitution then just has to be reversed to produce plaintext from ciphertext. This can be made increasingly complicated. For instance one plaintext character could correspond to one of a number of ciphertext characters (homophonic substitution), or each character of plaintext is substituted by a character of corresponding position in a length of another text (running-key cipher).

3.2 - Transposition

Transposition (or permutation) does not alter any of the bits in the plaintext, but instead moves their positions around within it. If the resultant ciphertext is then put through more transpositions, the end result is increasingly secure.

3.3 - XOR

XOR is an exclusive-or operation. It is a Boolean operator such that if exactly one of two bits is true, then the result is true, but if both are true or both are false then the result is false.

e.g.

0 XOR 0 = 0
1 XOR 0 = 1
0 XOR 1 = 1
1 XOR 1 = 0
A surprising amount of commercial software uses simple XOR functions to provide security, including the US digital cellular telephone network and many office applications, and it is trivial to crack (Ref. 3). However, the XOR operation, as will be seen later in this paper, is a vital part of many advanced cryptographic algorithms when performed between long blocks of bits that also undergo substitution and/or transposition.
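As an illustration of how simple such schemes are, a repeating-key XOR "cipher" fits in a few lines of Python. Applying the same function twice recovers the plaintext, and it is precisely the short repeating key that makes it trivial to crack:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of the data with the key, repeating the key as needed.
    The operation is its own inverse: applying it twice returns the input."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

ciphertext = xor_cipher(b"a confidential memo", b"KEY")
assert ciphertext != b"a confidential memo"
assert xor_cipher(ciphertext, b"KEY") == b"a confidential memo"
```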


4.0 - Algorithms in detail:

4.1 - DES

The US National Bureau of Standards (NBS) published the Data Encryption Standard in 1975. Created by IBM, DES came about due to a public request by the NBS for proposals for a standard cryptographic algorithm that satisfied the following criteria:
  • Provides a high level of security
  • The security depends on keys, not the secrecy of the algorithm
  • The security is capable of being evaluated
  • The algorithm is completely specified and easy to understand
  • It is efficient to use and adaptable
  • Must be available to all users
  • Must be exportable
DES has now been in world-wide use for over 20 years, and because it is a defined standard, any system implementing DES can communicate with any other system using it. DES is used in banks and businesses all over the world, as well as in networks (as Kerberos) and to protect the password file on UNIX operating systems (as CRYPT(3)).
The Algorithm:
DES is a symmetric, block-cipher algorithm with a key length of 64 bits, and a block size of 64 bits (i.e. the algorithm operates on successive 64 bit blocks of plaintext). Being symmetric, the same key is used for encryption and decryption, and DES also uses the same algorithm for encryption and decryption.

First a transposition is carried out according to a set table (the initial permutation), the 64-bit plaintext block is then split into two 32-bit blocks, and 16 identical operations called rounds are carried out on each half. The two halves are then joined back together, and the reverse of the initial permutation carried out. The purpose of the first transposition is not clear, as it does not affect the security of the algorithm, but is thought to be for the purpose of allowing plaintext and ciphertext to be loaded into 8-bit chips in byte-sized pieces (Ref. 3).

In any round, only one half of the original 64-bit block is operated on. The rounds alternate between the two halves.

One round in DES consists of:
Key transformation:
The 64-bit key is reduced to 56 bits by removing every eighth bit (these are sometimes used for error checking). Sixteen different 48-bit subkeys are then created - one for each round. This is achieved by splitting the 56-bit key into two halves, and then circularly shifting them left by 1 or 2 bits, depending on the round. After this, 48 of the bits are selected. Because they are shifted, different groups of key bits are used in each subkey. This process is called a compression permutation due to the transposition of the bits and the reduction of the overall size.
Expansion permutation:
After the key transformation, whichever half of the block is being operated on undergoes an expansion permutation. In this operation, the expansion and transposition are achieved simultaneously by allowing the 1st and 4th bits in each 4 bit block to appear twice in the output, i.e. the 4th input bit becomes the 5th and 7th output bits (see Fig. 2).

The expansion permutation achieves 3 things: Firstly it increases the size of the half-block from 32 bits to 48, the same number of bits as in the compressed key subset, which is important as the next operation is to XOR the two together. Secondly, it produces a longer string of data for the substitution operation that subsequently compresses it. Thirdly, and most importantly, because in the subsequent substitutions the 1st and 4th bits appear in two S-boxes (described shortly), they affect two substitutions. The effect of this is that the dependency of the output bits on the input bits increases rapidly, and so therefore does the security of the algorithm.

Fig. 2 - The Expansion Permutation.

XOR:
The resulting 48-bit block is then XORed with the appropriate subkey for that round.
Substitution:
The next operation is to perform substitutions on the expanded block. There are eight substitution boxes, called S-boxes. The first S-box operates on the first 6 bits of the 48-bit expanded block, the 2nd S-box on the next six, and so on. Each S-box operates from a table of 4 rows and 16 columns, each entry in the table is a 4-bit number. The 6-bit number the S-box takes as input is used to look up the appropriate entry in the table in the following way. The 1st and 6th bits are combined to form a 2-bit number corresponding to a row number, and the 2nd to 5th bits are combined to form a 4-bit number corresponding to a particular column. The net result of the substitution phase is eight 4-bit blocks that are then combined into a 32-bit block.

It is the non-linear relationship of the S-boxes that really provides DES with its security; all the other processes within the DES algorithm are linear, and as such relatively easy to analyse (Ref. 3).

Fig. 3 - The S-box substitution (adapted after Ref. 3)

Permutation:
The 32-bit output of the substitution phase then undergoes a straightforward transposition using a table sometimes known as the P-box.
Finally:
After all the rounds have been completed, the two ‘half-blocks’ of 32 bits are recombined to form a 64-bit output, the final permutation is performed on it, and the resulting 64-bit block is the DES encrypted ciphertext of the input plaintext block.
Reversal (decryption):
Decrypting DES (if you have the correct key!) is very easy. Thanks to its design, the decryption algorithm is identical to the encryption algorithm - the only alteration is that to decrypt DES ciphertext, the subkeys are used in reverse order, i.e. the 16th subkey is used first.
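The round structure described above is a Feistel network, and this is what makes encryption and decryption the same routine. A toy Python sketch shows the principle; the round function here is entirely made up, standing in for DES's expansion, subkey XOR, S-box substitution and permutation steps:

```python
MASK32 = 0xFFFFFFFF

def round_function(half: int, subkey: int) -> int:
    """Made-up mixing function standing in for DES's expansion,
    subkey XOR, S-box substitution and permutation steps."""
    return ((half * 31 + subkey) ^ (half >> 3)) & MASK32

def feistel(block: int, subkeys) -> int:
    """Generic Feistel network on a 64-bit block: each round swaps the
    halves and XORs one half with a function of the other."""
    left, right = block >> 32, block & MASK32
    for sk in subkeys:
        left, right = right, left ^ round_function(right, sk)
    # undo the final swap so decryption is the same routine with reversed keys
    return (right << 32) | left

subkeys = [(0x0F1E2D3C * (i + 1)) & MASK32 for i in range(16)]
plaintext = 0x0123456789ABCDEF
ciphertext = feistel(plaintext, subkeys)
assert feistel(ciphertext, subkeys[::-1]) == plaintext
```

Note that the round function itself never needs to be inverted; only the key order changes between encryption and decryption.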
Security of DES:
Unfortunately, with advances in the field of cryptanalysis and the huge increase in available computing power, DES is no longer considered to be very secure. There are algorithms that can be used to reduce the number of keys that need to be checked, but even using a straightforward brute-force attack and just trying every single possible key there are computers that can crack DES in a matter of minutes. It is rumoured that the US National Security Agency (NSA) can crack a DES encrypted message in 3-15 minutes (Ref. 3).

If a time limit of two hours to crack a DES-encrypted file is set, then on average half of the 2^56 possible keys must be checked in that time, which is roughly 5 trillion keys per second. Whilst this may seem like a huge number, consider that a $10 Application-Specific Integrated Circuit (ASIC) chip can test 200 million keys per second, and many of these can be paralleled together (Ref. 2). It is suggested that a $10 million investment in ASICs would allow a computer to be built that would be capable of breaking a DES-encrypted message in 6 minutes (Ref. 2).
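The arithmetic above is easy to check; the chip count at the end is this author's own extrapolation from the quoted figures, and ignores the cost of actually assembling the machine:

```python
total_keys = 2 ** 56                 # the DES keyspace
deadline = 2 * 60 * 60               # two hours, in seconds
rate = (total_keys / 2) / deadline   # on average half the keys must be tried
assert 4.9e12 < rate < 5.1e12        # roughly 5 trillion keys per second

chips = rate / 200e6                 # $10 ASICs at 200 million keys/second
assert 25000 < chips < 25100         # about 25,000 chips working in parallel
```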

It is the conclusion of this author that DES can no longer be considered a sufficiently secure algorithm. If a DES-encrypted message can be broken in minutes by supercomputers today, then the rapidly increasing power of computers means that it will be a trivial matter to break DES encryption in the future (when a message encrypted today may still need to be secure).

4.2 - IDEA

IDEA was created in its first form by Xuejia Lai and James Massey in 1990; this was called the Proposed Encryption Standard (PES). In 1991, Lai and Massey strengthened the algorithm against differential cryptanalysis and called the result Improved PES (IPES). The name was changed to International Data Encryption Algorithm (IDEA) in 1992.

IDEA is a symmetric, block-cipher algorithm with a key length of 128 bits, a block size of 64 bits, and, as with DES, the same algorithm provides encryption and decryption.

IDEA consists of 8 rounds using 52 subkeys. Each round uses six subkeys, with the remaining four being used for the output transformation.

The subkeys are created as follows:
Firstly the 128-bit key is divided into eight 16-bit keys to provide the first eight subkeys. The bits of the original key are then shifted 25 bits to the left, and then it is again split into eight subkeys. This shifting and then splitting is repeated until all 52 subkeys (SK1-SK52) have been created. (Ref. 5)
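Following the description above, the schedule can be sketched in Python, treating the 128-bit key as an integer (the sample key is an arbitrary illustration):

```python
def idea_subkeys(key: int):
    """Derive the 52 16-bit subkeys: take eight 16-bit slices of the key,
    rotate the key 25 bits to the left, and repeat until 52 exist."""
    subkeys = []
    while len(subkeys) < 52:
        for i in range(8):
            subkeys.append((key >> (112 - 16 * i)) & 0xFFFF)
        key = ((key << 25) | (key >> 103)) & ((1 << 128) - 1)
    return subkeys[:52]

sks = idea_subkeys(0x000102030405060708090A0B0C0D0E0F)
assert len(sks) == 52
assert sks[0] == 0x0001   # the first subkey is the top 16 bits of the key
```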

The 64-bit plaintext block is firstly split into four (B1-B4), a round then consists of the following steps:

(OB stands for output block)

OB1 = B1 * SK1 (multiply 1st sub-block with 1st subkey)
OB2 = B2 + SK2 (add 2nd sub-block to 2nd subkey)
OB3 = B3 + SK3 (add 3rd sub-block to 3rd subkey)
OB4 = B4 * SK4 (multiply 4th sub-block with 4th subkey)
OB5 = OB1 XOR OB3 (XOR results of steps 1 and 3)
OB6 = OB2 XOR OB4 (XOR results of steps 2 and 4)
OB7 = OB5 * SK5 (multiply result of step 5 with 5th subkey)
OB8 = OB6 + OB7 (add results of steps 6 and 7)
OB9 = OB8 * SK6 (multiply result of step 8 with 6th subkey)
OB10 = OB7 + OB9 (add results of steps 7 and 9)
OB11 = OB1 XOR OB9 (XOR results of steps 1 and 9)
OB12 = OB3 XOR OB9 (XOR results of steps 3 and 9)
OB13 = OB2 XOR OB10 (XOR results of steps 2 and 10)
OB14 = OB4 XOR OB10 (XOR results of steps 4 and 10)
The input to the next round is the four sub-blocks OB11, OB13, OB12, OB14, in that order.

After the eighth round, the four final output blocks (F1-F4) are used in a final transformation to produce four sub-blocks of ciphertext (C1-C4) that are then rejoined to form the final 64-bit block of ciphertext.

C1 = F1 * SK49
C2 = F2 + SK50
C3 = F3 + SK51
C4 = F4 * SK52
Ciphertext = C1 & C2 & C3 & C4 (where & denotes concatenation).
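The round steps above can be sketched in Python. In IDEA, '+' is addition modulo 2^16 and '*' is multiplication modulo 2^16 + 1, with the value 0 standing for 2^16; the sample inputs and subkeys here are arbitrary illustrations:

```python
MOD_ADD = 1 << 16        # '+' is addition modulo 2^16
MOD_MUL = MOD_ADD + 1    # '*' is multiplication modulo 2^16 + 1 (65537, prime)

def mul(a: int, b: int) -> int:
    """IDEA multiplication: the value 0 is treated as 2^16."""
    a = a if a else MOD_ADD
    b = b if b else MOD_ADD
    return (a * b) % MOD_MUL % MOD_ADD   # a result of 2^16 maps back to 0

def add(a: int, b: int) -> int:
    return (a + b) % MOD_ADD

def idea_round(b1, b2, b3, b4, sk):
    """One IDEA round on four 16-bit sub-blocks, following steps OB1-OB14
    above; sk holds the six 16-bit subkeys for this round."""
    ob1 = mul(b1, sk[0])
    ob2 = add(b2, sk[1])
    ob3 = add(b3, sk[2])
    ob4 = mul(b4, sk[3])
    ob5 = ob1 ^ ob3
    ob6 = ob2 ^ ob4
    ob7 = mul(ob5, sk[4])
    ob8 = add(ob6, ob7)
    ob9 = mul(ob8, sk[5])
    ob10 = add(ob7, ob9)
    # OB11, OB13, OB12, OB14: the input order for the next round
    return ob1 ^ ob9, ob2 ^ ob10, ob3 ^ ob9, ob4 ^ ob10

out = idea_round(0x1234, 0x5678, 0x9ABC, 0xDEF0, [1, 2, 3, 4, 5, 6])
assert all(0 <= v < 2 ** 16 for v in out)
```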
Security of IDEA:
Not only is IDEA approximately twice as fast as DES, but it is also considerably more secure. Using a brute-force approach, there are 2^128 possible keys. If a billion chips that could each test 1 billion keys a second were used to try and crack an IDEA-encrypted message, it would take them 10^13 years, which is considerably longer than the age of the universe (Ref. 3). Being a fairly new algorithm, it is possible that a better attack than brute force will be found, which, coupled with much more powerful machines in the future, may be able to crack a message. However, for a long way into the future IDEA seems to be an extremely secure cipher.

4.3 - RSA

RSA, named after its three creators - Rivest, Shamir and Adleman, was the first effective public-key algorithm, and for years has withstood intense scrutiny by cryptanalysts all over the world.

Unlike symmetric key algorithms, where, as long as one presumes that an algorithm is not flawed, the security relies on having to try all possible keys, public-key algorithms rely on it being computationally unfeasible to recover the private key from the public key.

RSA relies on the fact that it is easy to multiply two large prime numbers together, but extremely hard (i.e. time-consuming) to factor the result back into those primes (Ref. 6).

Factoring a number means finding its prime factors, which are the prime numbers that need to be multiplied together in order to produce that number. For example:

10 = 2 * 5
60 = 2 * 2 * 3 * 5
2^113 - 1 = 3391 * 23279 * 65993 * 1868569 * 1066818132868207
The algorithm:
Two very large prime numbers, normally of equal length, are randomly chosen then multiplied together.

N = A*B

T = (A-1) * (B-1)

A third number is then also chosen randomly as the public key (E) such that it has no common factors (i.e. is relatively prime) with T. The private key (D) is then:

D = E^-1 mod T

To encrypt a block of plaintext (M) into ciphertext (C):

C = M^E mod N

To decrypt:

M = C^D mod N

As an example:

1st prime (A) = 37

2nd prime (B) = 23

So,

N= 37*23 = 851

T = (37 - 1)*(23 - 1) = 36 * 22 = 792

E must have no factors other than 1 in common with 792.

E (public key) could be 5.

D (private key) = 5^-1 mod 792 = 317

To encrypt a message (M) of the character ‘G’:

If G is represented as 7 (7th letter in alphabet), then M= 7.

C (ciphertext) = 7^5 mod 851 = 638

To decrypt:

M = 638^317 mod 851 = 7
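The worked example above can be reproduced with Python's built-in modular arithmetic (`pow(E, -1, T)` computes the modular inverse and requires Python 3.8 or later):

```python
A, B = 37, 23              # the two primes
N = A * B                  # 851
T = (A - 1) * (B - 1)      # 792
E = 5                      # public key, sharing no factors with T
D = pow(E, -1, T)          # private key: the inverse of E modulo T
assert D == 317

M = 7                      # 'G', the 7th letter of the alphabet
C = pow(M, E, N)           # encryption: M^E mod N
assert C == 638
assert pow(C, D, N) == M   # decryption: C^D mod N recovers 7
```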

Security of RSA:
At this time, no more efficient method of cracking RSA is known than simply factoring N. An eavesdropper would have C and E, and so by factoring N could get M. Whilst computational speed obviously affects how long it would take to factor N, the main determinant that is changing is mathematical theory. New, faster methods for factoring numbers are constantly being devised, the current best for long numbers being the Number Field Sieve (Ref. 3). Numbers of a length that was unimaginable a mere decade ago are now factored easily. Obviously the longer N is, the harder it is to factor, and so the better the security of RSA. Currently the longest decimal number to have been factored is 130 digits, achieved in 1996 (Ref. 2). As theory and computers improve, larger and larger keys will have to be used. The disadvantage of using extremely long keys is the computational overhead involved in encryption/decryption. This will only become a problem if a new factoring technique means that the necessary key length increases much faster than the average speed of computers utilising the RSA algorithm.

Given that RSA has undergone so much scrutiny by cryptanalysts, the algorithm would seem to be secure; this means that RSA's future security probably relies solely on advances in factoring techniques. It is recommended that, barring an astronomical increase in the efficiency of factoring techniques or in available computing power, a 2048-bit key will ensure very secure protection into the foreseeable future. For instance, an Intel Paragon can achieve 50,000 mips (million instructions per second); it would take a million of these six billion years to factor a 2048-bit key using current techniques (Ref. 2).

4.4 Hybrid cryptography systems:

Even without using huge keys, RSA is about 1000 times slower to encrypt/decrypt than DES, which has resulted in it not being widely used as a stand-alone cryptography system. However, it is used in many hybrid cryptosystems such as PGP. The basic principle of hybrid systems is to encrypt plaintext with a symmetric algorithm (usually DES or IDEA); the symmetric algorithm's key is then itself encrypted with a public-key algorithm such as RSA. The RSA-encrypted key and symmetric algorithm-encrypted message are then sent to the recipient, who uses his private RSA key to decrypt the symmetric algorithm's key, and then that key to decrypt the message. This is considerably faster than using RSA throughout, and allows a different symmetric key to be used each time, considerably enhancing the security of the symmetric algorithm.
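A toy sketch of the idea, reusing the small RSA numbers from section 4.3 and a repeating-key XOR as a stand-in for the symmetric algorithm (a real hybrid system would use DES or IDEA and a full-size RSA key, and would wrap the whole session key at once rather than byte by byte):

```python
import secrets

N, E, D = 851, 5, 317      # toy RSA key pair from the example in section 4.3

def sym_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in symmetric algorithm: repeating-key XOR (its own inverse)."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

# Sender: fresh session key, symmetric encryption of the message,
# then each session-key byte is wrapped with the recipient's PUBLIC key.
session_key = secrets.token_bytes(8)
ciphertext = sym_cipher(b"meet at noon", session_key)
wrapped_key = [pow(b, E, N) for b in session_key]

# Recipient: unwrap the session key with the PRIVATE key, then decrypt.
recovered = bytes(pow(w, D, N) for w in wrapped_key)
assert sym_cipher(ciphertext, recovered) == b"meet at noon"
```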

4.4.1 - Digital signing:

A disadvantage of public-key cryptography is that anyone can send you a message using your public key, so it becomes necessary to prove that a message came from the person who claims to have sent it. A message encrypted with someone's private key can be decrypted by anyone with their public key. This means that if the sender encrypted a message with his private key, and then encrypted the resulting ciphertext with the recipient's public key, the recipient would be able to decrypt the message with first their private key and then the sender's public key, thus recovering the message and proving it came from the correct sender.

This process is very time-consuming, and therefore rarely used. A much more common method of digitally signing a message is using a method called one-way hashing.

4.4.2 - One-way Hashing:

A one-way hash function is a mathematical function that takes a message string of any length (pre-string) and returns a smaller fixed-length string (hash value). These functions are designed in such a way that not only is it very difficult to deduce the message from its hashed version, but also that even given that all hashes are a certain length, it is extremely hard to find two messages that hash to the same value. In fact to find two messages with the same hash from a 128-bit hash function, 2^64 hashes would have to be tried. In other words, the hash value of a file is a small unique ‘fingerprint’.

H= hash value, f= hash function, M= original message/pre-string

H = f(M)

If you know M then H is easy to compute. However, knowing H and f, it is not easy to compute M; indeed it should be computationally unfeasible.

As long as there is a low risk of collision (i.e. 2 messages hashing to the same value), and the hash is very hard to reverse, then a one-way hash function proves extremely useful for a number of aspects of cryptography.

If you one-way hash a message, the result will be a much shorter but still unique (at least statistically) number. This can be used as proof of ownership of a message without having to reveal the contents of the actual message. For instance rather than keeping a database of copyrighted documents, if just the hash values of each document were stored, then not only would this save a lot of space, but it would also provide a great deal of security. If copyright then needs to be proved, the owner could produce the original document and prove it hashes to that value.

Hash-functions can also be used to prove that no changes have been made to a file, as adding even one character to a file would completely change its hash value.

By far the most common use of hash functions is to digitally sign messages. The sender performs a one-way hash on the plaintext message, encrypts it with his private key, and then encrypts both with the recipient's public key and sends in the usual way. On decrypting the ciphertext, the recipient can use the sender's public key to decrypt the hash value, then perform a one-way hash himself on the plaintext message and compare it with the one he has received. If the hash values are identical, the recipient knows not only that the message came from the correct sender, as the sender's private key was used to encrypt the hash, but also that the plaintext message is completely authentic, as it hashes to the same value.

The above method is greatly preferable to encrypting the whole message with a private key, as the hash of a message will normally be considerably smaller than the message itself. This means that it will not significantly slow down the decryption process in the same way that decrypting the entire message with the sender’s public key, and then decrypting it again with the recipient’s private key would.
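A toy sketch of the hash-then-sign flow, using MD5 from Python's standard library and the small RSA pair from section 4.3, signing each digest byte separately (a real system would sign the whole digest at once with a full-size key):

```python
import hashlib

N, E, D = 851, 5, 317      # toy RSA key pair from the example in section 4.3

message = b"The contract is agreed."
digest = hashlib.md5(message).digest()

# Sender: 'encrypt' the hash with the PRIVATE key to form the signature.
signature = [pow(b, D, N) for b in digest]

# Recipient: recover the hash with the sender's PUBLIC key, re-hash the
# received plaintext, and compare the two values.
recovered = bytes(pow(s, E, N) for s in signature)
assert recovered == hashlib.md5(message).digest()
assert recovered != hashlib.md5(b"The contract is void.").digest()
```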

The PGP system uses the MD5 hash function (Ref. 8) for precisely this purpose.


5.0 - Conclusions:

There is a place for both symmetric and public-key algorithms in modern cryptography. Hybrid cryptosystems successfully combine aspects of both and seem to be secure and fast. While PGP and its complex protocols are designed with the Internet community in mind, it should be obvious that the encryption behind it is very strong and could be adapted to suit many applications. There may well still be instances when a simple algorithm is necessary, and with the security provided by algorithms like IDEA, there is absolutely no reason to think of these as significantly less secure.

An article I once read on the Internet, on the subject of picking locks, stated:

"The most effective door opening tool in any burglars toolkit remains the crowbar".

This also applies to cryptanalysis - direct action is often the most effective. It is all very well transmitting your messages with 128-bit IDEA encryption, but if all that is necessary to obtain that key is to walk up to one of the computers involved with a floppy disk then the whole point of encryption is negated. In other words, an incredibly strong algorithm is not sufficient. For a system to be effective there must be effective management protocols involved.


References and Bibliography:

  1. Johnson, N., Steganography, http://patriot.net/~johnson/html/neil/stegdoc/stegdoc.html
  2. Heath, J., Survey: Corporate uses of Cryptography, http://www.iinet.net.au/~heath/crypto.html
  3. Schneier, B., Applied Cryptography, Second Edition: protocols, algorithms, and source code in C, John Wiley & Sons, 1996, pp. 758.
  4. Mayo, S., How PGP works and the maths behind RSA, http://rschp2.anu.edu.au:8080/howpgp.html
  5. Mayo, S., The IDEA Algorithm, http://rschp2.anu.edu.au:8080/idea.html
  6. Sullivan, C., Makmur, M., RSA Algorithm Javascript, http://www.engr.orst.edu/~makmur/HCproject/
  7. Dunlap, C., Programmers Crack RSA Encryption Code, http://www.techweb.com/wire/news/1997/10/1025rsa.html
  8. Rivest, R.L., The MD5 Message Digest Algorithm, RFC 1321, April 1992.
