Asymmetric cryptography is an essential part of all modern cryptosystems. It has allowed us to move from the old Enigma machines used by WWII cryptographers to TLS, which is used to secure communication over the internet. I am sure that we have all used TLS, and we have asymmetric cryptography to thank for it.
Asymmetric cryptography relies on certain information being computationally hard to compute without a secret. Asymmetric cryptosystems contain a public component which can be known by everyone, including an adversary. However, it must not be possible to compute the private or secret key from this information; that would completely break the cryptosystem. These mathematical functions are currently computationally difficult but not provably hard. An efficient algorithm may exist and simply has not been found yet. Our cryptosystems rely on that efficient algorithm not being discovered. We will take a closer look at some of these mathematical functions now.
Both Diffie-Hellman and RSA were first published in the late 1970s and are used in almost all of today's cryptosystems. They are used for a variety of purposes such as secure key exchange, encryption, and signing. Elliptic Curve Cryptography was first published in the 1980s, and there has been significant academic interest in elliptic curve cryptography but limited use in industry. ECC is performed over a specified curve, unlike Diffie-Hellman and RSA, which are performed over the integers. In recent years, the NSA published its Suite B recommendations and Russia declassified its GOST recommendations, both of which recommend the use of elliptic curve cryptography. Like Diffie-Hellman and RSA, ECC can be used for key exchange, signing, and encryption.
Now we will take a closer look at the Diffie-Hellman key exchange protocol, first published by Diffie and Hellman in 1976. As some of you may know, Diffie-Hellman allows two parties to establish a shared secret by exchanging data over a public, untrusted network. This shared secret can then be used in a symmetric cryptosystem. The security of the Diffie-Hellman key exchange relies entirely on the computational hardness of the discrete logarithm problem.
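The exchange can be sketched in a few lines of Python. This is a toy illustration only: the prime, generator, and private exponents here are made-up tiny values, whereas real deployments use primes of 2048 bits or more.

```python
# Toy Diffie-Hellman key exchange over a small prime field.
p = 23         # public prime modulus (toy-sized)
g = 5          # public generator

a = 6          # Alice's private exponent (kept secret)
b = 15         # Bob's private exponent (kept secret)

A = pow(g, a, p)   # Alice sends A = g^a mod p over the open network
B = pow(g, b, p)   # Bob sends B = g^b mod p over the open network

# Each side raises the other's public value to its own secret,
# so both arrive at g^(ab) mod p without ever sending it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both equal 2 here
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is exactly the discrete logarithm problem discussed next.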
How does one break Diffie-Hellman? Simple: you solve the discrete logarithm problem. Suppose you have h = g^x. The discrete logarithm problem is to find the exponent x when only g and h are known. This seemingly simple problem is the basis of the Diffie-Hellman key exchange protocol. To reiterate, an efficient discrete logarithm algorithm will completely break DH. Also, since both ElGamal and DSA rely on slight modifications of the DLP, an efficient generic DL algorithm will break them as well.
The first phase in RSA is to compute an RSA modulus from two large primes. From that RSA modulus, a public exponent (e) and a private exponent (d) are computed that satisfy a particular mathematical relation. The public key is used to encrypt any message sent to the receiver; this is done by raising the message to the recipient's public exponent e. The recipient of the message then raises the ciphertext to their private exponent d. As with Diffie-Hellman, the security of RSA relies on a mathematical problem. In this case, the mathematical problem is factoring.
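The whole scheme fits in a few lines with toy primes (illustration only; real RSA primes are hundreds of digits long and messages are padded before encryption):

```python
# Toy RSA key generation, encryption, and decryption.
p, q = 61, 53
n = p * q                  # RSA modulus, public
phi = (p - 1) * (q - 1)    # Euler's totient of n (secret)
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: e*d = 1 (mod phi)

m = 42                     # message
c = pow(m, e, n)           # encrypt: c = m^e mod n
assert pow(c, d, n) == m   # decrypt: m = c^d mod n
```

Note that computing d required knowing phi, which in turn required knowing the primes p and q, not just the public n.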
We attack RSA by attacking the underlying mathematical function, which in this case is factoring. Factoring, as we remember from grade school mathematics, is a seemingly simple task, e.g. 35 = 5 * 7. This is true for small numbers at least. However, there currently exists no efficient algorithm to factor an arbitrary number. Factoring an RSA modulus would allow us to compute the two constituent primes of that modulus, and with the user's public key we would then be able to compute the user's private key. We can simply use the same mathematical relation which was used to generate it in the first place. To reiterate, an efficient factoring algorithm will completely break RSA.
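As a sketch of that attack path (toy modulus, my own example): factor n by trial division, then rerun the same key-generation relation to recover d.

```python
# Break a toy RSA key by factoring its modulus.
def factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return None  # n is prime

n, e = 3233, 17            # the victim's public key (3233 = 53 * 61)
p, q = factor(n)           # infeasible for real 2048-bit moduli
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent fully recovered
assert pow(pow(42, e, n), d, n) == 42
```

Trial division is hopeless at cryptographic sizes, which is the whole point; the attacker needs a genuinely efficient factoring algorithm.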
Now let us switch gears a bit and discuss Elliptic Curve Cryptography. As mentioned earlier, ECC was first published in the 1980s, and work in the field has continued over the past 30 years. An elliptic curve E over the real numbers R is defined by a Weierstrass equation; I have shown an example on the slide. This funky-looking curve is special and allows us to build secure cryptosystems. In practice, either a prime field or a binary field is used rather than the reals, depending on the application.
As with both Diffie-Hellman and RSA, ECC also depends on a fundamental mathematical problem. In this case, ECC is secure due to the hardness of the elliptic curve discrete logarithm problem (ECDLP). This should not be confused with the discrete logarithm problem we just saw. The underlying mathematical problem is: given two points on the elliptic curve, P and Q, compute the integer d such that Q = dP. In the diagram on the screen I have shown the simple case where d = 2. The key pair (d, Q) can be used for a variety of cryptosystems, including signatures and encryption/decryption. As with Diffie-Hellman and RSA, if an efficient algorithm for solving the underlying mathematical problem is found, then the entire cryptosystem is broken.
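The d = 2 case on the slide can be sketched with point doubling. The curve y^2 = x^3 + 2x + 2 over GF(17) and the point P = (5, 1) here are standard textbook toy parameters, far too small for real use.

```python
# Point doubling on the toy curve y^2 = x^3 + 2x + 2 over GF(17).
p, a = 17, 2

def ec_double(P):
    x, y = P
    # Slope of the tangent line at P, computed mod p.
    lam = (3 * x * x + a) * pow(2 * y, -1, p) % p
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return (x3, y3)

P = (5, 1)
Q = ec_double(P)   # Q = 2P; easy to compute forward
```

Computing Q = dP is fast even for huge d (repeated doubling and adding), but recovering d from P and Q is the ECDLP, and no sub-exponential algorithm is known for well-chosen curves.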
Now we show the NIST-recommended key sizes for symmetric algorithms, Diffie-Hellman, and ECC. As can be seen, NIST recommends significantly smaller key sizes for ECC. This is due to the greater computational difficulty of solving the ECDLP as opposed to factoring or the regular DLP. Furthermore, given current research advances, even key sizes in the same row are not computationally equivalent.
Now we will move on to the next section and look at some of the new advances in the academic world, and why we need to be very concerned about this new research progress.
There are two types of discrete logarithm algorithms: generic algorithms and specific algorithms. Generic algorithms work with a divide-and-conquer approach, breaking groups up into smaller subgroups. They are very slow and take exponential time. We will discuss the complexity of algorithms in the next slide. Specific algorithms make use of particular group representations and can be much faster; they work by leveraging certain properties of the group. Examples such as the index calculus algorithm are currently mainly sub-exponential.
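A classic generic algorithm is baby-step giant-step, sketched below on the same toy group as before (my own illustration). It needs roughly the square root of the group order in both time and memory, which is still exponential in the bit length of the group, exactly the behavior generic algorithms are stuck with.

```python
# Baby-step giant-step: a generic discrete log algorithm,
# O(sqrt(n)) time and memory for a group of order n.
from math import isqrt

def bsgs(g, h, p):
    m = isqrt(p - 1) + 1
    # Baby steps: table mapping g^j -> j for j in [0, m).
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: search for h * (g^-m)^i in the table.
    step = pow(g, -m, p)
    gamma = h
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * step) % p
    return None
```

It uses nothing about the group beyond multiplication, which is what "generic" means; index-calculus-style algorithms beat it precisely by exploiting extra structure.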
Now let us take a look at algorithmic complexity and why it matters. Algorithmic complexity is simply how fast a given algorithm runs as its input grows. The discrete logarithm and factoring literature generally uses L-notation to indicate complexity: L(0) indicates that an algorithm is polynomial, while L(1) is a fully exponential algorithm; anything in between is sub-exponential. On the linear running time plot we can see that the exponential-time algorithm dominates all the others, which can barely be seen above the x-axis. The difference in running time can be seen more clearly in the logarithmic plot, where the running time of the exponential algorithm continues to grow while the polynomial-time curve appears to plateau.
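For the math nerds, the full definition behind that shorthand (quoting the standard form used in the literature, with N the modulus or group order and c a positive constant) is:

```latex
L_N(\alpha, c) = \exp\!\left( (c + o(1)) \, (\ln N)^{\alpha} \, (\ln \ln N)^{1 - \alpha} \right)
```

Setting alpha = 0 gives running time polynomial in ln N, alpha = 1 gives fully exponential time, and intermediate values like 1/2, 1/3, and 1/4 are the sub-exponential complexities mentioned on the coming slides.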
There has been sporadic progress in DL research for 30+ years. In 1979, Adleman published a sub-exponential L(1/2) algorithm to solve the DLP. Note that this is not half the running time of an L(1) algorithm. The fastest ECDLP algorithm has been fully exponential for the last 30 years, and while there has been progress at the margins, there have been no major breakthroughs.
A few years later there was a further improvement in academia: in 1984, Odlyzko published an L(1/3) algorithm to solve the DLP in finite fields. And then there was little progress in algorithmic complexity over the following 30 years. There were some improvements at the margins to the constants, but no substantial progress until this year.
Then suddenly, in February 2013, Antoine Joux released a paper with an L(1/4) algorithm for discrete logarithms. Note this is not a generic algorithm and only applies to cases with certain restrictions on the type of group.
And then, within a few months, Antoine Joux and other researchers improved this algorithm to be quasi-polynomial, effectively L(0). Again, this only applies to discrete logarithms with particular properties.
As we just saw, there has been rapid progress in the discrete logarithm field in the past 6 months, although these algorithms are currently limited to certain circumstances, namely small-characteristic fields. Now for the math nerds: the characteristic of a field is the number of times the multiplicative identity must be summed to obtain the additive identity of the field. Practical cryptosystems generally use large-characteristic fields. These recent developments will bring more attention to the discrete logarithm problem, and this will spur researchers into looking more closely at the problem, most likely resulting in even further improvements in the near future.
Let us now take a brief look at some of the mathematics used in the new discrete logarithm algorithm. The main thing to note is that no fundamentally new mathematical technique was required; this did not require the invention of a new branch of mathematics. Instead, Joux used several mathematical tricks to speed up the running time of the algorithm significantly. It is remarkable that such techniques were not seen earlier by any previous researchers in the area. Some of these techniques include a clever change of variables, a specific polynomial to simplify the computation, and a new descent algorithm to express arbitrary elements in the finite field. This resulted in a discrete logarithm algorithm that is much faster than anything published earlier.
And then, given these insights, in less than 6 months other researchers, including Joux, were able to improve this algorithm even further with some more special mathematics. They used special matrix properties which sped up the slowest step and resulted in a quasi-polynomial algorithm for discrete logarithms in certain circumstances. This is a big deal: there was marginal progress for 25+ years, and then in 6 months there has been significant progress in discrete logarithm research. Note that there is no obvious jump to more practical implementations yet. However, with the renewed interest in the field in academia, we could see much more progress in the immediate future.
One current implication of this discrete logarithm research is that pairing-based cryptography, which is used mainly in academic circles, is no longer secure when done over small-characteristic fields. There are limited practical implementations of pairing-based cryptography, though there is a pairing-based crypto library maintained by the Stanford cryptography group. Also, the function field sieve, which will be discussed in more detail by Tom Ritter in the next section, is improved by these new developments. The function field sieve is used mainly for small- to medium-characteristic fields.
And now I'll pass it to Tom Ritter, who will discuss how this may apply to factoring. All right, so that's a lot of math. Let's talk about how this impacts, or doesn't impact, the _algorithms_ we use today, before we talk about how it impacts the _applications_ we use today.
The Function Field Sieve, which is what's used for solving discrete logs, has four steps. In the last six months, all of them have been improved. That means it's way more likely that something will be applicable to an algorithm we care about. And it's worthwhile to note that the computation times people are setting records with are not supercomputer-worthy: less than a month on a single core. For the 652 figure: 460 hours for sieving, and 8 core-days (192 hours) for the linear algebra.
So Joux has attacked fields of small characteristic. But we use fields of large characteristic in Diffie-Hellman, DSA, and ElGamal. Joux's specific improvements are hit or miss when applied to these types of fields: the polynomial selection probably doesn't carry over, the sieving might, and the descent algorithm needs some tweaking and further work, but it will definitely lead to improvements. And of course, the simple fact that everyone in the academic community is really excited about this stuff means we'll probably see even more improvements down the pipe.
So what about RSA? Everybody uses RSA, and we use it everywhere. And traditionally, Discrete Logs and Factoring have been very closely linked. When we improve one, we tend to improve the other in short order.
And we’ve seen this over the years. The dates that Javed threw out – those have seen advances right next to each other on the other algorithm. In the 70s, in the 80s, and in the 90s. And while I hate to think we’re going to call this decade the ‘10s, we’ll probably see a reflective paper nonetheless.
But WHY are these two algorithms so closely related? Well, they have about the same steps. They both select a polynomial, sieve for relations, perform a big linear algebra step, and then solve for the specific number you want to factor or compute the discrete log of. So if that's how they're similar, let me explain how they're different.
In Factoring, the polynomial selection takes some time, but it’s not that slow.In Discrete Logs, Joux has chosen his polynomial as a constant, based off the type of Group he’s working in.
The Relationship Sieving in both takes time – but it’s trivial to parallelize. In the era of EC2 and Google Compute Engine – any problem that’s embarrassingly parallel and doesn’t require the energy output of the sun tends to just have cores thrown at it.
Now the Linear Algebra for both of them is slow, and difficult to parallelize. It requires a lot of memory and a lot of memory bandwidth, plus a lot of CPU time. It's also harder for discrete logs than it is for factoring.
And the most notable difference is that the last step is way more difficult for discrete logs. The Descent is extremely painful for discrete logs, but the analogous step for factoring, the Square Root, takes minutes. So they're very closely related, but they're not identical.
So coming back to RSA: there's no obvious technique from Joux's work to directly apply to the General Number Field Sieve and factoring RSA public keys. That said, if there's even a 5% chance, that's basically a 5% chance to throw every single Certificate Authority, every single SSL session, every single software update mechanism into complete and utter disarray. And based on NIST's publications and colloquiums, it seems like they're concerned about this too.
And it's worthwhile to note that running the General Number Field Sieve and factoring 512-bit, and even 768-bit, RSA keys is within your grasp, audience members. The software used to do it is public and open source, and there are tutorials on how to factor 512-bit keys in under 30 hours.
So right now, ECC is in pretty good shape. But we have to keep in mind that ECC has been around and studied for 30 years, while RSA and DH, or more importantly, factoring and discrete logs, have been studied for hundreds. If Joux or others hit upon a general-purpose discrete log algorithm, Diffie-Hellman and other algorithms we rely on are toast. And if it leaps to factoring, RSA will be toast too. And to give you an idea what I mean when I say toast: key sizes might have to go from 2048 to 16,384. Besides being wildly impractical for any actual use because it's way too slow, there's like, no software that supports key sizes that large. So let me hand it over to Alex to talk about how screwed we all are.