Quantifying Information Flow Using Min-entropy and G-leakage
Geoffrey Smith
Associate Professor, School of Computing and Information Sciences
Florida International University
(Video runs from 01:15:52 to 02:17:43)
A fundamental issue in security is to prevent computer systems from leaking confidential information, whether maliciously or accidentally. One approach that has proved fruitful is to devise type systems that guarantee that well-typed programs satisfy noninterference properties. Unfortunately, noninterference is often impractical due to leaks that cannot be avoided. For instance, a login program that rejects an incorrect password thereby reveals that the secret password is not the one that was entered. More subtly, the amount of time taken by a cryptographic operation may reveal information about the secret key. Hence the last decade has seen growing interest in quantitative theories of information flow; these give us a way to talk about "how much" information is leaked, perhaps allowing us to tolerate "small" leaks.
But while it is tempting to measure leakage using classic information-theoretic concepts like Shannon entropy, these turn out not to provide very satisfactory confidentiality guarantees. As a result, several researchers have developed an alternative theory based on Rényi's min-entropy. In this theory, leakage is measured in terms of a secret's vulnerability to being guessed in one try by an adversary. In the first part of this talk, I will describe the basic theory of min-entropy leakage in deterministic and probabilistic systems, including comparisons with Shannon leakage, results on min-capacity, and results on channels in cascade. I will also discuss algorithms for calculating leakage in systems, and mention some applications, such as bounds on timing attacks on cryptography.
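As a small illustration (not part of the talk itself), the one-try vulnerability definitions behind min-entropy leakage can be sketched in Python. The channel matrix `C`, the prior, and the helper names below are hypothetical examples chosen for clarity: leakage is the log-ratio of the adversary's chance of guessing the secret in one try after versus before observing the output.

```python
import math

# Sketch of min-entropy leakage for a probabilistic channel.
# C[x][y] = P(output y | secret x); prior[x] = P(secret x).
# All names here are illustrative, not from the talk.

def prior_vulnerability(prior):
    # V(pi): the adversary's best chance of guessing the secret in one try.
    return max(prior)

def posterior_vulnerability(prior, C):
    # V(pi, C) = sum over outputs y of max_x pi(x) * C[x][y]:
    # the expected chance of a correct one-try guess after seeing the output.
    n_outputs = len(C[0])
    return sum(max(prior[x] * C[x][y] for x in range(len(prior)))
               for y in range(n_outputs))

def min_entropy_leakage(prior, C):
    # Leakage (in bits) = log2( V(pi, C) / V(pi) ).
    return math.log2(posterior_vulnerability(prior, C)
                     / prior_vulnerability(prior))

# A uniformly distributed 2-bit secret observed through a channel
# that reveals its low-order bit exactly:
prior = [0.25, 0.25, 0.25, 0.25]
C = [[1, 0],   # secret 0 -> output 0
     [0, 1],   # secret 1 -> output 1
     [1, 0],   # secret 2 -> output 0
     [0, 1]]   # secret 3 -> output 1
print(min_entropy_leakage(prior, C))  # 1.0: exactly one bit is leaked
```

Here the prior vulnerability is 1/4 and the posterior vulnerability is 1/2, so the leakage is log2(2) = 1 bit, matching the intuition that the channel reveals one of the secret's two bits.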
Next I will discuss g-leakage, a recent generalization of min-entropy leakage. The key idea is to introduce gain functions g to specify the benefit that an adversary derives from making a certain guess about the secret. Gain functions allow a wide variety of operational scenarios to be modeled, including those where the adversary benefits from guessing a value close to the secret, guessing a part of the secret, guessing a property of the secret, or guessing the secret within some number of tries. I will discuss important properties of g-leakage, including bounds between min-capacity, g-capacity, and Shannon capacity.
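To make the role of gain functions concrete, here is a hedged sketch (again, illustrative names and data, not material from the talk) of g-vulnerability: the adversary picks the guess maximizing her expected gain, and g-leakage is the log-ratio of posterior to prior g-vulnerability. The example gain function rewards guessing within distance 1 of the secret.

```python
import math

# Sketch of g-leakage. C[x][y] = P(output y | secret x).
# g(w, x) is the adversary's gain for guess w when the secret is x.
# All names are illustrative assumptions.

def g_vulnerability(prior, g, guesses):
    # V_g(pi) = max over guesses w of sum_x pi(x) * g(w, x).
    return max(sum(p * g(w, x) for x, p in enumerate(prior))
               for w in guesses)

def posterior_g_vulnerability(prior, C, g, guesses):
    # V_g(pi, C) = sum_y max_w sum_x pi(x) * C[x][y] * g(w, x).
    n_outputs = len(C[0])
    return sum(max(sum(prior[x] * C[x][y] * g(w, x)
                       for x in range(len(prior)))
                   for w in guesses)
               for y in range(n_outputs))

def g_leakage(prior, C, g, guesses):
    # g-leakage (in bits) = log2( V_g(pi, C) / V_g(pi) ).
    return math.log2(posterior_g_vulnerability(prior, C, g, guesses)
                     / g_vulnerability(prior, g, guesses))

# Gain function: the adversary wins by guessing within distance 1 of the secret.
near = lambda w, x: 1.0 if abs(w - x) <= 1 else 0.0

prior = [0.25, 0.25, 0.25, 0.25]
C = [[1, 0],  # the same channel revealing the secret's low bit
     [0, 1],
     [1, 0],
     [0, 1]]
print(g_leakage(prior, C, near, range(4)))  # log2(4/3), about 0.415 bits
```

Under this "close guess" gain function the prior g-vulnerability is already 3/4, so the channel leaks only log2(4/3) ≈ 0.415 bits; with the identity gain function (gain 1 exactly when w == x), the same code recovers the min-entropy leakage of 1 bit. This illustrates how the choice of g models the operational scenario.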
"Min-entropy as a Resource"
Geoffrey Smith completed his Ph.D. in Computer Science at Cornell University in 1991. In 1998, he joined the faculty of Florida International University, where he is now an Associate Professor. His research interests are centered on the foundations of computer security, especially from the perspective of programming languages. For the past 15 years, he has focused on secure information flow and, more recently, on quantitative information flow. He has been supported in this work by several NSF grants, and his papers in this area have been widely cited. He spent the Fall 2011 semester as a Visiting Scientist at the École Polytechnique in Palaiseau, France, and in June 2012 he was a Distinguished Visiting Researcher at IMDEA in Madrid, Spain.