Abstract: We study the parallel time-complexity of basic cryptographic primitives such as one-way functions (OWFs) and pseudorandom generators (PRGs). Specifically, we consider the possibility of computing instances of these primitives by NC0 circuits, in which each output bit depends on a constant number of input bits. Despite previous efforts in this direction, there has been no convincing theoretical evidence supporting this possibility, which was posed as an open question in several previous works.
We essentially settle this question by providing strong evidence for the possibility of cryptography in NC0. Our main result is that every "moderately easy" OWF (resp., PRG), say one computable in NC1, can be compiled into a corresponding OWF (resp., low-stretch PRG) in which each output bit depends on only four input bits. The existence of OWFs and PRGs in NC1 is a relatively mild assumption, implied by most number-theoretic and algebraic intractability assumptions commonly used in cryptography. A similar compiler can also be obtained for other cryptographic primitives, such as one-way permutations, encryption, signatures, commitments, and collision-resistant hashing.
Our results make use of the machinery of randomizing polynomials, which was originally motivated by questions in the domain of information-theoretically secure multiparty computation. By extending this tool to the computational setting, we obtain additional results regarding NC0 cryptography. In particular, we show that even some relatively complex cryptographic primitives, including (stateless) symmetric encryption and digital signatures, are NC0-reducible to a PRG. No parallel reductions of this type were previously known, even in NC. Our reductions make non-black-box use of the underlying PRG.
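To make the locality notion concrete, the following toy Python sketch (purely illustrative, and unrelated to the actual compiler in the talk) evaluates a function in which every output bit reads at most four fixed input bits, and then brute-forces a check of that locality:

```python
import itertools

# Each output bit is computed from at most 4 fixed input positions.
# The wiring and the predicate below are hypothetical toy choices.
WIRING = [
    (0, 1, 2, 3),
    (2, 3, 4, 5),
    (4, 5, 6, 7),
]

def nc0_eval(x):
    """Evaluate the toy NC0 function on a bit tuple x of length >= 8."""
    out = []
    for taps in WIRING:
        a, b, c, d = (x[i] for i in taps)
        # An arbitrary predicate of the four tapped bits.
        out.append((a & b) ^ (c & d) ^ a)
    return tuple(out)

def locality(f, n):
    """Max number of input bits any output bit depends on (brute force)."""
    deps = [set() for _ in f((0,) * n)]
    for x in itertools.product((0, 1), repeat=n):
        y = f(x)
        for i in range(n):
            flipped = x[:i] + (1 - x[i],) + x[i + 1:]
            for j, (u, v) in enumerate(zip(y, f(flipped))):
                if u != v:
                    deps[j].add(i)  # flipping input i changed output j
    return max(len(d) for d in deps)

print(locality(nc0_eval, 8))  # → 4
```

The brute-force check confirms that no output bit of this toy function depends on more than four input bits, which is exactly the NC0 locality guarantee the abstract refers to.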
Joint work with Benny Applebaum and Yuval Ishai (FOCS 2004, CCC 2005)
Abstract: We present a general notion of channel for cryptographic purposes, one that can model a (classical) physical channel, the consequences of a cryptographic protocol, or any hybrid of the two. We consider security amplification for such channels: converting a channel that leaks a bit with some fixed probability into one that leaks only negligible information. We show that security amplification is not possible for such channels in general, but we give general axioms for channels that guarantee the possibility of security amplification, together with a universal security amplification protocol for channels satisfying them. Our axioms have the following property: if a channel satisfies the axioms, then any feasible protocol using the channel can itself be modelled as a channel that satisfies the axioms. This means that reductions using the axioms are composable, unlike those for other restrictions on channels, such as memorylessness.
Joint work with Valentine Kabanets, Bruce M. Kapron, Valerie King.
Abstract: We present two new approaches to improving the integrity of network broadcasts and multicasts with low storage and computation overhead. The first approach is a leap-frog linking protocol for securing the integrity of packets as they traverse a network during a broadcast, such as in the setup phase for link-state routing. This technique allows each router to gain confidence about the integrity of a packet before passing it on to the next router, and hence allows many integrity violations to be stopped immediately in their tracks. The second approach is a novel key pre-distribution scheme that we use in conjunction with a small number of hashed message authentication codes (HMACs), which allows end-to-end integrity checking as well as improved hop-by-hop integrity checking. Our schemes are suited to environments, such as ad hoc and overlay networks, where routers can share only a small number of symmetric keys. Moreover, our protocols do not use encryption (which, of course, can be added as an optional security enhancement). Instead, security is based strictly on the use of one-way hash functions; hence, our algorithms are considerably faster than those based on traditional public-key signature schemes. This improvement in speed comes with only a modest reduction in security for broadcasting, as our schemes can tolerate small numbers of malicious routers, provided they do not form significant cooperating coalitions.
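As a rough illustration of hop-by-hop integrity checking with HMACs (a simplified stand-in, not the leap-frog protocol itself; the keys and packet contents below are hypothetical), each router can verify the tag from the previous hop before re-tagging the packet for the next one, so a tampered packet is dropped at the first honest router that sees it:

```python
import hmac
import hashlib

def tag(key: bytes, packet: bytes) -> bytes:
    """HMAC-SHA256 tag over the packet under a shared symmetric key."""
    return hmac.new(key, packet, hashlib.sha256).digest()

def forward(packet: bytes, incoming_tag: bytes,
            key_prev: bytes, key_next: bytes):
    """Verify the previous hop's tag, then re-tag for the next hop.
    Raises ValueError if the packet was tampered with in transit."""
    if not hmac.compare_digest(incoming_tag, tag(key_prev, packet)):
        raise ValueError("integrity violation: drop packet here")
    return packet, tag(key_next, packet)

# Source -> R1 -> R2, with illustrative placeholder keys per adjacent pair.
k_src_r1, k_r1_r2 = b"key-src-r1", b"key-r1-r2"
pkt = b"link-state update"
t = tag(k_src_r1, pkt)                        # source tags for R1
pkt, t = forward(pkt, t, k_src_r1, k_r1_r2)   # R1 verifies, re-tags for R2
assert hmac.compare_digest(t, tag(k_r1_r2, pkt))  # R2 verifies
```

Note that this sketch relies only on a keyed hash function, matching the abstract's point that no encryption or public-key signatures are needed for the integrity checks.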
Abstract: In this talk we will discuss the development of discrete-logarithm-based cryptosystems, including elliptic curve cryptosystems (ECC) and pairing-based cryptosystems. We will then discuss the foundational security of these systems, especially in light of recent progress on global methods for studying the discrete logarithm problem in both the classical setting and the elliptic curve setting.
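The discrete logarithm problem mentioned above is what underlies, for example, textbook Diffie-Hellman key exchange. The following toy Python sketch shows the exchange itself; the group and the exponents are illustrative only, and real systems use carefully chosen groups (or elliptic curves) with full-size random secrets:

```python
# Textbook Diffie-Hellman key exchange. Security rests on the hardness of
# recovering a from g^a mod p (the discrete logarithm problem).
# Toy parameters for illustration only.
p = 2**127 - 1   # a Mersenne prime, used here as a small demo modulus
g = 3            # illustrative generator choice

a = 1234567      # Alice's secret exponent (toy value)
b = 7654321      # Bob's secret exponent (toy value)

A = pow(g, a, p)           # Alice publishes g^a mod p
B = pow(g, b, p)           # Bob publishes g^b mod p

shared_alice = pow(B, a, p)   # Alice computes (g^b)^a
shared_bob = pow(A, b, p)     # Bob computes (g^a)^b

assert shared_alice == shared_bob  # both hold g^(ab) mod p
```

An eavesdropper sees only g, p, A, and B; recovering the shared secret from these is exactly an instance of the discrete logarithm (or Diffie-Hellman) problem in the chosen group.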
Abstract: Misuse-based intrusion detection systems rely on models of attacks to identify the manifestation of intrusive behavior. Therefore, the ability of these systems to reliably detect attacks is strongly affected by the quality of their models, which are often called "signatures." A perfect model would be able to detect all the instances of an attack without making mistakes. Unfortunately, writing good models (or good signatures) is hard. Attacks that exploit a specific vulnerability may do so in completely different ways, and writing models that take into account all possible variations is very difficult. For this reason, it would be beneficial to have testing tools that are able to evaluate the "goodness" of detection signatures. This work presents a technique to test and evaluate misuse detection models in the case of network-based intrusion detection systems. The testing technique is based on a mechanism that generates a large number of variations of an exploit by applying mutant operators to an exploit template. These mutant exploits are then run against a victim host protected by a network-based intrusion detection system. The system's results in detecting these variations provide a quantitative basis for evaluating the quality of the corresponding detection model.
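The mutation idea can be sketched as follows; the operators and the exploit template below are hypothetical placeholders for the traffic-level transformations an actual testing tool would apply:

```python
import itertools

# Illustrative mutation operators over a stand-in "exploit template" string.
def pad_with_nops(payload: str) -> str:
    return "NOP;" * 3 + payload                  # prepend filler

def split_request(payload: str) -> str:
    mid = len(payload) // 2
    return payload[:mid] + "|" + payload[mid:]   # fragment the payload

def case_flip(payload: str) -> str:
    return payload.swapcase()                    # vary letter case

OPERATORS = [pad_with_nops, split_request, case_flip]

def mutants(template: str, depth: int = 2):
    """Generate variants by composing up to `depth` mutation operators,
    in every order, starting from the exploit template."""
    seen = {template}
    for r in range(1, depth + 1):
        for ops in itertools.permutations(OPERATORS, r):
            m = template
            for op in ops:
                m = op(m)
            seen.add(m)
    return seen

variants = mutants("GET /vuln.cgi?cmd=attack")
print(len(variants))
```

Each variant would then be launched against an instrumented victim host, and the fraction of variants the IDS flags gives the quantitative quality measure the abstract describes.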
Abstract: In this talk, I consider the problem of private searching on streaming data, where we can efficiently implement searching for documents under secret criteria (such as the presence or absence of a hidden combination of hidden keywords) under various cryptographic assumptions. Our results can be viewed in a variety of ways: as a generalization of the notion of Private Information Retrieval (to more general queries, to a streaming environment, and to public-key program obfuscation); as positive results on privacy-preserving data mining; and as the delegation of hidden-program computation to other machines.
Joint work with William Skeith.
Abstract: We present a modular version of the random oracle methodology (for the design of cryptographic protocols) that emphasizes provable security and structured design principles, rather than just practicality. The methodology allows one to build and analyze high-level cryptographic constructions in a unified way that leads, at the same time, to:
- a theoretical protocol instantiation provably secure in the standard model of computation, and
- a very efficient practical instantiation provably secure in the random oracle model.
We demonstrate the viability of our methodology through the construction and analysis of three increasingly complex examples: encryption secure against chosen-plaintext attack, encryption secure against chosen-ciphertext attack, and a group signature scheme. We believe our methods give both a better understanding of known constructions whose security was previously established only in the random oracle model, and an opportunity to increase the impact of theoretical results in cryptographic practice.
(Joint work with Amit Sahai and Marc Fischlin)