Speaker: Paata Ivanishvili

Institution: UCI

Time: Wednesday, October 27, 2021 - 2:00pm to 3:00pm

Location: Rowland Hall 510R

Perhaps a very basic question one asks in learning theory is the following: we are given a function f on the hypercube {-1,1}^n, and we are allowed to query samples (X, f(X)), where X is uniformly distributed on {-1,1}^n. After obtaining the samples (X_1, f(X_1)), ..., (X_N, f(X_N)), we would like to construct a function h which approximates f up to an error epsilon (say in L^2). Of course, h is a random function, as it involves the i.i.d. random variables X_1, ..., X_N in its construction. Therefore, we want to construct an h which fails to approximate f with probability at most delta. So, given parameters epsilon, delta in (0,1), the goal is to minimize the number of random queries N. I will show that around log(n) random queries are sufficient to learn bounded "low-complexity" functions. Based on joint work with Alexandros Eskenazis.
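
To make the query model concrete, here is a minimal Python sketch of the setup described above: drawing uniform random queries (X, f(X)) from the hypercube and building an approximation h by estimating the low-degree Fourier coefficients of f from empirical averages. This is only the classical low-degree (Fourier-truncation) estimator, not the log(n)-query method of the talk, and the target function f, the dimension n, and the sample size N are hypothetical choices made purely for illustration.

import itertools
import random

# Hypothetical bounded "low-complexity" target on {-1,1}^n (a degree-2 polynomial).
n = 8
def f(x):
    return 0.5 * x[0] + 0.3 * x[1] * x[2]

def random_query():
    # Draw X uniformly from {-1,1}^n and return the sample (X, f(X)).
    x = tuple(random.choice((-1, 1)) for _ in range(n))
    return x, f(x)

def chi(S, x):
    # Fourier character chi_S(x) = prod_{i in S} x_i.
    p = 1
    for i in S:
        p *= x[i]
    return p

def learn_low_degree(N, degree):
    # Estimate each Fourier coefficient hat{f}(S) = E[f(X) chi_S(X)] with |S| <= degree
    # by an empirical average over N random queries; return the truncated expansion h.
    samples = [random_query() for _ in range(N)]
    coeffs = {}
    for d in range(degree + 1):
        for S in itertools.combinations(range(n), d):
            coeffs[S] = sum(y * chi(S, x) for x, y in samples) / N
    def h(x):
        return sum(c * chi(S, x) for S, c in coeffs.items())
    return h

h = learn_low_degree(N=2000, degree=2)
# Monte Carlo estimate of the squared L^2 error E[(f - h)^2].
test = [random_query() for _ in range(1000)]
err2 = sum((y - h(x)) ** 2 for x, y in test) / len(test)
print(f"estimated squared L^2 error: {err2:.4f}")

In this naive scheme the number of queries needed grows polynomially with n; the point of the talk is that, for bounded low-complexity functions, a more careful construction of h brings the query count down to roughly log(n).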