
Deterministic learning of DNF

Resource type
Thesis type
(Thesis) M.Sc.
Date created
2020-12-10
Authors/Contributors
Abstract
Our main result is a deterministic learning algorithm with membership queries that learns a disjunctive normal form (DNF) formula with a polynomial number of terms, to within an additive approximation error, in quasi-polynomial time n^{O(log n)}. With random examples under the uniform distribution, the learning algorithm of [LMN93] for DNFs runs in time n^{O(log^2 n)}. Our approach is to consider the Fourier expansion of the target DNF and to approximate its heavy Fourier coefficients. Our hypothesis is the sign of the sparse polynomial defined by the approximated coefficients. We present two approaches for building the sparse polynomial approximating the target DNF. First, we use Gopalan and Meka's [GMR13] PRG to deterministically approximate low-degree coefficients of the target DNF. Second, we generalize the result of [DETT10] to show that a general DNF can be fooled by a small-biased set, which allows us to approximate coefficients of any degree. We show that, under the assumption that there exists an ideal PRG with logarithmic seed length for general DNFs, we can derandomize the Goldreich-Levin algorithm to find all low-degree coefficients with large absolute values. Therefore, under the ideal PRG assumption, there exists a deterministic learning algorithm for DNFs that runs in the same time as [Man92], n^{O(log log n)}.
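For intuition, the following is a minimal, self-contained Python sketch (not taken from the thesis) of the "sign of a sparse polynomial" hypothesis described above: it computes the low-degree Fourier coefficients of a toy DNF exactly by brute-force enumeration over all inputs, keeps the heavy ones, and predicts with the sign of the resulting sparse polynomial. The thesis instead estimates these expectations deterministically via a PRG or a small-biased set; all function and parameter names below are illustrative assumptions.

```python
from itertools import combinations, product

# Toy illustration of the "sign of a sparse polynomial" hypothesis.
# Coefficients are computed exactly by enumerating {0,1}^n, which is only
# feasible for tiny n; the thesis replaces this with deterministic estimates.

def dnf(x, terms):
    """Evaluate a DNF given as a list of terms; each term is a list of
    (variable index, required bit) pairs. Returns +1 if some term is
    satisfied ("true") and -1 otherwise."""
    for term in terms:
        if all(x[i] == b for i, b in term):
            return 1
    return -1

def chi(S, x):
    """Fourier character chi_S(x) = (-1)^(sum of x_i over i in S)."""
    return -1 if sum(x[i] for i in S) % 2 else 1

def fourier_coefficient(S, terms, n):
    """Exact coefficient hat{f}(S) = E_x[f(x) chi_S(x)] by full enumeration."""
    total = 0
    for x in product((0, 1), repeat=n):
        total += dnf(x, terms) * chi(S, x)
    return total / 2 ** n

def learn_sparse_sign(terms, n, degree, threshold):
    """Collect all "heavy" coefficients up to the given degree and return the
    hypothesis h(x) = sign(sum_S hat{f}(S) chi_S(x))."""
    heavy = {}
    for d in range(degree + 1):
        for S in combinations(range(n), d):
            c = fourier_coefficient(S, terms, n)
            if abs(c) >= threshold:
                heavy[S] = c
    def h(x):
        value = sum(c * chi(S, x) for S, c in heavy.items())
        return 1 if value >= 0 else -1
    return h, heavy

# Example: f(x) = (x0 AND x1) OR (NOT x2) on n = 4 variables.
terms = [[(0, 1), (1, 1)], [(2, 0)]]
h, heavy = learn_sparse_sign(terms, n=4, degree=2, threshold=0.1)
errors = sum(h(x) != dnf(x, terms) for x in product((0, 1), repeat=4))
print(f"{len(heavy)} heavy coefficients, {errors}/16 disagreements")
```

The sketch reports how often the sparse-sign hypothesis disagrees with the target on the full hypercube; in the thesis the degree cutoff, heaviness threshold, and coefficient-estimation procedure are what determine the additive approximation error and the quasi-polynomial running time.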
Document
Identifier
etd21177
Copyright statement
Copyright is held by the author(s).
Permissions
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Supervisor or Senior Supervisor
Thesis advisor: Kabanets, Valentine
Language
English
Member of collection
Download file
etd21177.pdf (497.16 KB)
