diff --git a/text/thething/evaluation.tex b/text/thething/evaluation.tex
index 7b82d91..a114449 100644
--- a/text/thething/evaluation.tex
+++ b/text/thething/evaluation.tex
@@ -1,5 +1,5 @@
 \section{Evaluation}
-\label{sec:the-thing-eval}
+\label{sec:lmdk-eval}
 
 In this section we present the experiments that we performed on real and synthetic data sets.
 With the experiments on the synthetic data sets we show the privacy loss by our framework when tuning the size and statistical characteristics of the input {\thething} set $L$.
diff --git a/text/thething/problem.tex b/text/thething/problem.tex
index cbfab0b..dab6ee7 100644
--- a/text/thething/problem.tex
+++ b/text/thething/problem.tex
@@ -1,5 +1,5 @@
 \section{{\Thething} privacy}
-\label{sec:prob}
+\label{sec:lmdk-prob}
 
 {\Thething} privacy is based on differential privacy.
 For this reason, we revisit the definition and important properties of differential privacy before moving on to the main ideas of this paper.