\section{Selection of landmarks}
\label{sec:eval-lmdk-sel}
In this section, we present the experiments on the {\thething} selection methodology introduced in Section~\ref{subsec:lmdk-sel-sol}, on both the synthetic and the real data sets.
With the experiments on the synthetic data sets (Section~\ref{subsec:sel-utl}), we measure the normalized Euclidean and Wasserstein distances between the time series histograms of the actual and of the generated {\thething} sets, i.e., the sets extended with regular events, for various {\thething} distributions and percentages.
These measurements justify the design decisions behind the selection methodology that we presented in Section~\ref{subsec:lmdk-sel-sol}.
With the experiments on the real data sets (Section~\ref{subsec:sel-prv}), we evaluate the utility of our three {\thething} mechanisms in combination with the privacy-preserving {\thething} selection component, and compare it with the utility that the mechanisms achieve without this component (Figure~\ref{fig:real}).
\subsection{{\Thething} selection utility metrics}
\label{subsec:sel-utl}
Figure~\ref{fig:sel-dist} shows the normalized distance that we obtain when we utilize either (a)~the Euclidean or (b)~the Wasserstein distance metric to generate a {\thething} set that includes regular events.
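For reference, a standard formulation of the two metrics for a pair of histograms $h$ and $h'$ over $k$ bins is
\[
  d_E(h, h') = \sqrt{\sum_{i = 1}^{k} (h_i - h'_i)^2}
  \quad \text{and} \quad
  d_W(h, h') = \sum_{i = 1}^{k} \Bigl| \sum_{j = 1}^{i} (h_j - h'_j) \Bigr|,
\]
where $d_W$ is the one-dimensional (earth mover's) form of the Wasserstein distance; the distances in Figure~\ref{fig:sel-dist} are reported in normalized form, and we refer the reader to Section~\ref{subsec:lmdk-sel-sol} for the exact instantiation used by our selection methodology.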
\begin{figure}[htp]
\centering
\subcaptionbox{Euclidean\label{fig:sel-dist-norm}}{%
\includegraphics[width=.5\linewidth]{evaluation/sel-dist-norm}%
}%
\subcaptionbox{Wasserstein\label{fig:sel-dist-emd}}{%
\includegraphics[width=.5\linewidth]{evaluation/sel-dist-emd}%
}%
\caption{The normalized (a)~Euclidean, and (b)~Wasserstein distance of the generated {\thething} sets for different {\thething} percentages.}
\label{fig:sel-dist}
\end{figure}
Comparing the results of the Euclidean distance in Figure~\ref{fig:sel-dist-norm} with those of the Wasserstein distance in Figure~\ref{fig:sel-dist-emd}, we observe that, although both metrics share the same mean normalized distance of $0.4$, the Euclidean distance behaves more consistently across all of the considered {\thething} distributions: between the bimodal and the skewed distribution, the maximum difference is approximately $0.4$ for the Euclidean and $0.7$ for the Wasserstein distance.
Therefore, we choose the Euclidean distance metric for the implementation of the privacy-preserving {\thething} selection in Section~\ref{subsec:lmdk-sel-sol}.
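To make the computation of these normalized distances concrete, we include a minimal sketch below; it assumes probability histograms (i.e., bin frequencies that sum to $1$), relies on \texttt{numpy} and \texttt{scipy}, and normalizes each distance by its maximum attainable value, which is an assumption of the sketch and not necessarily identical to the normalization of our implementation.
\begin{verbatim}
# Minimal sketch: normalized Euclidean and Wasserstein distances
# between two histograms (illustrative; not our actual implementation).
import numpy as np
from scipy.stats import wasserstein_distance

def normalized_distances(h, h_prime):
    h, h_prime = np.asarray(h, float), np.asarray(h_prime, float)
    bins = np.arange(len(h))                  # bin indices as support
    euc = np.linalg.norm(h - h_prime)         # Euclidean distance
    was = wasserstein_distance(bins, bins,    # 1-D earth mover's distance
                               u_weights=h, v_weights=h_prime)
    # Normalize by the maximum attainable value (assumes probability
    # histograms over len(h) bins).
    return euc / np.sqrt(2.0), was / (len(h) - 1)
\end{verbatim}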
\subsection{Budget allocation and {\thething} selection}
\label{subsec:sel-prv}
Figure~\ref{fig:real-sel} exhibits the performance of Skip, Uniform, and Adaptive (see Section~\ref{subsec:lmdk-mechs}) in combination with the {\thething} selection component.
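Recall that we measure utility as the mean absolute error between the original and the released time series, i.e., for a series of length $T$,
\[
  \mathrm{MAE} = \frac{1}{T} \sum_{t = 1}^{T} \left| x_t - \tilde{x}_t \right|,
\]
where $x_t$ and $\tilde{x}_t$ denote the original and the perturbed value at timestamp $t$; following the nature of each data set, it is reported as a percentage for Copenhagen, in kWh for HUE, and in meters for T-drive.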
\begin{figure}[htp]
\centering
\subcaptionbox{Copenhagen\label{fig:copenhagen-sel}}{%
\includegraphics[width=.5\linewidth]{evaluation/copenhagen-sel}%
}%
\hspace{\fill}
\subcaptionbox{HUE\label{fig:hue-sel}}{%
\includegraphics[width=.5\linewidth]{evaluation/hue-sel}%
}%
\subcaptionbox{T-drive\label{fig:t-drive-sel}}{%
\includegraphics[width=.5\linewidth]{evaluation/t-drive-sel}%
}%
\caption{The mean absolute error of the released data for different {\thething} percentages, (a)~as a percentage for Copenhagen, (b)~in kWh for HUE, and (c)~in meters for T-drive.}
\label{fig:real-sel}
\end{figure}
Compared with the utility of the mechanisms without the {\thething} selection component (Figure~\ref{fig:real}), we notice a slight deterioration for all three of them.
This is expected, since part of the available privacy budget is now allocated to the privacy-preserving {\thething} selection component, which in turn increases the number of {\thethings}.
Consequently, less privacy budget remains available for data publishing throughout the time series, even for the extreme {\thething} percentages of $0$\% and $100$\%.
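To illustrate this effect, assume, purely for the sake of the example, that the total budget $\varepsilon$ is split evenly between the selection component and data publishing, and that the publishing budget is further divided uniformly among the {\thethings}; then, with $n$ {\thethings} originally and $n' > n$ after the extension, each release is perturbed with Laplace noise of scale $\frac{2 n' \Delta f}{\varepsilon}$ instead of $\frac{n \Delta f}{\varepsilon}$, where $\Delta f$ denotes the query sensitivity.
The even split and the uniform division are simplifying assumptions of this illustration; the actual allocation of each mechanism is the one presented in Section~\ref{subsec:lmdk-mechs}.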
Skip performs best in our experiments on HUE, because the energy consumption values span a narrow range while the scale of the injected Laplace noise is high, and Skip avoids most of this noise thanks to the approximation that it employs.
However, on the Copenhagen data set and on T-drive, Skip attains a greater mean absolute error than the user-level protection scheme, and therefore offers no utility benefit over user-level protection.
Overall, Adaptive performs consistently in terms of utility on all of the data sets that we experimented with and always outperforms user-level privacy protection.
Thus, we select it as the most suitable mechanism to use in general.