evaluation: Minor corrections in thething

Manos Katsomallos 2021-10-10 22:28:18 +02:00
parent 2e08f9d0b4
commit f84cfb205d


@@ -3,16 +3,11 @@
% \kat{After discussing with Dimitris, I thought you are keeping one chapter for the proposals of the thesis. In this case, it would be more clean to keep the theoretical contributions in one chapter and the evaluation in a separate chapter. }
% \mk{OK.}
In this section, we present the experiments that we performed on real and synthetic data sets to test the methodology that we presented in Section~\ref{subsec:lmdk-sol}.
With the experiments on the real data sets (Section~\ref{subsec:lmdk-expt-bgt}), we show the performance, in terms of utility, of our three {\thething} mechanisms.
With the experiments on the synthetic data sets (Section~\ref{subsec:lmdk-expt-cor}), we show the privacy loss of our framework when tuning the size and statistical characteristics of the input {\thething} set $L$, with special emphasis on how the privacy loss under temporal correlation is affected by the number and distribution of the {\thethings}.
Notice that, in our experiments, the cases where $0\%$ and $100\%$ of the events are {\thethings} exhibit the same behavior as event- and user-level privacy respectively.
This is due to the fact that, when there are no {\thethings}, at each timestamp we take into account only the data items of the current timestamp and ignore the rest of the time series (event-level).
Whereas, when every timestamp corresponds to a {\thething}, we consider and protect all the events throughout the entire series (user-level).
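To make this concrete, consider a simplified allocation (for illustration only; not necessarily the exact allocation of each of our mechanisms) in which the available privacy budget $\varepsilon$ is split uniformly over the current event and the {\thethings} in $L$, i.e., at every timestamp $t$ of a series of length $T$ we spend
\[
  \varepsilon_t = \frac{\varepsilon}{\left|L \cup \{t\}\right|} .
\]
For $L = \emptyset$ this yields $\varepsilon_t = \varepsilon$, i.e., the full budget is spent on every single event (event-level), whereas for $L = \{1, \dots, T\}$ it yields $\varepsilon_t = \frac{\varepsilon}{T}$, i.e., the budget is spread over the entire series (user-level).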
\subsection{Budget allocation schemes}
\label{subsec:lmdk-expt-bgt}
@@ -42,7 +37,7 @@ The Skip model excels, compared to the others, at cases where it needs to approx
The combination of the low range in HUE ($[0.28$, $4.45]$ with an average of $0.88$kWh) and the large scale in the Laplace mechanism results in a low mean absolute error for Skip (Figure~\ref{fig:hue}).
In general, a scheme that favors approximation over noise injection would achieve a better performance in this case.
However, the Adaptive model performs far better than Uniform and strikes a nice balance between event- and user-level protection for all {\thething} percentages.
In the T-drive data set (Figure~\ref{fig:t-drive}), the Adaptive mechanism outperforms Uniform by $10$\%--$20$\% for all {\thething} percentages greater than $40$ and Skip by more than $20$\%.
The lower density (average distance of $623$ meters) of the T-drive data set has a negative impact on the performance of Skip.
In general, we can claim that Adaptive is the most reliable and best-performing mechanism with minimal tuning, if we take into consideration the drawbacks of the Skip mechanism mentioned in Section~\ref{subsec:lmdk-mechs}.
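The noise-versus-approximation trade-off can be illustrated with a small, self-contained simulation; the series length, budget, and value range below are hypothetical and only loosely inspired by HUE, and approximating with the series mean is a crude stand-in for the approximation step of a Skip-like scheme rather than our actual mechanism.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

T = 1000               # number of timestamps (hypothetical)
eps = 0.1              # per-timestamp privacy budget (hypothetical)
sensitivity = 4.45     # query sensitivity (hypothetical, kWh)

# A low-range signal, loosely mimicking household consumption readings.
signal = rng.uniform(0.28, 4.45, size=T)

# Perturbing every timestamp with Laplace noise of scale sensitivity/eps.
noisy = signal + rng.laplace(scale=sensitivity / eps, size=T)
mae_noise = np.mean(np.abs(noisy - signal))

# Crudely approximating every timestamp with the series mean instead.
mae_approx = np.mean(np.abs(signal.mean() - signal))

print(f"MAE with Laplace noise : {mae_noise:.2f}")
print(f"MAE with approximation : {mae_approx:.2f}")
\end{verbatim}
With a noise scale that exceeds the entire value range by an order of magnitude, even such a naive approximation yields a far lower error, which is consistent with the good performance of Skip on HUE.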
@@ -54,10 +49,6 @@ Moreover, designing a data-dependent sampling scheme would possibly result in be
Figure~\ref{fig:avg-dist} shows a comparison of the average temporal distance of the events from the previous/next {\thething} or the start/end of the time series for various distributions in synthetic data.
More specifically, we count for every event the total number of events between itself and the nearest {\thething} or the series edge.
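A minimal sketch of this counting, with hypothetical landmark positions and independent of our actual implementation, is the following.
\begin{verbatim}
import numpy as np

def avg_event_landmark_distance(T, landmarks):
    # For every event, count the events lying strictly between it and the
    # nearest landmark or series edge, then average over the whole series.
    anchors = sorted(set(landmarks)) + [-1, T]  # landmarks plus the two edges
    dists = [max(min(abs(t - a) for a in anchors) - 1, 0) for t in range(T)]
    return float(np.mean(dists))

# Hypothetical example: a series of 100 events with a landmark every 10 events.
print(avg_event_landmark_distance(100, range(0, 100, 10)))
\end{verbatim}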
\begin{figure}[htp]
\centering
@@ -66,14 +57,13 @@ On the contrary, distributing the {\thethings} at one part of the sequence, as i
\label{fig:avg-dist}
\end{figure}
We observe that the uniform and bimodal distributions tend to limit the regular event--{\thething} distance.
This is due to the fact that the former scatters the {\thethings}, while the latter distributes them on both edges, leaving a shorter space uninterrupted by {\thethings}.
% and as a result they reduce the uninterrupted space by landmarks in the sequence.
On the contrary, distributing the {\thethings} over one part of the sequence, as in the skewed or symmetric distributions, creates a wider space without {\thethings}.
Figure~\ref{fig:dist-cor} illustrates a comparison among the aforementioned distributions regarding the overall privacy loss under (a)~weak, (b)~moderate, and (c)~strong temporal correlation degrees.
The line shows the overall privacy loss---for all cases of {\thethings} distribution---without temporal correlation.
\begin{figure}[htp]
\centering
@@ -91,3 +81,10 @@ The privacy loss under a weak correlation degree converge.
The line shows the overall privacy loss without temporal correlation.}
\label{fig:dist-cor}
\end{figure}
In combination with Figure~\ref{fig:avg-dist}, we conclude that a greater average event--{\thething} distance in a distribution can result in greater overall privacy loss under moderate and strong temporal correlation.
This is due to the fact that the backward/forward privacy loss accumulates more over time in wider spaces without {\thethings} (see Section~\ref{sec:correlation}).
Furthermore, the behavior of the privacy loss is as expected regarding the temporal correlation degree.
A stronger correlation degree generates higher privacy loss while widening the gap between the different distribution cases.
On the contrary, a weaker correlation degree makes it harder to differentiate among the {\thethings} distributions; indeed, the privacy loss values under a weak correlation degree converge.
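The following toy simulation conveys this intuition; it uses a crude geometric carry-over model of correlation, chosen purely for illustration and not the backward/forward formulation of Section~\ref{sec:correlation}, and all parameter values are hypothetical.
\begin{verbatim}
# Toy model: at each timestamp of a landmark-free gap, a fraction `c` of the
# previously accumulated loss carries over through the temporal correlation,
# on top of the budget `eps` spent at that timestamp.
def accumulated_loss(gap_length, eps, c):
    alpha = 0.0
    for _ in range(gap_length):
        alpha = eps + c * alpha
    return alpha

eps = 0.1
for c in (0.1, 0.5, 0.9):   # weak, moderate, strong correlation (hypothetical)
    losses = [round(accumulated_loss(g, eps, c), 3) for g in (5, 20, 80)]
    print(f"c = {c}: loss over gaps of 5/20/80 events -> {losses}")
\end{verbatim}
Even in this toy setting, longer gaps and stronger correlation lead to higher accumulated loss, while under weak correlation the values for the different gap lengths are almost indistinguishable.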