In this section, we present the experiments that we performed on real and synthetic data sets to evaluate the methodology presented in Section~\ref{subsec:lmdk-sol}.
With the experiments on the real data sets (Section~\ref{subsec:lmdk-expt-bgt}), we demonstrate the data utility of our three {\thething} privacy schemes: \texttt{Skip}, \texttt{Uniform}, and \texttt{Adaptive}.
We define data utility as the mean absolute error introduced by the privacy mechanism.
We compare against the event- and user-level differential privacy protection levels and show that, in the general case, {\thething} privacy allows for better data utility than user-level differential privacy while balancing between the two protection levels.
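To make the utility metric concrete, the following minimal sketch computes the mean absolute error between a true and a privately released time series; the function name is ours, for illustration only.

```python
# Minimal sketch: data utility measured as the mean absolute error (MAE)
# between the true and the privately released time series.
# The function name is illustrative, not part of our implementation.

def mean_absolute_error(true_series, released_series):
    """Average absolute deviation introduced by the privacy mechanism."""
    assert len(true_series) == len(released_series)
    return sum(abs(t - r) for t, r in zip(true_series, released_series)) \
        / len(true_series)

# Example: a released series deviating by 0.5 at two of three timestamps.
print(mean_absolute_error([1.0, 2.0, 3.0], [1.5, 2.0, 2.5]))  # ~0.333
```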
We observe that a greater average {\thething}--regular event distance in a time series can result in greater temporal privacy loss under moderate and strong temporal correlation.
Figure~\ref{fig:real} exhibits the performance of the three schemes, \texttt{Skip}, \texttt{Uniform}, and \texttt{Adaptive} applied on the three data sets that we study.
Notice that, in the cases where $0\%$ or $100\%$ of the events are {\thethings}, we get the same behavior as in event- and user-level privacy, respectively.
This is because, when there are no {\thethings}, at each timestamp we take into account only the data items of the current timestamp and ignore the rest of the time series (event-level), whereas, when every event is a {\thething}, we account for the entire time series at every timestamp (user-level).
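The two extremes can be illustrated with a small sketch of uniform budget allocation, a simplification of the \texttt{Uniform} scheme in which the {\thethings} and the currently published event share the total privacy budget; the exact allocation of our implementation may differ.

```python
# Simplified sketch of uniform budget allocation: the total privacy budget
# is shared by the {\thethings} plus the currently published event.
# This is an illustration, not the exact allocation of our implementation.

def uniform_budget_per_event(total_epsilon, n_landmarks):
    # n_landmarks counts the landmarks other than the current event.
    return total_epsilon / (n_landmarks + 1)

eps, T = 1.0, 10  # total budget and time series length (hypothetical values)

# 0% landmarks: each release receives the whole budget (event-level privacy).
assert uniform_budget_per_event(eps, 0) == eps

# 100% landmarks: the current event is itself a landmark, so the budget is
# shared across the entire series (user-level privacy).
assert uniform_budget_per_event(eps, T - 1) == eps / T
```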
In general, we notice that, for this data set, due to the application of the randomized response technique, it is more beneficial either to invest more privacy budget per event or to prefer approximation over introducing randomization.
The combination of the small range of measurements in HUE ($[0.28, 4.45]$ with an average of $0.88$kWh, Figure~\ref{fig:hue}) and the correspondingly large scale of the Laplace mechanism allows schemes that favor approximation over noise injection to achieve better data utility.
Hence, \texttt{Skip} achieves a consistently low mean absolute error.
In T-drive (Figure~\ref{fig:t-drive}), \texttt{Adaptive} outperforms \texttt{Uniform} by $10$\%--$20$\% for all {\thething} percentages greater than $40$\% and \texttt{Skip} by more than $20$\%.
The lower density (average distance of $623$m) of the T-drive data set has a negative impact on the performance of \texttt{Skip}, because republishing a previously perturbed value is now less accurate than perturbing the current location.
This advantage of \texttt{Adaptive} becomes even more important if we take into consideration the drawbacks of the \texttt{Skip} mechanism (mentioned in Section~\ref{subsec:lmdk-mechs}), particularly in spatiotemporal data, e.g., sporadic location data publishing~\cite{gambs2010show, russell2018fitness} or misapplying location cloaking~\cite{xssfopes2020tweet}, which could lead to the indication of privacy-sensitive attribute values.
Moreover, implementing a more advanced, data-dependent sampling method that accounts for changes in the trends of the input data and adapts its rate accordingly would possibly improve the performance of \texttt{Adaptive}; verifying this requires further experimentation, which we leave for future work.
As previously mentioned, temporal correlation is inherent in continuous publishing, and it causes additional privacy loss in privacy-preserving time series publishing.
In this section, we study the effect that the distance of the {\thethings} from each regular event has on the privacy loss in the presence of temporal correlation.
Figure~\ref{fig:avg-dist} shows a comparison of the average temporal distance of the events from the previous/next {\thething} or the start/end of the time series for various distributions in our synthetic data.
More specifically, we model the distance of an event as the number of events between itself and the nearest {\thething} or the edge of the time series.
\caption{Average temporal distance of regular events from the {\thethings} for different {\thething} percentages within a time series in various {\thething} distributions.}
We observe that the uniform and bimodal distributions tend to limit the regular event--{\thething} distance.
This is due to the fact that the former scatters the {\thethings} throughout the time series, while the latter distributes them on both edges, leaving a shorter space uninterrupted by {\thethings}.
On the contrary, distributing the {\thethings} at one part of the sequence, as in the skewed and symmetric distributions, creates a wider space without {\thethings}.
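The distance measure and the effect of the distributions can be sketched as follows; the landmark placements are simplified illustrations of the distributions, not our actual synthetic data generator.

```python
# Sketch of the distance measure: for each regular event, the number of
# events strictly between it and the nearest {\thething} or series edge.
# Landmark placements below are simplified illustrations, not the actual
# synthetic data generator.

def avg_regular_distance(T, landmarks):
    lm = set(landmarks)
    anchors = sorted(lm | {-1, T})  # the series edges also act as anchors
    dists = [min(abs(t - a) for a in anchors) - 1
             for t in range(T) if t not in lm]
    return sum(dists) / len(dists)

T = 100
uniform = list(range(0, T, 10))  # landmarks scattered evenly
skewed = list(range(10))         # all landmarks at one edge

# Scattering the landmarks limits the regular event--landmark distance.
assert avg_regular_distance(T, uniform) < avg_regular_distance(T, skewed)
```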
Figure~\ref{fig:dist-cor} illustrates a comparison among the aforementioned distributions regarding the temporal privacy loss under (a)~weak, (b)~moderate, and (c)~strong temporal correlation degrees.
Indeed, the wider the space without {\thethings}, the higher the temporal privacy loss, due to the fact that the backward/forward privacy loss accumulates more over time in such spaces (see Section~\ref{sec:correlation}).
Furthermore, the behavior of the privacy loss is as expected regarding the temporal correlation degree: a stronger correlation degree generates higher privacy loss while widening the gap between the different distribution cases.
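A toy model (not the exact backward/forward computation of Section~\ref{sec:correlation}) captures both observations: each event contributes its own budget plus a correlated fraction of the accumulated past loss, so the loss grows with both the gap width and the correlation degree.

```python
# Toy model of temporal privacy loss accumulation over a gap of regular
# events; NOT the exact backward/forward loss formula of the thesis.
# Each event contributes its budget plus a correlated fraction c of the
# loss accumulated so far (c in [0, 1) plays the correlation degree).

def accumulated_loss(eps_per_event, gap, c):
    loss = 0.0
    for _ in range(gap):
        loss = eps_per_event + c * loss
    return loss

eps = 0.1
weak, moderate, strong = 0.1, 0.5, 0.9

# Stronger correlation yields higher loss over the same gap...
assert accumulated_loss(eps, 20, weak) < accumulated_loss(eps, 20, moderate) \
    < accumulated_loss(eps, 20, strong)
# ...and a wider gap yields higher loss for the same correlation degree.
assert accumulated_loss(eps, 10, strong) < accumulated_loss(eps, 40, strong)
```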