\section{Selection of events}
\label{sec:theotherthing}

In Section~\ref{sec:thething}, we introduced the notion of {\thething} events in privacy-preserving time series publishing. The differentiation between regular and {\thething} events stipulates a privacy budget allocation that deviates from the application of existing differential privacy protection levels. Based on this novel event categorization, we designed three schemes (Section~\ref{subsec:lmdk-mechs}) that achieve {\thething} privacy. In doing so, we assumed that the timestamps in the {\thething} set $L$ are not privacy-sensitive, and therefore used them in our models as is.

This assumption may pose a direct or indirect privacy risk to the users. For the former, consider the case where we wish to publish $L$ as complementary information to the release of the event values. For the latter, a potentially adversarial data analyst may infer $L$ by observing the values of the privacy budget, which usually accompanies the data release as an indicator of the privacy guarantee to the users and as an estimate of the data utility to the analysts. Hence, in both cases, a user-defined $L$, which is supposed to facilitate configurable privacy protection, could end up posing a privacy risk to the very users it is meant to protect. In Example~\ref{ex:lmdk-risk}, we demonstrate the extreme case of applying the \texttt{Skip} {\thething} privacy scheme from Figure~\ref{fig:lmdk-skip}, which approximates {\thethings} with the latest data release and invests all of the available privacy budget in regular events.

\begin{example}
\label{ex:lmdk-risk}
Figure~\ref{fig:lmdk-risk} shows the privacy risk that the application of a {\thething} privacy scheme that nullifies or approximates outputs, similar to \texttt{Skip}, might cause. We point out in red the details that might cause indirect information inference.
In this extreme case, the minimization of the privacy budget, in combination with nullifying the output (either by not publishing at all or by adding excessive noise) or approximating the current output with previously released outputs, might hint to an adversary that the current event is a {\thething}.

\begin{figure}[htp]
  \centering
  \includegraphics[width=.75\linewidth]{problem/lmdk-risk}
  \caption{The privacy risk (highlighted in red) that the application of the {\thething} privacy \texttt{Skip} scheme might pose.}
  \label{fig:lmdk-risk}
\end{figure}

Apart from the privacy budget that we invested in {\thethings}, we can observe a pattern in the budgets of regular events as well. Therefore, an adversary who observes the values of the privacy budget can easily infer not only the number but also the exact temporal positions of the {\thethings}.
\end{example}

\SetKwInput{KwResult}{Output}
\SetKwData{diffCur}{diffCur}
\SetKwData{diffMin}{diffMin}
\SetKwData{evalCur}{evalCur}
\SetKwData{evalOrig}{evalOrig}
\SetKwData{evalSum}{evalSum}
\SetKwData{h}{h}
\SetKwData{hi}{h$_i$}
\SetKwData{hist}{hist}
\SetKwData{histCur}{histCur}
\SetKwData{histTmp}{histTmp}
\SetKwData{metricCur}{metricCur}
\SetKwData{metricOrig}{metricOrig}
\SetKwData{opt}{opt}
\SetKwData{opti}{opt$_i$}
\SetKwData{opts}{opts}
\SetKwData{optim}{optim}
\SetKwData{optimi}{optim$_i$}
\SetKwData{reg}{reg}
\SetKwFunction{calcMetric}{calcMetric}
\SetKwFunction{evalSeq}{evalSeq}
\SetKwFunction{getCombs}{getCombs}
\SetKwFunction{getDiff}{getDiff}
\SetKwFunction{getHist}{getHist}
\SetKwFunction{getOpts}{getOpts}
\SetKwFunction{getNorm}{getNorm}
\SetKwFunction{len}{len}
\SetKwFunction{sumHist}{sum}

\input{problem/theotherthing/contribution}
\input{problem/theotherthing/problem}
\input{problem/theotherthing/solution}
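To make the budget-observation attack of Example~\ref{ex:lmdk-risk} concrete, the following is a minimal sketch in Python of how an adversary could recover {\thething} positions from a published sequence of per-timestamp budgets. The function name, the budget values, and the detection rule (flagging budgets well below the mean) are illustrative assumptions, not part of the schemes above; they merely capture the observation that, under a \texttt{Skip}-like allocation, {\thethings} receive (near) zero budget and thus stand out against the roughly uniform budgets of regular events.

```python
# Illustrative sketch of the budget-observation inference attack.
# The budgets, threshold, and function name are hypothetical.

def infer_landmarks(budgets):
    """Flag timestamps whose published budget is markedly lower than the rest.

    Under a Skip-like allocation, landmark timestamps receive (near) zero
    budget, so a simple relative threshold suffices to expose them.
    """
    mean_budget = sum(budgets) / len(budgets)
    # Hypothetical rule: flag any budget well below the mean.
    return [t for t, eps in enumerate(budgets) if eps < 0.5 * mean_budget]

# Example: regular events receive 0.2 each; landmarks receive ~0 budget.
budgets = [0.2, 0.2, 0.0, 0.2, 0.0, 0.2]
print(infer_landmarks(budgets))  # → [2, 4]
```

Note that the attack needs no access to the released values themselves: the budget sequence alone reveals both the number and the temporal positions of the {\thethings}, which is exactly the indirect risk discussed above.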