diff --git a/abstract.tex b/abstract.tex
index 19901f7..fbcf43b 100644
--- a/abstract.tex
+++ b/abstract.tex
@@ -1,2 +1,8 @@
 \chapter{Abstract}
-The abstract goes here.
+
+Sensors, portable devices, and location-based services generate massive amounts of geo-tagged, location- and user-related data on a daily basis.
+Such data are useful in numerous application domains, from healthcare to intelligent buildings, and from crowdsourced data collection to traffic monitoring.
+A lot of these data refer to the activities of, and carry information about, individuals; thus, their manipulation and sharing inevitably raise concerns about the privacy of the individuals involved.
+To address this problem, researchers have already proposed various seminal techniques for the protection of users' privacy.
+However, the continual fashion in which data are generated nowadays, and the high availability of external sources of information, pose more threats and add extra challenges to the problem.
+% In this thesis, we are concerned with and present the work done on data privacy in support of continuous data publication, and report on the proposed solutions, with a special focus on solutions concerning location or georeferenced data.
diff --git a/acknowledgements.tex b/acknowledgements.tex
index f8eb472..51d5341 100644
--- a/acknowledgements.tex
+++ b/acknowledgements.tex
@@ -1,8 +1,9 @@
 \chapter{Acknowledgements}
-Upon the completion of my thesis, I would like to express my deep gratitude to my research supervisors for their patient guidance, enthusiastic encouragement and useful critiques of this research work. Besides my advisors, I would like to thank the reporters as well as the rest of the jury for their invaluable contribution.
+Upon the completion of my thesis, I would like to express my deep gratitude to my research supervisors for their patient guidance, enthusiastic encouragement and useful critiques of this research work.
+Besides my advisors, I would like to thank the reporters as well as the rest of the jury for their invaluable contribution.
 A special thanks to my department’s faculty, staff and fellow students for their valuable assistance whenever needed and for creating a pleasant and creative environment during my studies.
 Last but not least, I wish to thank my family and friends for their unconditional support and encouragement all these years.
-Pontoise, September X, 2019
+Cergy-Pontoise, MM DD, 2019
diff --git a/background.tex b/background.tex
new file mode 100644
index 0000000..633f9c2
--- /dev/null
+++ b/background.tex
@@ -0,0 +1,360 @@
+\chapter{Background}
+\label{ch:background}
+
+In this chapter, we introduce some relevant terminology and background knowledge around the problem of continuous publication of private data sets.
+
+
+\section{Data}
+\label{sec:data}
+
+
+\subsection{Categories}
+\label{subsec:data-categories}
+
+As this thesis is about privacy, the data that we are interested in contain information about individuals and their actions, also called \emph{microdata}, in their raw, usually tabular form.
+When the data are aggregated or transformed using a statistical analysis task, we talk about \emph{statistical data}.
+
+Data in either of these two forms may have a special property called~\emph{continuity}, and data with continuity are further classified into the following three categories:
+
+\begin{itemize}
+ \item \emph{Data stream} is a possibly \emph{unbounded} series of data.
+ \item \emph{Sequential data} is a series of \emph{dependent} data, e.g.,~trajectories.
+ \item \emph{Time series} is a series of data points \emph{indexed in time}.
+\end{itemize}
+
+Note that data may fall into more than one of these categories, e.g.,~we may have data streams of dependent data, or dependent data indexed in time.
+
+
+\subsection{Processing}
+\label{subsec:data-processing}
+
+The traditional flow of crowdsourced data, as shown in Figure~\ref{fig:data-flow}, and witnessed in the majority of works that we cover, corresponds to a centralized architecture, i.e.,~data are collected, processed and anonymized, and published by an intermediate (trusted) entity~\cite{mcsherry2009privacy, blocki2013differentially, johnson2018towards}.
+On the other hand, we are also aware of several solutions proposed to support a decentralized privacy preserving scheme~\cite{andres2013geo, erlingsson2014rappor, katsomallos2017open}.
+According to this approach, the anonymization process takes place locally, on the side of the data producers, before sending the data to any intermediate entity or, in some cases, directly to the data consumers.
+In this way, the resulting anonymization procedure is less dependent on third-party entities, avoiding the risk of unpredicted privacy leakage from a compromised data publisher.
+Contrary to centralized architectures, most such multi-party computation approaches fail to generate elaborate data statistics, to offer a complete and up-to-date view of the events happening on the data producers' side, and thus, to maximize the utility of the available data.
+For this reason, the centralized paradigm is more popular and hence, in this thesis we also focus on works in this area.
+
+A data aggregation or a privacy preservation process is performed in one of the following two prominent data processing paradigms:
+
+\begin{itemize}
+ \item \emph{Batch} allows the --- usually offline --- processing of a (large) block of data, collected over a period of time, which could as well be a complete data set.
+ Batch processing excels at queries that require answers with high accuracy, since decisions are made based on observations on the whole data set.
+ \item \emph{Streaming} refers to the --- usually online --- processing of a possibly unbounded sequence of data items of small volume (in contrast to batch).
+ By nature, the processing is performed on the data subsets available at each point in time, and is ideal for time-critical queries that require (near) real-time analytics.
+\end{itemize}
+
+
+\subsection{Publishing}
+\label{subsec:data-publishing}
+
+Either raw or processed, data can be subsequently published using one of the following publication methods:
+
+\begin{itemize}
+ \item \emph{One-shot} is the one-off publication of the (whole) data set at one time point.
+ \item \emph{Continuous} is the publishing of data sets in an uninterrupted manner.
+ The continuous publication scheme can be organized as shown below:
+ \begin{itemize}
+ \item \emph{Continual} refers to the publishing of data, characterized by some periodicity.
+ Slightly abusing the terminology, the terms `continuous' and `continual' often appear interchangeably in the literature.
+ \item \emph{Sequential} refers to the publishing of views of an original data set or updates thereof, one after the other.
+ By definition, the different releases are related to each other.
+ \begin{itemize}
+ \item \emph{Incremental} is the sequential publishing of the $\Delta$ (i.e.,~the difference) over the previous data set release.
+ \end{itemize}
+ \end{itemize}
+\end{itemize}
+
+
+\section{Privacy}
+\label{sec:privacy}
+
+When personal data are publicly released, either as microdata or statistical data, individuals' privacy can be compromised.
+In the literature, this compromise is known as \emph{information disclosure} and is usually categorized into \emph{identity} and \emph{attribute} disclosure~\cite{li2007t}.
+
+\begin{itemize}
+ \item \emph{Identity}: an individual is linked to a particular record, with a probability higher than a desired threshold.
+ \item \emph{Attribute}: new information (attribute value) about an individual is revealed.
+\end{itemize}
+
+Note that identity disclosure can result in attribute disclosure, and vice versa.
+
+
+\subsection{Attacks}
+\label{subsec:privacy-attacks}
+
+Information disclosure is facilitated by \emph{adversarial attacks}, i.e.,~by combining supplementary knowledge available to \emph{adversaries} with the released data, or by setting unrealistic assumptions when designing the privacy preserving algorithms.
+Below we list example attacks that appear in the works that we review:
+
+\begin{itemize}
+ \item Knowledge about the sensitive attribute domain.
+ Here we can identify \emph{homogeneity and skewness} attacks~\cite{machanavajjhala2006diversity,li2007t}, based on knowledge of statistics on the sensitive attribute values, and the \emph{similarity} attack, based on semantic similarity between sensitive attribute values.
+ \item `Random' models of reasoning that make unrealistic assumptions in many scenarios, such as the random world model, the i.i.d.\ model, or the independent-tuples model.
+ These fall under de Finetti's attack~\cite{kifer2009attacks}.
+ \item External data sources, e.g.,~geographical, demographic or other supplementary information.
+ Such~\emph{background knowledge} enables the \emph{linkage attack}~\cite{narayanan2008robust}, which links individuals with certain records or attributes.
+ \item Previous releases of the same and/or related data sets, i.e.,~\emph{temporal} and \emph{complementary release} attacks~\cite{sweeney2002k}.
+ In this category, we can also identify the \emph{unsorted matching} attack~\cite{sweeney2002k}, which is possible when the original tuple ordering is preserved in different releases of the data set.
+ \item \emph{Data correlations} derived from previous data releases and/or other external sources~\cite{kifer2011no, chen2014correlated, zhu2015correlated, liu2016dependence, zhao2017dependent}.
+ In the literature that we review, the most prominent types of data correlations are:
+ \begin{itemize}
+ \item \emph{Spatiotemporal}~\cite{gotz2012maskit, fan2013differentially, xiao2015protecting, cao2017quantifying, ma2017plp}, appearing when processing time series or sequences of human activities with geolocation characteristics.
+ \item \emph{Feature}~\cite{ghinita2009preventing}, uncovered among features of released data sets, and
+ \item \emph{Serial} or \emph{autocorrelations}~\cite{li2007hiding, fan2013differentially, erdogdu2015privacy, wang2016rescuedp, wang2017cts}, characterized by dependencies between the elements in one series of data.
+ \end{itemize}
+\end{itemize}
+
+The first three categories of attacks mentioned above have been addressed by several works in one-shot privacy preserving data publishing.
+As the continuous publishing scheme is more relevant and realistic nowadays, more recent works deal with the latter three types of attacks that take into account different releases.
+
+
+\subsection{Levels}
+\label{subsec:privacy-levels}
+
+There are three levels of protection that the data publisher can consider: \emph{user-}, \emph{event-}, and \emph{$w$-event} privacy.
+An \emph{event} is a (user, sensitive value) pair, e.g.,~the user $a$ is at location $l$.
+\emph{User-}, and \emph{event-} privacy~\cite{dwork2010differential} are the main privacy levels; the former guarantees that all the events of any user \emph{for all timestamps} are protected, while the latter ensures that any single event \emph{at a specific timestamp} is protected.
+Moreover, \emph{$w$-event}~\cite{kellaris2014differentially} attempts to bridge the gap between event and user level privacy in streaming settings, by protecting any event sequence of any user within a window of $w$ timestamps.
+$w$-event is narrower than user level privacy, since it does not hide multiple event sequences from the same user, but when $w$ is set to infinity, the $w$-event and user level notions converge.
+Note that the described levels have been coined in the context of \emph{differential privacy}~\cite{dwork2006calibrating}; nevertheless, they may apply to other privacy protection techniques as well.
+
+
+\subsection{Seminal works}
+\label{subsec:privacy-seminal}
+
+Next, we visit some of the most important methods proposed in the literature for data privacy (not necessarily defined for continuous data set publication).
+Following the categorization of anonymization techniques as defined in~\cite{wang2010privacy}, we visit the most prominent algorithms using the operations of \emph{aggregation}, \emph{suppression}, \emph{generalization}, data \emph{perturbation}, and \emph{randomization}.
+To hide sensitive data by aggregation, we group together multiple rows of a table to form a single value; by suppression, we completely delete certain sensitive values or entire records~\cite{gruteser2004protecting}; and by generalization, we replace an attribute value with a parent value in the attribute taxonomy.
+In perturbation, we disturb the initial attribute value in a deterministic or probabilistic way.
+When the distortion is done in a probabilistic way, we talk about randomization.
+The first subsection, on microdata publication, uses the first four of these operations, while the second subsection, on statistical data publication, uses the last two.
+
+It is worth mentioning that there are a lot of contributions in the field of privacy preserving techniques based on \emph{cryptography}, e.g.,~\cite{benaloh2009patient, kamara2010cryptographic, cao2014privacy}, that would be relevant to discuss.
+However, the majority of these methods, among other assumptions that they make, place minimal or even no trust in the entities that handle personal information.
+Furthermore, the amount and way of data processing of these techniques usually burden the overall procedure, deteriorate the utility of the resulting data sets, and restrict their applicability.
+Therefore, our focus is limited to techniques that achieve a satisfying balance between participants' privacy and data utility.
+For these reasons, there will be no further discussion around this family of techniques in this thesis.
+
+
+\subsubsection{Microdata}
+
+Sweeney coined \emph{$k$-anonymity}~\cite{sweeney2002k}, one of the first established works on data privacy.
+A released data set features $k$-anonymity protection when the sequence of values for a set of identifying attributes, called the \emph{quasi-identifiers}, is the same for at least $k$ records in the data set.
+This renders an individual indistinguishable from at least $k{-}1$ other individuals in the same data set.
+In a follow-up work~\cite{sweeney2002achieving}, the author describes a way to achieve $k$-anonymity for a data set by the suppression or generalization of certain values of the quasi-identifiers.
+However, $k$-anonymity does not withstand external re-identification attacks on the released data set.
+The problematic settings identified in~\cite{sweeney2002k} appear when attempting to apply $k$-anonymity to continuous data publication (as we will also see in the next section).
+These attacks include multiple $k$-anonymous data set releases with the same record order, subsequent releases of a data set without taking into account previous $k$-anonymous releases, and tuple changes over time.
+Proposed solutions include rearranging the attributes, setting the whole attribute set of previously released data sets as quasi-identifiers, or releasing data based on previous $k$-anonymous releases.
+
+Machanavajjhala et al.~\cite{machanavajjhala2006diversity} pointed out that $k$-anonymity is vulnerable to homogeneity and background knowledge attacks.
+To address this, they proposed \emph{$l$-diversity}, which demands that the values of the sensitive attributes are `well-represented' by $l$ sensitive values in each group.
+Principally, a data set can be $l$-diverse by featuring at least $l$ distinct values for the sensitive field in each group (\emph{distinct} $l$-diversity).
+Other instantiations demand that the entropy of the whole data set is greater than or equal to $\log(l)$ (\emph{entropy} $l$-diversity), or that the number of appearances of the most common sensitive value is less than the sum of the counts of the rest of the values multiplied by a user-defined constant $c$ (\emph{recursive $(c, l)$-diversity}).
+Later on, Li et al.~\cite{li2007t} indicated that $l$-diversity can be invalidated, when the sensitive attributes have a small value range, by skewness and similarity attacks.
+In such cases, \emph{$\theta$-closeness} guarantees that the distribution of a sensitive attribute in a group and the distribution of the same attribute in the whole data set are similar.
+This similarity is bounded by a threshold $\theta$.
+A data set is said to have $\theta$-closeness when all of its groups have $\theta$-closeness.
+
+
+\subsubsection{Statistical data}
+
+While methods based on $k$-anonymity have been mainly employed when releasing microdata, \emph{differential privacy}~\cite{dwork2006calibrating} has been proposed for `privately' releasing high utility aggregates over microdata.
+More precisely, differential privacy ensures that the removal or addition of a single data item, i.e.,~the record of an individual, in a released data set does not (substantially) affect the outcome of any analysis.
+It is a statistical property of the privacy mechanism and is independent of the computational power and auxiliary information available to the adversary.
+
+In its formal definition, a privacy mechanism $M$ that introduces some randomness to the query result provides $\varepsilon$-differential privacy, for a given privacy budget $\varepsilon$, if for all pairs of data sets $X$, $Y$ differing in one tuple, the following holds:
+$$\Pr[M(X)\in S]\leq e^\varepsilon \Pr[M(Y)\in S]$$
+where $\Pr[\cdot]$ denotes the probability of an event, $S$ denotes any set of possible outputs of $M$, and, as previously noted, $\varepsilon$ represents the user-defined privacy budget or, more precisely, the privacy risk.
+As the definition implies, for lower budget ($\varepsilon$) values a mechanism achieves stronger privacy protection, because the probabilities of $X$ and $Y$ being the true world are similar.
+Differential privacy satisfies two composability properties: the sequential, and the parallel one~\cite{mcsherry2009privacy, soria2016big}.
+Due to the \emph{sequential composability} property, the total privacy level of two independent mechanisms $M_1$ and $M_2$ over the same data set, satisfying $\varepsilon_1$- and $\varepsilon_2$-differential privacy respectively, equals $\varepsilon_1 + \varepsilon_2$.
+The \emph{parallel composition} dictates that when the mechanisms $M_1$ and $M_2$ are applied over disjoint subsets of the same data set, then the overall privacy level is $\max_{i\in\{1,2\}}(\varepsilon_i)$.
+
+Methods based on differential privacy are best for low sensitivity queries such as counts, because the presence/absence of a single record can only change the result slightly.
+However, sum and max queries can be problematic, since a single but very different value could change the answer noticeably, making it necessary to add a lot of noise to the query answer.
+Furthermore, asking a series of queries may allow the disambiguation between possible worlds, making it necessary to add even more noise to the results.
+For this reason, after a series of queries exhausts the privacy budget, the data set has to be discarded.
+To keep the original guarantee across $n$ queries that require different/new answers, one must inject $n$ times the noise, thus destroying the utility of the output.
+
+The notion of differential privacy has highly influenced the research community, resulting in many follow-up publications (\cite{mcsherry2007mechanism, kifer2011no, zhang2017privbayes}, to mention a few).
+We distinguish here \emph{Pufferfish}, proposed by Kifer et al.~\cite{kifer2014pufferfish}, an abstraction of differential privacy.
+It is a framework that allows experts in an application domain, without necessarily having any particular expertise in privacy, to develop privacy definitions for their data sharing needs.
+To define a privacy mechanism using \emph{Pufferfish}, one has to define a set of potential secrets \emph{S}, a set of discriminative pairs \emph{$S_{pairs}$}, and evolution scenarios \emph{D}.
+\emph{S} serves as an explicit specification of what we would like to protect, e.g.,~`the record of an individual $X$ is (not) in the data'.
+\emph{$S_{pairs}$} is a subset of $S\times S$ that instructs how to protect the potential secrets $S$, e.g.,~(`$X$ is in the table', `$X$ is not in the table').
+Finally, \emph{D} is a set of conservative assumptions about how the data evolved (or were generated) that reflects the adversary's belief about the data, e.g.,~probability distributions, variable correlations, etc.
+When there is independence between all the records in $D$, then $\varepsilon$-differential privacy and the privacy definition of $\varepsilon$-\emph{Pufferfish}$(S, S_{pairs}, D)$ are equivalent.
+
+
+\section{Scenario}
+\label{sec:scenario}
+
+\begin{figure}[tbp]
+ \centering\noindent\adjustbox{max width=\linewidth} {
+ \begin{tabular}{@{}ccc@{}}
+
+ \begin{tabular}{@{}llll@{}}
+ \toprule
+ \textbf{\textit{Name}} & \textbf{Age} & \textbf{Location} & \textbf{Context} \\
+ \midrule
+ Donald & 27 & Paris & `at work' \\
+ Daisy & 25 & Paris & `driving' \\
+ Huey & 12 & New York & `running' \\
+ Dewey & 11 & New York & `at home' \\
+ Louie & 10 & New York & `walking' \\
+ Quackmore & 62 & Paris & `nearest restos?' \\
+ \bottomrule
+ \end{tabular}
+
+ &
+
+ \begin{tabular}{@{}llll@{}}
+ \toprule
+ \textbf{\textit{Name}} & \textbf{Age} & \textbf{Location} & \textbf{Context} \\
+ \midrule
+ Donald & 27 & Athens & `driving' \\
+ Daisy & 25 & Athens & `where's Acropolis?' \\
+ Huey & 12 & Paris & `what's here?' \\
+ Dewey & 11 & Paris & `walking' \\
+ Louie & 10 & Paris & `at home' \\
+ Quackmore & 62 & Athens & `swimming' \\
+ \bottomrule
+ \end{tabular}
+
+ &
+
+ \ldots
+
+ \\
+
+ $t_0$ & $ t_1$ & \ldots \\
+
+ \end{tabular}
+ }
+ \caption{Raw user-generated data in tabular form, for two timestamps.}
+ \label{fig:scenario}
+\end{figure}
+
+To illustrate the usage of the two main aforementioned techniques for privacy preserving data publishing, we provide here an example scenario.
+Users interact with a location based service by making queries within some context at various locations.
+Figure~\ref{fig:scenario} shows a toy data set of user-generated data, at two subsequent timestamps $t_0$ and $t_1$ (left and right table of Figure~\ref{fig:scenario}, respectively).
+Each table contains four attributes, namely \emph{Name} (the key of the relation), \emph{Age}, \emph{Location}, and \emph{Context}.
+Location shows the spatial information (e.g.,~latitude, longitude or full address) related to the query.
+Context includes information that characterizes the user's state, or the query itself.
+Its content varies according to the service's functionality and is transmitted/received by the user.
+
+\begin{figure}[tbp]
+ \centering\noindent\adjustbox{max width=\linewidth} {
+ \begin{tabular}{@{}ccc@{}}
+
+ \begin{tabular}{@{}llll@{}}
+ \toprule
+ \textbf{\textit{Name}} & \textbf{Age} & \textbf{Location} & \textbf{Context} \\
+ \midrule
+ * & $> 20$ & Paris & `at work' \\
+ * & $> 20$ & Paris & `driving' \\
+ * & $> 20$ & Paris & `nearest restos?' \\
+ \midrule
+ * & $\leq 20$ & New York & `running' \\
+ * & $\leq 20$ & New York & `at home' \\
+ * & $\leq 20$ & New York & `walking' \\
+ \bottomrule
+ \end{tabular}
+
+ &
+
+ \begin{tabular}{@{}llll@{}}
+ \toprule
+ \textbf{\textit{Name}} & \textbf{Age} & \textbf{Location} & \textbf{Context} \\
+ \midrule
+ * & $> 20$ & Athens & `driving' \\
+ * & $> 20$ & Athens & `where's Acropolis?' \\
+ * & $> 20$ & Athens & `swimming' \\
+ \midrule
+ * & $\leq 20$ & Paris & `what's here?' \\
+ * & $\leq 20$ & Paris & `walking' \\
+ * & $\leq 20$ & Paris & `at home' \\
+ \bottomrule
+ \end{tabular}
+
+ &
+
+ \ldots
+
+ \\
+
+ $t_0$ & $ t_1$ & \ldots \\
+
+ \end{tabular}
+ }
+ \caption{3-anonymous versions of the data in Figure~\ref{fig:scenario}.}
+ \label{fig:scenario-micro}
+\end{figure}
+
+First, we anonymize the data set of Figure~\ref{fig:scenario} using $k$-anonymity, with $k=3$.
+This means that any user cannot be distinguished from at least two others.
+We start by suppressing the values of the Name attribute, which is the identifier.
+The Age and Location attributes are the quasi-identifiers, so we proceed to adequately generalize them.
+The age values are turned into ranges ($\leq 20$, and $> 20$), but no generalization is needed for location.
+Note, however, that in a bigger data set we would typically have to generalize all the quasi-identifiers to achieve the desired anonymization level.
+Finally, $3$-anonymity is achieved by putting the entries in groups of three, according to the quasi-identifiers.
+Figure~\ref{fig:scenario-micro} depicts the results at each timestamp.
+
+\begin{figure}[tbp]
+ \centering\noindent\adjustbox{max width=\linewidth} {
+ \begin{tabular}{@{}ccc@{}}
+
+ \begin{tabular}{@{}llll@{}}
+ \toprule
+ & $t_0$ & $t_1$ & \ldots \\
+ \midrule
+ Paris & $3$ & $3$ & \ldots \\
+ New York & $3$ & $0$ & \ldots \\
+ Athens & $0$ & $3$ & \ldots \\
+ \bottomrule
+ \end{tabular}
+
+ & $\overrightarrow{Noise}$ &
+
+ \begin{tabular}{@{}llll@{}}
+ \toprule
+ & $t_0$ & $t_1$ & \ldots \\
+ \midrule
+ Paris & $2$ & $0$ & \ldots \\
+ New York & $3$ & $2$ & \ldots \\
+ Athens & $1$ & $2$ & \ldots \\
+ \bottomrule
+ \end{tabular}
+
+ \\
+
+ (a) True counts & & (b) Private counts \\
+
+ \end{tabular}
+ }
+ \caption{(a) Aggregated data from Figure~\ref{fig:scenario}, and (b) $0.5$-differentially private versions of these data.}
+ \label{fig:scenario-stat}
+\end{figure}
+
+Next, we demonstrate differential privacy.
+Let us assume that we want to release the number of users at each location, for each timestamp.
+To this end, we run a count query $Q$, with a \emph{Group By} clause on Location, over each table of Figure~\ref{fig:scenario}.
+Figure~\ref{fig:scenario-stat}~(a) shows the results of these queries, which are called \emph{true counts}.
+Then, we apply an $\varepsilon$-differentially private mechanism, with $\varepsilon = 0.5$, taking into account $Q$ and the data sets.
+This mechanism adds some noise to the true counts.
+A typical example is the \emph{Laplace} mechanism, which randomly draws a noisy value from a Laplace distribution with mean $\mu$ and scale $b$ (see the sketch at the end of this section).
+In our case, $\mu$ is equal to the true count, and $b$ is the sensitivity of the count function divided by the available privacy budget $\varepsilon$.
+In the case of the count function, the sensitivity is $1$, since the addition/removal of a tuple from the data set can change the final result of the function by at most $1$ (tuple).
+Figure~\ref{fig:laplace} shows what the Laplace distribution for the true count for Paris at $t_0$ looks like.
+Figure~\ref{fig:scenario-stat}~(b) shows all the perturbed counts that are going to be released.
+
+\begin{figure}[tbp]
+ \centering
+ \includegraphics[width=0.5\linewidth]{laplace}
+ \caption{A Laplace distribution for \emph{location} $\mu = 3$ and \emph{scale} $b = 2$.}
+ \label{fig:laplace}
+\end{figure}
+
+Note that in this example, the applied privacy preserving approaches are intentionally quite simplistic for demonstration purposes, without taking into account data continuity and the more advanced attacks (e.g.,~background knowledge attack, temporal inference attack, etc.) described in Section~\ref{subsec:privacy-attacks}.
+Follow-up works have been developed to address these attacks, as we review in the next section.
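+To make the mechanics of this demonstration concrete, the following minimal sketch applies the Laplace mechanism to the true counts of timestamp $t_0$; the counts and the parameters $\varepsilon$ and $b$ mirror Figure~\ref{fig:scenario-stat}, while the code itself (names, structure) is purely illustrative.
+
+\begin{verbatim}
+import numpy as np
+
+EPSILON = 0.5      # privacy budget, as in the example above
+SENSITIVITY = 1    # a count changes by at most 1 per added/removed tuple
+
+# True counts at t_0, taken from Figure 'scenario-stat' (a).
+true_counts = {'Paris': 3, 'New York': 3, 'Athens': 0}
+
+def laplace_mechanism(value, epsilon, sensitivity):
+    # Add noise drawn from Laplace(mu=0, b=sensitivity/epsilon);
+    # here b = 1/0.5 = 2, matching the distribution of Figure 'laplace'.
+    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
+
+private_counts = {city: round(laplace_mechanism(count, EPSILON, SENSITIVITY))
+                  for city, count in true_counts.items()}
+print(private_counts)  # e.g., {'Paris': 2, 'New York': 3, 'Athens': 1}
+\end{verbatim}
+
+Rounding the noisy values to integers is a post-processing step and, as such, does not weaken the differential privacy guarantee.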
diff --git a/discussion.tex b/discussion.tex
new file mode 100644
index 0000000..ca1121a
--- /dev/null
+++ b/discussion.tex
@@ -0,0 +1,37 @@
+\subsection{Discussion}
+\label{subsec:discussion}
+
+In the previous sections, we provided a brief summary and review of each work that falls into the categories of Microdata and Statistical Data privacy preserving publication under continual data schemes.
+The main elements that have been summarized in Table~\ref{tab:related} allow us to make some interesting observations on each category individually, and more generally.
+
+In the Statistical Data section, all of the works deal with data linkage attacks, while there are some more recent works taking into consideration possible data correlations as well.
+We notice that data linkage is currently assumed in the literature to be the worst-case scenario.
+For this reason, works in the Statistical Data category provide a robust privacy protection solution independent of the adversaries' knowledge.
+The prevailing distortion method in this category is probabilistic perturbation.
+This is justified by the fact that nearly all of the observed methods are based on differential privacy.
+The majority implement the Laplace mechanism, while some of them offer an adaptive approach.
+
+In the Microdata category, we observe that problems with sequential data, i.e.,~data that are generated in a sequence and dependent on the values in previous data sets, are more prominent.
+It is important to note that works on this set of problems actually followed similar scenarios, i.e.,~publishing updated versions of an original data set, either vertically (schema-wise) or horizontally (tuple-wise).
+Naturally, in such cases the most evident attack scenarios are the complementary release ones, as in each release there is a high probability that there will be an intersection of tuples with previous releases.
+On the other hand, when the problem involves stream data/processing, we observe that these data are location specific, most commonly trajectories.
+In such cases, the attacks considered are broader (going beyond versions of an original data set), taking into account external information, e.g.,~correlations that may typically be available for location specific data.
+
+Speaking of correlations, in either category, we may see that the protection method used is mainly probabilistic, if not total suppression.
+This makes sense, since by generalization the correlation between attributes would not be removed.
+Generalization is naturally used in group-based techniques, to make it possible to group more tuples under the generated categories --- and thus achieve anonymization.
+
+As far as the protection levels are concerned, the Microdata category mainly targets event level protection, as all users are protected equally through the performed grouping.
+Still, scenarios that contain trajectories associated with a certain user aim to protect this user's privacy by blurring the actual trajectories (user level).
+$w$-event level is absent in the Microdata category; one reason may be that streaming scenarios are not prominent in this category, and another, practical, reason may be that this notion was introduced later in time.
+Indeed, none of the works in the Microdata category explicitly mention the level of privacy, as these levels have been introduced in differential privacy scenarios, hence in Statistical Data.
+Considering all the use cases from both categories, event-level protection is more prominent, as it is more practical to protect all the users as a single set than each one individually in continual settings.
+
+As already discussed, problems with streaming processing are not common in the Microdata category.
+Indeed, most of the cases involving streaming scenarios are in the Statistical Data category.
+A technical reason behind this observation is that anonymizing a raw data set as a whole may be a time-consuming process, and thus, not well-suited for streaming.
+The complexity actually depends on the number of attributes, if we consider the possible combinations that may be enumerated.
+On the contrary, aggregation functions as used in the Statistical Data category, especially in the absence of filters, are usually low cost.
+Moreover, perturbing a numerical value (the usual result of an aggregation function) does not add a lot to the complexity of the algorithm (depending of course on the perturbation model used).
+For this reason, perturbing the result of a process is more time efficient than anonymizing the data set and then running the process on the anonymized data.
+Still, we may argue that an anonymized data set can be more widely used; in the case of statistical data, it is only the data holder that performs the processes and releases the results.
diff --git a/graphics/data-flow.pdf b/graphics/data-flow.pdf
new file mode 100644
index 0000000..92662c1
Binary files /dev/null and b/graphics/data-flow.pdf differ
diff --git a/graphics/data-value.pdf b/graphics/data-value.pdf
new file mode 100644
index 0000000..79c811e
Binary files /dev/null and b/graphics/data-value.pdf differ
diff --git a/graphics/laplace.pdf b/graphics/laplace.pdf
new file mode 100644
index 0000000..5cd77cf
Binary files /dev/null and b/graphics/laplace.pdf differ
diff --git a/images/logo-universite-paris-seine.png b/graphics/logo-universite-paris-seine.png
similarity index 100%
rename from images/logo-universite-paris-seine.png
rename to graphics/logo-universite-paris-seine.png
diff --git a/graphics/table-related.tex b/graphics/table-related.tex
new file mode 100644
index 0000000..415b564
--- /dev/null
+++ b/graphics/table-related.tex
@@ -0,0 +1,131 @@
+{
+ \setlength\tabcolsep{2pt}
+ \fontsize{5.35}{7.5}\selectfont
+ \begin{longtabu} [c]{@{} *{9}l @{}}
+
+ \toprule
+
+ \multirow{2}{*}[-2pt]{\textbf{Name}} & \multicolumn{3}{c}{\textbf{Data}} & \multicolumn{4}{c}{\textbf{Protection}} & \multirow{2}{*}[-2pt]{\textbf{Correlations}} \\ \cmidrule(l{2pt}r{2pt}){2-4} \cmidrule(l{2pt}r{2pt}){5-8}
+ & \textbf{Input/Output} & \textbf{Processing} & \textbf{Publishing} & \textbf{Attack} & \textbf{Method} & \textbf{Level} & \textbf{Distortion} & \\ \midrule \endhead
+
+ \multicolumn{9}{c}{\textbf{Microdata}} \\ \midrule
+
+ \hyperlink{he2011preventing}{\emph{$e-$equivalence}}~\cite{he2011preventing} & stream & batch & sequential & complementary & $l$-diversity & event & generalization & - \\
+ & & & & release & & & & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{li2016hybrid}{Li et al.}~\cite{li2016hybrid} & stream & batch & sequential & complementary & $l$-diversity & event & generalization, & - \\
+ & & & & release & & & perturbation & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{zhou2009continuous}{Zhou et al.}~\cite{zhou2009continuous} & stream & streaming & continuous & same as & $k$-anonymity & event & generalization, & - \\
+ & & & & $k$-anonymity~\cite{sweeney2002k} & & & randomization & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{gotz2012maskit}{\textbf{\emph{MaskIt}}}~\cite{gotz2012maskit} & stream & streaming & continuous & correlations & $\delta$-privacy & user & suppression & temporal \\
+ & & & & & & & & (Markov) \\ \tabucline[hdashline]{-}
+
+ \hyperlink{ma2017plp}{\textbf{\emph{PLP}}}~\cite{ma2017plp} & stream & streaming & continuous & correlations & $\delta$-privacy & user & suppression & spatiotem- \\
+ & & & & & & & (probabilistic) & poral (CRFs) \\
+
+ \midrule
+
+ \hyperlink{wang2006anonymizing}{\emph{$(X, Y)-$}} & sequential & batch & sequential & complementary & $k$-anonymity & event & generalization & - \\
+ \hyperlink{wang2006anonymizing}{\emph{privacy}}~\cite{wang2006anonymizing} & & & & release & & & & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{Shmueli}{Shmueli and} & sequential & batch & sequential & same as & $l$-diversity & event & generalization & - \\
+ \hyperlink{Shmueli}{Tassa}~\cite{shmueli2015privacy} & & & & $l$-diversity~\cite{machanavajjhala2006diversity} & & & & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{xiao2007m}{\emph{$m-$invariance}}~\cite{xiao2007m} & sequential & batch & sequential & complementary & $l$-diversity & event & generalization & - \\
+ & & & & release & & & & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{chen2011differentially}{\textbf{Chen et al.}}~\cite{chen2011differentially} & sequential & batch & one-shot & linkage & differential & user & perturbation & - \\
+ & & & & & privacy & & (Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{jiang2013publishing}{\textbf{Jiang et al.}}~\cite{jiang2013publishing} & sequential & batch & one-shot & linkage & differential & user & perturbation & - \\
+ & & & & & privacy & & (Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{fung2008anonymity}{\emph{$BCF-$}} & sequential & batch & incremental & complementary & $k$-anonymity & event & generalization & - \\
+ \hyperlink{fung2008anonymity}{\emph{anonymity}}~\cite{fung2008anonymity} & & & & release & & & & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{xiao2015protecting}{\textbf{Xiao et al.}}~\cite{xiao2015protecting} & sequential & streaming & sequential & correlations & $\delta$-location set & user & \emph{Planar Isotropic} & temporal \\
+ & & & & & & & \emph{Mechanism (PIM)} & (Markov) \\ \tabucline[hdashline]{-}
+
+ \hyperlink{al2018adaptive}{\textbf{Al-Dhubhani et al.}}~\cite{al2018adaptive} & sequential & streaming & sequential & correlations & geo-indistin- & user & perturbation & temporal \\
+ & & & & & guishability & & (Planar Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{ghinita2009preventing}{\textbf{Ghinita et al.}}~\cite{ghinita2009preventing} & sequential & streaming & sequential & linkage & spatiotemporal & user & generalization (spatio- & feature \\
+ & & & & & transformation & & temporal cloaking) & \\
+
+ \midrule
+
+ \hyperlink{primault2015time}{\textbf{\emph{Promesse}}}~\cite{primault2015time} & time series & batch & one-shot & spatiotemporal & temporal & user & perturbation & - \\
+ & & & & inference & transformation & & (temporal) & \\ \midrule
+
+ \multicolumn{9}{c}{\textbf{Statistical Data}} \\ \midrule
+
+ \hyperlink{chan2011private}{Chan et al.}~\cite{chan2011private} & stream/ & streaming & continual & linkage & differential & event & perturbation & - \\
+ & continual & & & & privacy & & (Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{cao2015differentially}{\textbf{\emph{l-trajectory}}}~\cite{cao2015differentially} & stream/ & streaming & continuous & linkage & $l-$trajectory & w-event & perturbation & - \\
+ & time series & & & & personalized & & (dynamic Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{bolot2013private}{Bolot et al.}~\cite{bolot2013private} & stream & streaming & continual & linkage & differential & event & perturbation & - \\
+ & & & & & privacy & & (Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{quoc2017privapprox}{\emph{PrivApprox}}~\cite{quoc2017privapprox} & stream & streaming & continual & linkage & zero & event & perturbation (ran- & - \\
+ & & & & & knowledge & & domized response) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{li2007hiding}{Li et al.}~\cite{li2007hiding} & stream & streaming & continuous & linkage & randomization & event & perturbation & serial \\
+ & & & & & & & (dynamic) & (data trends) \\ \tabucline[hdashline]{-}
+
+ \hyperlink{chen2017pegasus}{\emph{PeGaSus}}~\cite{chen2017pegasus} & stream & streaming & continuous & linkage & differential & event & perturbation & - \\
+ & & & & & privacy & & (Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{cao2017quantifying}{Cao et al.}~\cite{cao2017quantifying} & stream & streaming & continuous & correlations & differential & event & perturbation & temporal \\
+ & & & & & privacy & & (Laplace) & (Markov) \\ \tabucline[hdashline]{-}
+
+ \hyperlink{kellaris2014differentially}{Kellaris et al.}~\cite{kellaris2014differentially} & stream & streaming & continuous & linkage & differential & w-event & perturbation & - \\
+ & & & & & privacy & & (dynamic Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{wang2016rescuedp}{\textbf{\emph{RescueDP}}}~\cite{wang2016rescuedp} & stream & streaming & continuous & linkage & differential & w-event & perturbation & serial \\
+ & & & & & privacy & & (dynamic Laplace) & (Pearson's r) \\
+
+ \midrule
+
+ \hyperlink{kellaris2013practical}{Kellaris et al.}~\cite{kellaris2013practical} & sequential & batch & one-shot & linkage & differential & event & perturbation & - \\
+ & & & & & privacy & & (Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{chen2012differentially}{\textbf{Chen et al.}}~\cite{chen2012differentially} & sequential & batch & one-shot & linkage & differential & user & perturbation & - \\
+ & & & & & privacy & & (adaptive Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{hua2015differentially}{\textbf{Hua et al.}}~\cite{hua2015differentially} & sequential & batch & one-shot & linkage & differential & user & perturbation (ex- & - \\
+ & & & & & privacy & & ponential, Laplace) & \\ \tabucline[hdashline]{-}
+
+ \hyperlink{li2017achieving}{\textbf{Li et al.}}~\cite{li2017achieving} & sequential & batch & one-shot & linkage & differential & user & perturbation & - \\
+ & & & & & privacy & & (Laplace) & \\
+
+ \midrule
+
+ \hyperlink{erdogdu2015privacy}{Erdogdu et al.}~\cite{erdogdu2015privacy} & time series & batch/ & continual & correlations & $\epsilon_t$-privacy & user & perturbation & serial \\
+ & & streaming & & & & & (stochastic) & (HMM) \\ \tabucline[hdashline]{-}
+
+ \hyperlink{yang2015bayesian}{\emph{Bayesian differen-}} & time series & batch & one-shot & correlations & \emph{Pufferfish} & event & perturbation & general \\
+ \hyperlink{yang2015bayesian}{\emph{tial privacy}}~\cite{yang2015bayesian} & & & & & & & (Laplace) & (Gaussian) \\ \tabucline[hdashline]{-}
+
+ \hyperlink{song2017pufferfish}{Song et al.}~\cite{song2017pufferfish} & time series & batch & one-shot & correlations & \emph{Pufferfish} & event/user & perturbation & general \\
+ & & & & & & & (dynamic Laplace) & (Markov) \\ \tabucline[hdashline]{-}
+
+ \hyperlink{fan2013differentially}{\textbf{Fan et al.}}~\cite{fan2013differentially} & time series & streaming & continuous & correlations & differential & event & perturbation & spatiotem- \\
+ & & & & & privacy & & (Laplace) & poral/serial \\ \tabucline[hdashline]{-}
+
+ \hyperlink{wang2017cts}{\emph{CTS-DP}}~\cite{wang2017cts} & time series & streaming & continuous & correlations & differential & event & perturbation & serial (autocor- \\
+ & & & & & privacy & & \emph{(correlated Laplace)} & relation function) \\ \tabucline[hdashline]{-}
+
+ \hyperlink{fan2014adaptive}{\emph{FAST}}~\cite{fan2014adaptive} & time series & streaming & continuous & linkage & differential & user & perturbation & - \\
+ & & & & & privacy & & (dynamic Laplace) & \\
+
+ \bottomrule
+
+ \caption{Summary table of reviewed privacy methods. Location specific techniques are listed in bold; the rest are not data-type specific.}
+ \label{tab:related}
+
+ \end{longtabu}
+}
\ No newline at end of file
diff --git a/introduction.tex b/introduction.tex
index 61abb24..2da3089 100644
--- a/introduction.tex
+++ b/introduction.tex
@@ -1,4 +1,74 @@
 \chapter{Introduction}
 \label{ch:intro}
-This is the introduction.
-\section{Test}
+
+\section{Introduction}
+\label{sec:introduction}
+
+Data privacy is becoming an increasingly important issue both at a technical and a societal level, and introduces various challenges ranging from the way we share and publish data sets to the way we use online and mobile services.
+Personal information, also described as \emph{microdata}, has acquired increasing value and is in many cases used as the `currency'~\cite{economist2016data} that pays for access to various services, i.e.,~users are asked to exchange their personal information for the service provided.
+This is particularly true for many \emph{Location Based Services (LBS)} like Google Maps~\cite{gmaps}, Waze~\cite{waze}, etc.; these services exchange their `free' service for the collection and use of user-generated data, like timestamped geolocalized information.
+Besides navigation services, social media applications (e.g.,~Facebook~\cite{facebook}, Twitter~\cite{twitter}, Foursquare~\cite{foursquare}, etc.) take advantage of user-generated and user-related data to make relevant recommendations and show personalized advertisements.
+Here, too, location is one of the important pieces of private data that users are required to share.
+Last but not least, \emph{data brokers} (e.g.,~Experian~\cite{experian}, TransUnion~\cite{transunion}, Acxiom~\cite{acxiom}, etc.) collect data from public and private sources, e.g.,~censuses, bank card transaction records, voter registration lists, etc.
+Most of these data are georeferenced and contain location information, directly or indirectly; protecting the location of the user has thus become one of the most important goals.
+
+These data, on the one hand, give useful feedback to the involved users and/or services, and on the other hand, provide valuable information to internal/external analysts.
+While these activities happen within the boundaries of the law~\cite{tankard2016gdpr}, it is important to be able to anonymize the corresponding data before sharing and to take into account the possibility of correlating, linking and crossing diverse independent data sets.
+Especially the latter is becoming quite important in the era of Big Data, where the existence of diverse linked data sets is one of the promises (for example, one can refer to the discussion on Linked Open Data~\cite{efthymiou2015big}).
+The ability to extract additional information impacts the ways we protect our data and affects the privacy guarantees we can provide.
+
+Besides the explosion of online and mobile services, another important aspect is that a lot of these services actually rely on data provided by the users (\textit{crowdsourced} data) to function, with prominent examples being efforts like Wikipedia~\cite{wiki} and OpenStreetMap~\cite{osm}.
+Data from crowdsourcing-based applications, if not protected correctly, can easily be used to identify personal information like location or activity and thus, indirectly, lead to issues of user surveillance~\cite{lyon2014surveillance}.
+Nonetheless, users seem reluctant to undertake the financial burden to support the numerous services that they use~\cite{savage2013value}.
+While this does not mean that the various aggregators of personal/private data are exonerated of any responsibility, it imposes the need to work within this model, providing the necessary technical solutions to increase data privacy.
+
+\begin{figure}[tbp]
+ \centering
+ \includegraphics[width=\linewidth]{data-flow}
+ \caption{The usual flow of crowdsourced data harvested by publishers, anonymized, and released to data consumers.}
+ \label{fig:data-flow}
+\end{figure}
+
+However, providing adequate user privacy affects the utility of the data, which is associated with one of the five dimensions (known as the five \emph{`V's'}) that define Big Data: its \textit{Veracity}.
+In the case of \textit{microdata}, privacy preserving processes generate a private version, possibly containing some synthetic data as well, in which the users are not distinguishable.
+In the case of \textit{statistical} data (e.g.,~the results of statistical queries over our data sets), a private version is generated by adding some kind of noise to the actual statistical values.
+In both cases, we end up affecting the quality of the published data set; the privacy and the utility of the `noisy' private output are two contrasting desiderata, which need to be measured and balanced.
+As a matter of fact, the added noise is greater when we consider external threats (e.g.,~linked or correlated data), in order to ensure the same level of protection, inevitably affecting the utility of the data set.
+For this reason, the abundance of external information in the Big Data era is something that needs to be taken into account in the traditional processing flow shown in Figure~\ref{fig:data-flow}.
+While we still need to go through the preprocessing step to make the data private before releasing it for public use, we should make sure that the quality/privacy ratio is re-stabilized.
+
+This discussion highlights the importance of being able to correctly choose the proper privacy algorithms, which would allow users to provide private copies of their data with some guarantees.
+Finding a balance between privacy and data utility is a task far from trivial for any privacy expert.
+On the one hand, it is crucial to select an appropriate anonymization technique, relevant to the data set intended for public release.
+On the other hand, it is equally essential to tune the selected technique according to the circumstances, e.g.,~assumptions, level of distortion, etc.~\cite{kifer2011no}.
+Selecting the wrong privacy algorithm or configuring it poorly may not only put at risk the privacy of the involved individuals, but also end up deteriorating the quality and, therefore, the utility of the data set.
+
+\begin{figure}[tbp]
+ \centering
+ \includegraphics[width=0.5\linewidth]{data-value}
+ \caption{Value of data to decision making over time, ranging from less than seconds to more than months~\cite{gualtieri2016perishable}.}
+ \label{fig:data-value}
+\end{figure}
+
+In this context, in this thesis we focus on privacy in continual publication scenarios, with an emphasis on works taking into account data correlations, since this field (i) includes the most prominent cases, like for example location privacy problems, and (ii) provides the most challenging and not yet well-charted part of the privacy algorithms, since it is rather new and increasingly complex.
+The types of data in these cases require timely processing, since their value usually decreases over time, as demonstrated in Figure~\ref{fig:data-value}.
+This allows us to provide an insight into additional properties of the algorithms, like, for instance, whether they work on streaming or real-time data, or whether they take into account existing data correlations, either within the data set or with external data sets.
+Geospatial data commonly fall in this category; a few examples include --- but are not limited to --- data being produced while tracking the movement of individuals for various purposes (where data should become private on the move and in real-time), crowdsourced data that are used to report measurements like noise or pollution (where data should become private before reaching the server), and even data items like photographs or social media posts that might include location information (where data should become private before the posts become public).
+In most of these cases, the privacy preserving processes should take into account implicit correlations that exist, since data have a spatial dimension and space imposes its own restrictions.
+
+The domain of data privacy is rather vast, and naturally several works have already been conducted with different scopes.
+Below, we refer the interested reader to a non-exhaustive list of relevant articles.
+A group of works focuses on the family of algorithms used to make the data private.
+For instance, Simi et al.~\cite{simi2017extensive} provide an extensive study of works on $k$-anonymity, and Dwork~\cite{dwork2008differential} focuses on differential privacy.
+Another group of works focuses on techniques that allow the execution of data mining or machine learning tasks with some privacy guarantees, e.g.,~Wang et al.~\cite{wang2009survey} and Ji et al.~\cite{ji2014differential}.
+In a more general scope, Wang et al.~\cite{wang2010privacy} offer a summary and evaluation of privacy-preserving data publishing techniques.
+Additional works look into issues around Big Data and user privacy.
+Indicatively, Jain et al.~\cite{jain2016big} and Soria-Comas et al.~\cite{soria2016big} examine how Big Data conflict with preexisting concepts of private data management, and how efficiently $k$-anonymity and $\varepsilon$-differential privacy meet Big Data requirements.
+Finally, there are some works that focus on specific applications.
+For example, Zhou et al.~\cite{zhou2008brief} focus on social networks, Christin et al.~\cite{christin2011survey} give an outline of how privacy aspects are addressed in crowdsensing applications, and Primault et al.~\cite{primault2018long} summarize privacy threats and location privacy-preserving mechanisms.
+
+% This thesis is organized as follows: we begin by providing a general description of the field of data privacy, and the most prominent anonymization and obfuscation/noise inducing algorithms that have been proposed in the literature so far (Section~\ref{sec:background}).
+% The main content of the thesis (Section~\ref{sec:main}) spans works related to the continual publication of data points, or to the republication of (or parts of) a data set along time, with regard to the privacy of the individuals involved.
+% More particularly, we divide the works in two categories, based on the type of data published: microdata or statistical data.
+% In all cases, we use the same set of properties to characterize the algorithms, and thus, allow to compare them.
+% Finally (Section~\ref{sec:conclusion}), we put these works into perspective and discuss various future research lines in this area.
diff --git a/main.tex b/main.tex
index ac56150..3782955 100644
--- a/main.tex
+++ b/main.tex
@@ -4,29 +4,65 @@
 \usepackage[utf8]{inputenc}
 \usepackage[T1]{fontenc}
 \usepackage{graphicx}
+\usepackage{hyperref}
+\usepackage[hyphenbreaks]{breakurl}
+\usepackage{booktabs}
+\usepackage{stmaryrd}
+\usepackage[T1]{fontenc}
+\usepackage{cite}
-\graphicspath{ {images/} }
+% packages for algorithm formatting
+\usepackage{algorithm}
+%\usepackage{algorithmic}
+\usepackage{algpseudocode}
+
+\usepackage[table]{xcolor}
+\usepackage{amssymb,amsmath}
+
+%custom packages
+\usepackage{adjustbox}
+\usepackage[normalem]{ulem}
+\usepackage{geometry}
+\usepackage{xcolor}
+\usepackage{enumerate}
+\usepackage{multirow}
+\usepackage{adjustbox}
+\usepackage{booktabs}
+\usepackage{longtable}
+\usepackage{tabu}
+\newtabulinestyle{hdashline=0.5pt on 0.5pt off 2pt}
+\usepackage{float}
+\usepackage{caption}
+
+\definecolor{smaragdine}{RGB}{0,159,119} % <3
+
+
+\graphicspath{{graphics/}}
+\DeclareGraphicsExtensions{.pdf, .png}
 \begin{document}
-\input{titlepage}
+% \input{titlepage}
 \frontmatter
-\tableofcontents
-\listoffigures
-\listoftables
+% \tableofcontents
+% \listoffigures
+% \listoftables
-\input{acknowledgements}
+% \input{acknowledgements}
-\input{abstract}
+% \input{abstract}
 \mainmatter
-\input{introduction}
+% \input{introduction}
+\input{background}
+% \input{related}
 \backmatter
+\bibliographystyle{alpha}
 \bibliography{bibliography}
 \end{document}
diff --git a/microdata.tex b/microdata.tex
new file mode 100644
index 0000000..1f5436b
--- /dev/null
+++ b/microdata.tex
@@ -0,0 +1,90 @@
+\section{Microdata}
+\label{sec:microdata}
+
+As observed in Table~\ref{tab:related}, privacy preserving algorithms for microdata rely on $k$-anonymity, or derivatives of it. Ganta et al.~\cite{ganta2008composition} revealed that $k$-anonymity methods are vulnerable to \emph{composition attacks}. Consequently, these attacks drew the attention of researchers, who proposed various algorithms based on $k$-anonymity, each introducing a different dimension to the problem, for instance that previous releases are known to the publisher, or that the quasi-identifiers can be formed by combining attributes in different releases. Note, however, that only one (Li et al.~\cite{li2016hybrid}) of the following works assumes \emph{independently} anonymized data sets that may not be known to the publisher in the attack model, making it more general than the rest of the works.
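+As a toy illustration of such a composition attack (a hedged sketch with hypothetical values, not reproducing the setting of any particular reviewed work), intersecting the candidate sensitive values that two independently anonymized releases admit for the same individual can single out that individual's actual value:
+
+\begin{verbatim}
+# Candidate sensitive values for one target individual, inferred from the
+# anonymized group containing the target in each release (hypothetical data).
+release_t0 = {'flu', 'asthma', 'diabetes'}   # a 3-diverse group at t0
+release_t1 = {'flu', 'ulcer', 'migraine'}    # a 3-diverse group at t1
+
+# Each release is safe in isolation, yet the intersection of the two
+# candidate sets pins the target's sensitive value down to a single option.
+candidates = release_t0 & release_t1
+print(candidates)  # {'flu'}
+\end{verbatim}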
+
+
+% \subsection{Continual data}
+
+% \mk{Nothing to put here.}
+
+
+\subsection{Data streams}
+
+% M-invariance: towards privacy preserving re-publication of dynamic data sets
+
+\hypertarget{xiao2007m}{Xiao et al.}~\cite{xiao2007m} consider the case when a data set is (re)published at different times in an update (tuple delete, insert) manner. More precisely, they address anonymization in dynamic environments by implementing $m$-\emph{invariance}. In a simple $k$-anonymity (or $l$-diversity) scenario, the privacy of an individual that exists in two updates can be compromised by the intersection of the sets of sensitive values. In contrast, an individual who exists in a series of $m$-invariant releases is always associated with the same set of $m$ different sensitive values. To enable the publishing of $m$-invariant data sets, artificial tuples called \emph{counterfeits} may be added to a release. To minimize the noise added to the data sets, the authors provide an algorithm with two extra desiderata: minimize the counterfeits and the quasi-identifiers' generalization level. Still, the choice of adding tuples with specific sensitive values disturbs the value distribution, with a direct effect on any relevant statistical analysis.
+
+
+% Preventing equivalence attacks in updated, anonymized data
+
+In the same update setting (insert/delete), \hypertarget{he2011preventing}{He et al.}~\cite{he2011preventing} introduce another kind of attack, namely the \emph{equivalence} attack, not taken into account by the aforementioned $m$-invariance technique. The equivalence attack allows sets of individuals (of size $e$) to be linked, as a group, to the same set of sensitive values across releases, and the notion of $e$-\emph{equivalence} is proposed to guard against it.
+
+
+% MaskIt: privately releasing user context streams for personalized mobile applications
+
+\hypertarget{gotz2012maskit}{G{\"o}tz et al.}~\cite{gotz2012maskit} present \emph{MaskIt}, a system that filters a user's stream of contexts and releases it while providing $\delta$-privacy: for all input context streams $\overrightarrow{x}$ ($\Pr[\overrightarrow{x}] > 0$), for all possible outputs $\overrightarrow{o}$ ($\Pr[A(\overrightarrow{x}) = \overrightarrow{o}] > 0$), for all times $t$ and all sensitive contexts $s\in S$, it satisfies the condition $\Pr[X_t = s|\overrightarrow{o}] - \Pr[X_t = s] \leq \delta$. After filtering all the elements of a given stream, an output sequence for a single day is released. The process can be repeated to publish longer context streams. The utility of the system is measured as the expected number of released contexts. Letting the user define the privacy settings requires that the user has a certain level of relevant knowledge, which is not usually the case in real life. Additionally, suppressing data can sometimes disclose more information than releasing them, e.g.,~releasing multiple data points around a `sensitive' area (and not inside it) is going to eventually disclose the protected area.
+
+
+% PLP: Protecting location privacy against correlation analyze Attack in crowdsensing
+
+\hypertarget{ma2017plp}{Ma et al.}~\cite{ma2017plp} propose \emph{PLP}, a crowdsensing scheme that protects location privacy against adversaries that can extract spatiotemporal correlations---modeled with CRFs---from crowdsensing data. Users' context (location, sensing data) stream is filtered while long-range dependencies among locations and reported sensing data are taken into account. Sensing data are suppressed at all sensitive locations, while data at insensitive locations are reported with a certain probability defined by observing the corresponding CRF model. On the one hand, the privacy of the reported data is estimated by the difference $\delta$ between the probability that a user would be at a specific location given supplementary information and the same probability without the extra information.
+
+
+% Preventing equivalence attacks in updated, anonymized data
+
+In the same update setting (insert/delete), \hypertarget{he2011preventing}{He et al.}~\cite{he2011preventing} introduce another kind of attack, namely the \emph{equivalence} attack, not taken into account by the aforementioned $m$-invariance technique.
+The equivalence attack allows sets of individuals (of size $e < m$) that are associated with the same set of sensitive values throughout a series of releases to be tracked as a group, effectively reducing the offered protection from $m$ to $e$.
+To counter it, the authors propose anonymization algorithms that generate releases that are both $m$-invariant and free of equivalence attacks.
+
+
+% MaskIt: privately releasing user context streams for personalized mobile applications
+
+\hypertarget{gotz2012maskit}{Götz et al.}~\cite{gotz2012maskit} present \emph{MaskIt}, a system that filters a stream of user contexts, modeled as a Markov chain, deciding at each time point whether to release or suppress the current context.
+A filter $A$ satisfies $\delta$-privacy if, for any input stream $\overrightarrow{x}$ with non-zero probability ($\Pr[\overrightarrow{x}] > 0$), for all possible outputs $\overrightarrow{o}$ ($\Pr[A(\overrightarrow{x}) = \overrightarrow{o}] > 0$), for all times $t$ and all sensitive contexts $s \in S$, it satisfies the condition $\Pr[X_t = s|\overrightarrow{o}] - \Pr[X_t = s] \leq \delta$.
+After filtering all the elements of a given stream, an output sequence for a single day is released.
+The process can be repeated to publish longer context streams.
+The utility of the system is measured as the expected number of released contexts.
+Letting the users define their own privacy settings requires that they possess a certain level of relevant expertise, which is not usually the case in real life.
+Additionally, suppressing data can sometimes disclose more information than releasing them, e.g.,~releasing multiple data points around a `sensitive' area (and not inside it) is eventually going to disclose the protected area.
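+
+To make the above check concrete, the following is a simplified, illustrative sketch of the release decision at a single time point; it assumes that the prior and posterior probabilities are computable from the user's Markov model, and it ignores the handling of future outputs that a complete system would also have to take into account.
+\begin{algorithm}[H]
+    \caption{Simplified sketch of a $\delta$-privacy release decision at time $t$ (illustrative only)}
+    \begin{algorithmic}[1]
+        \Require current context $x_t$, output sequence $\overrightarrow{o}$ so far, sensitive contexts $S$, threshold $\delta$
+        \For{each sensitive context $s \in S$}
+            \State $p \gets \Pr[X_t = s]$ \Comment{prior, from the Markov chain}
+            \State $q \gets \Pr[X_t = s \mid \overrightarrow{o}, x_t]$ \Comment{posterior, if $x_t$ were released}
+            \If{$q - p > \delta$}
+                \State \Return suppress $x_t$ \Comment{releasing $x_t$ would violate $\delta$-privacy for $s$}
+            \EndIf
+        \EndFor
+        \State \Return release $x_t$
+    \end{algorithmic}
+\end{algorithm}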
+
+
+% PLP: Protecting location privacy against correlation analyze attack in crowdsensing
+
+\hypertarget{ma2017plp}{Ma et al.}~\cite{ma2017plp} propose \emph{PLP}, a crowdsensing scheme that protects location privacy against adversaries that can extract spatiotemporal correlations---modeled with conditional random fields (CRFs)---from crowdsensing data.
+The stream of users' contexts (location, sensing data) is filtered, while long-range dependencies among locations and reported sensing data are taken into account.
+Sensing data are suppressed at all sensitive locations, while data at insensitive locations are reported with a certain probability, defined by observing the corresponding CRF model.
+On the one hand, the privacy of the reported data is estimated by the difference $\delta$ between the probability that a user is at a specific location given supplementary information, and the same probability without the extra information.
+On the other hand, the utility of the method depends on the total amount of reported data (the more, the better).
+An estimation algorithm searches for the optimal strategy that maximizes utility while preserving a predefined privacy threshold.
+Although this approach allows users to define their desired privacy prerequisites, it cannot guarantee optimal protection.
+
+
+\subsection{Sequential data}
+
+% Anonymizing sequential releases
+
+\hypertarget{wang2006anonymizing}{Wang and Fung}~\cite{wang2006anonymizing} address the problem of anonymously releasing different projections of the same data set at subsequent timestamps.
+More precisely, the authors want to protect individual information that could be revealed by \emph{joining} various releases of the same data set.
+To do so, instead of locating the quasi-identifiers in a single release, the authors suggest that the identifiers may span the current and all previous releases of the (projections of the) data set.
+Then, the proposed method uses the join of the different releases on the common identifying attributes.
+The goal is to generalize the identifying attributes of the current release, given that previous releases are immutable.
+The generalization is performed in a top-down manner, meaning that the attributes are initially over-generalized, and are step by step specialized until they reach the point where the predefined quality and privacy requirements are met.
+The privacy requirement is the so-called $(X,Y)$-privacy for a threshold $k$, meaning that the identifying attributes in $X$ are linked with at most $k$ sensitive values in $Y$ in the join of the previously released and current tables.
+The quality requirements can be tuned in the framework, and three alternatives are proposed: the reduction of the class entropy~\cite{quinlan2014c4,shannon2001mathematical}, the notion of distortion, and the discernibility~\cite{bayardo2005data}.
+The authors propose an algorithm for the release of a table $T_1$ in the presence of a previous table $T_2$, which takes into account the scalability and performance problems that a join between those two may entail.
+Still, when many previous releases exist, the complexity remains high.
+
+
+% Privacy by diversity in sequential releases of databases
+
+\hypertarget{Shmueli}{Shmueli and Tassa}~\cite{shmueli2015privacy} identify the computational inefficiency of anonymously releasing a data set while taking previous releases into account, in scenarios of sequential publication.
+In more detail, they consider the case when projections over different subsets of attributes of a table are published at subsequent times, and they provide an extension for attribute addition.
+Their algorithm can compute $l$-diverse anonymized releases (over different subsets of attributes) in parallel, by generating $l-1$ so-called \emph{fake} worlds.
+A fake world is generated from the base table by randomly permuting non-identifier and sensitive values among the tuples, in such a way that minimal information loss (quality desideratum) is incurred.
+This is partially achieved by verifying that the permutation is done among tuples with similar quasi-identifiers.
+Then, the algorithm creates buckets of tuples with at least $l$ different sensitive values, in which the quasi-identifiers will subsequently be generalized in order to achieve $l$-diversity (privacy protection desideratum).
+The generalization step is also conducted in an information-loss-efficient way.
+All different releases will be $l$-diverse, because they are created assuming the same possible worlds, with which they are consistent.
+Tuple/attribute deletion is briefly discussed and left as an open question.
+The paper is contrasted with a previous work~\cite{shmueli2012limiting} by the same authors, claiming that the new approach considers a stronger adversary (one who knows all individuals with their quasi-identifiers in the database, and not only one), and that the computation is much more efficient, as its complexity is not exponential w.r.t. the number of previous publications.
+
+
+% Differentially private trajectory data publication
+
+\hypertarget{chen2011differentially}{Chen et al.}~\cite{chen2011differentially} propose a non-interactive, data-dependent sanitization algorithm to generate a differentially private release of trajectory data.
+First, a noisy \emph{prefix tree}, i.e.,~an ordered search tree data structure used to store an associative array, is constructed.
+Each node represents a possible location of a trajectory---a legitimate location from the set of locations where any user can be present---and contains a perturbed count---the number of persons at the current location---with noise drawn from a Laplace distribution.
+The privacy budget is allocated equally to each level of the tree.
+At each level, and for every node, children nodes with a non-zero number of trajectories are identified as \emph{non-empty} by observing their noisy counts, so as to continue expanding them.
+All children nodes are associated with disjoint subsets of trajectories and thus, the parallel composition theorem of differential privacy applies; therefore, the whole budget allocated to a level can be used for each of its nodes.
+An empty node is detected by injecting Laplace noise into its corresponding count and checking if the result is less than a preset threshold $\theta = \frac{2\sqrt{2}}{\varepsilon / h}$, where $\varepsilon$ is the available privacy budget and $h$ the height of the tree.
+To generate the sanitized database, it is necessary to traverse the prefix tree once in post-order.
+At each node, the number of terminated trajectories is calculated, and the corresponding copies of prefixes are sent to the output.
+During this process, some consistency constraints are taken into account to avoid erroneous trajectories due to the previously added noise: namely, for any root-to-leaf path $p$, $\forall v_i \in p$, $|tr(v_i)| \leq |tr(v_{i+1})|$, where $v_i$ is a child of $v_{i+1}$; and for each node $v$, $|tr(v)| \geq \sum_{u \in children(v)} |tr(u)|$.
+Increasing the privacy budget results in a smaller average relative error, because less noise is added at each level.
+Increasing the height of the tree initially decreases the relative error, as more information is retained from the database; after a certain threshold, however, a larger height results in less available privacy budget at each level, and thus in a larger relative error due to the increased perturbation.
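+
+To get a concrete feel for the pruning threshold (with indicative values, not taken from the original evaluation), consider a total budget $\varepsilon = 1$ and a tree of height $h = 4$: each level receives $\varepsilon / h = 0.25$, so that
+\[
+    \theta = \frac{2\sqrt{2}}{\varepsilon / h} = \frac{2\sqrt{2}}{0.25} \approx 11.3 ,
+\]
+and any node whose noisy count falls below roughly $11$ trajectories is considered empty.
+Halving the height to $h = 2$ leaves more budget per level ($0.5$) and lowers the threshold to $\approx 5.7$, pruning less aggressively at the cost of a shallower, less informative tree.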
+
+
+% Publishing trajectories with differential privacy guarantees
+
+\hypertarget{jiang2013publishing}{Jiang et al.}~\cite{jiang2013publishing} focus on ship trajectories with known starting and terminal points.
+More specifically, they study several noise-addition mechanisms for publishing trajectories with differential privacy guarantees.
+These mechanisms include adding \emph{global} noise to the trajectory, adding noise to each location \emph{point} of the trajectory by sampling a noisy radius from an exponential distribution, and adding noise drawn from a Laplace distribution to each \emph{coordinate} of every location point.
+Comparing these techniques, the last one offers a better privacy guarantee and a smaller error bound, but the resulting trajectory is noticeably distorted, raising doubts about its practicality.
+To tackle the limited practicality arising from the addition of Laplace noise to the trajectory coordinates, the authors propose the \emph{Sampling Distance and Direction (SDD)} mechanism.
+It enables the publishing of the optimal next possible trajectory point, by sampling a suitable distance and direction at the current position, while taking into account the ship's maximum speed constraint.
+The SDD mechanism outperforms the other mechanisms and can maintain good utility with very high probability, even while offering strong privacy guarantees.
+
+
+% Anonymity for continuous data publishing
+
+\hypertarget{fung2008anonymity}{Fung et al.}~\cite{fung2008anonymity} introduce the problem of privately releasing continuous \emph{incremental} data sets.
+The invariant of this kind of release is that at every timestamp $T_i$, the records previously released at a timestamp $T_j$, where $j < i$, are published again, together with the newly collected records.
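+Schematically, and with hypothetical records $r_i$, such an incremental series of releases looks as follows:
+\[
+    D(T_1) = \{r_1, r_2\}, \quad D(T_2) = \{r_1, r_2, r_3\}, \quad D(T_3) = \{r_1, r_2, r_3, r_4\}, \quad \dots
+\]
+Every release thus contains all previously published records, so the protection offered to a record must remain effective however many times the record reappears.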