Cleaning up
This commit is contained in: parent 9c88c64381, commit 4957a02e80
\subsection{Discussion}
\label{subsec:discussion}
In the previous sections, we provided a brief summary and review of each work that falls into the Microdata and Statistical Data categories of privacy-preserving publication under continual data schemes.
The main elements summarized in Table~\ref{tab:related} allow us to make some interesting observations, both on each category individually and more generally.

In the Statistical Data category, all of the works deal with data linkage attacks, while some more recent works take possible data correlations into consideration as well.
We notice that data linkage is currently assumed in the literature to be the worst-case scenario.
For this reason, works in the Statistical Data category provide a robust privacy protection solution independent of the adversaries' knowledge.
The prevailing distortion method in this category is probabilistic perturbation.
This is justified by the fact that nearly all of the observed methods are based on differential privacy.
The majority implement the Laplace mechanism, while some of them offer an adaptive approach.

In the Microdata category, we observe that problems with sequential data, i.e.,~data that are generated in a sequence and depend on the values in previous data sets, are more prominent.
It is important to note that works on this set of problems actually followed similar scenarios, i.e.,~publishing updated versions of an original data set, either vertically (schema-wise) or horizontally (tuple-wise).
Naturally, in such cases the most evident attack scenarios are the complementary release ones, as in each release there is a high probability of an intersection of tuples with previous releases.
On the other hand, when the problem involves stream data/processing, we observe that these data are location specific, most commonly trajectories.
In such cases, the attacks considered are broader (than only versions of an original data set), taking into account external information, e.g.,~correlations that are typically available for location-specific data.

Speaking of correlations, in either category, we may see that the protection method used is mainly probabilistic, if not total suppression.
This makes sense, since generalization would not cancel the correlation between attributes.
Generalization is naturally used in group-based techniques, to make it possible to group more tuples under the generated categories --- and thus achieve anonymization.

As far as the protection levels are concerned, the Microdata category mainly targets event-level protection, as all users are protected equally through the performed grouping.
Still, scenarios that contain trajectories associated with a certain user aim to protect this user's privacy by blurring the actual trajectories (user-level protection).
$w$-event-level protection is absent from the Microdata category; one reason may be that streaming scenarios are not prominent in this category, and another, practical, reason may be that this notion was introduced more recently.
Indeed, none of the works in the Microdata category explicitly mention the level of privacy, as these levels have been introduced in differential privacy scenarios, hence in Statistical Data.
Considering all the use cases from both categories, event-level protection is more prominent, as it is more practical to protect all the users as a single set than each one individually in continual settings.

As already discussed, problems with streaming processing are not common in the Microdata category.
Indeed, most of the cases including streaming scenarios are in the Statistical Data category.
A technical reason behind this observation is that anonymizing a raw data set as a whole may be a time-consuming process and, thus, not well suited for streaming.
The complexity actually depends on the number of attributes, if we consider the possible combinations that may be enumerated.
On the contrary, aggregation functions as used in the Statistical Data category, especially in the absence of filters, are usually low cost.
Moreover, perturbing a numerical value (the usual result of an aggregation function) does not add much to the complexity of the algorithm (depending, of course, on the perturbation model used).
For this reason, perturbing the result of a process is more time efficient than anonymizing the data set and then running the process on the anonymized data.
Still, we may argue that an anonymized data set can be more widely used; in the case of statistical data, it is only the data holder that performs the processes and releases the results.
\section{Microdata}
\label{sec:microdata}

As observed in Table~\ref{tab:related}, privacy preserving algorithms for microdata rely on $k$-anonymity, or derivatives of it. Ganta et al.~\cite{ganta2008composition} revealed that $k$-anonymity methods are vulnerable to \emph{composition attacks}. Consequently, these attacks drew the attention of researchers, who proposed various algorithms based on $k$-anonymity, each introducing a different dimension to the problem, for instance that previous releases are known to the publisher, or that the quasi-identifiers can be formed by combining attributes from different releases. Note, however, that only one of the following works (Li et al.~\cite{li2016hybrid}) assumes, in its attack model, \emph{independently} anonymized data sets that may not be known to the publisher, making it more general than the rest.
\subsection{Data streams}
% M-invariance: towards privacy preserving re-publication of dynamic data sets

\hypertarget{xiao2007m}{Xiao et al.}~\cite{xiao2007m} consider the case when a data set is (re)published at different snapshots in an update (tuple delete, insert) manner. More precisely, they address anonymization in dynamic environments by implementing $m$-\emph{invariance}. In a simple $k$-anonymization (or $l$-diversity) scenario, the privacy of an individual that exists in two updates can be compromised by the intersection of the sets of sensitive values. In contrast, an individual who exists in a series of $m$-invariant releases is always associated with the same set of $m$ different sensitive values. To enable the publishing of $m$-invariant data sets, artificial tuples called \emph{counterfeits} may be added to a release. To minimize the noise added to the data sets, the authors provide an algorithm with two extra desiderata: minimize the counterfeits and the quasi-identifiers' generalization level. Still, the choice of adding tuples with specific sensitive values disturbs the value distribution, with a direct effect on any relevant statistical analysis.
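To make the invariant concrete, the following sketch (ours, not from~\cite{xiao2007m}; all names are hypothetical) checks whether a sequence of already-published releases satisfies the $m$-invariance condition for the individuals appearing in more than one of them.
\begin{verbatim}
# Minimal sketch: verify m-invariance across a sequence of releases. Each
# release maps an individual id to the "signature" of its anonymized group,
# i.e., the set of sensitive values published for that group.
def is_m_invariant(releases, m):
    """releases: list of dicts {individual_id: frozenset(sensitive_values)}."""
    signature = {}  # first signature observed for each individual
    for release in releases:
        for person, sig in release.items():
            if len(sig) < m:                # every group needs m distinct values
                return False
            if person not in signature:
                signature[person] = sig
            elif signature[person] != sig:  # the signature must stay identical
                return False
    return True

# 'u1' keeps the same 2-value signature in both releases, so the check passes.
r1 = {"u1": frozenset({"flu", "cold"}), "u2": frozenset({"hiv", "none"})}
r2 = {"u1": frozenset({"flu", "cold"}), "u3": frozenset({"hiv", "none"})}
print(is_m_invariant([r1, r2], m=2))  # True
\end{verbatim}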
% Preventing equivalence attacks in updated, anonymized data

In the same update setting (insert/delete), \hypertarget{he2011preventing}{He et al.}~\cite{he2011preventing} introduce another kind of attack, namely the \emph{equivalence} attack, not taken into account by the aforementioned $m$-invariance technique. The equivalence attack allows sets of individuals (of size $e<m$) to be associated with sets of sensitive values, across different snapshots, with a probability higher than the one promised by $m$-invariance. For example, through tuple deletions, we may infer that two individuals share the exact same sensitive value (and thus may be considered equivalent). In order for a sequence of releases to be private, they have to be both $m$-invariant and $e$-equivalent ($e\leq m$). Subsequently, the authors propose an algorithm incorporating $m$-invariance and based on the graph optimization \emph{min cut} problem, for publishing $e$-equivalent data sets. The proposed method can achieve better levels of privacy, in comparable time and with quality comparable to $m$-invariance.
% A hybrid approach to prevent composition attacks for independent data releases

\hypertarget{li2016hybrid}{Li et al.}~\cite{li2016hybrid} identified a common characteristic in most of the privacy techniques: when anonymizing a data set, all previous releases are known to the data owner. It is probable, however, that the releases are independent of each other, and that the data owner is unaware of these releases when anonymizing the data set. In such a setting, the previous techniques would suffer from composition attacks. The authors define this kind of adversary and propose a hybrid model for data anonymization. More precisely, the adversary knows that an individual exists in two different data sets and has access to their anonymized versions, but the anonymization is done independently (i.e.,~without knowledge of the other data set) for each data set. The key idea in fighting a composition attack is to increase the probability that the matches among tuples from the two data sets are random, linking different rather than the same individuals. To do so, the proposed anonymization applies three preprocessing steps before a traditional $k$-anonymity or $l$-diversity anonymization algorithm. First, the data set is sampled so as to blur the knowledge of the existence of individuals. Then, especially in small data sets, quasi-identifiers are perturbed by noise addition, before the classical generalization step. In addition to the quasi-identifiers, the sensitive values are also generalized in the case of sparse data. The danger of composition attacks is less prominent when using this method on top of $k$-anonymity rather than $k$-anonymity alone, while the quality of the results remains comparable. Moreover, the quality results are shown to be substantially better than those obtained by the use of $\varepsilon$-differential privacy. This is a good attempt at independently anonymizing a data release multiple times; however, the scenario is restricted to releases over the same database schema, using the same perturbation and generalization functions.
% Continuous privacy preserving publishing of data streams

\hypertarget{zhou2009continuous}{Zhou et al.}~\cite{zhou2009continuous} introduce the problem of continuous private data publication in \emph{streams}, and propose a randomized solution based on $k$-anonymity. In their definition, they state that a private stream consists of publishing equivalence classes of size larger than or equal to $k$, containing generalized tuples from distinct persons (or identifiers in general). To create the equivalence classes, they set several desiderata. Apart from the size of a class, which should be larger than or equal to $k$, the information loss incurred by the generalization should be low, and the delay in forming and publishing the class should be low as well. To achieve these, they built a randomized model using the popular structure of $R$-trees, extended to accommodate data density distribution information. In this way, they achieve a better quality for the released private data: on the one hand, formed classes contain data items that are close to each other (in dense areas), while on the other hand, classes with tuples of sparse areas are released as soon as possible so that the delay remains low. This work has a special focus on publishing good quality private data. Still, it does not consider attacks where background knowledge exists, nor does it measure the achieved privacy level (other than requiring the size of a released class to be larger than or equal to $k$, as in $k$-anonymity), as $\varepsilon$-differential privacy does.
% Maskit: Privately releasing user context streams for personalized mobile applications

\hypertarget{gotz2012maskit}{Gotz et al.}~\cite{gotz2012maskit} developed \emph{MaskIt}, a system that interfaces the sensors of a personal device, identifies various sets of \emph{contexts}, and releases a stream of privacy preserving contexts to untrusted applications installed on the device. A context is defined as the circumstances that form the setting for an event, e.g.,~`at the office', `running', etc. The users have to define the sensitive contexts that they wish to protect and the desired level of privacy. The system models the users' various contexts and the transitions between them. Temporal correlations are captured using Markov chains by taking into account historical observations. After the initialization, \emph{MaskIt} filters a stream of user contexts by checking for each context whether it can be released or needs to be suppressed. More specifically, a system $A$ preserves \emph{$\delta$-privacy} against an adversary if for all possible inputs $\overrightarrow{x}$ sampled from the Markov chain $M$ with non-zero probability (i.e.,~$\Pr[\overrightarrow{x}] > 0$), for all possible outputs $\overrightarrow{o}$ ($\Pr[A(\overrightarrow{x}) = \overrightarrow{o}] > 0$), for all times $t$ and all sensitive contexts $s\in S$, it satisfies the condition $\Pr[X_t = s|\overrightarrow{o}] - \Pr[X_t = s] \leq \delta$. After filtering all the elements of a given stream, an output sequence for a single day is released. The process can be repeated to publish longer context streams. The utility of the system is measured as the expected number of released contexts. Letting the users define the privacy settings requires that they have a certain level of relevant knowledge, which is not usually the case in real life. Additionally, suppressing data can sometimes disclose more information than releasing them, e.g.,~releasing multiple data points around a `sensitive' area (and not inside it) will eventually disclose the protected area.
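The $\delta$-privacy condition above can be read as a posterior-versus-prior test. The sketch below is ours (not the actual \emph{MaskIt} filter); the prior and posterior beliefs are assumed to be computed elsewhere, e.g.,~from the user's Markov chain model.
\begin{verbatim}
# Suppress a candidate release if, for some sensitive context s and time t, the
# adversary's posterior belief Pr[X_t = s | output] would exceed the prior
# Pr[X_t = s] by more than delta.
def safe_to_release(prior, posterior, sensitive, delta):
    """prior, posterior: dicts {(t, context): probability}; sensitive: set."""
    for (t, ctx), post in posterior.items():
        if ctx in sensitive and post - prior.get((t, ctx), 0.0) > delta:
            return False
    return True

prior     = {(9, "office"): 0.6, (9, "hospital"): 0.1}
posterior = {(9, "office"): 0.7, (9, "hospital"): 0.3}  # belief after the release
print(safe_to_release(prior, posterior, sensitive={"hospital"}, delta=0.1))
# False: the context must be suppressed
\end{verbatim}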
% PLP: Protecting location privacy against correlation analyze Attack in crowdsensing

\hypertarget{ma2017plp}{Ma et al.}~\cite{ma2017plp} propose \emph{PLP}, a crowdsensing scheme that protects location privacy against adversaries that can extract spatiotemporal correlations---modeled with conditional random fields (CRFs)---from crowdsensing data. The users' context (location, sensing data) stream is filtered while long-range dependencies among locations and reported sensing data are taken into account. Sensing data are suppressed at all sensitive locations, while data at insensitive locations are reported with a certain probability defined by observing the corresponding CRF model. On the one hand, the privacy of the reported data is estimated by the difference $\delta$ between the probability that a user would be at a specific location given supplementary information versus the same probability without the extra information. On the other hand, the utility of the method depends on the total amount of reported data (more is better). An estimation algorithm searches for the optimal strategy that maximizes utility while preserving a predefined privacy threshold. Although this approach allows users to define their desired privacy prerequisites, it cannot guarantee optimal protection.
\subsection{Sequential data}
% Anonymizing sequential releases

\hypertarget{wang2006anonymizing}{Wang and Fung}~\cite{wang2006anonymizing} address the problem of anonymously releasing different projections of the same data set in subsequent timestamps. More precisely, the authors want to protect individual information that could be revealed from \emph{joining} various releases of the same data set. To do so, instead of locating the quasi-identifiers in a single release, the authors suggest that the identifiers may span the current and all previous releases of the (projections of the) data set. Then, the proposed method uses the join of the different releases on the common identifying attributes. The goal is to generalize the identifying attributes of the current release, given that previous releases are immutable. The generalization is performed in a top-down manner, meaning that the attributes are initially over-generalized, and step by step are specialized until they reach the point where predefined quality and privacy requirements are met. The privacy requirement is the so-called $(X,Y)$-privacy for a threshold $k$, meaning that the identifying attributes in $X$ are linked with at most $k$ sensitive values in $Y$ in the join of the previously released and current tables. The quality requirements can be tuned within the framework, and three alternatives are proposed: the reduction of the class entropy~\cite{quinlan2014c4,shannon2001mathematical}, the notion of distortion, and the discernibility~\cite{bayardo2005data}. The authors propose an algorithm for the release of a table $T1$ in the presence of a previous table $T2$, which takes into account the scalability and performance problems that a join between those two may entail. Still, when many previous releases exist, the complexity would remain high.
% Privacy by diversity in sequential releases of databases

\hypertarget{Shmueli}{Shmueli and Tassa}~\cite{shmueli2015privacy} identified the computational inefficiency of anonymously releasing a data set while taking previous ones into account, in scenarios of sequential publication. In more detail, they consider the case when, at subsequent times, projections over different subsets of attributes of a table are published, and they provide an extension for attribute addition. Their algorithm can compute $l$-diverse anonymized releases (over different subsets of attributes) in parallel, by generating $l-1$ so-called \emph{fake} worlds. A fake world is generated from the base table by randomly permuting non-identifier and sensitive values among the tuples, in such a way that minimal information loss (quality desideratum) is incurred. This is achieved partially by verifying that the permutation is done among tuples with similar quasi-identifiers. Then, the algorithm creates buckets of tuples with at least $l$ different sensitive values, in which the quasi-identifiers will then be generalized in order to achieve $l$-diversity (privacy protection desideratum). The generalization step is also conducted in an information-loss-efficient way. All different releases will be $l$-diverse, because they are created assuming the same possible worlds, with which they are consistent. Tuple/attribute deletion is briefly discussed and left as an open question. The paper is contrasted with a previous work~\cite{shmueli2012limiting} of the same authors, claiming that the new approach considers a stronger adversary (the adversary knows all individuals with their quasi-identifiers in the database, and not only one), and that the computation is much more efficient, as it does not have an exponential complexity w.r.t.~the number of previous publications.
% Differentially private trajectory data publication

\hypertarget{chen2011differentially}{Chen et al.}~\cite{chen2011differentially} propose a non-interactive data-dependent sanitization algorithm to generate a differentially private release of trajectory data. First, a noisy \emph{prefix tree}, i.e.,~an ordered search tree data structure used to store an associative array, is constructed. Each node represents a possible location of a trajectory---a valid location from a set of locations that any user can be present in---and contains a perturbed count---the number of persons at the current location---with noise drawn from a Laplace distribution. The privacy budget is equally allocated to each level of the tree. At each level, and for every node, children nodes with a non-zero number of trajectories are identified as \emph{non-empty} by observing noisy counts, so as to continue expanding them. All children nodes are associated with disjoint subsets and thus, the parallel composition theorem of differential privacy can be applied; therefore, the whole budget allocated to a level can be used for each of its nodes. An empty node is detected by injecting Laplace noise into its corresponding count and checking if it is less than a preset threshold $\theta=\frac{2\sqrt{2}}{\varepsilon / h}$, where $\varepsilon$ is the available privacy budget and $h$ is the height of the tree. To generate the sanitized database, it is necessary to traverse the prefix tree once in post-order. At each node, the number of terminated trajectories is calculated and corresponding copies of prefixes are sent to the output. During this process, some consistency constraints are taken into account to avoid erroneous trajectories due to the previously added noise. Namely, for any root-to-leaf path $p$ and $\forall v_i \in p$, $|tr(v_i)| \leq |tr(v_{i+1})|$, where $v_i$ is a child of $v_{i+1}$, and for each node $v$, $|tr(v)| \geq \sum_{u \in children(v)} |tr(u)|$. Increasing the privacy budget results in a lower average relative error, because less noise is added at each level. By increasing the height of the tree, the relative error initially decreases, as more information is retained from the database. However, after a certain threshold, the increase in height can result in less available privacy budget at each level and thus more relative error due to the increased perturbation.
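A minimal sketch of the node-expansion test follows (ours, not the authors' implementation): each level receives $\varepsilon/h$ of the budget, so a node's count is perturbed with Laplace noise of scale $h/\varepsilon$, and a child is treated as empty when its noisy count falls below $\theta = 2\sqrt{2}\,h/\varepsilon$.
\begin{verbatim}
import math
import numpy as np

def noisy_count(true_count, epsilon, height):
    # Laplace noise of scale height/epsilon: the per-level budget is epsilon/height.
    return true_count + np.random.laplace(loc=0.0, scale=height / epsilon)

def expand_child(true_count, epsilon, height):
    """Return (noisy count, keep); keep=False means the child is treated as empty."""
    theta = 2 * math.sqrt(2) / (epsilon / height)   # threshold from the paper
    c = noisy_count(true_count, epsilon, height)
    return c, c >= theta

# With epsilon = 1 and height 5, theta ~= 14.1: sparse locations are pruned early.
print(expand_child(true_count=3, epsilon=1.0, height=5))
print(expand_child(true_count=40, epsilon=1.0, height=5))
\end{verbatim}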
% Publishing trajectories with differential privacy guarantees

\hypertarget{jiang2013publishing}{Jiang et al.}~\cite{jiang2013publishing} focus on ship trajectories with known starting and terminal points. More specifically, they study several different noise addition mechanisms for publishing trajectories with differential privacy guarantees. These mechanisms include adding \emph{global} noise to the trajectory, adding noise to each location \emph{point} of the trajectory by sampling a noisy radius from an exponential distribution, and adding noise drawn from a Laplace distribution to each \emph{coordinate} of every location point. Upon comparison of these different techniques, the latter offers a better privacy guarantee and a smaller error bound, but the resulting trajectory is noticeably distorted, raising doubts about its practicality. A \emph{Sampling Distance and Direction (SDD)} mechanism is proposed to tackle the limited practicality that comes with the addition of Laplace noise to the trajectory coordinates. It enables the publishing of the optimal next possible trajectory point by sampling a suitable distance and direction at the current position, taking into account the ship's maximum speed constraint. The SDD mechanism outperforms the other mechanisms and can maintain good utility with very high probability, even while offering strong privacy guarantees.
% Anonymity for continuous data publishing

\hypertarget{fung2008anonymity}{Fung et al.}~\cite{fung2008anonymity} introduce the problem of privately releasing continuous \emph{incremental} data sets. The invariant of this kind of release is that at every timestamp $T_i$, the records previously released at a timestamp $T_j$, where $j<i$, are released again together with a set of new records. The authors first focus on two consecutive releases and describe three classes of possible attacks. They name these attacks \emph{correspondence} attacks because they rely on the principle that all tuples from data set $D1$ correspond to a tuple in the subsequent data set $D2$. Naturally, the opposite does not hold, as tuples with a timestamp $T_2$ do not exist in $D1$. Assuming that the attacker knows the quasi-identifiers and the timestamp of the record of a person, they define the \emph{backward}, \emph{cross}, and \emph{forward} (\emph{BCF}) attacks. They show that combining two individually $k$-anonymized subsequent releases using one of the aforementioned attacks can lead to `cracking' some of the records in the set of $k$ candidate tuples, rendering the privacy level lower than $k$. Besides detecting cases where $BCF$ anonymity is compromised between two releases, the authors also provide an anonymization algorithm for a release $R2$ in the presence of a private release $R1$. The algorithm starts from the most generalized state possible for the quasi-identifiers of the records in $D2$. Step by step, it checks which combinations of specializations of the attributes do not violate $BCF$ anonymity and outputs the most specialized version of the data set possible. The authors discuss how the framework extends to multiple releases and to different kinds of privacy methods (other than $k$-anonymization). It is worth noting that, in order to maintain a certain quality for a release, it is essential that the delta between subsequent releases is large enough; otherwise, the needed generalization level may destroy the utility of the data set.
% Protecting Locations with Differential Privacy under Temporal Correlations

\hypertarget{xiao2015protecting}{Xiao et al.}~\cite{xiao2015protecting} propose another privacy definition based on differential privacy that accounts for temporal correlations in geo-tagged data. Location changes between two consecutive timestamps are determined by temporal correlations modeled through a Markov chain. A \emph{$\delta$-location} set includes all the probable locations where a user might appear, excluding locations of low probability. Therefore, the true location is hidden in the resulting set, in which any pair of locations is indistinguishable, and thus the user is protected. The lower the value of $\delta$, the more locations are included and, hence, the higher the level of privacy that is achieved. The \emph{Planar Isotropic Mechanism (PIM)} is used as a perturbation mechanism to add noise to the released locations. It is proved that $l_1$-norm sensitivity fails to capture the exact sensitivity, i.e.,~the difference between any two query answers from two instances in neighboring databases, in a multidimensional space. For this reason, the \emph{sensitivity hull}, a notion independent of the context of location privacy, is utilized instead. In~\cite{xiao2017loclok}, they demonstrate the functionality of their system \emph{LocLok}, which implements the concept of the $\delta$-location set. In spite of taking into account temporal correlations for identifying the next possible locations of a user, the proposed definition does not evaluate the corresponding privacy leakage.
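A minimal sketch of how a $\delta$-location set can be formed from the Markov-chain prior at a timestamp (our illustration, with hypothetical names): locations are kept in decreasing order of probability until the excluded probability mass is at most $\delta$.
\begin{verbatim}
def delta_location_set(probabilities, delta):
    """probabilities: dict {location: prior probability at this timestamp}.
    Keep the most probable locations until their total mass reaches 1 - delta;
    the remaining (low-probability) locations are excluded."""
    kept, mass = set(), 0.0
    for loc, p in sorted(probabilities.items(), key=lambda kv: -kv[1]):
        kept.add(loc)
        mass += p
        if mass >= 1.0 - delta:
            break
    return kept

prior = {"home": 0.55, "office": 0.30, "park": 0.10, "hospital": 0.05}
print(delta_location_set(prior, delta=0.10))  # home, office, park (in some order)
\end{verbatim}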
% An adaptive geo-indistinguishability mechanism for continuous LBS queries

\hypertarget{al2018adaptive}{Al-Dhubhani et al.}~\cite{al2018adaptive} propose an adaptive privacy preserving technique that deals with correlation analysis attacks by adjusting the amount of noise required to obfuscate users' locations, based on their correlation level with the previously released (obfuscated) locations. Their technique is based on \emph{geo-indistinguishability}~\cite{andres2013geo}, an adaptation of differential privacy for location data, which adds controlled random noise, drawn from a bivariate Laplace distribution (\emph{Planar Laplace}), to users' locations. The system architecture considered involves only the users and the queried service providers, excluding any third-party entities. Noise is added according to the adversary's estimated ability to predict a user's position, evaluated by a regression algorithm that exploits previous location releases over a certain prediction window. That is, in areas with strongly correlated locations, where an adversary can predict the current value with a lower estimation error, more noise is added to the released locations; the opposite holds for locations with weaker correlations. Adapting the amount of injected noise depending on the data correlation level might lead to a better performance, in terms of both privacy and utility, in the short term. However, altering the amount of injected noise at each timestamp without taking into account the previously released data can lead to arbitrary privacy and utility loss in the long term. Applying a filtering algorithm to the perturbed data points prior to their release can effectively deal with any possible data discrepancy.
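For reference, planar Laplace noise can be drawn with the standard polar-coordinate sampling of~\cite{andres2013geo}; the sketch below is a generic illustration of that step, not the adaptive mechanism of~\cite{al2018adaptive}.
\begin{verbatim}
import math
import numpy as np
from scipy.special import lambertw

def planar_laplace_noise(epsilon):
    """Sample a 2-D offset from the planar Laplace distribution with parameter epsilon."""
    theta = np.random.uniform(0.0, 2.0 * math.pi)   # direction: uniform angle
    p = np.random.uniform(0.0, 1.0)                 # radius via the inverse CDF
    r = -(1.0 / epsilon) * (lambertw((p - 1.0) / math.e, k=-1).real + 1.0)
    return r * math.cos(theta), r * math.sin(theta)

def obfuscate(x, y, epsilon):
    dx, dy = planar_laplace_noise(epsilon)
    return x + dx, y + dy

# A smaller epsilon (stronger privacy) yields a larger expected displacement; an
# adaptive scheme would tune epsilon per release according to the correlation level.
print(obfuscate(41.11, 25.41, epsilon=0.5))
\end{verbatim}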
% Preventing velocity-based linkage attacks in location-aware applications

\hypertarget{ghinita2009preventing}{Ghinita et al.}~\cite{ghinita2009preventing} tackle attacks on location privacy that arise from the linkage of maximum user velocity with cloaked regions, due to adversarial background knowledge, when using Location-Based Services. The proposed methods prevent the disclosure of the exact user location coordinates and bound the association probability to a certain user-defined threshold related to user-sensitive features, e.g.,~religious beliefs, health condition, etc., linked to corresponding locations, e.g.,~church, hospital, etc. The first method, referred to as \emph{temporal cloaking}, is achieved via either \emph{deferral} or \emph{postdating}. The former is applied by delaying the disclosure of a cloaked region that is `too far' from the previously reported region, i.e.,~impossible to have been reached based on the known maximum user speed. The latter reports the nearest previous cloaked region; since it is near the actual region, the corresponding results are highly likely to be relevant. A request is usually postdated when the user-specified threshold is exceeded; otherwise, the nearest candidate region is selected and is deferred or postdated depending on the outcome of the comparison. The second method, \emph{spatial cloaking}, results in the creation of cloaked regions by first taking into account all the user-specified features relevant to the specific location (\emph{filtering of features}) and then enlarging the area of the region to satisfy the privacy requirements (\emph{cloaking}). Finally, the region is deferred until it includes the current timestamp (\emph{safety enforcement}), similarly to temporal cloaking. The final QoS, under the privacy protection offered by the present methods, is measured in terms of the \emph{cloaked region size}, \emph{time and space error}, and \emph{failure ratio}. The cloaked region size is taken into consideration since larger regions may decrease the usability of the retrieved information. Time and space error is possible due to delayed location reporting and cloaked regions built around past locations that do not include the current one. Finally, the failure ratio is calculated by measuring the dropped requests in cases where the specified privacy requirements cannot be satisfied. Considering the cloak granularity as the only privacy metric proves inadequate, since it can be easily compromised in cases of low user presence around the sensitive area.
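The core of the velocity-based check can be illustrated with a small sketch (ours, heavily simplified: cloaked regions are reduced to centre points); a newly cloaked region is disclosed only if it could have been reached from the previously reported one at the known maximum speed, and is deferred otherwise.
\begin{verbatim}
import math

def reachable(prev_xy, curr_xy, dt, v_max):
    """True if the current position could have been reached from the previous
    one within dt time units at maximum speed v_max."""
    return math.dist(prev_xy, curr_xy) <= v_max * dt

def disclose_or_defer(prev_xy, curr_xy, dt, v_max):
    # Deferral: wait until enough time has passed for the move to be plausible.
    if reachable(prev_xy, curr_xy, dt, v_max):
        return "disclose", 0.0
    extra_wait = math.dist(prev_xy, curr_xy) / v_max - dt
    return "defer", extra_wait

print(disclose_or_defer((0, 0), (10, 0), dt=2, v_max=3))  # defer ~1.33 more time units
\end{verbatim}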
\subsection{Time series}
% Time distortion anonymization for the publication of mobility data with high utility

\hypertarget{primault2015time}{Primault et al.}~\cite{primault2015time} proposed \emph{Promesse}, an algorithm that builds on time distortion instead of location distortion to ensure \emph{user-level} privacy when releasing trajectories. \emph{Promesse} takes as input a user's mobility trace, comprising a data set of pairs of geolocations and timestamps, and a parameter \emph{$\varepsilon$}, i.e.,~the privacy budget. Initially, regularly spaced locations are extracted, and each one of them is interpolated at a distance depending on the previous location and the value of $\varepsilon$. Then, the first and last locations of the mobility trace are removed, and uniformly distributed timestamps are assigned to the remaining locations of the trajectory. In this way, the resulting trace has a smooth speed and, therefore, \emph{points of interest (POIs)}, i.e.,~places where the user stayed for a longer time, e.g.,~home, work, etc., are indistinguishable to adversaries. The present algorithm works better with fine-grained data sets, because in this way it can achieve optimal geolocation and timestamp pairing. Furthermore, it can only be used offline, rendering it unsuitable for most real-life application scenarios.
\section{Statistical data}
\label{sec:statistical}
When continuously publishing statistical data, usually in the form of counts, the most widely used privacy method is differential privacy, or derivatives of it, as witnessed in Table~\ref{tab:related}. We now continue by reviewing the works in this category.
\subsection{Data streams}
% Private and continual release of statistics

\hypertarget{chan2011private}{Chan et al.}~\cite{chan2011private} designed a continual counting mechanism satisfying $\varepsilon$-differential privacy with poly-log error. A binary tree is constructed, where each node contains a noisy sum of the counts in its subtree. It can be used for continual top-$k$ queries in recommendation systems and for multidimensional range queries. The mechanism provides guarantees for indefinite runtime without a priori knowledge of an upper temporal bound. It can preserve differential privacy (\emph{pan privacy}) under single or multiple unannounced \emph{intrusions}, i.e.,~snapshots of the mechanism's internal states, by adding a certain amount of noise to each active counter in memory, without incurring any loss in the asymptotic guarantees. The output of the mechanism at every timestamp is a \emph{consistent} approximate integer count, i.e.,~at each time step it increases by either 0 or 1. This makes the mechanism computationally inefficient and not easily applicable in real-life scenarios.
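The following sketch illustrates the binary-tree idea behind such continual counting (a simplified, finite-horizon variant written by us, not the authors' mechanism): every dyadic interval gets one noisy partial sum, and the running count at time $t$ is assembled from the noisy partial sums indicated by the binary representation of $t$.
\begin{verbatim}
import math
import numpy as np

def binary_mechanism(stream, epsilon):
    """Noisy running counts of a 0/1 stream; each item touches at most
    floor(log2(T)) + 1 dyadic partial sums, so each sum gets that share of epsilon."""
    T = len(stream)
    eps_node = epsilon / (math.floor(math.log2(T)) + 1)
    alpha, noisy = {}, {}            # exact and noisy dyadic partial sums, by level
    outputs = []
    for t in range(1, T + 1):
        i = (t & -t).bit_length() - 1                   # level of the new partial sum
        alpha[i] = sum(alpha.get(j, 0) for j in range(i)) + stream[t - 1]
        for j in range(i):                              # lower levels are absorbed
            alpha.pop(j, None)
            noisy.pop(j, None)
        noisy[i] = alpha[i] + np.random.laplace(0.0, 1.0 / eps_node)
        # running count = sum of noisy partial sums for the set bits of t
        outputs.append(sum(noisy[j] for j in range(t.bit_length()) if (t >> j) & 1))
    return outputs

print(binary_mechanism([1, 0, 1, 1, 0, 1, 1, 0], epsilon=1.0))
\end{verbatim}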
% Differentially private real-time data release over infinite trajectory streams

\hypertarget{cao2015differentially}{Cao et al.}~\cite{cao2015differentially} developed a framework that achieves \emph{$l$-trajectory} protection and enables personalized user privacy, while dynamically adding noise at each timestamp that exponentially fades over time. The user can specify, in an array of size $l$, the desired protection level for each location of his/her trajectory. The proposed framework is composed of three components. As its name indicates, the \emph{Dynamic Budget Allocation} component allocates portions of the privacy budget to the other two components: a fixed one to the \emph{Private Approximation} component, and a dynamic one to the \emph{Private Publishing} component at each timestamp. The \emph{Private Approximation} component estimates, under a utility goal and an approximation strategy, whether it is beneficial to publish approximate data or not. It chooses an appropriate previous noisy data release and republishes it, if it is similar to the real statistics planned to be published. The \emph{Private Publishing} component takes as inputs the real statistics and the timestamp of the approximate data, and releases noisy data using a differential privacy mechanism that adds Laplace noise. If the timestamp of the approximate data is equal to the current timestamp, then the current data with Laplace noise are published. Otherwise, the noisy data at the timestamp of the approximate data are republished. The utilized approximation technique is highly suitable for streaming processing and can significantly reduce the privacy budget consumption. However, the framework does not take into account privacy leakage stemming from data correlations, a fact that considerably limits its applicability in real life.
% Private decayed predicate sums on streams

\hypertarget{bolot2013private}{Bolot et al.}~\cite{bolot2013private} introduce the notion of \emph{decayed privacy} in the continual observation of aggregates (sums). The authors recognize the fact that monitoring applications focus more on recent events and data; therefore, the value of previous data releases exponentially fades. This leads to a schema of \emph{privacy with expiration}, according to which recent events and data are more privacy sensitive than those preceding them. Based on this, they apply \emph{decayed sum} functions for answering sliding window queries of fixed window size $w$ on data streams. Namely, (i) the \emph{window} sum, which can be reduced to computing the difference of two running sums, and (ii) the \emph{exponentially decayed} and (iii) \emph{polynomial decayed} sums, which estimate the sum of decayed data. For every consecutive $w$ data points, binary trees are generated, where each node is perturbed by injecting Laplace noise with scale proportional to $w$. Instead of maintaining a binary tree for every window, the windows that span two blocks are viewed as the union of a suffix and a prefix of two consecutive trees. The proposed techniques are designed for fixed window sizes; hence, the available privacy budget must be split for answering multiple sliding window queries with various window sizes.
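For intuition, a window sum over the last $w$ points reduces to a difference of two running sums, while a decayed sum simply reweights older points; the sketch below shows the plain (non-private) functions, without the noise machinery of~\cite{bolot2013private}.
\begin{verbatim}
def window_sum(xs, t, w):
    """Sum of the last w values up to time t, as a difference of two running sums."""
    running = lambda i: sum(xs[:i])
    return running(t) - running(max(t - w, 0))

def exp_decayed_sum(xs, t, alpha):
    """Exponentially decayed sum: recent points weigh more than older ones."""
    return sum(x * alpha ** (t - 1 - i) for i, x in enumerate(xs[:t]))

xs = [3, 1, 4, 1, 5, 9, 2, 6]
print(window_sum(xs, t=8, w=3))            # 9 + 2 + 6 = 17
print(exp_decayed_sum(xs, t=8, alpha=0.5))
\end{verbatim}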
% PrivApprox: privacy-preserving stream analytics

\hypertarget{quoc2017privapprox}{Le Quoc et al.}~\cite{quoc2017privapprox} propose \emph{PrivApprox}, a data analytics system for privacy-preserving stream processing of distributed data sets that combines sampling and randomized response. Analysts' queries are distributed to clients via an aggregator and proxies. A randomized response is transmitted by the clients, who sample the locally available data, to the aggregator via proxies that apply (XOR-based) encryption. The combination of sampling and randomized response achieves \emph{zero-knowledge} based privacy, i.e.,~proving knowledge of a piece of information without disclosing its actual value. The aggregator aggregates the received responses and returns statistics to the analysts. For numerical queries, responses are expressed as counts within histogram buckets, whereas for non-numeric queries, each bucket is specified by a matching rule or a regular expression. A confidence metric quantifies the approximation of the results caused by the sampling and randomization. The system employs sliding window computations over batched stream processing to handle the data stream generated by the clients. \emph{PrivApprox} achieves low latency stream processing and enables a synchronization-free distributed architecture that requires low trust in a central entity. However, the assumption that released data sets are independent is rarely true in real-life scenarios.
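The client-side randomization can be illustrated with plain one-bit randomized response (a generic sketch in the spirit of the scheme, not the \emph{PrivApprox} implementation; the sampling and XOR-based proxy encryption steps are omitted).
\begin{verbatim}
import random

def randomized_response(truth: bool, p: float) -> bool:
    """Report the true bit with probability p, otherwise a fair coin flip."""
    return truth if random.random() < p else random.random() < 0.5

def estimate_fraction(reports, p):
    """Unbias the aggregate: E[observed] = p * f + (1 - p) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p) * 0.5) / p

truths = [random.random() < 0.3 for _ in range(100_000)]    # true fraction ~0.3
reports = [randomized_response(t, p=0.5) for t in truths]   # what the aggregator sees
print(estimate_fraction(reports, p=0.5))                    # close to 0.3
\end{verbatim}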
% Hiding in the crowd: Privacy preservation on evolving streams through correlation tracking

\hypertarget{li2007hiding}{Li et al.}~\cite{li2007hiding} attempt to tackle the problem of privacy preservation in data streams by continuously tracking data correlations. Firstly, the authors define utility and privacy. The utility of a perturbed data stream is the inverse of the \emph{discrepancy} between the original and perturbed measurements. The discrepancy is set as the normalized \emph{Frobenius} norm, i.e.,~a matrix norm defined as the square root of the sum of the absolute squares of its elements. Privacy is the discrepancy between the original and the reconstructed data stream (from the perturbed one), and is composed of the removed noise and the error introduced by the reconstruction. Then, correlations come into play. The data streams are continuously monitored for new tuples and trends to track correlations, and the system dynamically adds noise accordingly. More specifically, the \emph{Streaming Correlated Additive Noise} (SCAN) module is used to update the estimation of the local principal components of the original data and proportionally distribute noise along the components. Thereafter, the \emph{Streaming Correlation Online Reconstruction} (SCOR) module removes all the noise by utilizing the best linear reconstruction. Overall, the present technique offers robustness against inference attacks by adapting randomization according to data trends, but fails to quantify the overall privacy guarantee.
% PeGaSus: Data-Adaptive Differentially Private Stream Processing

\hypertarget{chen2017pegasus}{Chen et al.}~\cite{chen2017pegasus} developed \emph{PeGaSus}, an algorithm for event-level differentially private stream processing that supports different categories of stream queries (counts, sliding windows, event monitoring) over multiple stream resolutions. It consists of a \emph{perturber}, a \emph{grouper}, and a \emph{smoother} module. The perturber consumes the incoming data stream, adds noise to each data item using part of the available privacy budget $\varepsilon$, and outputs a stream of noisy data. The data-adaptive grouper consumes the original stream and partitions the data into well-approximated regions, also using part of the available privacy budget. Finally, a query-specific smoother combines the independent information produced by the perturber and the grouper, and performs post-processing by calculating the final estimates of the perturber's values for each partition created by the grouper at each timestamp. The combination of the perturber and the grouper follows the sequential composition and post-processing properties of differential privacy; thus, the resulting algorithm satisfies $\varepsilon$-differential privacy with $\varepsilon_p + \varepsilon_g = \varepsilon$, where $\varepsilon_p$ is the privacy budget used by the perturber to add noise to the data and $\varepsilon_g$ is the corresponding budget used by the grouper to interfere with the user-defined deviation threshold. Nonetheless, the algorithm does not take into account past and/or future releases, thus failing to capture any related privacy leakage.
% Quantifying Differential Privacy under Temporal Correlations

\hypertarget{cao2017quantifying}{Cao et al.}~\cite{cao2017quantifying} propose a method for computing the \emph{temporal privacy leakage} of a differential privacy mechanism in the presence of temporal correlations and background knowledge. The goal of this work is to achieve event-level privacy protection and bound the privacy leakage at every single time point. The temporal privacy leakage is calculated as the sum of the \emph{backward} and \emph{forward privacy leakage} minus the privacy leakage of the mechanism, because the latter is counted twice in the two aforementioned quantities. The backward privacy leakage at any time depends on the backward privacy leakage at the previous time point, the temporal correlations, and the traditional privacy leakage of the privacy mechanism. The forward privacy leakage is calculated recursively, i.e.,~for every new time point, it is re-calculated for all the previous time points, therefore increasing the privacy loss in the past. Intuitively, stronger correlations result in higher privacy leakage. However, the leakage is smaller when the dimension of the transition matrix (modeling the correlations) is larger, due to the fact that larger transition matrices tend to be uniform, resulting in weaker correlations.
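Using the terms of the description above, and assuming the underlying mechanism is $\varepsilon$-differentially private at each timestamp, the quantity computed at time $t$ can be written as
\[
\mathit{TPL}(t) = \mathit{BPL}(t) + \mathit{FPL}(t) - \varepsilon ,
\]
where $\mathit{BPL}(t)$ and $\mathit{FPL}(t)$ denote the backward and forward privacy leakage at $t$, and $\varepsilon$ is subtracted because the mechanism's own leakage is included in both terms.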
% Differentially private event sequences over infinite streams

\hypertarget{kellaris2014differentially}{Kellaris et al.}~\cite{kellaris2014differentially} defined $w$-event privacy in the setting of the periodical release of statistics (counts) over infinite streams. To achieve $w$-event privacy, the authors propose two mechanisms based on sliding windows, which effectively distribute the privacy budget to sub-mechanisms (one sub-mechanism per timestamp) applied on the data of a window of the stream. Both algorithms may decide to publish or not a new noisy count for a specific timestamp, based on the similarity level of the current count with a previously published one. Moreover, both algorithms have the constraint that the total privacy budget consumed in a window is less than or equal to $\varepsilon$. However, the first algorithm (Budget Distribution, BD) distributes the privacy budget in an exponentially fading manner, following the assumption that in a window most of the counts remain similar. The budget of expired timestamps becomes available for the next publications (of the next windows). On the contrary, the second algorithm (Budget Absorption, BA) uniformly distributes the budget to the window's timestamps from the beginning. A publication uses not only the by-default allocated budget but also the budget of non-published timestamps. In order not to exceed the limit of $\varepsilon$, an adequate number of subsequent timestamps are `silenced'.
%Both algorithms are applicable to real life scenarios including traffic and website visit data.
Even though one could argue that $w$-event privacy can be achieved by user-level privacy, the latter is nevertheless impractical because of the rigidity of the budget allocation, which would finally render the output useless.
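A heavily simplified sketch of the sliding-window budgeting idea follows (ours, not the actual BD/BA algorithms of~\cite{kellaris2014differentially}; in particular, the similarity test is treated as free here, whereas a real mechanism must also account for its privacy cost). Each timestamp either republishes the last noisy count, spending no budget, or publishes a fresh noisy count, so that the budget spent inside any window of $w$ timestamps never exceeds $\varepsilon$.
\begin{verbatim}
import numpy as np

def w_event_counts(stream, w, epsilon, threshold=5.0):
    """stream: true counts per timestamp. Each publication spends epsilon / w,
    so any w consecutive timestamps spend at most epsilon in total."""
    eps_pub = epsilon / w
    last_published = 0.0
    out, spent = [], []
    for c in stream:
        noisy = c + np.random.laplace(0, 1.0 / eps_pub)
        if abs(noisy - last_published) < threshold:
            out.append(last_published)      # approximate: republish, spend nothing
            spent.append(0.0)
        else:
            last_published = noisy          # publish a fresh noisy count
            out.append(noisy)
            spent.append(eps_pub)
        assert sum(spent[-w:]) <= epsilon + 1e-9   # the w-event constraint
    return out

print(w_event_counts([10, 11, 12, 40, 41, 42, 90], w=3, epsilon=1.0))
\end{verbatim}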
% RescueDP: Real-time spatio-temporal crowd-sourced data publishing with differential privacy

\hypertarget{wang2016rescuedp}{Wang et al.}~\cite{wang2016rescuedp} work on the publication of real-time spatiotemporal user-generated data, utilizing differential privacy with a $w$-event guarantee. Initially, \emph{RescueDP} performs dynamic \emph{grouping} of regions with small statistics according to the data trends. Then, each group passes through a \emph{perturbation} module that injects Laplace noise. Due to the grouping of the previous phase, the error introduced by perturbation on small statistics can be eliminated, increasing the utility of the resulting statistics. A \emph{budget allocation} module distributes the available privacy budget to sampling points within any successive $w$ timestamps, using an adaptive \emph{sampling} module that adjusts according to the data dynamics. Non-sampled data are approximated with previously perturbed data, saving part of the available privacy budget. Finally, a \emph{Kalman filtering} module is used to improve the accuracy of the published data.
\subsection{Sequential data}
% Practical differential privacy via grouping and smoothing

\hypertarget{kellaris2013practical}{Kellaris et al.}~\cite{kellaris2013practical} pointed out that in time series, where users might contribute to an arbitrary number of aggregates, the sensitivity of the query answering function is significantly influenced by their presence/absence in the data set. Thus, the \emph{Laplace perturbation algorithm}, commonly used with differential privacy, may produce meaningless data sets. Furthermore, under such settings, the discrete Fourier transformation of the \emph{Fourier perturbation algorithm} may behave erratically and affect the utility of the outcome of the mechanism. Hence, the authors proposed a method involving \emph{grouping} and \emph{smoothing} for the one-time publishing of time series of \emph{non-overlapping} counts, i.e.,~where each individual contributes to one count at a time. Grouping consists of separating the data set into similar clusters; the size and the similarity of the clusters are data dependent. Random grouping consumes less privacy budget, as there is minimal interaction with the original data. However, when using a grouping technique based on sampling, which has some privacy cost but produces better groups, the smoothing perturbation is decreased. During the smoothing phase, the average values for each cluster are calculated and, finally, Laplace noise is added. This way, the query sensitivity becomes less dependent on each individual's data and, therefore, less perturbation is required.
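A rough sketch of the grouping-and-smoothing idea (ours, with a naive consecutive grouping step instead of similarity-based clustering; the noise scale is a placeholder assumption, not the calibration derived in~\cite{kellaris2013practical}):
\begin{verbatim}
import numpy as np

def group_and_smooth(counts, group_size, epsilon, noise_scale=1.0):
    """Split the series into consecutive groups, replace each group by its
    average, and perturb the averages with Laplace noise."""
    released = []
    for start in range(0, len(counts), group_size):
        group = counts[start:start + group_size]
        avg = sum(group) / len(group)
        noisy_avg = avg + np.random.laplace(0.0, noise_scale / epsilon)
        released.extend([noisy_avg] * len(group))  # every member gets the smoothed value
    return released

counts = [12, 14, 13, 40, 42, 41, 8, 9]
print(group_and_smooth(counts, group_size=3, epsilon=1.0))
\end{verbatim}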
% Differentially private sequential data publication via variable-length n-grams

\hypertarget{chen2012differentially}{Chen et al.}~\cite{chen2012differentially} exploit a text-processing technique, the \emph{n-gram} model, i.e.,~a contiguous sequence of $n$ items from a given data sample, to retain information of a sequential data set without releasing the noisy counts of all possible sequences. Using this model allows publishing the most common $n$-grams ($n$ is typically smaller than 5) to accurately reconstruct the original data set. Privacy is enhanced by the fact that the universe of all grams with a shorter $n$ value is relatively small, resulting in more common sequences. Furthermore, utility is improved by the fact that, for small values of $n$, the corresponding counts are large enough to deal with the noise injection and the inherent Markov assumption in the $n$-gram model. Variable-length $n$-grams are released with certain thresholds for the values of counts and tree heights, making it possible to deal with the trade-off that shorter grams carry less information than longer ones, but have less relative error. Grams are grouped based on the similarity of their $n$ values, constructing a search tree. The process goes on until reaching the desired maximum $n$ value. Grams with smaller noisy counts have a larger relative error and thus lower utility. Instead of allocating the available privacy budget based on the overall maximum height of the tree, each path is adaptively estimated based on known noisy counts. To further improve the final utility, consistency constraints are used, i.e.,~the sum of the children's noisy counts has to be less than or equal to their parent's noisy count, and the noisy counts of leaf nodes should be within a set threshold. The technique is proposed for count query and frequent sequential pattern mining scenarios.
% Differentially private publication of general time-serial trajectory data

\hypertarget{hua2015differentially}{Hua et al.}~\cite{hua2015differentially} tackle the problem of trajectories containing a small number of $n$-grams, thus sharing few or even no identical prefixes. They propose a differentially private location generalization algorithm (exponential mechanism) for trajectory publishing, where each position in the trajectory is one record. The algorithm probabilistically partitions the locations at each timestamp with regard to their Euclidean distance from each other. Each partition is replaced by its centroid and, therefore, locations belonging to closer trajectories are grouped together, resulting in better utility. The algorithm is optimized for time efficiency by using classic $k$-means clustering. Then, the algorithm releases the new trajectories over the generalized location partitions, along with their counts perturbed with noise drawn from a Laplace distribution. The process continues until the total count of the published trajectories reaches the size of the original data set. If the users' moving speed is taken into account, the total number of possible trajectories can be limited. The authors have measured the utility of distorted spatiotemporal range queries by measuring the Hausdorff distance from the original results, and concluded that the utility deterioration is within reasonable boundaries considering the offered privacy guarantees.
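A compact sketch of the generalization step (ours; it uses an off-the-shelf $k$-means and plain Laplace counts with an assumed unit sensitivity, and omits the exponential-mechanism partitioning and speed constraints of~\cite{hua2015differentially}):
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def generalize_locations(points, k, epsilon):
    """Cluster the locations reported at one timestamp, replace each location by
    its cluster centroid, and release noisy per-centroid counts."""
    points = np.asarray(points, dtype=float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)
    released = []
    for c in range(k):
        members = points[km.labels_ == c]
        noisy_count = len(members) + np.random.laplace(0.0, 1.0 / epsilon)
        released.append((tuple(km.cluster_centers_[c]), noisy_count))
    return released

locations = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (5.1, 5.0)]
print(generalize_locations(locations, k=2, epsilon=1.0))
\end{verbatim}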
% Achieving differential privacy of trajectory data publishing in participatory sensing

\hypertarget{li2017achieving}{Li et al.}~\cite{li2017achieving} focus on publishing a set of trajectories where, contrary to~\cite{hua2015differentially}, each one is considered as a single entry in the data set. First, the original locations are partitioned by using $k$-means clustering based on their pairwise Euclidean distances. Each location partition is represented by its mean (centroid). A larger number of partitions translates into fewer locations in each partition and, thus, a smaller trajectory precision loss. Before adding noise to the trajectory counts, the original size of the database is approximated by randomly matching the generalized trajectories with the original ones. Then, by using a set of consistency constraints, bounded Laplace noise is generated and added to the count of each trajectory. Finally, the generalized trajectories, as well as their noisy counts, are released. Although this technique considerably reduces the trajectory merging time, the assumption that all trajectories in the data set are recorded at the same time points does not usually apply in real-life use cases.
\subsection{Time series}
% Privacy-utility trade-off under continual observation

\hypertarget{erdogdu2015privacy}{Erdogdu et al.}~\cite{erdogdu2015privacy} consider the scenario where users generate, at every timestamp, samples from a time series that is correlated with their sensitive data. Data that the users have chosen and are willing to privately share with a service provider are distorted according to a \emph{privacy mapping}, i.e.,~a stochastic process, and then samples are selected for release. A \emph{distortion metric} quantifies the discrepancy of the distorted data from the original. The authors investigate both a simple attack setting, where the adversary can make static assumptions based only on the observations so far that cannot be later altered, and a more complex one, where assumptions are affected dynamically by past and future data releases. In both cases, the information leakage at a time point is quantified by a \emph{privacy metric} that measures the improvement of the adversarial inference after observing the data released at that particular point. The goal of the privacy mapping is to find a balance between the distortion and privacy metrics, i.e.,~achieving maximum released data utility while preserving privacy. Throughout the process, both batch and streaming processing schemas are considered. In order to decrease the complexity of streaming processing, the authors propose the utilization of hidden Markov models (HMMs) for data dependency modeling. The assumption that users are privacy-conscious, and the fact that typical smart-meter system data include only the total power usage, can drastically limit the applicability of the described technique. Last but not least, there is no proof that the proposed technique is composable.
% Bayesian Differential Privacy on Correlated Data

\hypertarget{yang2015bayesian}{Yang et al.}~\cite{yang2015bayesian} show that privacy can be poorer against an adversary who has the least prior knowledge. Correlations may sometimes be negative and, thus, the weakest adversary may not correspond to the largest privacy leakage. When data are correlated according to a Gaussian correlation model, the adversary with the least prior knowledge poses the highest risk of information leakage. This is because the expected variation of the query results is enhanced by the unknown tuples and the correlations with respect to different values of the private individual. Different adversaries might have different correlation structures, since they could collect information from different sources. Therefore, it is necessary to consider the privacy of correlated data under arbitrary adversaries. To address this necessity, the authors extend the definition of differential privacy in a Bayesian way, and propose a new \emph{Pufferfish} privacy definition, called \emph{Bayesian differential privacy}, to express the level of private information leakage. Additionally, they design a general perturbation algorithm that guarantees privacy, taking into account prior knowledge of any subset of tuples in the data, when the data are correlated. Data correlations are transformed into a weighted network with an arbitrary topology, where the correlation strength is translated into a weight value. The larger the value of the weight, the more likely it is for two tuples to be close, and thus correlated. These networks are described by a Gaussian Markov random field. A Gaussian correlation model is used to accurately describe the structure of data correlations and to analyze the Bayesian differential privacy of the perturbation algorithm on the basis of this model. This model is extended to a more general one by adding a prior distribution to each tuple, so that it forms a Gaussian joint distribution over all tuples. The uncertain query answer is connected with the given tuples in a Bayesian way. The perturbation mechanism calculates the potential leakage for the strongest adversaries and applies noise proportional to the maximum privacy leakage coefficient. On the downside, the proposed solution is not suitable for applications that require online processing for real-time statistics.
% Pufferfish Privacy Mechanisms for Correlated Data

\hypertarget{song2017pufferfish}{Song et al.}~\cite{song2017pufferfish} propose the \emph{Wasserstein mechanism}, a technique that can apply to any general instantiation of \emph{Pufferfish}. It adds noise proportional to the \emph{sensitivity} of a query $F$, which depends on the worst-case distance between the distributions $P(F(X)|s_i,d)$ and $P(F(X)|s_j,d)$ for a variable $X$, a pair of secrets $(s_i,s_j)$, and an evolution scenario $d$. The worst-case distance between those two distributions is calculated by the \emph{Wasserstein metric}. The noise is drawn from a Laplace distribution with scale equal to the maximum Wasserstein distance over all pairs of secrets, divided by the available privacy budget $\varepsilon$. For optimization purposes, the authors consider a more restricted setting, where the data correlations, represented by the evolution scenario $d$, are modeled using \emph{Bayesian networks}. Dependencies are calculated by the \emph{Markov quilt mechanism}, a generalization of the \emph{Markov blanket mechanism}, where the dependent nodes of any node consist of its parents, its children, and the other parents of its children. The present technique excels at data sets generated by monitoring applications or networks; however, it fails to apply in online settings.
% Differentially private multi-dimensional time series release for traffic monitoring

\hypertarget{fan2013differentially}{Fan et al.}~\cite{fan2013differentially} propose a real-time framework for releasing differentially private multi-dimensional traffic monitoring data. Data at every timestamp are injected with noise, drawn from a Laplace distribution, by the \emph{Perturbation} module. The perturbed data are post-processed by the \emph{Estimation} module to produce a more accurate released version. Domain knowledge, e.g.,~the road network and density, is utilized by the \emph{Modeling/Aggregation} module in two ways. On the one hand, an internal time series model is estimated for each location to improve the utility of the perturbation's outcome, by performing a posterior estimation that utilizes \emph{Gaussian} approximation and \emph{Kalman} filtering. On the other hand, data sparsity is reduced by grouping neighboring locations based on a \emph{Quadtree}. All modules interact bidirectionally with each other. Although data correlations between timestamps are taken into account to improve the released data utility, the corresponding privacy leakage is not calculated. Furthermore, the adoption of sampling during the data processing could further improve the budget allocation procedure.
% CTS-DP: publishing correlated time-series data via differential privacy

\hypertarget{wang2017cts}{Wang et al.}~\cite{wang2017cts} defined \emph{CTS-DP}, a correlated time-series data publication method based on differential privacy, by enforcing \emph{Series-Indistinguishability} and implementing a \emph{correlated Laplace mechanism (CLM)}. \emph{CTS-DP} deals with the shortcomings of \emph{independent and identically distributed (IID) noise}. In the presence of correlations, IID noise offers inadequate protection, since one can remove it by applying refinement methods, e.g.,~filtering. Therefore, more noise must be introduced to make up for the amount of noise that can be removed, thus diminishing data utility. First, \emph{Series-Indistinguishability} is defined, which requires the statistical characteristics of the original and noise series to be indistinguishable; based on this definition, the autocorrelation function of the noise series is derived. Second, a CLM uses four Gaussian white noise series passed through a linear system to produce a correlated Laplace noise series according to their autocorrelation function. However, the privacy leakage stemming from data correlations is not estimated.
% An Adaptive Approach to Real-Time Aggregate Monitoring With Differential Privacy

\hypertarget{fan2014adaptive}{Fan et al.} propose FAST~\cite{fan2014adaptive}, an adaptive system that allows the release of real-time aggregate time series under user-level differential privacy. This is achieved by using a \emph{sampling}, a \emph{perturbation}, and a \emph{filtering} module. The sampling module samples, at an adaptive rate, the aggregates to be perturbed. The perturbation module adds noise to each sampled point according to the allocated privacy budget. The filtering module receives the perturbed point and the original one, and generates a posterior estimate, which is finally released. The error between the perturbed and the released (posterior estimate) point is used to adapt the sampling rate: the sampling frequency is increased when the data go through rapid changes, and vice versa. Thus, depending on the adjusted sampling rate, not every single data point is perturbed, saving in this way part of the available privacy budget. Although temporal correlations of the processed time series are considered, the corresponding privacy leakage is not calculated.
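A toy version of the adaptive-sampling loop follows (ours, with a crude error-feedback rule and a trivial estimator in place of FAST's filtering step; all names and constants are illustrative, and the per-sample budget is assumed to be pre-allocated from the total).
\begin{verbatim}
import numpy as np

def adaptive_release(series, eps_per_sample, init_interval=4, error_threshold=5.0):
    """Perturb only the sampled points; between samples, re-release the last
    estimate. The sampling interval shrinks when the observed error is large
    and grows while the series is stable."""
    interval, next_sample = init_interval, 0
    estimate, released = 0.0, []
    for t, x in enumerate(series):
        if t == next_sample:
            noisy = x + np.random.laplace(0.0, 1.0 / eps_per_sample)  # budget spent here only
            error = abs(noisy - estimate)
            estimate = noisy                                          # trivial posterior estimate
            interval = max(1, interval // 2) if error > error_threshold else interval + 1
            next_sample = t + interval
        released.append(estimate)
    return released

print(adaptive_release([10, 11, 12, 30, 31, 32, 33, 60, 61, 62], eps_per_sample=0.1))
\end{verbatim}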