Minor corrections
parent 7ea2c964b3
commit 8b0464fdfa
@@ -230,7 +230,7 @@ The calculation of FPL (Equation~\ref{eq:fpl-2}) becomes:
 
 The authors propose solutions to bound the temporal privacy loss, under the presence of weak to moderate correlation, in both finite and infinite data publishing scenarios.
 In the latter case, they try to find a value for $\varepsilon$ for which the backward and forward privacy loss are equal.
-In the former, they similarly try to balance the backward and forward privacy loss while they allocate more $\varepsilon$ at the first and last timestamps, since they have higher impact to the privacy loss of the next and previous ones.
+In the former, they similarly try to balance the backward and forward privacy loss while they allocate more $\varepsilon$ at the first and last timestamps, since they have higher impact on the privacy loss of the next and previous ones.
 This way they achieve an overall constant temporal privacy loss throughout the time series.
 
 According to the technique's intuition, stronger correlation result in higher privacy loss.
@@ -433,7 +433,7 @@ This calculation is done for each individual that is included in the original da
 The backward/forward privacy loss at any time point depends on the backward/forward privacy loss at the previous/next instance, the backward/forward temporal correlations, and $\varepsilon$.
 The authors propose solutions to bound the temporal privacy loss, under the presence of weak to moderate correlations, in both finite and infinite data publishing scenarios.
 In the latter case, they try to find a value for $\varepsilon$ for which the backward and forward privacy loss are equal.
-In the former, they similarly try to balance the backward and forward privacy loss while they allocate more $\varepsilon$ at the first and last time points, since they have higher impact to the privacy loss of the next and previous ones.
+In the former, they similarly try to balance the backward and forward privacy loss while they allocate more $\varepsilon$ at the first and last time points, since they have higher impact on the privacy loss of the next and previous ones.
 This way they achieve an overall constant temporal privacy loss throughout the time series.
 According to the technique's intuition, stronger correlations result in higher privacy loss.
 However, the loss is smaller when the dimension of the transition matrix, which is extracted according to the modeling of the correlations (here it is Markov chain), is larger due to the fact that larger transition matrices tend to be uniform, resulting in weaker data dependence.
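The backward/forward recursion and the $\varepsilon$-allocation idea described in the hunks above can be sketched with a deliberately simplified model. This is not the paper's exact recursion (which optimizes over the Markov transition matrix): the scalars `q_b` and `q_f` are hypothetical stand-ins for the strength of the backward and forward temporal correlations, with `0` meaning no correlation, and the total loss at time `t` is taken as `BPL[t] + FPL[t] - eps[t]` so that `eps[t]` is counted once.

```python
def temporal_privacy_loss(eps, q_b, q_f):
    """Toy model of temporal privacy loss over a finite time series.

    eps  -- per-time-point privacy budgets epsilon_1..epsilon_n
    q_b  -- backward correlation strength in [0, 1] (0 = independent data)
    q_f  -- forward correlation strength in [0, 1]
    """
    n = len(eps)
    bpl = [0.0] * n  # backward privacy loss, depends on the previous time point
    fpl = [0.0] * n  # forward privacy loss, depends on the next time point
    for t in range(n):
        bpl[t] = eps[t] + (q_b * bpl[t - 1] if t > 0 else 0.0)
    for t in reversed(range(n)):
        fpl[t] = eps[t] + (q_f * fpl[t + 1] if t < n - 1 else 0.0)
    # Total temporal privacy loss at t; eps[t] appears in both BPL and FPL,
    # so subtract it once.
    tpl = [b + f - e for b, f, e in zip(bpl, fpl, eps)]
    return bpl, fpl, tpl


if __name__ == "__main__":
    # Uniform budget: under correlation, the total loss peaks mid-series.
    print(temporal_privacy_loss([1.0, 1.0, 1.0], 0.5, 0.5)[2])
    # Shifting budget toward the first and last time points (which influence
    # their neighbours most) flattens the total loss across the series.
    print(temporal_privacy_loss([1.0, 0.5, 1.0], 0.5, 0.5)[2])
```

Even in this toy model, stronger correlations (larger `q_b`, `q_f`) yield a strictly higher total loss, and giving more budget to the endpoints equalizes the loss over the time series, mirroring the allocation strategy the diff describes for the finite case.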