text: Minor corrections

This commit is contained in:
Manos Katsomallos 2021-10-14 14:30:48 +02:00
parent 7ea2c964b3
commit 8b0464fdfa
2 changed files with 2 additions and 2 deletions

@@ -230,7 +230,7 @@ The calculation of FPL (Equation~\ref{eq:fpl-2}) becomes:
The authors propose solutions to bound the temporal privacy loss, in the presence of weak to moderate correlation, in both finite and infinite data publishing scenarios.
In the latter case, they try to find a value for $\varepsilon$ for which the backward and forward privacy loss are equal.
-In the former, they similarly try to balance the backward and forward privacy loss while they allocate more $\varepsilon$ at the first and last timestamps, since they have higher impact to the privacy loss of the next and previous ones.
+In the former, they similarly try to balance the backward and forward privacy loss while they allocate more $\varepsilon$ at the first and last timestamps, since they have higher impact on the privacy loss of the next and previous ones.
This way they achieve an overall constant temporal privacy loss throughout the time series.
According to the technique's intuition, stronger correlations result in higher privacy loss.

@@ -433,7 +433,7 @@ This calculation is done for each individual that is included in the original data set.
The backward/forward privacy loss at any time point depends on the backward/forward privacy loss at the previous/next instance, the backward/forward temporal correlations, and $\varepsilon$.
The authors propose solutions to bound the temporal privacy loss, in the presence of weak to moderate correlations, in both finite and infinite data publishing scenarios.
In the latter case, they try to find a value for $\varepsilon$ for which the backward and forward privacy loss are equal.
-In the former, they similarly try to balance the backward and forward privacy loss while they allocate more $\varepsilon$ at the first and last time points, since they have higher impact to the privacy loss of the next and previous ones.
+In the former, they similarly try to balance the backward and forward privacy loss while they allocate more $\varepsilon$ at the first and last time points, since they have higher impact on the privacy loss of the next and previous ones.
This way they achieve an overall constant temporal privacy loss throughout the time series.
According to the technique's intuition, stronger correlations result in higher privacy loss.
However, the loss is smaller when the dimension of the transition matrix, which is extracted from the correlation model (here, a Markov chain), is larger; larger transition matrices tend to be closer to uniform, which implies weaker data dependence.
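The link between transition-matrix skew and data dependence can be illustrated with a toy sketch. The `dependence_loss` function below is a hypothetical, simplified measure (the largest log-ratio between any two rows' probabilities of reaching the same next state), not the authors' actual backward/forward privacy loss computation, which involves an optimization over distributions; it only shows that a uniform transition matrix contributes no extra loss, while a skewed one does.

```python
import math

def dependence_loss(P):
    """Toy per-step dependence measure for a row-stochastic matrix P:
    the largest log-ratio between any two rows' probabilities of
    reaching the same next state. A uniform matrix gives 0
    (the next state reveals nothing about the previous one)."""
    n = len(P)
    worst = 1.0
    for k in range(n):  # for each possible next state k
        col = [P[i][k] for i in range(n)]
        worst = max(worst, max(col) / min(col))
    return math.log(worst)

# Strongly skewed 2-state chain: the previous state almost
# determines the next one, i.e. strong temporal correlation.
skewed = [[0.9, 0.1],
          [0.1, 0.9]]

# Uniform 4-state chain: next state is independent of the
# previous one, i.e. no data dependence.
uniform = [[0.25] * 4 for _ in range(4)]

print(dependence_loss(skewed))   # log(9) ≈ 2.197
print(dependence_loss(uniform))  # 0.0
```

Under this simplified measure, the skewed chain incurs a strictly positive extra loss while the uniform chain incurs none, matching the intuition that matrices closer to uniform weaken the dependence an adversary can exploit.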