Merge branch 'master' of git.delkappa.com:manos/the-last-thing
code/lib/gdp.py | 1185
File diff suppressed because it is too large
@@ -7,4 +7,4 @@ The {\thething} selection module introduces a reasonable data utility decline to
 % \kat{it would be nice to see it clearly on Figure 5.5. (eg, by including another bar that shows adaptive without landmark selection)}
 % \mk{Done.}
 In terms of temporal correlation, we observe that under moderate and strong temporal correlation, a greater average regular--{\thething} event distance in a {\thething} distribution causes greater overall privacy loss.
-Finally, the contribution of the {\thething} privacy on enhancing the data utility, while preserving $\epsilon$-differential privacy, is demonstrated by the fact that the selected Adaptive scheme provides better data utility than the user-level privacy protection.
+Finally, the contribution of the {\thething} privacy on enhancing the data utility, while preserving $\varepsilon$-differential privacy, is demonstrated by the fact that the selected Adaptive scheme provides better data utility than the user-level privacy protection.
@@ -22,7 +22,7 @@ Take for example the scenario in Figure~\ref{fig:st-cont}, where {\thethings} ar
 If we want to protect the {\thething} points, we have to allocate at most a budget of $\varepsilon$ to the {\thethings}, while saving some for the release of regular events.
 Essentially, the more budget we allocate to an event the less we protect it, but at the same time we maintain its utility.
 With {\thething} privacy we propose to distribute the budget taking into account only the existence of the {\thethings} when we release an event of the time series, i.e.,~allocating $\frac{\varepsilon}{5}$ ($4\ \text{\thethings} + 1\ \text{regular point}$) to each event (see Figure~\ref{fig:st-cont}).
-This way, we still guarantee\footnote{$\epsilon$-differential privacy guarantees that the allocated budget should be less or equal to $\epsilon$, and not precisely how much.\kat{Mano check.}} that the {\thethings} are adequately protected, as they receive a total budget of $\frac{4\varepsilon}{5}<\varepsilon$.
+This way, we still guarantee\footnote{$\varepsilon$-differential privacy guarantees that the allocated budget should be less or equal to $\varepsilon$, and not precisely how much.\kat{Mano check.}} that the {\thethings} are adequately protected, as they receive a total budget of $\frac{4\varepsilon}{5}<\varepsilon$.
 At the same time, we avoid over-perturbing the regular events, as we allocate to them a higher total budget ($\frac{4\varepsilon}{5}$) compared to the user-level scenario ($\frac{\varepsilon}{2}$), and thus less noise.
 
 
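The $\frac{\varepsilon}{5}$ arithmetic in this hunk generalizes to any number of landmarks: each release receives $\varepsilon / (|L| + 1)$, so the landmarks jointly spend $|L| \cdot \varepsilon / (|L| + 1) < \varepsilon$. A minimal Python sketch of this allocation follows; the function and argument names are illustrative assumptions, not code taken from code/lib/gdp.py.

# Sketch of the uniform budget allocation described in the hunk above.
# `allocate_budget` and its arguments are hypothetical names, not
# functions from code/lib/gdp.py.

def allocate_budget(epsilon, num_landmarks):
    """Split epsilon uniformly over the landmarks plus one regular event."""
    per_event = epsilon / (num_landmarks + 1)
    landmarks_total = num_landmarks * per_event  # stays strictly below epsilon
    return per_event, landmarks_total

per_event, landmarks_total = allocate_budget(epsilon=1.0, num_landmarks=4)
print(per_event)        # 0.2, i.e., epsilon/5 per event, as in the example
print(landmarks_total)  # 0.8, i.e., 4*epsilon/5 < epsilon for the landmarks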
@@ -77,7 +77,7 @@ Intuitively, knowing the data set at timestamp $t$ stops the propagation of the
 %\kat{do we see this in the formula 1 ?}
 %when calculating the forward or backward privacy loss respectively.
 
-Cao et al.~\cite{cao2017quantifying} propose a method for computing the total temporal privacy loss $\alpha_t$ at a timestamp $t$ as the sum of the backward and forward privacy loss, $\alpha^B_t$ and $\alpha^F_t$, minus the privacy budget $\varepsilon_t$
+Cao et al.~\cite{cao2017quantifying} propose a method for computing the temporal privacy loss $\alpha_t$ at a timestamp $t$ as the sum of the backward and forward privacy loss, $\alpha^B_t$ and $\alpha^F_t$, minus the privacy budget $\varepsilon_t$
 to account for the extra privacy loss due to previous and next releases $\pmb{o}$ of $\mathcal{M}$ under temporal correlation.
 By Theorem~\ref{theor:thething-prv}, at every timestamp $t$ we consider the data at $t$ and at the {\thething} timestamps $L$.
 %According to the Definitions~{\ref{def:bpl} and \ref{def:fpl}}, we calculate the backward and forward privacy loss by taking into account the privacy budget at previous and next data releases respectively.
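The quantity in this hunk is a plain sum-and-subtract, $\alpha_t = \alpha^B_t + \alpha^F_t - \varepsilon_t$, with $\varepsilon_t$ removed once since, per the sentence above, it would otherwise be counted in both the backward and the forward term. A minimal sketch under that reading; the function name is a hypothetical, not the authors' code.

# Sketch of the temporal privacy loss of Cao et al.:
# alpha_t = alpha_B_t + alpha_F_t - eps_t.
# eps_t is subtracted once because both directional losses include the
# budget spent at t itself. `temporal_privacy_loss` is an illustrative name.

def temporal_privacy_loss(alpha_backward, alpha_forward, eps_t):
    """Total privacy loss at timestamp t under temporal correlation."""
    return alpha_backward + alpha_forward - eps_t

# Example: a budget of 0.2 at t, inflated to 0.35 in each direction by the
# correlation, yields a total loss of 0.35 + 0.35 - 0.2 = 0.5.
print(temporal_privacy_loss(0.35, 0.35, 0.2))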