Privacy Preserving Outlier Detection through Random Nonlinear Data Distortion
Consider a scenario in which a data owner holds private/sensitive data and wants a data miner to study important patterns in it without revealing the sensitive information. Privacy-preserving data mining addresses this problem by randomly transforming the data before releasing it to data miners. Previous work considered only linear data perturbations (additive, multiplicative, or a combination of both) when studying the usefulness of the perturbed output. In this paper, we discuss nonlinear data distortion using a potentially nonlinear random data transformation and show how it can be useful for privacy-preserving anomaly detection on sensitive datasets. We develop bounds on the expected accuracy of the nonlinear distortion and also quantify privacy using standard definitions. A highlight of this approach is that it allows a user to control the amount of privacy by varying the degree of nonlinearity. We show how our general transformation can be used for anomaly detection in practice on two specific problem instances: a linear model and a popular nonlinear model based on the sigmoid function. We also analyze the proposed nonlinear transformation in full generality and then show that it is distance preserving in specific cases. A main contribution of this paper is the discussion of the relationship between the invertibility of a transformation and privacy preservation, and the application of these techniques to outlier detection. Experiments on real-life datasets demonstrate the effectiveness of the approach.
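The abstract does not spell out the transformation, but the general idea (a secret random linear map followed by a sigmoid nonlinearity, after which outliers are detected in the distorted space) can be sketched as below. The specific functions `distort` and `outlier_scores`, the projection matrix, and the `alpha` nonlinearity parameter are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of privacy-preserving outlier detection via random
# nonlinear distortion. Assumptions (not from the abstract): the distortion
# is a secret random linear projection followed by an element-wise sigmoid,
# and outliers are scored by mean distance to all other points.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def distort(data, rng, alpha=0.5):
    """Distort n x d data with a secret random linear map followed by an
    element-wise sigmoid; alpha controls the degree of nonlinearity
    (small alpha keeps the sigmoid near its linear regime)."""
    n, d = data.shape
    R = rng.normal(size=(d, d))  # random projection, kept secret by the owner
    return sigmoid(alpha * data @ R)

def outlier_scores(data):
    """Simple distance-based outlier score: mean Euclidean distance
    from each point to every other point."""
    diff = data[:, None, :] - data[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    return dists.mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[0] += 8.0  # plant one obvious outlier at index 0

Y = distort(X, np.random.default_rng(1))       # what the miner receives
orig_top = int(np.argmax(outlier_scores(X)))   # index 0 by construction
dist_top = int(np.argmax(outlier_scores(Y)))   # often still index 0
print("top outlier (original):", orig_top)
print("top outlier (distorted):", dist_top)
```

The miner sees only `Y`, whose values are confined to (0, 1); with a mild nonlinearity the distorted space tends to preserve enough distance structure for the planted outlier to remain detectable, which is the trade-off the paper's accuracy bounds and privacy measures aim to quantify.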
Complete Metadata
| Field | Value |
|---|---|
| bureauCode | [ "026:00" ] |
| identifier | DASHLINK_260 |
| issued | 2010-11-17 |
| landingPage | https://c3.nasa.gov/dashlink/resources/260/ |
| programCode | [ "026:029" ] |