The right to be forgotten: Towards Machine Learning on perturbed knowledge bases

Bernd Malle, Peter Kieseberg, Edgar Weippl, Andreas Holzinger

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Today’s increasingly complex information infrastructures form the basis of the data-driven industries that are rapidly becoming the 21st century’s economic backbone. The sensitivity of those infrastructures to disturbances in their knowledge bases is therefore of crucial interest to companies, organizations, customers and regulating bodies. This holds true for the direct provisioning of such information in critical applications like clinical settings or the energy industry, but also for the additional insights, predictions and personalized services enabled by the automatic processing of those data. In light of the new EU Data Protection regulations applying from 2018 onwards, which give customers the right to have their data deleted on request, information-processing bodies will have to react to these changing jurisdictional (and therefore economic) conditions. Their choices include a re-design of their data infrastructure as well as preventive actions like anonymization of databases by default. Insights into the effects of perturbed/anonymized knowledge bases on the quality of machine learning results are therefore a crucial basis for successfully facing those future challenges. In this paper we introduce a series of experiments in which we applied four different classifiers to an established dataset as well as to several distorted versions of it, and we present our initial results.
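The experimental setup the abstract describes — training classifiers on a dataset and on perturbed/anonymized copies of it, then comparing result quality — can be sketched as follows. This is a minimal illustrative stand-in, not the paper’s own code: the toy two-cluster dataset, the grid-rounding generalization used as a stand-in for anonymization, and the nearest-centroid classifier are all assumptions for the sake of a self-contained example.

```python
# Hypothetical sketch: compare classifier accuracy on an original
# vs. a perturbed (coarsened) knowledge base. All components here
# are illustrative stand-ins for the paper's dataset and classifiers.
import random

random.seed(0)

def make_data(n=400):
    # two noisy clusters as a toy "knowledge base"
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(label * 3.0, 1.0)
        y = random.gauss(label * 3.0, 1.0)
        data.append(((x, y), label))
    return data

def generalize(data, step=4.0):
    # crude anonymization stand-in: round attributes to a coarse grid
    return [((round(x / step) * step, round(y / step) * step), c)
            for (x, y), c in data]

def centroid_fit(train):
    # nearest-centroid classifier: one mean point per class
    sums, counts = {}, {}
    for (x, y), c in train:
        sx, sy = sums.get(c, (0.0, 0.0))
        sums[c] = (sx + x, sy + y)
        counts[c] = counts.get(c, 0) + 1
    return {c: (sx / counts[c], sy / counts[c])
            for c, (sx, sy) in sums.items()}

def accuracy(model, test):
    def predict(p):
        return min(model, key=lambda c: (p[0] - model[c][0]) ** 2
                                        + (p[1] - model[c][1]) ** 2)
    return sum(predict(p) == c for p, c in test) / len(test)

data = make_data()
train, test = data[:300], data[300:]
acc_orig = accuracy(centroid_fit(train), test)
acc_pert = accuracy(centroid_fit(generalize(train)), test)
print(f"accuracy on original data:  {acc_orig:.2f}")
print(f"accuracy on perturbed data: {acc_pert:.2f}")
```

The interesting measurement is the gap between the two accuracies as the perturbation (here the `step` size) grows — the kind of utility-vs-privacy trade-off the paper’s experiments quantify for real classifiers and datasets.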
Language: English
Title of host publication: Springer Lecture Notes in Computer Science LNCS 9817
Publisher: Springer International
Pages: 251-266
Number of pages: 16
ISBN (Electronic): 978-3-319-45507-5
ISBN (Print): 978-3-319-45506-8
Status: Published - 3 Sep 2016
Event: Privacy Aware Machine Learning (PAML) for health data science - CD-ARES 2016, Salzburg, Austria
Duration: 31 Aug 2016 - 3 Sep 2016

Keywords

  • Machine Learning
  • Health Informatics
  • Privacy-Aware Machine Learning

ASJC Scopus subject areas

  • Artificial Intelligence

Fields of Expertise

  • Information, Communication & Computing

Treatment code (Nähere Zuordnung)

  • Basic - Fundamental (Grundlagenforschung)

Cite this

Malle, B., Kieseberg, P., Weippl, E., & Holzinger, A. (2016). The right to be forgotten: Towards Machine Learning on perturbed knowledge bases. In Springer Lecture Notes in Computer Science LNCS 9817 (pp. 251-266). Springer International.
