PySpark and RDKit: Moving towards Big Data in Cheminformatics

Mario Lovric, Roman Kern, Jose Molero

Research output: Contribution to journal › Article › Research › peer-review

Abstract

The authors present an implementation of the cheminformatics toolkit RDKit in a distributed computing environment, Apache Hadoop. Together with the Apache Spark analytics engine, wrapped by PySpark, resources from commodity scalable hardware can be employed for cheminformatic calculations and query operations with basic knowledge of Python programming and an understanding of resilient distributed datasets (RDDs). Three use cases of cheminformatic computing in Spark on the Hadoop cluster are presented: querying substructures, calculating fingerprint similarity, and calculating molecular descriptors. The source code for the PySpark‐RDKit implementation is provided. The use cases showed that Spark scales reasonably well, depending on the use case, and can be a suitable choice for datasets too big to be processed on current low‐end workstations.
Original language: English
Article number: 1800082
Number of pages: 4
Journal: Molecular Informatics
Volume: 38
Issue number: 6
DOIs
Publication status: Published - 7 Mar 2019
