Highly efficient spatial data filtering in parallel using the opensource library CPPPO

Federico Municchi, Christoph Goniva, Stefan Radl

Research output: Contribution to journal › Article › Research › peer-review

Abstract

CPPPO is a compilation of parallel data processing routines developed with the aim of creating a library for "scale bridging" (i.e. connecting different scales by means of closure models) in a multi-scale approach. CPPPO features a number of parallel filtering algorithms designed for use with structured and unstructured Eulerian meshes, as well as Lagrangian data sets. In addition, data can be processed on the fly, allowing the collection of relevant statistics without saving individual snapshots of the simulation state. Our library is provided with an interface to the widely-used CFD solver OpenFOAM®, and can be easily connected to any other software package via interface modules. Also, we introduce a novel, extremely efficient approach to parallel data filtering, and show that our algorithms scale super-linearly on multi-core clusters. Furthermore, we provide a guideline for choosing the optimal Eulerian cell selection algorithm depending on the number of CPU cores used. Finally, we demonstrate the accuracy and the parallel scalability of CPPPO in a showcase focusing on heat and mass transfer from a dense bed of particles. Program summary: Program title: CPPPO. Catalogue identifier: AFAQ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAQ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Lesser General Public License, version 3. No. of lines in distributed program, including test data, etc.: 1043965. No. of bytes in distributed program, including test data, etc.: 11053655. Distribution format: tar.gz. Programming language: C++, MPI, Octave. Computer: Linux-based clusters for HPC or workstations. Operating system: Linux-based. Classification: 4.14, 6.5, 12. External routines: Qt5, hdf5-1.8.15, jsonlab, OpenFOAM/CFDEM, Octave/Matlab.
Nature of problem: Development of closure models for momentum, species transport and heat transfer in fluid and fluid-particle systems using purely Eulerian or Euler-Lagrange simulators. Solution method: The CPPPO library contains routines to perform on-line (i.e., runtime) filtering and compute statistics on large parallel data sets. Running time: Performing a Favre averaging on a structured mesh of 128³ cells with a filter size of 64³ cells using one Intel Xeon(R) E5-2650 requires approximately 4 h of computation.
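The Favre (density-weighted) averaging named in the running-time benchmark can be sketched as follows. This is a minimal serial NumPy illustration of a cubic box filter on a periodic structured grid, not CPPPO's actual parallel implementation or API; the function name and the half_width parameter are hypothetical.

```python
import numpy as np

def favre_box_filter(rho, phi, half_width):
    """Density-weighted (Favre) box filter on a periodic structured grid.

    For every cell, averages rho*phi and rho over a cubic stencil of
    (2*half_width + 1)^3 neighbouring cells and returns their ratio:

        phi_tilde = <rho * phi> / <rho>
    """
    num = np.zeros_like(phi)  # accumulates <rho * phi>
    den = np.zeros_like(rho)  # accumulates <rho>
    # Sum shifted copies of the fields; np.roll wraps around,
    # which amounts to periodic boundary conditions.
    for dx in range(-half_width, half_width + 1):
        for dy in range(-half_width, half_width + 1):
            for dz in range(-half_width, half_width + 1):
                shift = (dx, dy, dz)
                num += np.roll(rho, shift, axis=(0, 1, 2)) * \
                       np.roll(phi, shift, axis=(0, 1, 2))
                den += np.roll(rho, shift, axis=(0, 1, 2))
    return num / den
```

For incompressible (uniform-density) data the Favre average reduces to a plain box average, and for a spatially constant field it returns that constant regardless of the density field, which makes both properties easy sanity checks. A production version, as described in the paper, would distribute the stencil sums across MPI ranks instead of looping serially.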

Original language: English
Pages (from-to): 400–414
Number of pages: 15
Journal: Computer Physics Communications
Volume: 207
DOI: 10.1016/j.cpc.2016.05.026
Publication status: Published - 26 May 2016


Cite this

Highly efficient spatial data filtering in parallel using the opensource library CPPPO. / Municchi, Federico; Goniva, Christoph; Radl, Stefan.

In: Computer physics communications, Vol. 207, 26.05.2016, p. 400–414.


@article{d605866f44ab49c0872f1b77c7a52e1b,
title = "Highly efficient spatial data filtering in parallel using the opensource library CPPPO",
author = "Federico Municchi and Christoph Goniva and Stefan Radl",
year = "2016",
month = "5",
day = "26",
doi = "10.1016/j.cpc.2016.05.026",
language = "English",
volume = "207",
pages = "400–414",
journal = "Computer physics communications",
issn = "0010-4655",
publisher = "Elsevier B.V.",

}

TY - JOUR

T1 - Highly efficient spatial data filtering in parallel using the opensource library CPPPO

AU - Municchi, Federico

AU - Goniva, Christoph

AU - Radl, Stefan

PY - 2016/5/26

Y1 - 2016/5/26


UR - http://www.scopus.com/inward/record.url?scp=84977636850&partnerID=8YFLogxK

U2 - 10.1016/j.cpc.2016.05.026

DO - 10.1016/j.cpc.2016.05.026

M3 - Article

VL - 207

SP - 400

EP - 414

JO - Computer physics communications

JF - Computer physics communications

SN - 0010-4655

ER -