Materializing a data cube using Hadoop MapReduce and Apache Spark
This repository is part of the GSoC 2016 project "High performance data cube materialization using Apache Spark" at the Stony Brook University Department of Biomedical Informatics.
A sample data cube has been materialized using both Hadoop MapReduce and Apache Spark.
The Hadoop implementation can be found here.
The Spark implementation can be found here.
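To illustrate what both implementations compute, here is a minimal, hypothetical sketch of data cube materialization in plain Python: for each fact row, the measure is aggregated into every cuboid, i.e. every subset of the dimensions, with `*` standing for a rolled-up dimension. The fact table, dimension names, and function names below are illustrative assumptions, not the repository's actual code; the real implementations distribute this aggregation across MapReduce or Spark tasks.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical fact table: (region, product, quarter) dimensions plus a sales measure.
facts = [
    ("east", "pen", "Q1", 10),
    ("east", "pen", "Q2", 5),
    ("west", "ink", "Q1", 7),
]

def materialize_cube(rows):
    """Aggregate the measure over every subset of dimensions (all 2^d cuboids)."""
    cube = defaultdict(int)
    for *dims, measure in rows:
        d = len(dims)
        for k in range(d + 1):
            for kept in combinations(range(d), k):
                # '*' marks a dimension that is aggregated out in this cuboid.
                key = tuple(dims[i] if i in kept else "*" for i in range(d))
                cube[key] += measure
    return dict(cube)

cube = materialize_cube(facts)
print(cube[("*", "*", "*")])     # grand total: 22
print(cube[("east", "*", "*")])  # all sales in region=east: 15
```

In a MapReduce or Spark setting, the inner loop becomes the map/flatMap phase (emit one `(key, measure)` pair per cuboid per row) and the dictionary accumulation becomes the reduce/`reduceByKey` phase.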