Awesome Spark
A curated list of awesome Apache Spark packages and resources.
Apache Spark is an open-source cluster-computing framework. Originally developed at the University of California, Berkeley's AMPLab, the Spark codebase was later donated to the Apache Software Foundation, which has maintained it since. Spark provides an interface for programming entire clusters with implicit data parallelism and fault-tolerance (Wikipedia 2017).
Users of Apache Spark can choose among the Python, R, Scala, and Java programming languages to interface with the Apache Spark APIs.
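To make this concrete, here is a minimal, hedged PySpark sketch of the DataFrame API; the input path and column names are hypothetical and used only for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a Spark session; Spark handles the data parallelism
# and fault tolerance behind this high-level API.
spark = SparkSession.builder.appName("awesome-spark-demo").getOrCreate()

# Hypothetical input file and columns, purely for illustration.
events = spark.read.json("events.json")
summary = (
    events.groupBy("user_id")
          .agg(F.count("*").alias("events"))
          .orderBy(F.desc("events"))
)
summary.show(10)

spark.stop()
```

Equivalent programs can be written with the Scala, Java, and R APIs.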
Packages
Language Bindings
- Kotlin for Apache Spark <img src="https://img.shields.io/github/last-commit/Kotlin/kotlin-spark-api.svg"> - Kotlin API bindings and extensions.
- Mobius <img src="https://img.shields.io/github/last-commit/Microsoft/Mobius.svg"> - C# bindings (Deprecated in favor of .NET for Apache Spark).
- .NET for Apache Spark <img src="https://img.shields.io/github/last-commit/dotnet/spark.svg"> - .NET bindings.
- sparklyr <img src="https://img.shields.io/github/last-commit/rstudio/sparklyr.svg"> - An alternative R backend, using `dplyr`.
- sparkle <img src="https://img.shields.io/github/last-commit/tweag/sparkle.svg"> - Haskell on Apache Spark.
Notebooks and IDEs
- almond <img src="https://img.shields.io/github/last-commit/almond-sh/almond.svg"> - A Scala kernel for Jupyter.
- Apache Zeppelin <img src="https://img.shields.io/github/last-commit/apache/zeppelin.svg"> - Web-based notebook that enables interactive data analytics with pluggable backends, integrated plotting, and extensive Spark support out of the box.
- Polynote <img src="https://img.shields.io/github/last-commit/polynote/polynote.svg"> - An IDE-inspired polyglot notebook originating from Netflix. It supports mixing multiple languages in one notebook and sharing data between them seamlessly, and encourages reproducible notebooks with its immutable data model.
- sparkmagic <img src="https://img.shields.io/github/last-commit/jupyter-incubator/sparkmagic.svg"> - Jupyter magics and kernels for interactively working with remote Spark clusters through Livy.
General Purpose Libraries
- itachi <img src="https://img.shields.io/github/last-commit/yaooqinn/itachi.svg"> - A library that brings useful functions from modern database management systems to Apache Spark.
- spark-daria <img src="https://img.shields.io/github/last-commit/mrpowers/spark-daria.svg"> - A Scala library with essential Spark functions and extensions to make you more productive.
- quinn <img src="https://img.shields.io/github/last-commit/mrpowers/quinn.svg"> - A native PySpark implementation of spark-daria.
- Apache DataFu <img src="https://img.shields.io/github/last-commit/apache/datafu.svg"> - A library of general-purpose functions and UDFs.
- Joblib Apache Spark Backend <img src="https://img.shields.io/github/last-commit/joblib/joblib-spark.svg"> - `joblib` backend for running tasks on Spark clusters.
SQL Data Sources
Spark SQL has several built-in data sources for files, including `csv`, `json`, `parquet`, `orc`, and `avro`. It also supports JDBC databases as well as Apache Hive. Additional data sources can be added by including the packages listed below, or by writing your own; a short usage sketch follows the list.
- Spark XML <img src="https://img.shields.io/github/last-commit/databricks/spark-xml.svg"> - XML parser and writer.
- Spark Cassandra Connector <img src="https://img.shields.io/github/last-commit/datastax/spark-cassandra-connector.svg"> - Cassandra support including data source and API and support for arbitrary queries.
- Mongo-Spark <img src="https://img.shields.io/github/last-commit/mongodb/mongo-spark.svg"> - Official MongoDB connector.
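A minimal, hedged PySpark sketch of the data source API described above, using one built-in file format and one connector package from this list (the Spark XML package is assumed to be on the classpath; the file paths and `rowTag` value are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-sources-demo").getOrCreate()

# Built-in file data source via a dedicated reader method.
people = spark.read.parquet("people.parquet")

# Generic form: any data source, built-in or third-party, is addressed
# by its format name, e.g. the Spark XML package listed above.
books = (
    spark.read.format("xml")
         .option("rowTag", "book")
         .load("books.xml")
)

# Writing works symmetrically through DataFrameWriter.
people.write.format("json").mode("overwrite").save("people-json")
```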
Storage
- Delta Lake <img src="https://img.shields.io/github/last-commit/delta-io/delta.svg"> - Storage layer with ACID transactions.
- lakeFS <img src="https://img.shields.io/github/last-commit/treeverse/lakefs.svg"> - Integration with the lakeFS atomic versioned storage layer.
Bioinformatics
- ADAM <img src="https://img.shields.io/github/last-commit/bigdatagenomics/adam.svg"> - Set of tools designed to analyse genomics data.
- Hail <img src="https://img.shields.io/github/last-commit/hail-is/hail.svg"> - Genetic analysis framework.
GIS
- Apache Sedona <img src="https://img.shields.io/github/last-commit/apache/incubator-sedona.svg"> - Cluster computing system for processing large-scale spatial data.
Graph Processing
- GraphFrames <img src="https://img.shields.io/github/last-commit/graphframes/graphframes.svg"> - Data frame based graph API (see the sketch after this list).
- neo4j-spark-connector <img src="https://img.shields.io/github/last-commit/neo4j-contrib/neo4j-spark-connector.svg"> - Bolt protocol based, Neo4j Connector with RDD, DataFrame and GraphX / GraphFrames support.
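To illustrate what a data frame based graph API looks like, here is a minimal, hedged GraphFrames sketch in PySpark (assumes the `graphframes` package is installed; the toy vertices and edges are made up):

```python
from pyspark.sql import SparkSession
from graphframes import GraphFrame

spark = SparkSession.builder.appName("graphframes-demo").getOrCreate()

# Vertices need an "id" column; edges need "src" and "dst" columns.
vertices = spark.createDataFrame(
    [("a", "Alice"), ("b", "Bob"), ("c", "Carol")], ["id", "name"]
)
edges = spark.createDataFrame(
    [("a", "b", "follows"), ("b", "c", "follows")], ["src", "dst", "relationship"]
)

g = GraphFrame(vertices, edges)
g.inDegrees.show()              # results come back as plain DataFrames
g.find("(x)-[e]->(y)").show()   # motif query, also returning a DataFrame
```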
Machine Learning Extension
- Apache SystemML <img src="https://img.shields.io/github/last-commit/apache/systemml.svg"> - Declarative machine learning framework on top of Spark.
- Mahout Spark Bindings [status unknown] - Linear algebra DSL and optimizer with R-like syntax.
- KeystoneML - Type-safe machine learning pipelines with RDDs.
- JPMML-Spark <img src="https://img.shields.io/github/last-commit/jpmml/jpmml-spark.svg"> - PMML transformer library for Spark ML.
- ModelDB <img src="https://img.shields.io/github/last-commit/mitdbg/modeldb.svg"> - A system to manage machine learning models for `spark.ml` and `scikit-learn` <img src="https://img.shields.io/github/last-commit/scikit-learn/scikit-learn.svg">.
- Sparkling Water <img src="https://img.shields.io/github/last-commit/h2oai/sparkling-water.svg"> - H2O interoperability layer.
- BigDL <img src="https://img.shields.io/github/last-commit/intel-analytics/BigDL.svg"> - Distributed Deep Learning library.
- MLeap <img src="https://img.shields.io/github/last-commit/combust/mleap.svg"> - Execution engine and serialization format which supports deployment of `o.a.s.ml` models without a dependency on `SparkSession`.
- Microsoft ML for Apache Spark <img src="https://img.shields.io/github/last-commit/Azure/mmlspark.svg"> - A distributed ML library with support for LightGBM, Vowpal Wabbit, OpenCV, Deep Learning, Cognitive Services, and Model Deployment.
- MLflow <img src="https://img.shields.io/github/last-commit/mlflow/mlflow.svg"> - Machine learning orchestration platform.
Middleware
- Livy <img src="https://img.shields.io/github/last-commit/apache/incubator-livy.svg"> - REST server with extensive language support (Python, R, Scala), ability to maintain interactive sessions and object sharing (a minimal usage sketch follows this list).
- spark-jobserver <img src="https://img.shields.io/github/last-commit/spark-jobserver/spark-jobserver.svg"> - Simple Spark as a Service which supports object sharing using so-called named objects. JVM only.
- Apache Toree <img src="https://img.shields.io/github/last-commit/apache/incubator-toree.svg"> - IPython protocol based middleware for interactive applications.
- Apache Kyuubi <img src="https://img.shields.io/github/last-commit/apache/kyuubi.svg"> - A distributed multi-tenant JDBC server for large-scale data processing and analytics, built on top of Apache Spark.
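As an illustration of the middleware approach, here is a minimal, hedged sketch of driving a Livy server over its REST API with `requests`; the server URL is hypothetical, and the endpoint names follow the Livy documentation:

```python
import time
import requests

livy = "http://livy-server:8998"  # hypothetical Livy host

# Create an interactive PySpark session and wait until it is idle.
session = requests.post(f"{livy}/sessions", json={"kind": "pyspark"}).json()
session_url = f"{livy}/sessions/{session['id']}"
while requests.get(session_url).json()["state"] != "idle":
    time.sleep(1)

# Run a statement remotely and poll until its result is available.
stmt = requests.post(
    f"{session_url}/statements",
    json={"code": "spark.range(100).count()"},
).json()
stmt_url = f"{session_url}/statements/{stmt['id']}"
while (result := requests.get(stmt_url).json())["state"] != "available":
    time.sleep(1)
print(result["output"])

requests.delete(session_url)  # tear down the session
```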
Monitoring
- Data Mechanics Delight <img src="https://img.shields.io/github/last-commit/datamechanics/delight.svg"> - Cross-platform monitoring tool (Spark UI / Spark History Server replacement).
Utilities
- sparkly <img src="https://img.shields.io/github/last-commit/Tubular/sparkly.svg"> - Helpers & syntactic sugar for PySpark.
- pyspark-stubs <img src="https://img.shields.io/github/last-commit/zero323/pyspark-stubs.svg"> - Static type annotations for PySpark (obsolete since Spark 3.1. See SPARK-32681).
- Flintrock <img src="https://img.shields.io/github/last-commit/nchammas/flintrock.svg"> - A command-line tool for launching Spark clusters on EC2.
- Optimus <img src="https://img.shields.io/github/last-commit/ironmussa/Optimus.svg"> - Data cleansing and exploration utilities aimed at simplifying data cleaning.
Natural Language Processing
- spark-nlp <img src="https://img.shields.io/github/last-commit/JohnSnowLabs/spark-nlp.svg"> - Natural language processing library built on top of Apache Spark ML.
Streaming
- Apache Bahir <img src="https://img.shields.io/github/last-commit/apache/bahir.svg"> - Collection of the streaming connectors excluded from Spark 2.0 (Akka, MQTT, Twitter, ZeroMQ).
Interfaces
- Apache Beam <img src="https://img.shields.io/github/last-commit/apache/beam.svg"> - Unified data processing engine supporting both batch and streaming applications. Apache Spark is one of the supported execution environments.
- Koalas <img src="https://img.shields.io/github/last-commit/databricks/koalas.svg"> - Pandas DataFrame API on top of Apache Spark.
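For example, a minimal, hedged Koalas sketch: the pandas-style calls below are executed as Spark jobs (the CSV path and column names are hypothetical; on Spark 3.2+ essentially the same API ships as `pyspark.pandas`):

```python
import databricks.koalas as ks

# Pandas-like syntax, backed by Spark execution.
kdf = ks.read_csv("sales.csv")
top_regions = (
    kdf.groupby("region")["amount"].sum()
       .sort_values(ascending=False)
       .head(5)
)
print(top_regions)

# Converting to a local pandas object is explicit.
pdf = top_regions.to_pandas()
```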
Testing
- deequ <img src="https://img.shields.io/github/last-commit/awslabs/deequ.svg"> - A library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
- spark-testing-base <img src="https://img.shields.io/github/last-commit/holdenk/spark-testing-base.svg"> - Collection of base test classes.
- spark-fast-tests <img src="https://img.shields.io/github/last-commit/MrPowers/spark-fast-tests.svg"> - A lightweight and fast testing framework.
Web Archives
- Archives Unleashed Toolkit <img src="https://img.shields.io/github/last-commit/archivesunleashed/aut.svg"> - Open-source toolkit for analyzing web archives.
Workflow Management
- Cromwell <img src="https://img.shields.io/github/last-commit/broadinstitute/cromwell.svg"> - Workflow management system with Spark backend.
Resources
Books
- Learning Spark, 2nd Edition - Introduction to Spark API with Spark 3.0 covered. Good source of knowledge about basic concepts.
- Advanced Analytics with Spark - Useful collection of Spark processing patterns. Accompanying GitHub repository: sryza/aas.
- Mastering Apache Spark - Interesting compilation of notes by Jacek Laskowski. Focused on different aspects of Spark internals.
- Spark in Action - A book in Manning's "in action" family, with 400+ pages. It starts gently, proceeds step by step, and covers a large number of topics. A free excerpt shows how to set up Eclipse for Spark application development and how to bootstrap a new application using the provided Maven Archetype. You can find the accompanying GitHub repo here.
Papers
- Large-Scale Intelligent Microservices - Microsoft paper that presents an Apache Spark-based micro-service orchestration framework that extends database operations to include web service primitives.
- Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing - Paper introducing a core distributed memory abstraction.
- Spark SQL: Relational Data Processing in Spark - Paper introducing relational underpinnings, code generation and Catalyst optimizer.
- Structured Streaming: A Declarative API for Real-Time Applications in Apache Spark - Paper introducing Structured Streaming, a high-level declarative streaming API based on automatically incrementalizing a static relational query.
MOOCS
- Data Science and Engineering with Apache Spark (edX XSeries) - Series of five courses (Introduction to Apache Spark, Distributed Machine Learning with Apache Spark, Big Data Analysis with Apache Spark, Advanced Apache Spark for Data Science and Data Engineering, Advanced Distributed Machine Learning with Apache Spark) covering different aspects of software engineering and data science. Python oriented.
- Big Data Analysis with Scala and Spark (Coursera) - Scala oriented introductory course. Part of Functional Programming in Scala Specialization.
Workshops
- AMP Camp - Periodic training event organized by the UC Berkeley AMPLab. A source of useful exercises and recorded workshops covering different tools from the Berkeley Data Analytics Stack.
Projects Using Spark
- Oryx 2 - Lambda architecture platform built on Apache Spark and Apache Kafka with specialization for real-time large scale machine learning.
- Photon ML - A machine learning library supporting classical Generalized Mixed Model and Generalized Additive Mixed Effect Model.
- PredictionIO - Machine Learning server for developers and data scientists to build and deploy predictive applications in a fraction of the time.
- Crossdata - Data integration platform with extended DataSource API and multi-user environment.
Docker Images
- apache/spark - Apache Spark Official Docker images.
- jupyter/docker-stacks/pyspark-notebook - PySpark with Jupyter Notebook and Mesos client.
- sequenceiq/docker-spark - Yarn images from SequenceIQ.
- datamechanics/spark - An easy-to-set-up Docker image for Apache Spark from Data Mechanics.
Miscellaneous
- Spark with Scala Gitter channel - "A place to discuss and ask questions about using Scala for Spark programming" started by @deanwampler.
- Apache Spark User List and Apache Spark Developers List - Mailing lists dedicated to usage questions and development topics respectively.
References
<p id="wikipedia-2017">Wikipedia. 2017. “Apache Spark — Wikipedia, the Free Encyclopedia.” <a href="https://en.wikipedia.org/w/index.php?title=Apache_Spark&oldid=781182753" class="uri">https://en.wikipedia.org/w/index.php?title=Apache_Spark&oldid=781182753</a>.</p>License
<p xmlns:dct="http://purl.org/dc/terms/"> <a rel="license" href="http://creativecommons.org/publicdomain/mark/1.0/"> <img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/svg/publicdomain.svg" style="border-style: none;" alt="Public Domain Mark" /> </a> <br /> This work (<span property="dct:title">Awesome Spark</span>, by <a href="https://github.com/awesome-spark/awesome-spark" rel="dct:creator">https://github.com/awesome-spark/awesome-spark</a>), identified by <a href="https://github.com/zero323" rel="dct:publisher"><span property="dct:title">Maciej Szymkiewicz</span></a>, is free of known copyright restrictions. </p>

Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of <a href="http://www.apache.org">The Apache Software Foundation</a>. This compilation is not endorsed by The Apache Software Foundation.
Inspired by sindresorhus/awesome.