Use Spark to process a large dataset.
Use Sqoop, Redshift, and Spark to design an ETL data pipeline.
Build an end-to-end real-time data processing application using Spark Streaming and Kafka.
*More details are available in the referral policy under the Support section.