Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Apache Spark, Big Data Analytics, Hadoop, Scala Programming Language
The resource must have at least 2 years of strong hands-on experience with Apache Spark (Spark Core, Spark SQL, and Spark Streaming) using Scala as a functional language, along with Hadoop.
The resource should have solid experience working in the Big Data ecosystem, including Hadoop, HDFS, Hive, and Oozie or Airflow.
Good to have: experience working with AWS EMR, Redshift, S3, Lambda, API Gateway, Python, Kafka, Elasticsearch, and Kibana.
The resource should be able to design and develop Big Data workflows, ideally with batch processing and/or stream processing.
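As an illustration of the Spark-Scala batch work this role involves, the following is a minimal sketch of a Spark SQL job. It is not part of the posting; the object name, input/output paths, and column names are hypothetical assumptions, and it presumes Spark is available on the classpath.

```scala
// Minimal illustrative Spark batch job (hypothetical names and paths).
import org.apache.spark.sql.SparkSession

object SalesSummary {
  def main(args: Array[String]): Unit = {
    // Entry point for any Spark SQL application
    val spark = SparkSession.builder()
      .appName("SalesSummary")
      .getOrCreate()

    // Read raw CSV data from HDFS (path is an assumption for illustration)
    val sales = spark.read
      .option("header", "true")
      .csv("hdfs:///data/sales/raw")

    // Register a temp view and aggregate with Spark SQL
    sales.createOrReplaceTempView("sales")
    val summary = spark.sql(
      "SELECT region, COUNT(*) AS orders FROM sales GROUP BY region")

    // Write results for downstream Hive / Oozie / Airflow steps
    summary.write.mode("overwrite").parquet("hdfs:///data/sales/summary")

    spark.stop()
  }
}
```

A job like this would typically be packaged with sbt and submitted via `spark-submit` as one step of an Oozie or Airflow workflow.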
The resource should have good communication and presentation skills.
The resource should have strong analytical skills and the ability to lead the team.
Job No: 202069
Apply via the official link.