Big Data Engineer
Primary skills: Java/Scala/Python, Apache Spark, HDFS and Hive
Years of experience: 2–4
Mandatory Skills:
- 2+ years’ experience in software development with big data technologies
- Proficiency in at least one of the following programming languages: Java/Scala/Python
- Experience developing and deploying at least one end-to-end data storage/processing pipeline (a brief sketch follows this list)
- Basic understanding of RDBMS
- Intermediate-level expertise in Apache Spark, HDFS, and Hive
- Experience working with a Hadoop cluster
- Good communication and logical reasoning skills
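
To illustrate the kind of end-to-end pipeline referenced above, here is a minimal sketch in Java: Spark reads raw CSV data from HDFS, applies a simple aggregation, and persists the result as a Hive table. The HDFS path, column names, and Hive table name are hypothetical placeholders, not details of the role.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.sum;

public class TransactionPipeline {
    public static void main(String[] args) {
        // Hive support lets Spark create and write tables through the metastore.
        SparkSession spark = SparkSession.builder()
                .appName("transaction-pipeline")
                .enableHiveSupport()
                .getOrCreate();

        // Read raw CSV files from HDFS (hypothetical path and schema).
        Dataset<Row> raw = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("hdfs:///data/raw/transactions");

        // Simple transformation: drop non-positive amounts, total per account.
        Dataset<Row> totals = raw
                .filter(col("amount").gt(0))
                .groupBy(col("account_id"))
                .agg(sum("amount").alias("total_amount"));

        // Persist the result as a Hive table (hypothetical database/table name).
        totals.write()
                .mode(SaveMode.Overwrite)
                .saveAsTable("analytics.account_totals");

        spark.stop();
    }
}
```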
Must Have:
- Prior experience writing Spark jobs in Java (strongly preferred)
- Prior experience working with Cloudera Data Platform (CDP)
- Hands-on experience with NoSQL databases such as HBase, Cassandra, and Elasticsearch (a brief sketch follows this list)
- Experience using Maven and Git
- Familiarity with Agile/Scrum methodologies
- Knowledge of the BFSI (banking, financial services, and insurance) domain
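
Several of the items above (Spark jobs in Java, Hive, NoSQL stores) often come together in practice. Below is a minimal sketch of a Spark job in Java that reads a curated Hive table and indexes it into Elasticsearch. It assumes the elasticsearch-hadoop (elasticsearch-spark) connector is on the classpath; the table name, node address, and index name are hypothetical placeholders.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class CustomerIndexer {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("customer-indexer")
                .enableHiveSupport()
                .getOrCreate();

        // Read a curated Hive table (hypothetical name).
        Dataset<Row> customers = spark.table("analytics.customer_profiles");

        // Write to Elasticsearch through the elasticsearch-spark data source.
        // "es.nodes"/"es.port" point at the cluster; the index name is a placeholder.
        customers.write()
                .format("org.elasticsearch.spark.sql")
                .option("es.nodes", "es-node-1")
                .option("es.port", "9200")
                .mode(SaveMode.Append)
                .save("customer_profiles");

        spark.stop();
    }
}
```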