Job Description of PySpark Developer
4 to 6 Years of Relevant Experience
- Work as a developer with Big Data, Hadoop, and data warehousing tools, as well as cloud computing
- Work on Hadoop, Hive SQL, Spark, and other Big Data ecosystem tools
- Experience working with teams in a complex organization involving multiple reporting lines
- The candidate should have strong functional and technical knowledge to deliver what is required and should be well acquainted with banking terminology
- The candidate should have strong knowledge of DevOps and Agile development frameworks
- Create Scala/Spark jobs for data transformation and aggregation (see the first sketch after this list)
- Experience with stream-processing systems such as Storm, Spark Streaming, or Flink (see the streaming sketch after this list)
- Should be able to tune queries and work on performance enhancement (see the tuning note after the first sketch below)
- The candidate will be responsible for delivering code, setting up the environment and connectivity, and deploying the code to production after testing
- Occasionally, the candidate may be responsible as the primary contact and/or driver for small to medium-sized projects
- Good technical knowledge of cloud computing (AWS or Azure cloud services) is preferable
- Strong conceptual and creative problem-solving skills, the ability to work with considerable ambiguity, and the ability to learn new and complex concepts quickly
- Solid understanding of object-oriented programming and HDFS concepts
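The bullets above are requirements rather than a spec, but a minimal PySpark sketch may help make the transformation and aggregation expectation concrete. Every specific detail here (input path, column names, output location) is a hypothetical assumption, not something taken from this job description:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical batch job: daily transaction totals per account.
# The path, columns, and output location below are assumptions.
spark = SparkSession.builder.appName("daily-txn-aggregates").getOrCreate()

txns = spark.read.parquet("/data/raw/transactions")  # assumed source

daily_totals = (
    txns
    .withColumn("txn_date", F.to_date("txn_ts"))     # assumed timestamp column
    .groupBy("account_id", "txn_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("txn_count"),
    )
)

# Partitioning by date lets downstream Hive SQL prune partitions.
(daily_totals.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .parquet("/data/curated/daily_totals"))
```

On the tuning side, the same job might broadcast a small dimension table before a join (`F.broadcast(dim_df)`) or repartition on the grouping keys; which change pays off depends entirely on the actual data volumes.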
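Similarly, here is a minimal Spark Structured Streaming sketch of the stream-processing side, with an assumed Kafka broker, topic, and checkpoint path (the job would also need the spark-sql-kafka connector on the classpath):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical streaming job: per-minute event counts from Kafka.
# Broker address, topic, and checkpoint path are assumptions.
spark = SparkSession.builder.appName("event-counts-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

counts = (
    events
    # The Kafka source exposes a `timestamp` column per record.
    .withWatermark("timestamp", "5 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = (
    counts.writeStream
    .outputMode("update")
    .format("console")  # stand-in sink; production would write elsewhere
    .option("checkpointLocation", "/tmp/checkpoints/event-counts")
    .start()
)
query.awaitTermination()
```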
Required Skills for PySpark Developer Job
- Big Data
- Data Warehousing
- Cloud
Our Hiring Process
- Screening (HR Round)
- Technical Round 1
- Technical Round 2
- Final HR Round