Hadoop Job Description
4 to 6 Years of Relevant Experience
- 4 to 6 years of experience working with Hadoop and big data technologies.
- Strong hands-on experience in developing data processing solutions using Hadoop components such as HDFS, YARN, and MapReduce.
- Proficiency in PySpark for distributed data processing, transformation, and analytics in Hadoop environments (an illustrative sketch follows this list).
- Expertise in Hive for querying, data warehousing, and managing structured data across large-scale datasets.
- Experience in creating and optimizing Hadoop-based applications to ensure performance, scalability, and reliability.
- Familiarity with ETL processes, data ingestion techniques, and workflows within Hadoop ecosystems.
- Hands-on experience with job scheduling and workflow management tools such as Apache Oozie or similar.
- Strong understanding of data partitioning, indexing, and optimization techniques to enhance query performance.
- Ability to troubleshoot Hadoop environment issues and optimize PySpark jobs for efficient processing.
- Excellent teamwork, collaboration, and communication skills, with the ability to work in cross-functional teams.
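
To give a sense of the day-to-day work these requirements describe, here is a minimal PySpark sketch of the kind of job a candidate might write: reading a Hive table, applying transformations, and writing the result back as a partitioned table. The database, table, and column names (`sales.transactions`, `txn_timestamp`, `store_id`, etc.) are hypothetical and used only for illustration.

```python
# Minimal PySpark sketch: Hive read -> transform -> partitioned write.
# All table and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-sales-aggregation")   # illustrative job name
    .enableHiveSupport()                  # allows querying Hive tables
    .getOrCreate()
)

# Read a (hypothetical) Hive table of raw transactions.
transactions = spark.table("sales.transactions")

# Typical transformation: filter, derive a date column, aggregate.
daily_totals = (
    transactions
    .filter(F.col("status") == "COMPLETED")
    .withColumn("txn_date", F.to_date("txn_timestamp"))
    .groupBy("txn_date", "store_id")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("txn_count"),
    )
)

# Write back as a table partitioned by date, so downstream Hive
# queries can prune partitions and scan less data.
(
    daily_totals.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .saveAsTable("sales.daily_store_totals")
)

spark.stop()
```

Partitioning the output by `txn_date`, as in the sketch, is one of the standard optimization techniques referenced above: it lets Hive and Spark skip irrelevant partitions entirely rather than scanning the full dataset.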
Required Skills for a Hadoop Job
Our Hiring Process
- Screening (HR Round)
- Technical Round 1
- Technical Round 2
- Final HR Round