Job Description
About the Role
We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data quality and availability across the organization. This role requires a strong background in big data ecosystems, cloud-native tools, and advanced data processing techniques.

The ideal candidate has hands-on experience with data ingestion, transformation, and optimization on the Cloudera Data Platform, along with a proven track record of implementing data engineering best practices. You will work closely with other data engineers to build solutions that drive impactful business insights.

Responsibilities
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) into the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Tune PySpark code and Cloudera components to optimize resource utilization and reduce the runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using orchestration tools such as Apache Oozie or Apache Airflow within the Cloudera ecosystem.

Education and Experience
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts and ETL best practices, with experience in SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong Linux scripting skills.