
268 AWS Glue Jobs - Page 9

JobPe aggregates these listings for easy browsing; applications are submitted directly on the original job portal.

5.0 - 10.0 years

2 - 5 Lacs

Bengaluru

Work from Office


Job Title: Data Engineer | Experience: 5-10 Years | Location: Bangalore. Data Engineers with PySpark and AWS Glue experience; AWS is mandatory, GCP and Azure are add-ons. Proven experience as a Data Engineer or in a similar role spanning data architecture, database management, and cloud technologies. Proficiency in programming languages such as Python, Java, or Scala. Strong experience with data processing frameworks like PySpark, Apache Kafka, or Hadoop. Hands-on experience with data warehousing solutions such as Redshift, BigQuery, Snowflake, or similar platforms. Strong knowledge of SQL and relational databases (e.g., PostgreSQL, MySQL). Experience with version control tools like Git. Familiarity with containerization and orchestration tools like Docker, Kubernetes, and Airflow is a plus. Strong problem-solving skills, analytical thinking, and attention to detail. Excellent communication skills and the ability to collaborate with cross-functional teams. Qualifications: Bachelor's or master's degree in Computer Science, Information Systems, Engineering, or equivalent.
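
For readers unfamiliar with the PySpark-on-Glue combination this role centres on, a minimal Glue job script usually follows the pattern sketched below; the database, table, and S3 paths are illustrative assumptions, not details from the posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve job arguments and build the Glue/Spark contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog (names are assumptions).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders_csv"
)

# Basic cleanup in Spark, then write curated, partitioned Parquet back to S3.
df = raw.toDF().dropDuplicates(["order_id"]).filter("order_id IS NOT NULL")
(
    df.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/")
)

job.commit()
```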

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Pune, Baner

Hybrid


5-10 years of experience in database development or a related field. Proven experience with database design, development, and management. Experience working with large-scale databases and complex data environments. Experience with data modelling and database design. Knowledge of database performance tuning and optimization. Architect, develop, and maintain tables, views, procedures, functions, and packages in the database (MUST HAVE). Performing complex relational database queries using SQL (AWS RDS for PostgreSQL) and Oracle PL/SQL (MUST HAVE). Familiarity with ETL processes and tools (AWS Batch, AWS Glue, etc.) (MUST HAVE). Familiarity with CI/CD pipelines, Jenkins deployment, and Git repositories (MUST HAVE). Perform performance tuning; proactively monitor the database systems to ensure secure services with minimum downtime, and improve maintenance of the databases including rollouts, patching, and upgrades. Experience with Aurora's scaling and replication capabilities (MUST HAVE). Proficiency with AWS CloudWatch for monitoring database performance and setting up alerts; experience with performance tuning and optimization in AWS environments (MUST HAVE). Experience using Confluence for documentation and collaboration. Proficiency in using SmartDraw for creating database diagrams, flowcharts, and other visual representations of data models and processes (MUST HAVE). Proficiency in using libraries such as Pandas and NumPy for data manipulation, analysis, and transformation. Experience with libraries like SQLAlchemy and PyODBC for connecting to and interacting with various databases (MUST HAVE). Python programming language (MUST HAVE). Agile/Scrum, communication (spoken English, clarity of thought). Big Data, data mining, machine learning, and natural language processing.
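
As a rough illustration of the Pandas-plus-SQLAlchemy requirement above, querying an RDS for PostgreSQL instance and post-processing the result in Pandas typically looks like the sketch below; the host, credentials, and table names are placeholders.

```python
import pandas as pd
from sqlalchemy import create_engine

# Connection string values are placeholders; production code would normally pull
# credentials from AWS Secrets Manager rather than hard-coding them.
engine = create_engine("postgresql+psycopg2://user:password@my-rds-host:5432/reporting")

# Run a set-based query on the database, then finish the analysis in Pandas.
df = pd.read_sql_query(
    """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM payments
    WHERE paid_at >= NOW() - INTERVAL '30 days'
    GROUP BY customer_id
    """,
    engine,
)
top_customers = df.sort_values("total_amount", ascending=False).head(10)
print(top_customers)
```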

Posted 1 month ago

Apply

6.0 - 11.0 years

25 - 30 Lacs

Hyderabad

Hybrid


About We are hiring a Lead Data Solutions Engineer with expertise in PySpark, Python, and preferably Palantir Foundry. You will focus on transforming complex operational data into clear customer communications for Planned Power Outages (PPO) within the energy sector. Role & responsibilities Build, enhance, and manage scalable data pipelines using PySpark and Python to process dynamic operational data. Interpret and consolidate backend system changes into single-source customer notifications. Leverage Foundry or equivalent platforms to build dynamic data models and operational views. Act as a problem owner for outage communication workflows and edge cases. Collaborate with operations and communication stakeholders to ensure consistent message delivery. Implement logic and validation layers to filter out inconsistencies in notifications. Continuously optimize data accuracy and message clarity. Preferred candidate profile Ideal Profile 5+ years of experience in data engineering/data solutions. Strong command of PySpark, Python, and large-scale data processing. Experience in dynamic, evolving environments with frequent changes. Strong communication and collaboration skills. Ability to simplify uncertain data pipelines into actionable formats. Nice to Have Experience with Palantir Foundry, Databricks, or AWS Glue. Exposure to utility, energy, or infrastructure domains. Familiarity with customer communication systems, SLA governance, or outage scheduling.
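
A hedged sketch of the kind of validation and deduplication layer described above, written in PySpark; the column names, paths, and business rules are assumptions for illustration only.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("ppo-notifications").getOrCreate()

# Illustrative input: planned-outage events exported from backend systems (schema assumed).
events = spark.read.parquet("s3://example-bucket/raw/planned_outage_events/")

# Keep only the latest update per outage/customer pair so each customer receives
# a single consolidated notification, and drop malformed rows first.
latest = Window.partitionBy("outage_id", "customer_id").orderBy(F.col("updated_at").desc())

notifications = (
    events
    .filter(F.col("customer_id").isNotNull() & (F.col("end_ts") > F.col("start_ts")))
    .withColumn("rank", F.row_number().over(latest))
    .filter(F.col("rank") == 1)
    .select("customer_id", "outage_id", "start_ts", "end_ts", "channel")
)

notifications.write.mode("overwrite").parquet("s3://example-bucket/curated/ppo_notifications/")
```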

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office


Dear Candidate, We are hiring a Cloud Architect to design and oversee scalable, secure, and cost-efficient cloud solutions. Great for architects who bridge technical vision with business needs. Key Responsibilities: Design cloud-native solutions using AWS, Azure, or GCP Lead cloud migration and transformation projects Define cloud governance, cost control, and security strategies Collaborate with DevOps and engineering teams for implementation Required Skills & Qualifications: Deep expertise in cloud architecture and multi-cloud environments Experience with containers, serverless, and microservices Proficiency in Terraform, CloudFormation, or equivalent Bonus: Cloud certification (AWS/Azure/GCP Architect) Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 1 month ago

Apply

8.0 - 10.0 years

10 - 14 Lacs

Pune

Work from Office


Salary 20 - 28 LPA About The Role - Mandatory Skills AWS Architect, AWS Glue or Databricks, PySpark, and Python - Hands-on experience with AWS Glue or Databricks, PySpark, and Python. - Minimum of 2 years of hands-on expertise in PySpark, including Spark job performance optimization techniques. - Minimum of 2 years of hands-on involvement with AWS Cloud - Hands on experience in StepFunction, Lambda, S3, Secret Manager, Snowflake/Redshift, RDS, Cloudwatch - Proficiency in crafting low-level designs for data warehousing solutions on AWS cloud. - Proven track record of implementing big-data solutions within the AWS ecosystem including Data Lakes. - Familiarity with data warehousing, data quality assurance, and monitoring practices. - Demonstrated capability in constructing scalable data pipelines and ETL processes. - Proficiency in testing methodologies and validating data pipelines. - Experience with or working knowledge of DevOps environments. - Practical experience in Data security services. - Understanding of data modeling, integration, and design principles. - Strong communication and analytical skills. - A dedicated team player with a goal-oriented mindset, committed to delivering quality work with attention to detail. - Solution Design Collaborate with clients and stakeholders to understand business requirements and translate them into cloud-based solutions utilizing AWS services (EC2, Lambda, S3, RDS, VPC, IAM, etc.). - Architecture and Implementation Design and implement secure, scalable, and high-performance cloud solutions, ensuring alignment with AWS best practices and architectural principles. - Cloud Migration Assist with the migration of on-premise applications to AWS, ensuring minimal disruption and maximum efficiency. - Technical Leadership Provide technical leadership and guidance to development teams to ensure adherence to architecture standards and best practices. - Optimization Continuously evaluate and optimize AWS environments for cost, performance, and security. - Security Ensure the cloud architecture adheres to industry standards and security policies, using tools like AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), and encryption protocols. - Documentation & Reporting Create clear technical documentation to define architectural decisions, solution designs, and cloud configurations. - Stakeholder Collaboration Work with cross-functional teams including developers, DevOps, QA, and business teams to align technical solutions with business goals. - Continuous Learning Stay updated with the latest AWS services, tools, and industry trends to ensure the implementation of cutting-edge solutions. - Strong understanding of AWS cloud services and architecture. - Hands-on experience with Infrastructure as Code (IaC) tools like AWS CloudFormation, Terraform, or AWS CDK. - Knowledge of networking, security, and database services within AWS (e.g., VPC, IAM, RDS, and S3). - Familiarity with containerization and orchestration using AWS services like ECS, EKS, or Fargate. - Proficiency in scripting languages (e.g., Python, Shell, or Node.js). - Familiarity with CI/CD tools and practices in AWS environments (e.g., CodePipeline, Jenkins, etc.). Soft Skills : Communication Skills : - Clear and Concise Communication Ability to articulate complex technical concepts in simple terms for both technical and non-technical stakeholders. 
- Active Listening Ability to listen to business and technical requirements from stakeholders to ensure the proposed solution meets their needs. - Documentation Skills Ability to document technical designs, solutions, and architectural decisions in a clear and well-organized manner. Leadership and Team Collaboration : - Mentoring and Coaching Ability to mentor junior engineers, providing guidance and fostering professional growth. - Cross-functional Teamwork Collaborating effectively with various teams such as developers, DevOps, QA, business analysts, and security specialists to deliver integrated cloud solutions. - Conflict Resolution Addressing and resolving conflicts within teams and stakeholders to ensure smooth project execution. Problem-Solving and Critical Thinking : - Analytical Thinking Ability to break down complex problems and develop logical, scalable, and cost-effective solutions. - Creative Thinking Think outside the box to design innovative solutions that maximize the value of AWS technologies. - Troubleshooting Skills Quickly identifying root causes of issues and finding solutions to mitigate them. Adaptability and Flexibility : - Handling Change Ability to adapt to evolving requirements, technologies, and business needs. Cloud technologies and customer requirements change quickly. - Resilience Ability to deal with challenges and setbacks while maintaining a positive attitude and focus on delivering results. Stakeholder Management : - Client-facing Skills Ability to manage client relationships, understand their business needs, and translate those needs into cloud solutions. - Negotiation Skills Negotiating technical aspects of projects with clients or business units to balance scope, resources, and timelines. - Expectation Management Ability to set and manage expectations regarding timelines, deliverables, and technical feasibility. Decision-Making : - Sound Judgment Making well-informed and balanced decisions that consider both technical feasibility and business impact. - Risk Management Ability to assess risks in terms of cost, security, and performance and make decisions that minimize potential issues. Preferred Skills : - Familiarity with DevOps practices and tools (e.g., Jenkins, Docker, Kubernetes). - Experience with serverless architectures using AWS Lambda, API Gateway, and DynamoDB. - Exposure to multi-cloud architectures (AWS, Azure, Google Cloud). Why Join Us - Competitive salary and benefits. - Opportunity to work on cutting-edge cloud technologies. - A dynamic work environment where innovation is encouraged. - Strong focus on professional development and career growth.
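
One common pattern behind the Step Functions/Lambda/Glue stack listed above is an event-driven Lambda that starts a Glue job run via boto3. A minimal sketch follows; the job name, S3 event shape handling, and argument names are assumptions, not the employer's actual setup.

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Kick off a Glue ETL job when a new object lands in S3 (names are illustrative)."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Pass the newly arrived file to the (pre-existing) Glue job as a job argument.
    response = glue.start_job_run(
        JobName="curate-orders",
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"JobRunId": response["JobRunId"]}
```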

Posted 1 month ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Kolkata, Hyderabad, Bengaluru

Work from Office


Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Principal Consultant - AWS Developer! We are looking for candidates who have a passion for cloud with knowledge of different cloud environments. Ideal candidates should have technical experience in AWS Platform Services – IAM roles & policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc. This key role demands a highly motivated individual with a strong background in Computer Science / Software Engineering. You are meticulous, thorough, and possess excellent communication skills to engage with all levels of our stakeholders. A self-starter, you are up to speed with the latest developments in the tech world. Responsibilities: Data Storage & Lake Management: Expertise in S3 (data lake design, partitioning, optimization), Glue Catalog (schema/version management), and Lake Formation (access control). Data Processing: Hands-on with AWS Glue (ETL with PySpark/Scala), EMR (Spark/Hadoop), Lambda (event-driven ETL), and Athena (S3 querying and optimization). Data Ingestion: Experience with Kinesis (real-time streaming), DMS (database migration), and Amazon MSK (Kafka-based ingestion). Databases & Warehousing: Proficient in Redshift (data warehousing), DynamoDB (NoSQL), and RDS (PostgreSQL/MySQL). Overall Data Engineering Concepts: Strong grasp of data modeling (star/snowflake, data vault), file formats (Parquet, Avro, etc.), S3 partitioning/bucketing, and ETL/ELT best practices. Hands-on experience and good skills in AWS Platform Services – IAM roles & policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc. Must have good working knowledge of Kubernetes & Docker. Utilize AWS services such as AWS Glue, Amazon S3, AWS Lambda, and others to optimize performance, reliability, and cost-effectiveness. Design, develop, and maintain AWS-based applications, ensuring high performance, scalability, and security. Integrate AWS services into application architecture, leveraging tools such as Lambda, API Gateway, S3, DynamoDB, and RDS. Collaborate with DevOps teams to automate deployment pipelines and optimize CI/CD practices. Develop scripts and automation tools to manage cloud environments efficiently. Monitor, troubleshoot, and resolve application performance issues. Implement best practices for cloud security, data management, and cost optimization. Participate in code reviews and provide technical guidance to junior developers. Qualifications we seek in you! Minimum Qualifications / Skills: Experience in software development with a focus on AWS technologies. Proficiency in AWS services such as EC2, Lambda, S3, RDS, and DynamoDB. Strong programming skills in Python, Node.js, or Java. Experience with RESTful APIs and microservices architecture. Familiarity with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline. Knowledge of infrastructure as code using CloudFormation or Terraform. Problem-solving skills and the ability to troubleshoot application issues in a cloud environment. Excellent teamwork and communication skills.
Preferred Qualifications/ Skills AWS Certified Developer – Associate or AWS Certified Solutions Architect – Associate. Experience with serverless architectures and API development. Familiarity with Agile development practices. Knowledge of monitoring and logging solutions like CloudWatch and ELK Stack. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
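
To illustrate the Athena/Glue Catalog querying mentioned above, here is a small boto3 sketch that runs a SQL query over a catalogued S3 table; the database, table, and results-bucket names are assumptions for illustration.

```python
import boto3

athena = boto3.client("athena")

# Query a Glue-catalogued table over partitioned Parquet data in S3.
resp = athena.start_query_execution(
    QueryString="""
        SELECT event_date, COUNT(*) AS events
        FROM clickstream
        WHERE event_date >= DATE '2024-01-01'
        GROUP BY event_date
    """,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Athena runs asynchronously; the execution ID is used to poll for and fetch results.
print(resp["QueryExecutionId"])
```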

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Gurugram

Remote


US Shift- 5 working days. Remote Work. (US Airline Group) Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Strong focus on AWS and PySpark. Knowledge of AWS services, including but not limited to S3, Redshift, Athena, EMR, and Glue. Proficiency in PySpark and related Big Data technologies for ETL processing. Strong SQL skills for data manipulation and querying. Familiarity with data warehousing concepts and dimensional modeling. Experience with data governance, data quality, and data security practices. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills to work effectively with cross-functional teams.

Posted 1 month ago

Apply

13.0 - 15.0 years

37 - 40 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office


Role & responsibilities REQUIREMENTS: Total experience 13+years. Proficient in architecting, designing, and implementing data platforms and data applications Strong experience in AWS Glue and Azure Data Factory. Hands-on experience with Databricks. Experience working with Big Data applications and distributed processing systems Working experience in build and maintain ETL/ELT pipelines using modern data engineering tools and frameworks Lead the architecture and implementation of data lakes, data warehouses, and real-time streaming solution Collaborate with stakeholders to understand business requirements and translate them into technical solutions Participate and contribute to RFPs, workshops, PoCs, and technical solutioning discussions Ensure scalability, reliability, and performance of data platforms Strong communication skills and the ability to collaborate effectively with cross-functional teams. RESPONSIBILITIES: Writing and reviewing great quality code Understanding the clients business use cases and technical requirements and be able to convert them in to technical design which elegantly meets the requirements Mapping decisions with requirements and be able to translate the same to developers Identifying different solutions and being able to narrow down the best option that meets the client’s requirements Defining guidelines and benchmarks for NFR considerations during project implementation Writing and reviewing design document explaining overall architecture, framework, and high-level design of the application for the developers Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensure that all relevant best practices are followed Developing and designing the overall solution for defined functional and non-functional requirements; and defining technologies, patterns, and frameworks to materialize it Understanding and relating technology integration scenarios and applying these learnings in projects Resolving issues that are raised during code/review, through exhaustive systematic analysis of the root cause, and being able to justify the decision taken Carrying out POCs to make sure that suggested design/technologies meet the requirements Preferred candidate profile

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office


Dear Candidate, Looking for a Cloud Data Engineer to build cloud-based data pipelines and analytics platforms. Key Responsibilities: Develop ETL workflows using cloud data services. Manage data storage, lakes, and warehouses. Ensure data quality and pipeline reliability. Required Skills & Qualifications: Experience with BigQuery, Redshift, or Azure Synapse. Proficiency in SQL, Python, or Spark. Familiarity with data lake architecture and batch/streaming. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 1 month ago

Apply

5 - 9 years

12 - 22 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


AWS Data Engineer To Apply, use the below link: https://career.infosys.com/jobdesc?jobReferenceCode=INFSYS-EXTERNAL-210775&rc=0 JOB Profile: Significant 5 to 9 years of experience in designing and implementing scalable data engineering solutions on AWS. Strong proficiency in Python programming language. Expertise in serverless architecture and AWS services such as Lambda, Glue, Redshift, Kinesis, SNS, SQS, and CloudFormation. Experience with Infrastructure as Code (IaC) using AWS CDK for defining and provisioning AWS resources. Proven leadership skills with the ability to mentor and guide junior team members. Excellent understanding of data modeling concepts and experience with tools like ERStudio. Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment. Experience with Apache Airflow for orchestrating data pipelines is a plus. Knowledge of Data Lakehouse, dbt, or Apache Hudi data format is a plus. Roles and Responsibilities Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Glue, Lambda, Redshift. Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs. Desired Candidate Profile 5-9 years of experience in an IT industry setting with expertise in Python programming language (Pyspark). Strong understanding of AWS ecosystem including S3, Glue, Lambda, Redshift. Bachelor's degree in Any Specialization (B.Tech/B.E.).
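
Since the posting asks for Infrastructure as Code with the AWS CDK, a minimal CDK v2 stack in Python might look like the sketch below; the construct names and the Lambda asset path are illustrative, not part of the role description.

```python
from aws_cdk import Duration, Stack, aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class IngestStack(Stack):
    """Minimal stack: a raw-data bucket plus an ingestion Lambda allowed to read it."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Versioned bucket for raw landing data.
        raw_bucket = s3.Bucket(self, "RawBucket", versioned=True)

        # Ingestion function packaged from a local "lambda/" directory (assumed path).
        ingest_fn = _lambda.Function(
            self,
            "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="app.handler",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.minutes(5),
        )

        # Grant the function read access to the bucket via generated IAM policy.
        raw_bucket.grant_read(ingest_fn)
```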

Posted 1 month ago

Apply

5 - 10 years

12 - 16 Lacs

Mumbai

Work from Office


Senior Digital Solutions Consultant - MUM02DM. Company: Worley. Primary Location: IND-MM-Mumbai. Job: Digital Solutions. Schedule: Full-time. Employment Type: Employee. Job Level: Experienced. Job Posting: May 2, 2025. Unposting Date: Jun 1, 2025. Reporting Manager Title: Director, Data Platform. We deliver the world's most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role. Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we're bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects. The Role: Develop and implement data pipelines for ingesting and collecting data from various sources into a centralized data platform. Develop and maintain ETL jobs using AWS Glue services to process and transform data at scale. Optimize and troubleshoot AWS Glue jobs for performance and reliability. Utilize Python and PySpark to efficiently handle large volumes of data during the ingestion process. Collaborate with data architects to design and implement data models that support business requirements. Create and maintain ETL processes using Airflow, Python and PySpark to move and transform data between different systems. Implement monitoring solutions to track data pipeline performance and proactively identify and address issues. Manage and optimize databases, both SQL and NoSQL, to support data storage and retrieval needs. Familiarity with Infrastructure as Code (IaC) tools like Terraform, AWS CDK and others. Proficiency in event-driven integrations, batch-based and API-led data integrations. Proficiency in CI/CD pipelines such as Azure DevOps, AWS pipelines or GitHub Actions. About You: To be considered for this role it is envisaged you will possess the following attributes. Technical and Industry Experience: Independent Integration Developer with 5+ years of experience in developing and delivering integration projects in an agile or waterfall-based project environment. Proficiency in Python, PySpark and SQL programming languages for data manipulation and pipeline development. Hands-on experience with AWS Glue, Airflow, DynamoDB, Redshift, S3 buckets, Event-Grid, and other AWS services. Experience implementing CI/CD pipelines, including data testing practices. Proficient in Swagger, JSON, XML, SOAP and REST-based web service development. Behaviors Required: Driven by our values and purpose in everything we do. Visible, active, hands-on approach to help teams be successful. Strong proactive planning ability. Optimistic, energetic, problem solver, ability to see long-term business outcomes. Collaborative, ability to listen, compromise to make progress. Stronger-together mindset, with a focus on innovation & creation of tangible / realized value. Challenge the status quo.
Education Qualifications, Accreditation, Training: Degree in Computer Science and/or related fields. AWS Data Engineering certifications desirable. Moving forward together: We're committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law.
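
For the Airflow-orchestrated Glue ETL described above, a minimal DAG sketch follows. It deliberately uses a plain PythonOperator with boto3 rather than any Worley-specific operator, and the DAG and job names are assumptions.

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def start_glue_job(**_context):
    # The Glue job is assumed to already exist, created separately via IaC or the console.
    glue = boto3.client("glue")
    glue.start_job_run(JobName="curate-orders")

with DAG(
    dag_id="daily_orders_curation",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="start_glue_job", python_callable=start_glue_job)
```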

Posted 1 month ago

Apply

4 - 6 years

30 - 34 Lacs

Bengaluru

Work from Office


Overview Annalect is seeking a hands-on Data QA Manager to lead and elevate data quality assurance practices across our growing suite of software and data products. This is a technical leadership role embedded within our Technology teams, focused on establishing best-in-class data quality processes that enable trusted, scalable, and high-performance data solutions. As a Data QA Manager, you will drive the design, implementation, and continuous improvement of end-to-end data quality frameworks, with a strong focus on automation, validation, and governance. You will work closely with data engineering, product, and analytics teams to ensure data integrity, accuracy, and compliance across complex data pipelines, platforms, and architectures, including Data Mesh and modern cloud-based ecosystems. This role requires deep technical expertise in SQL, Python, data testing frameworks like Great Expectations, data orchestration tools (Airbyte, DbT, Trino, Starburst), and cloud platforms (AWS, Azure, GCP). You will lead a team of Data QA Engineers while remaining actively involved in solution design, tool selection, and hands-on QA execution. Responsibilities Key Responsibilities: Develop and implement a comprehensive data quality strategy aligned with organizational goals and product development initiatives. Define and enforce data quality standards, frameworks, and best practices, including data validation, profiling, cleansing, and monitoring processes. Establish data quality checks and automated controls to ensure the accuracy, completeness, consistency, and timeliness of data across systems. Collaborate with Data Engineering, Product, and other teams to design and implement scalable data quality solutions integrated within data pipelines and platforms. Define and track key performance indicators (KPIs) to measure data quality and effectiveness of QA processes, enabling actionable insights for continuous improvement. Generate and communicate regular reports on data quality metrics, issues, and trends to stakeholders, highlighting opportunities for improvement and mitigation plans. Maintain comprehensive documentation of data quality processes, procedures, standards, issues, resolutions, and improvements to support organizational knowledge-sharing. Provide training and guidance to cross-functional teams on data quality best practices, fostering a strong data quality mindset across the organization. Lead, mentor, and develop a team of Data QA Analysts/Engineers, promoting a high-performance, collaborative, and innovative culture. Provide thought leadership and subject matter expertise on data quality, influencing technical and business stakeholders toward quality-focused solutions. Continuously evaluate and adopt emerging tools, technologies, and methodologies to advance data quality assurance capabilities and automation. Stay current with industry trends, innovations, and evolving best practices in data quality, data engineering, and analytics to ensure cutting-edge solutions. Qualifications Required Skills 11+ years of hands-on experience in Data Quality Assurance, Data Test Automation, Data Comparison, and Validation across large-scale datasets and platforms. Strong proficiency in SQL for complex data querying, data validation, and data quality investigations across relational and distributed databases. Deep knowledge of data structures, relational and non-relational databases, stored procedures, packages, functions, and advanced data manipulation techniques. 
Practical experience with leading data quality tools such as Great Expectations, DbT tests, and data profiling and monitoring solutions. Experience with data mesh and distributed data architecture principles for enabling decentralized data quality frameworks. Hands-on experience with modern query engines and data platforms, including Trino/Presto, Starburst, and Snowflake. Experience working with data integration and ETL/ELT tools such as Airbyte, AWS Glue, and DbT for managing and validating data pipelines. Strong working knowledge of Python and related data libraries (e.g., Pandas, NumPy, SQLAlchemy) for building data quality tests and automation scripts.
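
As an illustration of the Great Expectations style of declarative data checks referenced above, here is a small sketch using the classic Pandas-dataset API; note that the Great Expectations API has changed across versions, and the dataset here is a stand-in rather than real pipeline data.

```python
import great_expectations as ge
import pandas as pd

# Illustrative dataset; in a real pipeline this would come from S3, Trino, or Snowflake.
df = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 25.5, 7.2]})
gdf = ge.from_pandas(df)

# Declarative expectations of the kind a QA team would automate in CI.
checks = [
    gdf.expect_column_values_to_not_be_null("order_id"),
    gdf.expect_column_values_to_be_unique("order_id"),
    gdf.expect_column_values_to_be_between("amount", min_value=0),
]

assert all(check.success for check in checks), "data quality checks failed"
```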

Posted 1 month ago

Apply

6 - 10 years

30 - 35 Lacs

Bengaluru

Work from Office


We are seeking an experienced Amazon Redshift Developer / Data Engineer to design, develop, and optimize cloud-based data warehousing solutions. The ideal candidate should have expertise in Amazon Redshift, ETL processes, SQL optimization, and cloud-based data lake architectures. This role involves working with large-scale datasets, performance tuning, and building scalable data pipelines. Key Responsibilities: Design, develop, and maintain data models, schemas, and stored procedures in Amazon Redshift. Optimize Redshift performance using distribution styles, sort keys, and compression techniques. Build and maintain ETL/ELT data pipelines using AWS Glue, AWS Lambda, Apache Airflow, and dbt. Develop complex SQL queries, stored procedures, and materialized views for data transformations. Integrate Redshift with AWS services such as S3, Athena, Glue, Kinesis, and DynamoDB. Implement data partitioning, clustering, and query tuning strategies for optimal performance. Ensure data security, governance, and compliance (GDPR, HIPAA, CCPA, etc.). Work with data scientists and analysts to support BI tools like QuickSight, Tableau, and Power BI. Monitor Redshift clusters, troubleshoot performance issues, and implement cost-saving strategies. Automate data ingestion, transformations, and warehouse maintenance tasks. Required Skills & Qualifications: 6+ years of experience in data warehousing, ETL, and data engineering. Strong hands-on experience with Amazon Redshift and AWS data services. Expertise in SQL performance tuning, indexing, and query optimization. Experience with ETL/ELT tools like AWS Glue, Apache Airflow, dbt, or Talend. Knowledge of big data processing frameworks (Spark, EMR, Presto, Athena). Familiarity with data lake architectures and modern data stack. Proficiency in Python, Shell scripting, or PySpark for automation. Experience working in Agile/DevOps environments with CI/CD pipelines.
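
To make the distribution-style and sort-key tuning concrete, below is a hedged sketch that issues Redshift DDL from Python via psycopg2; the cluster endpoint, credentials, and table design are placeholders, not a recommended schema.

```python
import psycopg2

# Connection details are placeholders; production code would use IAM auth or Secrets Manager.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="change-me",
)

# DISTKEY co-locates rows that join on customer_id; SORTKEY speeds date-range scans.
ddl = """
CREATE TABLE IF NOT EXISTS sales_fact (
    sale_id      BIGINT,
    customer_id  BIGINT,
    sale_date    DATE,
    amount       DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (sale_date);
"""

with conn, conn.cursor() as cur:
    cur.execute(ddl)
conn.close()
```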

Posted 1 month ago

Apply

7 - 10 years

15 - 25 Lacs

Pune

Hybrid


Lead Data Engineer (DataBricks) Experience: 7 - 10 Years Exp Salary : Upto INR 25 Lacs per annum Preferred Notice Period : Within 30 Days Shift : 10:00AM to 7:00PM IST Opportunity Type: Hybrid (Pune) Placement Type: Permanent (*Note: This is a requirement for one of Uplers' Clients) Must have skills required : AWS Glue, Databricks, Azure - Data Factory, SQL, Python, Data Modelling, ETL Good to have skills : Big Data Pipelines, Data Warehousing Forbes Advisor (One of Uplers' Clients) is Looking for: Data Quality Analyst who is passionate about their work, eager to learn and grow, and who is committed to delivering exceptional results. If you are a team player, with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview Description Position: Lead Data Engineer (Databricks) Location: Pune, Ahmedabad Required Experience: 7 to 10 years Preferred: Immediate Joiners Job Overview: We are looking for an accomplished Lead Data Engineer with expertise in Databricks to join our dynamic team. This role is crucial for enhancing our data engineering capabilities, and it offers the chance to work with advanced technologies, including Generative AI. Key Responsibilities: Lead the design, development, and optimization of data solutions using Databricks, ensuring they are scalable, efficient, and secure. Collaborate with cross-functional teams to gather and analyse data requirements, translating them into robust data architectures and solutions. Develop and maintain ETL pipelines, leveraging Databricks and integrating with Azure Data Factory as needed. Implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation. Ensure data quality, governance, and security practices are adhered to, maintaining the integrity and reliability of data solutions. Provide technical leadership and mentorship to junior engineers, fostering an environment of learning and growth. Stay updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities. Qualifications: Bachelors or masters degree in computer science, Information Technology, or a related field. 7+ to 10 years of experience in data engineering, with a focus on Databricks. Proven expertise in building and optimizing data solutions using Databricks and integrating with Azure Data Factory/AWS Glue. Proficiency in SQL and programming languages such as Python or Scala. Strong understanding of data modelling, ETL processes, and Data Warehousing/Data Lakehouse concepts. Familiarity with cloud platforms, particularly Azure, and containerization technologies such as Docker. Excellent analytical, problem-solving, and communication skills. Demonstrated leadership ability with experience mentoring and guiding junior team members. Preferred Skills: Experience with Generative AI technologies and their applications. Familiarity with other cloud platforms, such as AWS or GCP. Knowledge of data governance frameworks and tools. How to apply for this opportunity: Easy 3-Step Process: 1. Click On Apply! And Register or log in on our portal 2. Upload updated Resume & Complete the Screening Form 3. Increase your chances to get shortlisted & meet the client for the Interview! About Our Client: At Inferenz, our team of innovative technologists and domain experts help accelerating the business growth through digital enablement and navigating the industries with data, cloud and AI services and solutions. 
We dedicate our resources to increase efficiency and gain a greater competitive advantage by leveraging various next generation technologies. About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
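
A brief, assumed sketch of the Databricks/PySpark work described above: reading raw data, deduplicating, and writing a partitioned Delta table. The paths and table names are illustrative, and the `spark` session is provided by the Databricks runtime.

```python
from pyspark.sql import functions as F

# Raw JSON landed on a mounted storage path (path is an assumption).
raw = spark.read.format("json").load("/mnt/raw/orders/")

# Deduplicate on the business key and stamp the ingestion date for partitioning.
curated = (
    raw
    .dropDuplicates(["order_id"])
    .withColumn("ingest_date", F.current_date())
)

# Write a managed Delta table, partitioned by ingestion date.
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("ingest_date")
    .saveAsTable("analytics.orders_curated")
)
```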

Posted 1 month ago

Apply

2 - 7 years

6 - 10 Lacs

Bengaluru

Work from Office


Hello Talented Techie! We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective implementations. We are looking for a Data Engineer (AWS, Confluent & SnapLogic). Data Integration: Integrate data from various Siemens organizations into our data factory, ensuring seamless data flow and real-time data fetching. Data Processing: Implement and manage large-scale data processing solutions using AWS Glue, ensuring efficient and reliable data transformation and loading. Data Storage: Store and manage data in a large-scale data lake, utilizing Iceberg tables in Snowflake for optimized data storage and retrieval. Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency. Data Products: Create and maintain data products that meet the needs of various stakeholders, providing actionable insights and supporting data-driven decision-making. Workflow Management: Use Apache Airflow to orchestrate and automate data workflows, ensuring timely and accurate data processing. Real-time Data Streaming: Utilize Confluent Kafka for real-time data streaming, ensuring low-latency data integration and processing. ETL Processes: Design and implement ETL processes using SnapLogic, ensuring efficient data extraction, transformation, and loading. Monitoring and Logging: Use Splunk for monitoring and logging data processes, ensuring system reliability and performance. You'd describe yourself as: Experience: 3+ years of relevant experience in data engineering, with a focus on AWS Glue, Iceberg tables, Confluent Kafka, SnapLogic, and Airflow. Technical Skills: Proficiency in AWS services, particularly AWS Glue. Experience with Iceberg tables and Snowflake. Knowledge of Confluent Kafka for real-time data streaming. Familiarity with SnapLogic for ETL processes. Experience with Apache Airflow for workflow management. Understanding of Splunk for monitoring and logging. Programming Skills: Proficiency in Python, SQL, and other relevant programming languages. Data Modeling: Experience with data modeling and database design. Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve data-related issues. Preferred Qualities: Attention to Detail: Meticulous attention to detail, ensuring data accuracy and quality. Communication Skills: Excellent communication skills, with the ability to collaborate effectively with cross-functional teams. Adaptability: Ability to adapt to changing technologies and work in a fast-paced environment. Team Player: Strong team player with a collaborative mindset. Continuous Learning: Eagerness to learn and stay updated with the latest trends and technologies in data engineering. Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide.
We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Find out more about Siemens careers at: www.siemens.com/careers
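
For the Confluent Kafka ingestion named above, a minimal consumer sketch using the confluent-kafka Python client follows; the broker address, topic, and group ID are placeholders, and credentials would normally come from a secret store.

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker.example:9092",
    "group.id": "data-factory-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders.raw"])  # topic name is illustrative

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Hand each record to the downstream transformation/loading step.
        print(msg.key(), msg.value())
finally:
    consumer.close()
```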

Posted 1 month ago

Apply

3 - 5 years

6 - 10 Lacs

Bengaluru

Work from Office


Hello Talented Techie! We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective implementations. We are looking for a Data Engineer: a skilled Data Architect/Engineer with strong expertise in AWS and data lake solutions. If you're passionate about building scalable data platforms, this role is for you. Your responsibilities will include: Architect & Design: Build scalable and efficient data solutions using AWS services like Glue, Redshift, S3, Kinesis (Apache Kafka), DynamoDB, Lambda, Glue Streaming ETL, and EMR. Real-Time Data Integration: Integrate real-time data from multiple Siemens organizations into our central data lake. Data Lake Management: Design and manage large-scale data lakes using S3, Glue, and Lake Formation. Data Transformation: Apply transformations to ensure high-quality, analysis-ready data. Snowflake Integration: Build and manage pipelines for Snowflake, using Iceberg tables for best performance and flexibility. Performance Tuning: Optimize pipelines for speed, scalability, and cost-effectiveness. Security & Compliance: Ensure all data solutions meet security standards and compliance guidelines. Team Collaboration: Work closely with data engineers, scientists, and app developers to deliver full-stack data solutions. Monitoring & Troubleshooting: Set up monitoring tools and quickly resolve pipeline issues when needed. You'd describe yourself as: Experience: 3+ years of experience in data engineering or cloud solutioning, with a focus on AWS services. Technical Skills: Proficiency in AWS services such as AWS API, AWS Glue, Amazon Redshift, S3, Apache Kafka and Lake Formation. Experience with real-time data processing and streaming architectures. Big Data Querying Tools: Solid understanding of big data querying tools (e.g., Hive, PySpark). Programming: Strong programming skills in languages such as Python, Java, or Scala for building and maintaining scalable systems. Problem-Solving: Excellent problem-solving skills and the ability to troubleshoot complex issues. Communication: Strong communication skills, with the ability to work effectively with both technical and non-technical stakeholders. Certifications: AWS certifications are a plus. Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Find out more about Siemens careers at: www.siemens.com/careers

Posted 1 month ago

Apply

5 - 7 years

15 - 18 Lacs

Mumbai, Pune, Bengaluru

Work from Office


Hands-on experience with AWS services including S3, Lambda, Glue, API Gateway, and SQS. Strong skills in data engineering on AWS, with proficiency in Python, PySpark & SQL. Experience with batch job scheduling and managing data dependencies. Knowledge of data processing tools like Spark and Airflow. Automate repetitive tasks and build reusable frameworks to improve efficiency. Provide Run/DevOps support and manage the ongoing operation of data services. (Immediate joiners.) Location: Bengaluru, Mumbai, Pune, Chennai, Kolkata, Hyderabad.

Posted 1 month ago

Apply

5 - 10 years

15 - 27 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid


Role & responsibilities 4+ years of overall experience with at least 3 years of experience in AWS Data Engineering • Implement ETL processes to extract, transform, and load data from various sources into data lakes or data warehouses. • Hands-on experience implementing data ingestion, ETL and data processing using AWS Glue, Spark, and Python: • Proficiency in AWS services related to data engineering, such as AWS IAM, S3, SQS, SNS, Glue, Lambda, Athena, RDS, and CloudWatch • Strong knowledge of SQL (e.g., joins and aggregations) and experience with relational databases • Monitor and troubleshoot data pipeline issues to ensure smooth operation. • CI/CD Pipelines: Good experience of CI/CD tools and pipelines, particularly Bitbucket, with experience in automating deployment processes. Experience in delivering AI/ML-related projects, including model development, data preprocessing, and deployment in production environments

Posted 1 month ago

Apply

4 - 7 years

9 - 16 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


About Company: Avisoft (https://avisoft.io/) is a Technology and IT services company based in Mohali and Jammu serving clients globally. We offer Product Engineering, IT Consultancy, Project Outsourcing and Staff Augmentation services. We partner with businesses to design and build Tech platforms from scratch, or to re-engineer and modernize their legacy systems. Our teams have expertise in Full Stack Technologies, REST API Servers, Blockchain, DevOps, Cloud Technologies, Data Engineering, and Test Automation. We are building next gen SaaS platforms for e-commerce and health-tech domains. About the Role: We are seeking a highly skilled Python Developer with a minimum of 4+ years of experience to join our team. In this role, you will play a key part in designing and implementing functional requirements, building efficient back-end features, managing testing and bug fixes, and providing mentorship to junior team members. Your expertise in Python development, knowledge of design patterns, and experience with testing frameworks such as pytest or unit test will be crucial in contributing to the success of our projects. In addition, you will leverage AI/ML technologies to enhance our data processing and application solutions. Responsibilities: Design and implement functional requirements for software applications. Develop robust and efficient back-end features using Python. Integrate AI/ML models and algorithms to solve complex business problems and enhance product functionality. Oversee testing processes, addressing and resolving bugs using frameworks like pytest or unittest. Prepare comprehensive technical documentation for reference. Mentor and coach junior team members, sharing your knowledge and experience. Actively participate in code reviews and discussions to ensure code quality. Implement software enhancements and suggest improvements for ongoing projects. Collaborate with cross-functional teams to integrate AI/ML models into existing applications. Skills: Proven experience as a Python Developer with a minimum of 4 years in a similar role. Excellent understanding and application of design patterns in software development. Strong experience with testing frameworks, preferably pytest or unittest. Familiarity with data processing frameworks such as pandas, pyspark, or similar. AI/ML experience, with expertise in integrating machine learning models into applications. Experience with cloud platforms, particularly Amazon Web Services (AWS), including AWS Glue and AWS Sagemaker for deploying models. Solid understanding of databases and SQL. Familiarity with building back-end solutions and APIs for machine learning models. Experience with AI/ML libraries such as TensorFlow, PyTorch, and Scikit-learn. Exceptional attention to detail in coding and problem-solving. Demonstrated leadership skills and ability to guide and motivate a team. Familiarity with React for front-end integration with AI/ML-driven back-end services. Bachelor's degree in Computer Science, Information Technology, or a related field. 
Skills: Python, AI/ML integration (TensorFlow, PyTorch, Scikit-learn), backend development, AWS (SageMaker, Glue), data processing (Pandas, PySpark), design patterns, unit testing (pytest, unittest), SQL, NoSQL, model deployment, scalable applications, cloud platforms (AWS, Google Cloud, Azure), software development. Location: Remote, Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
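
A small pytest example of the unit-testing style this role emphasises; the discount function below is invented purely for illustration and is not from any Avisoft codebase.

```python
# test_pricing.py
import pytest

def apply_discount(price: float, pct: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

def test_apply_discount_happy_path():
    assert apply_discount(200.0, 25) == 150.0

@pytest.mark.parametrize("bad_pct", [-5, 150])
def test_apply_discount_rejects_invalid_percentage(bad_pct):
    with pytest.raises(ValueError):
        apply_discount(100.0, bad_pct)
```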

Posted 1 month ago

Apply

5 - 10 years

20 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


To complement the existing cross-functional team, Zensar is looking for a Data Engineer who will assist in designing and also implement scalable and robust processes to support the data engineering capability. This role will be responsible for implementing and supporting large-scale data ecosystems across the Group. This incumbent will use best practices in cloud engineering, data management and data storage to continue our drive to optimize the way that data is stored, consumed and ultimately democratized. The incumbent will also engage with stakeholders across the organization with use of the Data Engineering practices to facilitate the improvement in the way that data is stored and consumed. Role & responsibilities Assist in designing and implementing scalable and robust processes for ingesting and transforming complex datasets. Designs, develops, constructs, maintains and supports data pipelines for ETL from a multitude of sources. Creates blueprints for data management systems to centralize, protect, and maintain data sources. Focused on data stewardship and curation, the data engineer enables the data scientist to run their models and analyses to achieve the desired business outcomes Ingest large, complex data sets that meet functional and non-functional requirements. Enable the business to solve the problem of working with large volumes of data in diverse formats, and in doing so, enable innovative solutions. Design and build bulk and delta data lift patterns for optimal extraction, transformation, and loading of data. Supports the organisations cloud strategy and aligns to the data achitecture and governance including the implementation of these data governance practices. Engineer data in the appropriate formats for downstream customers, risk and product analytics or enterprise applications. Assist in identifying, designing and implementing robust process improvement activities to drive efficiency and automation for greater scalability. This includes looking at new solutions and new ways of working and being on the forefront of emerging technologies. Work with various stakeholders across the organization to understand data requirements and apply technical knowledge of data management to solve key business problems. Provide support in the operational environment with all relevant support teams for data services. Provide input into the management of demand across the various data streams and use cases. Create and maintain functional requirements and system specifications in support of data architecture and detailed design specifications for current and future designs. Support test and deployment of new services and features. Provides technical leadership to junior data engineers in the team Preferred candidate profile A degree in Computer Science, Business Informatics, Mathematics, Statistics, Physics or Engineering. 3+ years of data engineering experience 3+ years of experience with any data warehouse technical architectures, ETL/ELT, and reporting/analytics tools including , but not limited to , any of the following combinations (1) SSIS ,SSRS or something similar (2) ETL Frameworks, (3) Spark (4) AWS data builds Should be at least at a proficient level in at least one of Python or Java Some experience with R, AWS, XML, json, cron will be beneficial Experience with designing and implementing Cloud (AWS) solutions including use of APIs available. Knowledge of Engineering and Operational Excellence using standard methodologies. 
Best practices in software engineering, data management, data storage, data computing and distributed systems to solve business problems with data.

Posted 1 month ago

Apply

5 - 8 years

5 - 15 Lacs

Pune, Chennai

Work from Office


• SQL: 2-4 years of experience • Spark: 1-2 years of experience • NoSQL Databases: 1-2 years of experience • Database Architecture: 2-3 years of experience • Cloud Architecture: 1-2 years of experience • Experience in programming language like Python • Good Understanding of ETL (Extract, Transform, Load) concepts • Good analytical and problem-solving skills • Inclination for learning & be self-motivated. • Knowledge of ticketing tool like JIRA/SNOW. • Good communication skills to interact with Customers on issues & requirements. Good to Have: • Knowledge/Experience in Scala.

Posted 1 month ago

Apply

6 - 11 years

11 - 20 Lacs

Hyderabad

Work from Office


We are hiring for Data Engineer for Hyderabad Location. Please find the below Job Description. Role & responsibilities 6+ years of experience in data engineering, specifically in cloud environments like AWS. Proficiency in Python and PySpark for data processing and transformation tasks. Solid experience with AWS Glue for ETL jobs and managing data workflows. Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration. Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2. Deep understanding of ETL concepts and best practices.. Strong knowledge of SQL for querying and manipulating relational and semi-structured data. Experience with Data Warehousing and Big Data technologies, specifically within AWS.

Posted 1 month ago

Apply

5 - 10 years

13 - 18 Lacs

Hyderabad, Bengaluru

Work from Office


Role & responsibilities

Data Engineer

Responsibilities:
Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets (a minimal illustrative Glue job sketch follows this listing).
Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services such as Lambda, S3, and EC2.

Qualifications

Experience:
6-10 years of experience in data engineering, specifically in cloud environments such as AWS.
Proficiency in PySpark for distributed data processing and transformation.
Solid experience with AWS Glue for ETL jobs and managing data workflows.
Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills:
Proficiency in Python and PySpark for data processing and transformation tasks.
Deep understanding of ETL concepts and best practices.
Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers).
Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools.
Familiarity with AWS S3 for data storage and management, including file formats such as CSV, Parquet, and Avro.
Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
Experience with data warehousing and big data technologies, specifically within AWS.

Additional Skills:
Experience with AWS Lambda for serverless data processing and orchestration.
Understanding of AWS Redshift for data warehousing and analytics.
Familiarity with data lakes, Amazon EMR, and Kinesis for streaming data processing.
Knowledge of data governance practices, including data lineage and auditing.
Familiarity with CI/CD pipelines and Git for version control.
Experience with Docker and containerization for building and deploying applications.

Primary Location: IN-KA-Bangalore, Hyderabad
Schedule: Full Time
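
To ground the Glue and PySpark responsibilities above, the following is a minimal sketch of the kind of job script this role would own: it reads a table registered in the Glue Data Catalog, applies a small PySpark cleansing step, and writes partitioned Parquet back to S3. The database, table, bucket, and column names (raw_db, orders_csv, order_id, amount, order_date) are illustrative assumptions, not details taken from the posting.

# glue_orders_etl.py -- minimal AWS Glue (PySpark) job sketch; all names are illustrative.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql.functions import col

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw table registered in the Glue Data Catalog (e.g., by a crawler).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders_csv"
)

# Transform with plain PySpark: drop duplicate orders and filter invalid amounts.
df = raw.toDF().dropDuplicates(["order_id"]).filter(col("amount") > 0)

# Write the curated output to S3 as Parquet, partitioned by order date.
glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(df, glue_context, "curated_orders"),
    connection_type="s3",
    connection_options={
        "path": "s3://example-bucket/curated/orders/",
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)

job.commit()

In practice a script like this would be scheduled by a Glue trigger, AWS Data Pipeline, or an Airflow DAG rather than run ad hoc.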

Posted 1 month ago

Apply

7 - 9 years

14 - 24 Lacs

Chennai

Work from Office

Naukri logo

Experience Range: 4-8 years in Data Quality Engineering

Job Summary:
As a Senior Data Quality Engineer, you will play a key role in ensuring the reliability and accuracy of our data platform and projects. Your primary responsibility will be developing and leading the product testing strategy while leveraging your technical expertise in AWS and big data technologies. You will also guide the team in implementing shift-left testing using Behavior-Driven Development (BDD) methodologies integrated with AWS CodeBuild CI/CD. Your contributions will ensure the successful execution of testing across multiple data platforms and projects.

Key Responsibilities:
Develop Product Testing Strategy: Collaborate with stakeholders to define and implement the product testing strategy. Identify key platform and project responsibilities, ensuring a comprehensive and effective testing approach.
Lead Testing Strategy Implementation: Take charge of implementing the testing strategy across data platforms and projects, ensuring thorough coverage and timely completion of tasks.
BDD & AWS Integration: Utilize Behavior-Driven Development (BDD) methodologies to drive shift-left testing and integrate AWS services such as AWS Glue, Lambda, Airflow jobs, Athena, QuickSight, Amazon Redshift, DynamoDB, Parquet, and Spark to improve test effectiveness.
Test Execution & Reporting: Design, execute, and document test cases while providing comprehensive reporting on testing results. Collaborate with the team to identify the appropriate data for testing and manage test environments.
Collaboration with Developers: Work closely with application developers and technical support to analyze and resolve identified issues in a timely manner.
Automation Solutions: Create and maintain automated test cases, enhancing the test automation process to improve testing efficiency.

Must-Have Skills:
Big Data Platform Expertise: At least 2 years of experience as a technical test lead working on a big data platform, preferably with direct experience in AWS.
Strong Programming Skills: Proficiency in object-oriented programming, particularly with Python. Ability to use programming skills to enhance test automation and tooling.
BDD & AWS Integration: Experience with Behavior-Driven Development (BDD) practices and AWS technologies, including AWS Glue, Lambda, Airflow, Athena, QuickSight, Amazon Redshift, DynamoDB, Parquet, and Spark.
Testing Frameworks & Tools: Familiarity with testing frameworks such as PyTest and PyTest-BDD, and CI/CD tools such as AWS CodeBuild and Harness (a minimal PyTest-BDD sketch follows this listing).
Communication Skills: Exceptional communication skills with the ability to convey complex technical concepts to both technical and non-technical stakeholders.

Good-to-Have Skills:
Automation Engineering: Expertise in creating automation testing solutions to improve testing efficiency.
Experience with Test Management: Knowledge of test management processes, including test case design, execution, and defect tracking.
Agile Methodologies: Experience working in Agile environments, with familiarity in using Agile tools such as Jira to track stories, bugs, and progress.

Minimum Requirements:
Bachelor's degree in Computer Science or a related field, or HS/GED with 8 years of experience in Data Quality Engineering.
At least 4 years of experience in big data platforms and test engineering, with a strong focus on AWS and Python.

Skills: Test Automation, Python, Data Engineering
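
To illustrate the shift-left BDD approach this posting emphasizes, below is a minimal PyTest-BDD sketch of one data quality check. The feature text, step wording, file paths, and column names are illustrative assumptions; a real suite would load the curated output from S3 or Athena rather than stubbing it with an in-memory frame, and would run inside AWS CodeBuild as part of CI/CD.

# test_orders_quality.py -- minimal PyTest-BDD sketch; all names are illustrative.
#
# A matching feature file, features/orders_quality.feature, would contain:
#   Feature: Curated orders data quality
#     Scenario: No orders are lost by the Glue job
#       Given the raw orders dataset
#       When the Glue job output is loaded
#       Then no order_id is lost in the transformation

import pandas as pd
from pytest_bdd import scenarios, given, when, then

scenarios("features/orders_quality.feature")


@given("the raw orders dataset", target_fixture="context")
def raw_orders():
    # Stubbed in memory here; a real suite might pull a sample from S3 or Athena.
    raw = pd.DataFrame({"order_id": [1, 2, 2, 3], "amount": [10.0, 5.5, 5.5, 7.25]})
    return {"raw": raw}


@when("the Glue job output is loaded")
def load_curated(context):
    # Stand-in for reading the curated Parquet written by the ETL job.
    context["curated"] = context["raw"].drop_duplicates(subset=["order_id"])


@then("no order_id is lost in the transformation")
def no_ids_lost(context):
    assert set(context["raw"]["order_id"]) == set(context["curated"]["order_id"])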

Posted 1 month ago

Apply

7 - 9 years

14 - 24 Lacs

Chennai

Work from Office

Naukri logo

Job Summary:
As a Senior Data Quality Engineer, you will play a crucial role in ensuring the reliability and accuracy of our data platform and projects. Your primary responsibilities will involve developing and leading the product testing strategy, leveraging your technical expertise in AWS and big data technologies. You will also work closely with the team to implement shift-left testing using Behavior-Driven Development (BDD) methodologies integrated with AWS CodeBuild CI/CD.

Key Responsibilities:
Develop Product Testing Strategy: Collaborate with stakeholders to define and design the product testing strategy, identifying key platform and project responsibilities.
Lead Testing Strategy Implementation: Take charge of implementing the testing strategy, ensuring its successful execution across the data platform and projects. Oversee and coordinate testing tasks to ensure thorough coverage and timely completion.
BDD and AWS Integration: Guide the team in utilizing Behavior-Driven Development (BDD) practices for shift-left testing. Leverage AWS services (e.g., AWS Glue, Lambda, Airflow, Athena, QuickSight, Redshift, DynamoDB, Parquet, Spark) to enhance testing effectiveness.
Test Case Management: Work with the team to identify and prepare data for testing, create and maintain automated test cases, execute test cases, and document results.
Problem Resolution: Assist developers and technical support staff in resolving identified issues in a timely manner.
Automation Engineering Solutions: Create test automation solutions that improve the efficiency and coverage of testing efforts.

Must-Have Skills:
Big Data Platform Expertise: At least 2 years of experience as a technical test lead working on a big data platform, preferably with direct experience in AWS.
AI/ML Familiarity: Experience with AI/ML concepts and practical experience working on AI/ML-driven initiatives.
Synthetic Test Data Creation: Knowledge of synthetic data tooling, test data generation, and best practices (a minimal sketch follows this listing).
Offshore Team Leadership: Proven ability to lead and collaborate with offshore teams, managing projects with limited real data access.
Programming Expertise: Strong proficiency in object-oriented programming, particularly with Python.
Testing Tools/Frameworks: Familiarity with tools such as PyTest, PyTest-BDD, AWS CodeBuild, and Harness.
Excellent Communication: Ability to communicate effectively with both technical and non-technical stakeholders, explaining complex technical concepts in simple terms.

Good-to-Have Skills:
Experience with AWS Services: Familiarity with AWS data lake and data warehouse components such as AWS Glue, Lambda, Airflow jobs, Athena, QuickSight, Amazon Redshift, DynamoDB, Parquet, and Spark.
Test Automation Experience: Practical experience in implementing test automation frameworks for complex data platforms and systems.
Shift-Left Testing Knowledge: Experience in implementing shift-left testing strategies, particularly using Behavior-Driven Development (BDD) methodologies.
Project Management: Ability to manage multiple testing projects simultaneously while ensuring the accuracy and quality of deliverables.

Minimum Requirements:
Bachelor's degree in Computer Science and 4 years of relevant experience, or High School/GED with 8 years of relevant experience.
Relevant experience: big data platform testing, test strategy leadership, automation, and working with AWS services and AI/ML concepts.

Skills: Test Automation, Python, Data Engineering
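
Because synthetic test data creation is called out as a must-have, here is a minimal sketch of generating a reproducible synthetic dataset with deliberately injected edge cases for pipeline tests. The schema, value ranges, and output file are illustrative assumptions rather than the employer's data model; writing Parquet keeps column types intact and matches the lake formats named above (it requires pyarrow or fastparquet to be installed).

# synth_orders.py -- minimal synthetic-test-data sketch; schema and values are illustrative.
import numpy as np
import pandas as pd


def make_orders(n=1000, seed=42):
    """Generate a reproducible synthetic orders dataset for pipeline tests."""
    rng = np.random.default_rng(seed)
    df = pd.DataFrame({
        "order_id": np.arange(1, n + 1),
        "customer_id": rng.integers(1, 200, size=n),
        "amount": rng.gamma(shape=2.0, scale=50.0, size=n).round(2),
        "order_date": pd.to_datetime("2024-01-01")
        + pd.to_timedelta(rng.integers(0, 365, size=n), unit="D"),
    })
    # Inject known edge cases so tests exercise the cleansing logic.
    df.loc[0, "amount"] = -1.0  # negative amount should be filtered out
    df = pd.concat([df, df.iloc[[1]]], ignore_index=True)  # duplicate order_id should be deduplicated
    return df


if __name__ == "__main__":
    make_orders().to_parquet("orders_synthetic.parquet", index=False)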

Posted 1 month ago

Apply

