
Data Engineer

6 years

Posted: 2 days ago | Platform: LinkedIn

Work Mode

On-site

Job Type

Full Time

Job Description

NPS Prism

Title: Data Engineer

Location: India (Hybrid)
Experience: 3–6 Years
Employment Type: Full-time

Company Profile:

NPS Prism is a market-leading, cloud-based CX benchmarking and operational improvement platform owned by Bain & Company. NPS Prism provides its customers with actionable insights and analysis that guide the creation of game-changing customer experiences. Based on rock-solid sampling, research, and analytic methodology, it lets customers see how they compare to their competitors on overall NPS®, and on every step of the customer journey.
With NPS Prism you can see where you’re strong, where you lag, and how customers feel about doing business with you and your competitors, in their own words. The result: you can prioritize the customer interactions that matter most. NPS Prism customers use our customer experience benchmarks and insights to propel their growth and outpace the competition.

Launched in 2019, NPS Prism has rapidly grown to a team of over 200, serving dozens of clients around the world. NPS Prism is 100% owned by Bain & Company, one of the top management consulting firms in the world and a company consistently recognized as one of the world’s best places to work. We believe that diversity, inclusion, and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities, and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally.

Position Summary:

We are seeking a highly skilled and experienced Data Engineer to join our team. The ideal candidate will have strong expertise in Python, SQL, and PySpark, with proven experience working on Databricks and cloud platforms such as Azure and AWS. A solid understanding of ETL tools such as Alteryx, as well as basic knowledge of DevOps practices and CI/CD pipelines, will be advantageous, as will experience working with Git and version control. This is a unique opportunity to work in a dynamic, fast-paced environment, designing and implementing robust data solutions for scalable business needs.

Key Responsibilities:

Data Pipeline Development:

  • Design, build, and optimize ETL/ELT workflows using tools such as Databricks, SQL, and Python/PySpark; Alteryx experience is good to have
  • Develop and maintain robust, scalable, and efficient data pipelines for processing large datasets from source to consumption

Cloud Data Engineering:

  • Work on cloud platforms (Azure, AWS) to build and manage data lakes, data warehouses, and scalable data architectures
  • Utilize cloud services like Azure Data Factory or AWS Glue for data processing and orchestration

Databricks and Big Data Solutions:

  • Use Databricks for big data processing, analytics, and real-time data processing
  • Leverage Apache Spark for distributed computing and handling complex data transformations

Data Management:

  • Create and manage SQL-based data solutions, ensuring high availability, scalability, and performance
  • Develop and enforce data quality checks and validation mechanisms

Collaboration and Stakeholder Engagement:

  • Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver impactful data solutions
  • Understand business requirements and translate them into technical solutions

DevOps and CI/CD:

  • Leverage CI/CD pipelines to streamline development, testing, and deployment of data engineering workflows
  • Work with DevOps tools like Git, Jenkins, or Azure DevOps for version control and automation

Documentation and Optimization:

  • Maintain clear documentation for data workflows, pipelines, and processes
  • Optimize data systems for performance, scalability, and cost-efficiency

Required Qualifications, Experience, and Skills

Educational Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field

Experience:

  • 3–6 years of experience in Data Engineering or related roles
  • Hands-on experience with big data processing frameworks, data lakes, and cloud-native services

Skills:

Core Skills:

  • Proficiency in Python, SQL, and PySpark for data processing and manipulation
  • Proven experience in Databricks and Apache Spark
  • Expertise in working with cloud platforms like Azure, AWS
  • Sound knowledge of ETL processes; experience with tools like Alteryx is good to have

Data Engineering Expertise:

  • Experience leveraging data lakes, data warehouses, and data pipelines
  • Ability to build a data pipeline from scratch
  • Strong understanding of distributed systems and big data technologies

DevOps and CI/CD:

  • Basic understanding of DevOps principles and familiarity with CI/CD pipelines
  • Hands-on experience with tools like Git, Jenkins, or Azure DevOps

Additional Skills:

  • Familiarity with data visualization tools like Power BI, Tableau, or similar is a plus
  • Knowledge of streaming technologies such as Kafka or Event Hubs is desirable
  • Strong problem-solving skills and a knack for optimizing data solutions
  • Excellent communication (oral and written) skills
Powered by JazzHR

NPS Prism

Software Development

Boston, Massachusetts

201-500 Employees

