
1517 Data Processing Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 6.0 years

9 - 10 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Title: Assistant Manager - Learning Designer (AI/ML)
Location: HSR, Bangalore (Work from office)
Experience Required: Minimum 4 years
Department: Product Innovation
Employment Type: Full-time

Role Overview:
We are looking for a passionate AI/ML Subject Matter Expert with a strong background in Artificial Intelligence, Machine Learning, Python, and Deep Learning to support our course development initiatives. As an SME, you will collaborate with the content, instructional design, and product teams to create high-quality, industry-relevant learning content.

Key Responsibilities:
- Research and stay updated on emerging trends, tools, and technologies in AI/ML, Python, Deep Learning, and Data Science.
- Contribute to designing, reviewing, and refining Simplilearn's AI/ML curriculum and learning paths.
- Create and validate course content, including learning objectives, project ideas, assessments, and use cases.
- Work closely with instructional designers to ensure technical accuracy and pedagogical effectiveness.
- Support the content production team with scripting, reviewing, and storyboarding.
- Participate in webinars, doubt-clearing sessions, and learner engagement activities when required.
- Guide the integration of industry use cases and capstone projects to enhance learner outcomes.

Required Skills and Qualifications:
- Minimum 4 years of hands-on experience in AI/ML and related technologies.
- Proficiency in Python, with experience in libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, Keras, or PyTorch.
- Strong understanding of machine learning algorithms, data preprocessing, deep learning models, and NLP.
- Good analytical and communication skills.
- Experience in curriculum development or content creation is a strong plus.
- Exposure to e-learning, edtech, or training is desirable.
- Ability to effectively lead and manage a team by allocating tasks based on individual strengths and capabilities, ensuring optimal performance and successful project delivery.

Nice to Have:
- Published work, blogs, or GitHub projects in AI/ML.
- Experience with LLMs, GenAI, or RAG frameworks.
- Prior instructional design or teaching experience.

Posted 3 hours ago

Apply

2.0 - 6.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Responsibilities:
The Data Scientist will leverage expertise in advanced statistical and modelling techniques to design, prototype, and build next-generation analytics engines and services. They will work closely with the analytics and business teams to derive actionable insights, helping the organization achieve its strategic goals. The work involves a high level of interaction with the integrated analytics team, including data engineers, translators, and more senior data scientists.
- Has expertise in implementing complex statistical analyses for data processing, exploration, model building, and implementation.
- Leads teams of 2-3 associate data scientists in the use-case building and delivery process.
- Communicates complex technical concepts to both technical and non-technical audiences.
- Plays a key role in driving ideation around the modelling process and developing models; can conceptualize and drive re-iteration and fine-tuning of models.
- Contributes to knowledge building and sharing by researching best practices, documenting solutions, and continuously iterating on new ways to solve problems.
- Mentors junior team members to do the same.

REQUIRED EDUCATION AND EXPERIENCE:
- Master's degree in Computer Science, Statistics, Math, Operations Research, Economics, or a related field.
- Advanced programming skills in at least one language (R/Python/Scala).
- Practical experience developing advanced statistical and machine learning models.
- At least 2 years of relevant analytics experience.
- Experience using large database systems preferred.
- Niche expertise in at least one functional domain.

REQUIRED SKILLS:
- Ability to work well in agile environments, in diverse teams with multiple stakeholders.
- Experience leading small teams.
- Able to break complex problems down into simpler parts.
- Ability to effectively communicate complex analytical and technical content.
- High-energy, passionate individual who can work closely with other team members.
- Strong entrepreneurial drive to test new, out-of-the-box techniques.
- Able to prioritize workstreams and adopt an agile, iterative approach; an experimental mindset to drive innovation.

LOCATION: Bangalore

What's in it for you?
- Disruptive projects: Work on breakthrough digital-and-analytics projects to enable UPL's vision of building a future-ready organization. This involves deploying solutions that increase our sales, sustain our profitability, improve our speed to market, supercharge our R&D efforts, and support the way we work internally, helping ensure we have access to the best business insights our data analysis can offer.
- Cross-functional leadership exposure: Work directly under the guidance of functional leadership at UPL on the most critical business problems for the organization (and the industry) today. You will gain exposure to a large cross-functional team (spanning manufacturing, procurement, commercial, quality, and IT/OT experts), allowing multi-functional learning in D&A deployment.
- An environment fostering professional and personal development: Strengthen professional learning in a highly impact-oriented and meritocratic environment focused on delivering disproportionate business value through innovative solutions, supported by on-the-job coaching from experienced domain experts and continuous feedback from a highly motivated and capable set of peers. Comprehensive training programs through UPL's D&A academy help accelerate growth opportunities.

Come join us on this transformational journey! Let's collectively change the game with Digital & Analytics!
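As a rough illustration of the statistical model building this role describes, here is a minimal sketch of fitting an ordinary least-squares regression with NumPy. The data and coefficients are synthetic, invented purely for illustration; they are not part of the posting.

```python
# Minimal sketch of statistical model building: ordinary least-squares
# regression fit with NumPy's linear algebra routines. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # two explanatory features
true_coef = np.array([1.5, -2.0])
y = X @ true_coef + 0.5 + 0.01 * rng.normal(size=200)  # intercept 0.5, small noise

# Add an intercept column and solve the least-squares problem directly.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

intercept, slopes = coef[0], coef[1:]
```

In practice a library such as scikit-learn or statsmodels would add diagnostics and regularization, but the closed-form fit above captures the core idea.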

Posted 3 hours ago

Apply

1.0 - 5.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Responsibilities:
The Data Scientist will leverage expertise in advanced statistical and modelling techniques to design, prototype, and build next-generation analytics engines and services. They will work closely with the analytics and business teams to derive actionable insights, helping the organization achieve its strategic goals. The work involves a high level of interaction with the integrated analytics team, including data engineers, translators, and more senior data scientists.
- Has expertise in implementing complex statistical analyses for data processing, exploration, model building, and implementation.
- Leads teams of 2-3 associate data scientists in the use-case building and delivery process.
- Communicates complex technical concepts to both technical and non-technical audiences.
- Plays a key role in driving ideation around the modelling process and developing models; can conceptualize and drive re-iteration and fine-tuning of models.
- Contributes to knowledge building and sharing by researching best practices, documenting solutions, and continuously iterating on new ways to solve problems.
- Mentors junior team members to do the same.

REQUIRED EDUCATION AND EXPERIENCE:
- Master's degree in Computer Science, Statistics, Math, Operations Research, Economics, or a related field.
- Advanced programming skills in at least one language (R/Python/Scala).
- Practical experience developing advanced statistical and machine learning models.
- At least 2 years of relevant analytics experience.
- Experience using large database systems preferred.
- Niche expertise in at least one functional domain.

REQUIRED SKILLS:
- Ability to work well in agile environments, in diverse teams with multiple stakeholders.
- Experience leading small teams.
- Able to break complex problems down into simpler parts.
- Ability to effectively communicate complex analytical and technical content.
- High-energy, passionate individual who can work closely with other team members.
- Strong entrepreneurial drive to test new, out-of-the-box techniques.
- Able to prioritize workstreams and adopt an agile, iterative approach; an experimental mindset to drive innovation.

Posted 3 hours ago

Apply

2.0 - 4.0 years

2 - 8 Lacs

Jaipur

Work from Office

Source: Naukri

Responsibilities:
- Work on end-to-end API integrations (REST, WebSocket).
- Implement and optimize data pipelines using Pandas and NumPy.
- Use data structures and algorithms (DSA) to solve real-world, performance-critical problems.
- Handle database interaction (SQLite, PostgreSQL, or MongoDB).
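As a sketch of the pipeline work this listing describes, here is a small Pandas/NumPy transformation over records shaped like a typical REST API response. The field names and values are invented for illustration only.

```python
# Sketch: transform JSON-like records (as a REST API might return them)
# into a cleaned, aggregated table with Pandas. Data is invented.
import pandas as pd

records = [  # stand-in for a parsed API response body
    {"city": "Jaipur", "amount": "120.5"},
    {"city": "Jaipur", "amount": "79.5"},
    {"city": "Delhi",  "amount": None},
    {"city": "Delhi",  "amount": "200"},
]

df = pd.DataFrame(records)
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # None -> NaN
df = df.dropna(subset=["amount"])                            # drop bad rows

totals = df.groupby("city")["amount"].sum()                  # per-city totals
```

A real pipeline would fetch `records` over HTTP and persist `totals` to one of the databases named above; this sketch keeps only the transformation step.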

Posted 4 hours ago

Apply

3.0 - 8.0 years

12 - 14 Lacs

Mumbai

Work from Office

Source: Naukri

Design, develop, and implement data solutions using Azure Data Stack components. Write and optimize advanced SQL queries for data extraction, transformation, and analysis. Develop data processing workflows and ETL processes using Python and PySpark.
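The SQL extraction-and-aggregation work this role calls for can be sketched with Python's built-in SQLite driver as a self-contained stand-in; the table, columns, and data are hypothetical, and in the actual role such queries would run against Azure data stores.

```python
# Sketch: extract and aggregate data with SQL, using in-memory SQLite
# as a stand-in for an Azure database. Schema and rows are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('West', 100.0), ('West', 250.0), ('East', 80.0);
""")

# A CTE computes per-region totals; the outer query filters on them.
rows = conn.execute("""
    WITH totals AS (
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region
    )
    SELECT region, total FROM totals WHERE total > 100 ORDER BY region
""").fetchall()
```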

Posted 4 hours ago

Apply

3.0 - 7.0 years

9 - 12 Lacs

Pune

Work from Office

Source: Naukri

We are looking for a Senior Data Scientist with expertise in Natural Language Processing (NLP), Computer Vision, Generative AI, and IoT data processing. The ideal candidate should have hands-on experience in deploying AI models in cloud (Azure) and edge environments, building data pipelines, and implementing CI/CD workflows. Strong proficiency in Python, SQL, TensorFlow, PyTorch, and cloud services (Azure ML, App Services, Data Factory) is essential. The role requires model monitoring, performance optimization, and IoT device integration for predictive analytics and real-time AI solutions.

Posted 4 hours ago

Apply

3.0 - 6.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Summary:
Join our dynamic team as a Team Member, where you will leverage your expertise in REST APIs, RabbitMQ, Kafka, PostgreSQL, Quarkus, Java 8, and Advanced Java. This hybrid role offers the opportunity to work on innovative projects that drive our company's success. With a focus on collaboration and technical excellence, you will contribute to impactful solutions that enhance our services.

Responsibilities:
- Develop and maintain robust REST API solutions to support seamless integration across platforms.
- Implement messaging solutions using RabbitMQ and Kafka to ensure reliable and efficient data processing.
- Design and optimize PostgreSQL databases to enhance data storage and retrieval performance.
- Utilize Quarkus to build high-performance, scalable applications that meet business requirements.
- Write clean, efficient, and maintainable code in Java 8 and Advanced Java to deliver high-quality software solutions.
- Collaborate with cross-functional teams to gather and analyze requirements, ensuring alignment with project goals.
- Participate in code reviews and provide constructive feedback to enhance code quality and team performance.
- Troubleshoot and resolve technical issues, ensuring minimal disruption to business operations.
- Contribute to the continuous improvement of development processes and practices.
- Stay updated with the latest industry trends and technologies to drive innovation within the team.
- Ensure adherence to best practices in software development, including security and performance optimization.
- Document technical specifications and project progress to facilitate knowledge sharing and collaboration.
- Engage in regular team meetings and discussions to foster a collaborative and supportive work environment.

Qualifications:
- Strong understanding of REST API development and integration.
- Proficiency in RabbitMQ and Kafka for messaging solutions.
- Experience designing and managing PostgreSQL databases.
- Expertise in using Quarkus for application development.
- Skilled in Java 8 and Advanced Java programming.
- Excellent problem-solving and analytical skills.
- Effective communication and teamwork abilities.

Posted 4 hours ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Kolkata, Chennai, Bengaluru

Hybrid

Source: Naukri

Global Gen AI Developer
Enabling a software-defined, electrified future. Visteon is a technology company that develops and builds innovative digital cockpit and electrification products at the leading edge of the mobility revolution. Founded in 2000, Visteon brings decades of automotive intelligence combined with Silicon Valley speed to apply global insights that help transform the software-defined vehicle of the future for many of the world's largest OEMs. The company employs 10,000 people in 18 countries around the globe.

Mission of the Role: Facilitate enterprise machine learning and artificial intelligence solutions using the latest technologies Visteon is adopting globally.

Key Objectives of this Role: The primary goal of the Global ML/AI Developer is to leverage advanced machine learning and artificial intelligence techniques to develop innovative solutions that drive Visteon's strategic initiatives. By collaborating with cross-functional teams and stakeholders, this role identifies opportunities for AI-driven improvements, designs and implements scalable ML models, and integrates these models into existing systems to enhance operational efficiency. By following development best practices, fostering a culture of continuous learning, and staying abreast of AI advancements, the Global ML/AI Developer ensures that all AI solutions align with organizational goals, support data-driven decision-making, and continuously improve Visteon's technological capabilities.

Qualifications, Experience and Skills: 6-8 years

Technical Skills:
- Expertise in machine learning frameworks (e.g., TensorFlow, PyTorch), programming languages (e.g., Python, R, SQL), and data processing tools (e.g., Apache Spark, Hadoop).
- Proficiency in developing, training, and deploying ML models, including supervised and unsupervised learning, deep learning, and reinforcement learning.
- Strong understanding of data engineering concepts, including data preprocessing, feature engineering, and data pipeline development.
- Experience with cloud platforms (preferably Microsoft Azure) for deploying and scaling ML solutions.

Business Acumen: Strong business analysis skills and the ability to translate complex technical concepts into actionable business insights and recommendations.

Key Behaviors:
- Innovation: Continuously seeks out new ideas, technologies, and methodologies to improve AI/ML solutions and drive the organization forward.
- Attention to Detail: Pays close attention to all aspects of the work, ensuring accuracy and thoroughness in data analysis, model development, and documentation.
- Effective Communication: Clearly and effectively communicates complex technical concepts to non-technical stakeholders, ensuring understanding and alignment across the organization.

Posted 6 hours ago

Apply

3.0 - 5.0 years

7 - 11 Lacs

Noida

Work from Office

Source: Naukri

Responsibilities:
- Collaborate closely with project teams to understand their requirements.
- Keep a check on project requirement fulfilment and meet deadlines.
- Define and write data cleaning programs.
- Support additional data requirements.
- Write project-specific scripts.
- Quality-check the final output.
- Maintain comprehensive documentation relating to the project.

Requirements: 3-5 years of relevant data processing experience. Hands-on experience with analysis tools such as SPSS for data validation and Quantum for tabulations, plus Excel, VBA, etc. Willing to learn new data analysis tools.
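The "data cleaning program" responsibility above might look something like this stdlib-only sketch: trimming whitespace, coercing numeric answers, and dropping rows that fail a validation rule. The field names and rules are made up for illustration.

```python
# Sketch of a survey data-cleaning pass: trim whitespace, coerce ages
# to integers, and drop records that fail basic validation.
def clean_records(raw):
    cleaned = []
    for rec in raw:
        name = rec.get("name", "").strip()
        try:
            age = int(str(rec.get("age", "")).strip())
        except ValueError:
            continue                      # drop non-numeric ages
        if name and 0 < age < 120:        # basic plausibility rule
            cleaned.append({"name": name, "age": age})
    return cleaned

rows = clean_records([
    {"name": "  Asha ", "age": " 34 "},
    {"name": "Ravi",    "age": "abc"},   # invalid age, dropped
    {"name": "",        "age": "40"},    # missing name, dropped
])
```

In survey work the same logic would typically be expressed as SPSS syntax or a Quantum edit spec; the Python version is just an accessible stand-in.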

Posted 9 hours ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Mumbai

Work from Office

Source: Naukri

The Senior Spark Tech Lead will be responsible for integrating and maintaining the Quantexa platform, a Spark-based software product from a UK fintech, into our existing systems to enhance our anti-money laundering capabilities. This role requires deep expertise in Spark development, as well as the ability to analyze and understand the underlying data. Additionally, the candidate should have an interest in exploring open-source applications such as those distributed by Apache, plus Kubernetes, OpenSearch, and Oracle, and should be able to work as a Scrum Master.

Direct Responsibilities:
- Integrate and upgrade the Quantexa tool with our existing systems for enhanced anti-money laundering measures.
- Develop and maintain Spark-based applications deployed on Kubernetes clusters.
- Conduct data analysis to understand and interpret underlying data structures.
- Collaborate with cross-functional teams to ensure seamless integration and functionality.
- Stay updated with the latest trends and best practices in Spark development and Kubernetes.

Contributing Responsibilities:
- Take complete ownership of project activities and understand each task in detail.
- Ensure the team delivers on time and that deliveries meet high quality standards.
- Handle estimation, planning, and scheduling of the project; ensure all internal timelines are respected and the project stays on track.
- Work with the team to develop robust software adhering to the timelines and following all standard guidelines.
- Act proactively to ensure smooth team operations and effective collaboration.
- Make sure the team adheres to all compliance processes, and intervene if required.
- Assign tasks to the team and track them to completion; report status proactively to management.
- Identify project risks and highlight them to the manager; create contingency, backup, and mitigation plans as necessary; make decisions independently based on the situation.
- Mentor and coach team members as required to meet target goals.
- Gain functional knowledge of the applications worked on; create knowledge repositories for future reference and arrange knowledge-sharing sessions to enhance the team's functional capability.
- Evaluate new tools and come up with POCs.
- Provide timely feedback on the team to upper management.

Required Qualifications:
- 7+ years of development experience.
- Extensive experience in Hadoop, Spark, and Scala development (5 years minimum).
- Strong analytical skills and experience in data analysis (SQL), data processing (such as ETL), parsing, data mapping, and handling real-life data quality issues.
- Excellent problem-solving abilities and attention to detail.
- Strong communication and collaboration skills.
- Experience in Agile development.
- High-quality coding skills, including code control, unit testing, design, and documentation (code and test); experience with tools such as Sonar, Git, and Jenkins.

Specific Qualifications:
- Experience developing Spark applications and deploying them on Kubernetes clusters.
- Hands-on development experience (Java, Scala, etc.) via system integration projects; Python and Elastic optional.

Behavioural Skills: Ability to collaborate / teamwork; adaptability; creativity & innovation / problem solving; attention to detail / rigor.
Transversal Skills: Analytical ability; ability to develop and adapt a process; ability to develop and leverage networks.
Education Level: Bachelor's degree or equivalent. Experience Level: At least 7 years. Fluent in English. Team player with strong analytical skills; quality-oriented and well organized; willing to work under pressure and mission-oriented; excellent oral and written communication, motivational skills, and a results orientation.

Posted 10 hours ago

Apply

6.0 - 10.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Source: Naukri

About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues: we reported $1,231M annual revenue (16% Y-o-Y) and onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. For more details, please visit www.persistent.com.

About the Position
We are looking for a Big Data Lead who will be responsible for managing data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs to transform the data into a more usable format, and you will ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do
- Manage the customer's priorities across projects and requests.
- Assess customer needs using a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs, advising on options, risks, and cost.
- Design and implement software products (Big Data related), including data models and visualizations.
- Participate actively in the teams you work in and deliver good solutions against tight timescales.
- Be proactive, suggest new approaches, and develop your capabilities.
- Share what you are good at while learning from others to improve the team overall.
- Demonstrate a working understanding of a range of technical skills, attitudes, and behaviors.
- Deliver great solutions focused on driving value back into the business.

Expertise You'll Bring
- 6 years' experience designing and developing enterprise application solutions for distributed systems.
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume).
- Additional experience working with Hadoop, HDFS, and cluster management; Hive, Pig, and MapReduce; and Hadoop ecosystem frameworks such as HBase, Talend, and NoSQL databases.
- Apache Spark or other streaming Big Data processing preferred; Java or other Big Data technologies a plus.

Benefits
- Competitive salary and benefits package.
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications.
- Opportunity to work with cutting-edge technologies.
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards.
- Annual health check-ups.
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.

Posted 11 hours ago

Apply

6.0 - 10.0 years

13 - 17 Lacs

Pune

Work from Office

Source: Naukri

About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues: we reported $1,231M annual revenue (16% Y-o-Y) and onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. For more details, please visit www.persistent.com.

About the Position
We are looking for a Big Data Lead who will be responsible for managing data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs to transform the data into a more usable format, and you will ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do
- Manage the customer's priorities across projects and requests.
- Assess customer needs using a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs, advising on options, risks, and cost.
- Design and implement software products (Big Data related), including data models and visualizations.
- Participate actively in the teams you work in and deliver good solutions against tight timescales.
- Be proactive, suggest new approaches, and develop your capabilities.
- Share what you are good at while learning from others to improve the team overall.
- Demonstrate a working understanding of a range of technical skills, attitudes, and behaviors.
- Deliver great solutions focused on driving value back into the business.

Expertise You'll Bring
- 6 years' experience designing and developing enterprise application solutions for distributed systems.
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume).
- Additional experience working with Hadoop, HDFS, and cluster management; Hive, Pig, and MapReduce; and Hadoop ecosystem frameworks such as HBase, Talend, and NoSQL databases.
- Apache Spark or other streaming Big Data processing preferred; Java or other Big Data technologies a plus.

Benefits
- Competitive salary and benefits package.
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications.
- Opportunity to work with cutting-edge technologies.
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards.
- Annual health check-ups.
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.

Posted 11 hours ago

Apply

6.0 - 10.0 years

13 - 17 Lacs

Hyderabad

Work from Office

Source: Naukri

About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues: we reported $1,231M annual revenue (16% Y-o-Y) and onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. For more details, please visit www.persistent.com.

About the Position
We are looking for a Big Data Lead who will be responsible for managing data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs to transform the data into a more usable format, and you will ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do
- Manage the customer's priorities across projects and requests.
- Assess customer needs using a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs, advising on options, risks, and cost.
- Design and implement software products (Big Data related), including data models and visualizations.
- Participate actively in the teams you work in and deliver good solutions against tight timescales.
- Be proactive, suggest new approaches, and develop your capabilities.
- Share what you are good at while learning from others to improve the team overall.
- Demonstrate a working understanding of a range of technical skills, attitudes, and behaviors.
- Deliver great solutions focused on driving value back into the business.

Expertise You'll Bring
- 6 years' experience designing and developing enterprise application solutions for distributed systems.
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume).
- Additional experience working with Hadoop, HDFS, and cluster management; Hive, Pig, and MapReduce; and Hadoop ecosystem frameworks such as HBase, Talend, and NoSQL databases.
- Apache Spark or other streaming Big Data processing preferred; Java or other Big Data technologies a plus.

Benefits
- Competitive salary and benefits package.
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications.
- Opportunity to work with cutting-edge technologies.
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards.
- Annual health check-ups.
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.

Posted 11 hours ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Hyderabad, Telangana, India

On-site

Source: Foundit

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member that assists in the design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions. Identify and resolve complex data-related challenges. Adhere to standard processes for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help improve ETL platform performance.
Participate in sprint planning meetings and provide estimations on technical implementation. Collaborate and communicate effectively with product teams. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree with 4-6 years of experience in Computer Science, IT, or a related field OR Bachelor's degree with 6-8 years of experience in Computer Science, IT, or a related field OR Diploma with 10-12 years of experience in Computer Science, IT, or a related field. Functional Skills: Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing. Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training. Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools. Excellent problem-solving skills and the ability to work with large, complex datasets. Strong understanding of data governance frameworks, tools, and standard methodologies. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA). Good-to-Have Skills: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development. Strong understanding of data modeling, data warehousing, and data integration concepts. Knowledge of Python/R, Databricks, SageMaker, OMOP. Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments). Certified Data Scientist (preferred on Databricks or cloud environments). Machine Learning Certification (preferred on Databricks or cloud environments). SAFe for Teams certification (preferred). Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills.
Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
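The responsibilities above revolve around ETL pipelines with built-in data quality checks. A minimal sketch of that pattern using only the Python standard library (the table, columns, and validation rule are illustrative assumptions, not taken from the posting):

```python
import sqlite3

# Minimal extract-transform-load sketch with a data quality gate.
# Source rows, table name, and validation rules are illustrative.
raw_rows = [
    {"id": "1", "amount": "120.50", "region": "EMEA"},
    {"id": "2", "amount": "", "region": "APAC"},       # missing amount
    {"id": "3", "amount": "87.00", "region": "AMER"},
]

def transform(row):
    """Cast types; return None for rows that fail validation."""
    if not row["amount"]:
        return None  # quality check: reject incomplete records
    return (int(row["id"]), float(row["amount"]), row["region"])

clean = [t for r in raw_rows if (t := transform(r)) is not None]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL, region TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)

loaded = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(loaded)  # (2, 207.5)
```

In a real pipeline the rejected rows would typically be routed to a quarantine table for review rather than silently dropped.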

Posted 11 hours ago

Apply

7.0 - 9.0 years

7 - 9 Lacs

Hyderabad, Telangana, India

On-site

Foundit logo

Roles & Responsibilities: Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools. Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation. Break down features into work that aligns with the architectural direction and runway. Participate hands-on in pilots and proofs-of-concept for new patterns. Create robust documentation from data analysis and profiling, and from proposed designs and data logic. Develop advanced SQL queries to profile and unify data. Develop data processing code in SQL, along with semantic views to prepare data for reporting. Develop Power BI models and reporting packages. Design robust data models and processing layers that support both analytical processing and operational reporting needs. Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments. Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms. Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability. Collaborate with stakeholders to define data requirements, functional specifications, and project goals. Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions. Basic Qualifications and Experience: Master's degree with 1 to 3 years of experience in Data Engineering OR Bachelor's degree with 4 to 5 years of experience in Data Engineering OR Diploma and 7 to 9 years of experience in Data Engineering. Functional Skills: Must-Have Skills: Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
Minimum of 3 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management. Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads. Deep understanding of Power BI, including model design, DAX, and Power Query. Proven experience designing and implementing data mastering solutions and data governance frameworks. Expertise in cloud platforms (AWS), data lakes, and data warehouses. Strong knowledge of ETL processes, data pipelines, and integration technologies. Strong communication and collaboration skills to work with cross-functional teams and senior leadership. Ability to assess business needs and design solutions that align with organizational goals. Exceptional hands-on capabilities with data profiling, data transformation, and data mastering. Success in mentoring and training team members. Good-to-Have Skills: Experience in developing differentiated and deliverable solutions. Experience with human data, ideally human healthcare data. Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management. Professional Certifications (please mention if the certification is preferred or mandatory for the role): ITIL Foundation or other relevant certifications (preferred). SAFe Agile Practitioner (6.0). Microsoft Certified: Data Analyst Associate (Power BI) or related certification. Databricks Certified Professional or similar certification.
Soft Skills: Excellent analytical and troubleshooting skills. Deep intellectual curiosity. Highest degree of initiative and self-motivation. Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences. Confident technical leader. Ability to work effectively with global, virtual teams, specifically including leveraging tools and artifacts to ensure clear and efficient collaboration across time zones. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources.
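The listing above asks for advanced SQL to profile and unify data. A minimal column-profiling sketch using Python's bundled SQLite (the table, columns, and sample rows are invented for illustration):

```python
import sqlite3

# Quick column-profiling sketch: null count and distinct count per
# column. The table and its contents are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, lab_value REAL, site TEXT)")
conn.executemany(
    "INSERT INTO patients VALUES (?, ?, ?)",
    [(1, 4.2, "A"), (2, None, "A"), (3, 5.1, "B"), (4, 4.2, None)],
)

profile = {}
for col in ("id", "lab_value", "site"):
    # SUM(col IS NULL) counts nulls; COUNT(DISTINCT col) ignores nulls.
    nulls, distinct = conn.execute(
        f"SELECT SUM({col} IS NULL), COUNT(DISTINCT {col}) FROM patients"
    ).fetchone()
    profile[col] = {"nulls": nulls, "distinct": distinct}

print(profile["lab_value"])  # {'nulls': 1, 'distinct': 2}
```

A profile like this is a common first step before data unification, since it exposes null-heavy columns and candidate join keys.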

Posted 12 hours ago

Apply

2.0 - 7.0 years

4 - 5 Lacs

Saharanpur

Work from Office

Naukri logo

Operational Support, Report Generation & Data Analysis: Prepare and disseminate key daily and monthly operational reports, including daily output and efficiency reports and key factory indices. Ensure high accuracy in data and reporting using SAP systems. Required Candidate Profile: Experience in handling production or manufacturing data (preferred). Experience in a similar support role within a production or manufacturing environment. Proficiency in SAP and MS Office tools.

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hubli, Mangaluru, Mysuru

Work from Office

Naukri logo

Mine and analyse data to identify patterns and correlations among the various data points. Perform end-to-end analysis across all digital touchpoints, including data gathering from large and complex data sets, data processing, and analysis. Conduct in-depth analysis of user behaviour, customer journeys, and other relevant metrics to understand the effectiveness of digital initiatives and identify areas for improvement. Present findings from analytics and research and make recommendations to business teams. Requirements Must Have... 3-5 years of experience working in the field of analytics, reporting out metrics and deep-dive analytics. Strong proficiency with advanced SQL (window functions, DML, DDL commands, CTEs, subqueries, etc.) for data analysis and building end-to-end data pipelines. Ability to write complex queries and understanding of database concepts. Strong analytical problem-solving skills and an aptitude for learning quickly. Expert in data analysis and presentation skills. Exceptional communication and collaboration skills. Critical thinking and the ability to think beyond the obvious. Nice to have... Experience in web analytics and tools like Adobe Omniture, Google Analytics, etc. Experience with programming languages like Python and Unix shell for data pipeline automation and analysis. Knowledge of statistics concepts and machine learning algorithms like regression, clustering, etc. Education: Bachelor's with post-graduation in Management Science and related fields. 2-4 years of relevant experience in analytics organizations of large corporates or in consulting companies in analytics roles.
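The must-have advanced SQL skills named above (window functions, CTEs) can be sketched in a few lines with Python's bundled SQLite (window functions require SQLite 3.25+, which ships with recent Python builds; the visits table is an invented example):

```python
import sqlite3

# CTE + window function sketch: find the top page by visits per day.
# Requires SQLite 3.25+ for window functions. Data is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (day TEXT, page TEXT, n INTEGER)")
conn.executemany("INSERT INTO visits VALUES (?, ?, ?)", [
    ("2024-01-01", "/home", 120), ("2024-01-01", "/pricing", 80),
    ("2024-01-02", "/home", 90),  ("2024-01-02", "/pricing", 140),
])

top_pages = conn.execute("""
    WITH ranked AS (
        SELECT day, page, n,
               RANK() OVER (PARTITION BY day ORDER BY n DESC) AS rnk
        FROM visits
    )
    SELECT day, page FROM ranked WHERE rnk = 1 ORDER BY day
""").fetchall()

print(top_pages)  # [('2024-01-01', '/home'), ('2024-01-02', '/pricing')]
```

The same PARTITION BY / RANK pattern is a staple of interview-style questions about per-group top-N queries.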

Posted 2 days ago

Apply

5.0 - 6.0 years

7 - 8 Lacs

Hubli, Mangaluru, Mysuru

Work from Office

Naukri logo

Job Description: Build data crawlers to extract data from customers' data sources using available ETL platforms, and troubleshoot issues faced during data loading and processing. Design and build data warehouse models in columnar databases. Develop data processing scripts using SQL and be able to optimize complex sequences of SQL queries. Design, build, and deploy effective SSIS packages to validate, synthesize, and transform customer data for integrated business planning and analytics. Work with Solution Architects to understand customers' business requirements and implement them. Perform data analysis for complex use cases using SQL; document technical specifications for the cognitive applications we are creating for customers. Own the entire development and implementation workflow. Participate in technical design and data requirements gathering, and make recommendations on best practices in case of inaccurate or missing data. Design and automate the data loading and insights generation process with intelligent checks and balances for sustained value delivery. Create and execute test plans, document issues, and track progress in resolving issues. Requirements Experience: 5-6 years. Must Have: o9 experience is mandatory. Very strong hands-on experience working in ETL (Extract/Transform/Load) processes. Proficiency in databases (SQL Server, MySQL) and skills in one or more languages like SQL, MDX, T-SQL, with knowledge of DDL, DML, stored procedures, triggers, and performance tuning is a MUST. Familiarity with workflows in programming concepts. Experience using columnar databases. Experience working directly with customers and presenting your ideas. Excellent communication and presentation skills. Good to Have: Applied Python programming. Experience working with JSON and XML files. Experience using APIs. Knowledge of SAP. Knowledge of supply chain fundamentals and terminology.

Posted 2 days ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Hubli, Mangaluru, Mysuru

Work from Office

Naukri logo

Job Overview: We are looking for highly skilled Senior and Mid-to-Junior Technical Consultants with expertise in SQL, PySpark, Python, Airflow, and API development . The ideal candidates will have hands-on experience in data warehousing concepts (Fact & Dimension) and a strong understanding of supply chain domain processes. You will work closely with cross-functional teams to develop, optimize, and implement scalable data solutions. Senior Technical Consultant: 8+ years Key Responsibilities: - Design, develop, and optimize data pipelines using PySpark, SQL, and Python . - Implement and manage Airflow DAGs for workflow automation. - Work with APIs to integrate data sources and ensure seamless data exchange. - Develop and maintain data models based on fact and dimension tables for efficient reporting and analytics. - Optimize query performance and data processing for large datasets. - Collaborate with business analysts, data engineers, and stakeholders to understand business requirements and translate them into technical solutions. - Ensure data quality, reliability, and scalability of solutions. - Provide mentorship to junior team members (for the Senior Technical Consultant role). Requirements Required Skills & Qualifications: - Strong proficiency in SQL, PySpark, and Python . - Hands-on experience with Airflow for scheduling and orchestrating workflows. - Expertise in working with APIs (development and integration). - Solid understanding of data warehousing concepts (Fact & Dimension modeling). - Experience in the supply chain domain is highly preferred. - Knowledge of cloud platforms (AWS, Azure, or GCP) is a plus and not mandatory. - Excellent problem-solving skills and ability to work in an agile environment. - Strong communication skills to effectively collaborate with cross-functional teams
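The fact-and-dimension modeling this role calls for can be illustrated with a toy star schema: a fact table holds surrogate keys and measures, while dimension tables hold descriptive attributes (all names and figures below are invented):

```python
# Star-schema sketch: a fact table keyed into dimension tables,
# joined to produce a simple report. Names and data are illustrative.
dim_product = {1: {"name": "Widget", "category": "Hardware"},
               2: {"name": "Gizmo",  "category": "Hardware"}}
dim_region  = {10: {"region": "North"}, 20: {"region": "South"}}

fact_sales = [  # (product_key, region_key, units)
    (1, 10, 5), (1, 20, 3), (2, 10, 7),
]

# Aggregate units by region name, resolving surrogate keys
# through the dimension table (the "join").
report = {}
for product_key, region_key, units in fact_sales:
    region = dim_region[region_key]["region"]
    report[region] = report.get(region, 0) + units

print(report)  # {'North': 12, 'South': 3}
```

In a warehouse the same join-and-aggregate would be a SQL GROUP BY over the fact table joined to its dimensions; the dictionary version just makes the key resolution explicit.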

Posted 2 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Naukri logo

What you will do: Develop robust and scalable backend services and APIs using Python and integrate them with various AWS services. Design, build, and maintain data processing pipelines leveraging AWS services such as S3 and Lambda. Collaborate with cross-functional teams to understand requirements and translate them into technical solutions, ensuring alignment with best practices and AWS architecture principles. Write maintainable code, working in a professional agile software engineering environment (source control, shortened release cycles, continuous integration/deployment, etc.). Optimize application performance and scalability by fine-tuning AWS resources and leveraging advanced Python programming techniques. What you should have: Bachelor's degree in computer science or a related field. Minimum 5 years of experience as a software developer. Excellent with Python, relational DBs, source control, and CI/CD. Experience with AWS services such as S3, Lambda, DynamoDB, OpenSearch. Expertise in data structures and algorithms: demonstrated skill in the effective application and implementation of diverse data structures and algorithms, complemented by strong mathematical problem-solving abilities. Excellent communication skills with an advanced English level. Ability to work independently and as a member of a team on assigned projects and tasks with general supervision. Ability to prioritize, raise issues, and resolve tough problems in a timely fashion to meet business deadlines. Self-motivated, task-oriented personality with a strong work ethic and desire to learn. Experience with Gen-AI techniques is a bonus.
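The Lambda-plus-S3 pipeline work described above can be sketched as a plain handler function. This is a hedged sketch, not the employer's code: the event follows the standard S3 notification shape, and the real object processing (boto3 calls, etc.) is deliberately omitted:

```python
import json

# Hedged sketch of an AWS Lambda handler for an S3 event: it reads
# the bucket and key from the event and returns a JSON response.
def handler(event, context=None):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    # ... fetch and process the object here (e.g., via boto3) ...
    return {"statusCode": 200,
            "body": json.dumps({"processed": f"s3://{bucket}/{key}"})}

# Local smoke test with a minimal fake event (names are invented):
fake_event = {"Records": [{"s3": {"bucket": {"name": "data-lake"},
                                  "object": {"key": "raw/2024/file.csv"}}}]}
resp = handler(fake_event)
print(resp["statusCode"])  # 200
```

Keeping the handler a pure function of the event makes it easy to unit-test locally before deploying, which fits the CI/CD expectations in the listing.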

Posted 2 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai, Bengaluru

Work from Office

Naukri logo

Job Title: Configuration Service Lead, Digital Business (Golang + Python) Location: Mumbai/Bangalore Department: Platform Engineering About the Role: We are looking for an experienced Backend Developer with expertise in Golang/Python to design, develop, and maintain scalable, reliable, and high-performance backend services. You will work on core backend systems, including the Beacon API and Config Service, ensuring seamless integration, high availability, and optimal performance. This is an exciting opportunity to work in a fast-paced environment, building critical services that power our platform. Key Responsibilities: Design, develop, and maintain high-performance APIs and backend services to handle complex business logic and real-time data processing. Build and maintain microservices architecture using Golang and Python, ensuring modularity, scalability, and maintainability. Work closely with cross-functional teams to integrate services and ensure smooth communication across the platform. Optimize backend systems for low latency, high throughput, and scalability to meet growing demands. Collaborate with DevOps teams to deploy and manage backend services in AWS or other cloud environments. Implement robust error handling, logging, and monitoring for backend services to ensure reliability and observability. Write clean, maintainable, and well-documented code following best practices and industry standards. Participate in design reviews, code reviews, and contribute to technical decisions. Required Skills and Expertise: Strong proficiency in Golang for building and deploying backend services at scale. Experience in Python for data processing, scripting, and backend service development. Solid experience designing and developing RESTful APIs and/or gRPC services. Experience with AWS services like EC2, S3, Lambda, RDS, and OpenSearch. Expertise in AWS OpenSearch for search and analytics workloads. 
Familiarity with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis). Understanding of microservices architecture, containerization (Docker), and orchestration with Kubernetes. Familiarity with Git, and experience with CI/CD tools like Jenkins, GitLab CI, or ArgoCD. Problem Solving: Strong debugging, performance tuning, and problem-solving skills for backend systems. Preferred Qualifications: Familiarity with event-driven architectures and messaging systems (e.g., Kafka, MQ). Experience with monitoring and observability tools like Prometheus, Grafana, and distributed tracing frameworks. Knowledge of Infrastructure as Code tools like Terraform or CloudFormation. Experience in highly available and fault-tolerant system design. Why join us? Sony Pictures Networks is home to some of India's leading entertainment channels such as SET, SAB, MAX, PAL, PIX, Sony BBC Earth, Yay!, Sony Marathi, Sony SIX, Sony TEN, Sony TEN1, Sony TEN2, Sony TEN3, Sony TEN4, to name a few! Our foray into the OTT space with one of the most promising streaming platforms, Sony LIV, brings us one step closer to being a progressive digitally-led content powerhouse. Our independent production venture, Studio Next, has already made its mark with original content and IPs for TV and digital media. But our quest to Go Beyond doesn't end there. Neither does our search to find people who can take us there. We focus on creating an inclusive and equitable workplace where we celebrate diversity with our Bring Your Own Self philosophy and are recognised as a Great Place to Work. - Great Place to Work Institute: ranked as one of the Great Places to Work for the last 5 years. - Included in the Hall of Fame as part of the Working Mother & Avtar Best Companies for Women in India study: ranked amongst the 100 Best Companies for Women in India. - ET Human Capital Awards 2021: winner across multiple categories. - Brandon Hall Group HCM Excellence Award: Outstanding Learning Practices.
The biggest award of course is the thrill our employees feel when they can Tell Stories Beyond the Ordinary!

Posted 2 days ago

Apply

6.0 - 7.0 years

8 - 9 Lacs

Bengaluru

Work from Office

Naukri logo

Java Developer, ETL & API Integration | Technology Experts | Jun 24, 2025. Location: Bangalore (hybrid, 2 to 3 days onsite per week). Company: Hireflex247 India Pvt Ltd. Type: Contract / Full-Time. We are seeking a Java Developer with a strong foundation in ETL design, RESTful API integrations, and data transformation workflows to support a global enterprise client. This role is pivotal in ensuring the smooth flow of data across multiple systems, from Candidate to Co-worker, helping maintain system stability, accuracy, and performance. You'll be part of a dynamic technology team supporting the People Operations and Transformation (POT) initiatives. The scope includes balancing enhancements, change requests (CRs), and bug fixes in an agile environment, ensuring seamless data processing and integration. Key Responsibilities: Develop and maintain Java-based ETL processes for handling high-volume data transformations. Design, consume, and integrate RESTful APIs efficiently, ensuring error handling and reliability. Ensure data accuracy, consistency, and mapping across systems. Collaborate with cross-functional teams to support changes, enhancements, and critical business flows. Contribute to resolving production issues and maintaining business continuity in data processing pipelines. (Preferred) Assist in integrations involving SAP SuccessFactors, particularly around Recruiting and Hiring flows. Required Skills: Strong Java development expertise, with experience in ETL-style backend logic. Hands-on experience with REST APIs: building, consuming, and handling errors and integration scenarios. Data transformation and mapping skills across enterprise systems. Ability to work both independently and collaboratively in agile delivery teams. Exposure to or working knowledge of SAP SuccessFactors, especially around Recruit-to-Hire or Candidate-to-Co-worker integrations.
Experience working in fast-paced enterprise environments and debugging live production systems. Top 3 Must-Haves: Strong Java backend development with a focus on ETL logic Proficiency with RESTful API integration and error handling Experience in data mapping and transformation across multiple systems
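The data mapping and transformation skill listed above can be illustrated with a small field-mapping sketch. It is shown in Python for brevity even though the role itself is Java-centric, and the source/target field names are invented, not taken from any SuccessFactors schema:

```python
# Field-mapping sketch for moving records between systems
# (e.g., a candidate record into a worker record).
# All field names here are illustrative assumptions.
FIELD_MAP = {           # source field -> target field
    "candidateId": "workerId",
    "firstName":   "givenName",
    "lastName":    "familyName",
}

def map_record(source: dict) -> dict:
    """Rename mapped fields; drop anything unmapped rather than guess."""
    return {dst: source[src] for src, dst in FIELD_MAP.items() if src in source}

candidate = {"candidateId": "C-42", "firstName": "Asha",
             "lastName": "Rao", "internalFlag": True}
mapped = map_record(candidate)
print(mapped)
# {'workerId': 'C-42', 'givenName': 'Asha', 'familyName': 'Rao'}
```

Keeping the mapping in a declarative table rather than scattered if-statements is what makes enhancements and change requests to the flow cheap to apply.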

Posted 2 days ago

Apply

6.0 - 11.0 years

10 - 18 Lacs

Chennai

Work from Office

Naukri logo

Design systems for local data processing. Implement solutions for IoT and real-time analytics. Ensure system security and performance.

Posted 2 days ago

Apply

6.0 - 11.0 years

10 - 18 Lacs

Bengaluru

Work from Office

Naukri logo

Design systems for local data processing. Implement solutions for IoT and real-time analytics. Ensure system security and performance.

Posted 2 days ago

Apply

6.0 - 11.0 years

10 - 18 Lacs

Hyderabad

Work from Office

Naukri logo

Design systems for local data processing. Implement solutions for IoT and real-time analytics. Ensure system security and performance.

Posted 2 days ago

Apply

Exploring Data Processing Jobs in India

The data processing job market in India is thriving with opportunities for job seekers in the field. With the growing demand for data-driven insights in various industries, the need for professionals skilled in data processing is on the rise. Whether you are a fresh graduate looking to start your career or an experienced professional looking to advance, there are ample opportunities in India for data processing roles.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Pune
  5. Hyderabad

These major cities in India are actively hiring for data processing roles, with a multitude of job opportunities available for job seekers.

Average Salary Range

The average salary range for data processing professionals in India varies based on experience and skill level. Entry-level positions can expect to earn between INR 3-6 lakh per annum, while experienced professionals can earn upwards of INR 10 lakh per annum.

Career Path

A typical career path in data processing may include roles such as Data Analyst, Data Engineer, Data Scientist, and Data Architect. As professionals gain experience and expertise in the field, they may progress from Junior Data Analyst to Senior Data Analyst, and eventually to roles such as Data Scientist or Data Architect.

Related Skills

In addition to data processing skills, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and R. Strong analytical and problem-solving skills are also essential for success in data processing roles.

Interview Questions

  • What is data processing? (basic)
  • Explain the difference between data cleaning and data transformation. (medium)
  • How do you handle missing data in a dataset? (medium)
  • What is the importance of data normalization in data processing? (medium)
  • Can you explain the process of feature selection in machine learning? (advanced)
  • How do you evaluate the performance of a machine learning model? (advanced)
  • What is the difference between supervised and unsupervised learning? (basic)
  • How do you deal with outliers in a dataset? (medium)
  • Explain the concept of dimensionality reduction. (medium)
  • What is the bias-variance tradeoff in machine learning? (advanced)
  • How would you handle a dataset with high dimensionality? (medium)
  • Can you explain the process of clustering in data processing? (medium)
  • What is the role of regularization in machine learning? (advanced)
  • How do you assess the quality of a machine learning model? (medium)
  • Can you explain the concept of overfitting in machine learning? (basic)
  • What is the difference between classification and regression in machine learning? (basic)
  • How do you select the right algorithm for a machine learning task? (medium)
  • Explain the process of data preprocessing in machine learning. (medium)
  • How do you handle imbalanced datasets in machine learning? (medium)
  • What is the purpose of cross-validation in machine learning? (medium)
  • Can you explain the difference between batch processing and real-time processing? (medium)
  • How do you handle categorical data in a dataset? (basic)
  • What is the role of data visualization in data processing? (basic)
  • How do you ensure data security and privacy in data processing? (medium)
  • What are the advantages of using cloud computing for data processing? (medium)
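Two of the questions above, handling missing data and data normalization, can be illustrated with a few lines of standard-library Python (the values are invented):

```python
from statistics import mean

# Illustrates two interview topics: mean-imputation of missing values
# and min-max normalization. The data is invented for illustration.
values = [10.0, None, 30.0, 20.0, None]

# Handle missing data: impute with the mean of the observed values.
observed = [v for v in values if v is not None]
fill = mean(observed)               # 20.0
imputed = [fill if v is None else v for v in values]

# Min-max normalization: rescale to the [0, 1] range.
lo, hi = min(imputed), max(imputed)
normalized = [(v - lo) / (hi - lo) for v in imputed]

print(imputed)     # [10.0, 20.0, 30.0, 20.0, 20.0]
print(normalized)  # [0.0, 0.5, 1.0, 0.5, 0.5]
```

Mean imputation and min-max scaling are only the simplest answers; in an interview it is worth mentioning their drawbacks too (imputation distorts variance, and min-max scaling is sensitive to outliers).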

Closing Remark

As you explore opportunities in the data processing job market in India, remember to prepare thoroughly for interviews and showcase your skills and expertise confidently. With the right combination of skills and experience, you can embark on a successful career in data processing in India. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies