3.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Role & responsibilities:
- Develop, test, and maintain high-quality Python applications.
- Write efficient, reusable, and reliable code following best practices.
- Optimize applications for performance, scalability, and security.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Design and manage databases using SQL to support application functionality.
- Think from a customer's perspective to provide the best user experience.
- Participate in code reviews to maintain code quality and share knowledge.
- Troubleshoot and resolve software defects and issues.
- Continuously learn and apply new technologies and best practices.
- Propose and influence architectural decisions to ensure the scalability and performance of the application.
- Work in agile/iterative software development teams with a DevOps setup.
Requirements:
- Bachelor's degree (BE/B.Tech, BSc, BCA, or equivalent).
- Experience in the data engineering domain.
- At least 6 months of professional experience in Python development.
- Experience developing and implementing robust back-end functionality, including data processing, APIs, and integrations with external systems.
- Strong problem-solving skills.
- Self-motivated and able to work independently as well as part of a team.
- Familiarity with Git and CI/CD pipelines using Bitbucket/GitHub.
- Solid understanding of API design, REST APIs, and GraphQL.
- Knowledge of unit testing.
Good to have:
- Hands-on experience with AWS services, including Lambda, Glue, SQS, AppSync, API Gateway, and Aurora RDS, and experience with AWS serverless deployment.
- Hands-on experience with AWS and Infrastructure as Code (IaC).
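The "reusable, reliable code" and unit-testing requirements above can be illustrated with a minimal sketch; the function and field names here are invented for illustration and are not part of the posting:

```python
from typing import Iterable


def normalize_emails(records: Iterable[dict]) -> list[dict]:
    """Lower-case and strip email addresses, dropping records without one.

    A small, pure function like this is easy to cover with unit tests,
    which is the kind of testable back-end code the role asks for.
    """
    cleaned = []
    for rec in records:
        email = (rec.get("email") or "").strip().lower()
        if email:
            cleaned.append({**rec, "email": email})
    return cleaned
```

Because the function has no side effects, a unit test only needs to compare input and output dictionaries.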
Posted 3 weeks ago
3.0 - 8.0 years
15 - 22 Lacs
Gurugram
Work from Office
Data Engineer
Experience: 4+ years total, with a minimum of 3+ years of relevant experience in data engineering
Location: Gurgaon
Role Summary: The Data Engineer will develop and maintain AWS-based data pipelines, ensuring optimal ingestion, transformation, and storage of clinical trial data. The role requires expertise in ETL, AWS Glue, Lambda functions, and Redshift optimization.
Must have:
- AWS (Glue, Lambda, Redshift, Step Functions)
- Python, SQL, API-based ingestion
- PySpark
- Redshift, SQL/PostgreSQL, Snowflake (optional)
- Redshift query optimization, indexing
- IAM, encryption, row-level security
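The API-based ingestion step mentioned above can be sketched as a Lambda-style handler. This is a hedged, stdlib-only illustration: the Glue/Redshift load calls are omitted so it stays self-contained, and every field name is an assumption, not something taken from the posting:

```python
import json

# Fields a clinical-trial record must carry before it is handed to a
# downstream load step (illustrative schema, not from the posting).
REQUIRED_FIELDS = {"trial_id", "site_id", "recorded_at"}


def handler(event, context=None):
    """Lambda-style entry point: validate incoming records and report counts.

    In a real pipeline the `valid` batch would be written to S3/Redshift
    via boto3 or a Glue job; that part is deliberately left out here.
    """
    records = json.loads(event["body"])
    valid, rejected = [], []
    for rec in records:
        (valid if REQUIRED_FIELDS <= rec.keys() else rejected).append(rec)
    return {
        "statusCode": 200,
        "body": json.dumps({"loaded": len(valid), "rejected": len(rejected)}),
    }
```

Separating validation from loading keeps the handler testable without any AWS credentials.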
Posted 3 weeks ago
10.0 - 14.0 years
15 - 20 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Technical Architect / Solution Architect / Data Architect (Data Analytics)
Notice Period: Immediate to 15 days
Job Summary: We are looking for a highly technical and experienced Data Architect / Solution Architect / Technical Architect with expertise in Data Analytics. The candidate should have strong hands-on experience in solutioning, architecture, and cloud technologies to drive data-driven decisions.
Key Responsibilities:
- Design, develop, and implement end-to-end data architecture solutions.
- Provide technical leadership in Azure, Databricks, Snowflake, and Microsoft Fabric.
- Architect scalable, secure, and high-performing data solutions.
- Work on data strategy, governance, and optimization.
- Implement and optimize Power BI dashboards and SQL-based analytics.
- Collaborate with cross-functional teams to deliver robust data solutions.
Primary Skills Required:
- Data architecture & solutioning
- Azure Cloud (data services, storage, Synapse, etc.)
- Databricks & Snowflake (data engineering & warehousing)
- Power BI (visualization & reporting)
- Microsoft Fabric (data & AI integration)
- SQL (advanced querying & optimization)
Contact: 9032956160
Looking for immediate to 15-day joiners.
Posted 3 weeks ago
2.0 - 6.0 years
4 - 8 Lacs
Chennai, Delhi / NCR, Bengaluru
Work from Office
Job Summary: We're looking for a SAS Data Integration (DI) Developer to join our team. In this role, you'll be responsible for designing and optimizing ETL processes using SAS DI Studio, as well as automating job schedules. You'll work with large datasets, ensure data integrity, and collaborate with cross-functional teams to meet business requirements. Additionally, you'll troubleshoot and monitor jobs while ensuring adherence to data governance standards. If you're excited to work in a dynamic and growing environment, this is a great opportunity to apply your skills to cutting-edge technologies.
Key Responsibilities:
- Develop and maintain ETL processes using SAS DI Studio.
- Design and optimize data workflows, including transformations and macros for high-performance data integration.
- Schedule and automate jobs using SAS Management Console or other scheduling tools (e.g., Control-M, cron).
- Monitor, troubleshoot, and resolve issues related to scheduled jobs and ETL processes.
- Work with large datasets, ensuring data integrity and optimizing performance.
- Collaborate with teams to meet business requirements and improve data workflows.
- Create documentation for data integration processes and job schedules.
- Ensure compliance with data governance and security best practices.
Qualifications:
- Experience with SAS DI Studio, SAS programming, and ETL processes.
- Expertise in job scheduling and automation using SAS Management Console, Control-M, or cron.
- Proficient in SQL, data transformation, and data quality assurance.
- Strong problem-solving and troubleshooting skills.
Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 3 weeks ago
8.0 - 12.0 years
15 - 20 Lacs
Pune
Work from Office
We are looking for a highly experienced Lead Data Engineer / Data Architect to lead the design, development, and implementation of scalable data pipelines, data lakehouse, and data warehousing solutions. The ideal candidate will provide technical leadership to a team of data engineers, drive architectural decisions, and ensure best practices in data engineering. This role is critical in enabling data-driven decision-making and modernizing our data infrastructure.
Key Responsibilities:
- Act as a technical leader responsible for guiding the design, development, and implementation of data pipelines, data lakehouse, and data warehousing solutions.
- Lead a team of data engineers, ensuring adherence to best practices and standards.
- Drive the successful delivery of high-quality, scalable, and reliable data solutions.
- Play a key role in shaping data architecture, adopting modern data technologies, and enabling data-driven decision-making across the team.
- Provide technical vision, guidance, and mentorship to the team.
- Lead technical design discussions, perform code reviews, and contribute to architectural decisions.
Posted 3 weeks ago
5.0 - 7.0 years
15 - 25 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About the Role: We are seeking a skilled and experienced Data Engineer to join our remote team. The ideal candidate will have 5-7 years of professional experience working with Python, PySpark, SQL, and Spark SQL, and will play a key role in building scalable data pipelines, optimizing data workflows, and supporting data-driven decision-making across the organization.
Key Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines using PySpark and SQL.
- Develop and optimize Spark jobs for large-scale data processing.
- Collaborate with data scientists, analysts, and other engineers to ensure data quality and accessibility.
- Implement data integration from multiple sources into a unified data warehouse or lake.
- Monitor and troubleshoot data pipelines and ETL jobs for performance and reliability.
- Ensure best practices in data governance, security, and compliance.
- Create and maintain technical documentation related to data pipelines and infrastructure.
Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad, Remote
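The SQL side of the pipeline work described above can be illustrated with a small self-contained example. Note the hedge: sqlite3 merely stands in for Spark SQL here so the sketch runs without a cluster, and the table and column names are invented:

```python
import sqlite3

# Illustrative SQL aggregation of the kind a pipeline job might run as a
# transform step; in production this would be a Spark SQL query over a
# distributed dataset rather than an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)],
)
# Roll raw events up to one total per user, a typical warehouse-bound shape.
totals = dict(conn.execute(
    "SELECT user_id, SUM(amount) FROM events GROUP BY user_id ORDER BY user_id"
))
```

The same `GROUP BY` statement would run unchanged as `spark.sql(...)` against a registered temp view, which is why SQL skills transfer directly into Spark SQL work.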
Posted 3 weeks ago
6.0 - 9.0 years
9 - 18 Lacs
Pune, Chennai
Work from Office
Job Title: Data Engineer (Spark/Scala/Cloudera)
Location: Chennai/Pune
Job Type: Full-time
Experience Level: 6-9 years
Job Summary: We are seeking a skilled and motivated Data Engineer to join our data engineering team. The ideal candidate will have deep experience with Apache Spark, Scala, and the Cloudera Hadoop ecosystem. You will be responsible for building scalable data pipelines, optimizing data processing workflows, and ensuring the reliability and performance of our big data platform.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL/ELT pipelines using Spark and Scala.
- Work with large-scale datasets on the Cloudera Data Platform (CDP).
- Collaborate with data scientists, analysts, and other stakeholders to ensure data availability and quality.
- Optimize Spark jobs for performance and resource utilization.
- Implement and maintain data governance, security, and compliance standards.
- Monitor and troubleshoot data pipeline failures and ensure high data reliability.
- Participate in code reviews, testing, and deployment activities.
- Document architecture, processes, and best practices.
Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience in big data engineering roles.
- 2+ years of hands-on experience with Scala.
- Proficient in Apache Spark (Core/DataFrame/SQL/RDD APIs).
- Strong programming skills in Scala.
- Hands-on experience with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Oozie).
- Familiarity with distributed computing and data partitioning concepts.
- Strong understanding of data structures, algorithms, and software engineering principles.
- Experience with CI/CD pipelines and version control systems (e.g., Git).
- Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus.
Preferred Qualifications:
- Experience with Cloudera Manager and Cloudera Navigator.
- Exposure to Kafka, NiFi, or Airflow.
- Familiarity with data lake, data warehouse, and lakehouse architectures.
Posted 3 weeks ago
8.0 - 13.0 years
18 - 22 Lacs
Hyderabad
Remote
Role: SQL Data Engineer - ETL, DBT & Snowflake Specialist
Location: Remote
Duration: 14+ months
Timings: 5:30 PM IST to 1:30 AM IST
Note: Immediate joiners only
Required Experience:
- Advanced SQL proficiency: writing and optimizing complex queries, stored procedures, functions, and views; experience with query performance tuning and database optimization.
- ETL/ELT development: building and maintaining ETL/ELT pipelines; familiarity with ETL tools or processes and orchestration frameworks.
- Data modeling: designing and implementing data models; understanding of dimensional modeling and normalization.
- Snowflake expertise: hands-on experience with Snowflake's architecture and features; experience with Snowflake databases, schemas, procedures, and functions.
- DBT (Data Build Tool): building data models and transformations using DBT; implementing DBT best practices, including testing, documentation, and CI/CD integration.
- Programming and automation: proficiency in Python is a plus; experience with version control systems (e.g., Git, Azure DevOps); experience with Agile methodologies and DevOps practices.
- Collaboration and communication: working effectively with data analysts and business stakeholders; translating technical concepts into clear, actionable insights; prior experience in a fast-paced, data-driven environment.
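The "advanced SQL" bullet above covers things like window functions, which rank or number rows within partitions. A minimal sketch, with sqlite3 standing in for Snowflake purely so the example is runnable, and an invented `orders` schema:

```python
import sqlite3

# Window-function example: rank each customer's orders by amount.
# sqlite3 is a stand-in for Snowflake here; the RANK() OVER (...) syntax
# is standard SQL and works the same way in Snowflake.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("c1", 50.0), ("c1", 80.0), ("c2", 20.0)],
)
rows = conn.execute("""
    SELECT customer, amount,
           RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
    FROM orders
    ORDER BY customer, rnk
""").fetchall()
```

In a DBT project the same `SELECT` would live in a model file, with the ranking logic tested via DBT's schema tests rather than Python assertions.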
Posted 3 weeks ago
12.0 - 17.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will collaborate with teams to ensure successful project delivery and contribute to key decisions. Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions. Build and operate very large data warehouses or data lakes. Optimize ETL and design, code, and tune big data processes using Apache Spark. Build data pipelines and applications to stream and process datasets at low latencies. Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data.
Technical Experience:
- Minimum of 5 years of experience in Databricks engineering solutions on AWS Cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake.
- Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.
- Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis.
- Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala.
- Experience using Airflow for data pipelines in at least 1 project.
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.
Professional Attributes:
- Ready to work in B shift (12 PM - 10 PM).
- Client-facing skills: solid experience working in client-facing environments and building trusted relationships with client stakeholders.
- Good critical thinking and problem-solving abilities.
- Health care knowledge.
- Good communication skills.
Educational Qualification: Bachelor of Engineering / Bachelor of Technology
Additional Information: The candidate should have a minimum of 12 years of experience in Databricks Unified Data Analytics Platform. This position is based at our Hyderabad office. A 15 years full-time education is required.
Posted 3 weeks ago
4.0 - 7.0 years
6 - 10 Lacs
Gurugram
Work from Office
Public Services Industry Strategist
Join our team in Strategy for an exciting career opportunity to work on the Industry Strategy agenda of our most strategic clients across the globe!
Practice: Industry Strategy, Global Network (GN)
Areas of Work: Strategy experience in the Public Services industry: operating model and organization design, strategic roadmap design, citizen experience, business case development (incl. financial modelling), transformation office, sustainability, digital strategy, data strategy, Gen AI, cloud strategy, cost optimization strategy
Domain: Public Services: social services, education, global critical infrastructure services, revenue, post & parcel
Level: Consultant
Location: Gurgaon, Mumbai, Bengaluru, Chennai, Kolkata, Hyderabad & Pune
Years of Exp: 4-7 years of strategy experience post MBA from a Tier 1 institute
Explore an Exciting Career at Accenture
Do you believe in creating an impact? Are you a problem solver who enjoys working on transformative strategies for global clients? Are you passionate about being part of an inclusive, diverse, and collaborative culture? Then this is the right place for you! Welcome to a host of exciting global opportunities in Accenture Strategy.
The Practice - A Brief Sketch:
The GN Strategy Industry Group is a part of Accenture Strategy and focuses on the CXO's most strategic priorities. We help clients with strategies that are at the intersection of business and technology, drive value and impact, shape new businesses, and design operating models for the future.
As a part of this high-performing team, you will:
- Apply advanced corporate finance to drive value using financial levers, value case shaping, and feasibility studies to evaluate new business opportunities.
- Analyze competitive benchmarking to advise the C-suite on 360-degree value opportunities, and use scenario planning to solve complex C-suite questions and lead and enable strategic conversations.
- Identify strategic cost take-out opportunities, drive business transformation, and suggest value-based decisions based on insights from data.
- Apply advanced data analyses to unlock client value aligned with the client's business strategy.
- Build future-focused points of view and develop strategic ecosystem partners.
- Build client strategy definitions leveraging disruptive technology solutions, like Data & AI (including Gen AI) and Cloud.
- Build relationships with C-suite executives and be a trusted advisor enabling clients to realize the value of human-centered change.
- Create thought leadership in industry and functional areas, reinvention agendas, solution tablets, and assets for value definition, and use it, along with your understanding of the industry value chain and macroeconomic analyses, to inform clients' strategy.
- Partner with CXOs to architect future-proof operating models embracing the future of work, workforce, and workplace powered by transformational technology, ecosystems, and analytics.
- Work with our ecosystem partners to help clients reach their sustainability goals through digital transformation.
- Prepare and deliver presentations to clients to communicate strategic plans and recommendations on Public Services domains such as Digital Citizen, Public Infrastructure, Smart Buildings, and Net Zero.
- Monitor industry trends and keep clients informed of potential opportunities and threats.
The candidate will be required to have exposure to core strategy projects in the Public Services domain, with a focus on one of the sub-industries within Public Services, specifically:
Public Service Experience: The candidate must have strategy experience in at least one of the below Public Service sub-industries:
- Social Services (employment, pensions, education, child welfare, government as a platform, digital citizen services)
- Education
- Global Critical Infrastructure Services (urban & city planning, smart cities, high-performing city operating model)
- Admin (citizen experience, federal funds strategy, workforce strategy, intelligent back office, revenue industry strategy, post & parcel)
Strategy Skills and Mindsets Expected:
- A strategic mindset to shape innovative, fact-based strategies and operating models.
- Communication and presentation skills to hold influential C-suite dialogues, narratives, and conversations, and to share ideas.
- Ability to solve problems in unstructured scenarios, and to decode and solve complex and unstructured business questions.
- An analytical and outcome-driven approach to perform data analyses and generate insights, and to apply these insights for strategic outcomes.
Qualifications:
- Value-driven business acumen to drive actionable outcomes for clients with the latest industry trends, innovations, disruptions, metrics, and value drivers.
- Financial acumen and value creation to develop relevant financial models to back up a business case.
- Articulation of a strategic and future vision.
- Ability to identify technology disruptions in the Public Services industry.
What's in it for you?
- An opportunity to work on transformative projects with key G2000 clients and CxOs.
- Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners, and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies.
- Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional.
- Personalized training modules to develop your strategy and consulting acumen and grow your skills, industry knowledge, and capabilities.
- Opportunity to thrive in a culture that is committed to accelerating equality for all.
- Engage in boundaryless collaboration across the entire organization.
About Accenture: Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology, and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions, underpinned by the world's largest delivery network, Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With more than 732,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives.
About Accenture Strategy & Consulting: Accenture Strategy shapes our clients' future, combining deep business insight with an understanding of how technology will impact industry and business models. Our focus on issues such as digital disruption, redefining competitiveness, and operating and business models, as well as the workforce of the future, helps our clients find future value and growth in a digital world. Today, digital is changing the way organizations engage with their employees, business partners, customers, and communities. This is our unique differentiator.
To bring this global perspective to our clients, Accenture Strategy's services include those provided by our Global Network, a distributed management consulting organization that provides management consulting and strategy expertise across the client lifecycle. Our Global Network teams complement our in-country teams to deliver cutting-edge expertise and measurable value to clients all around the world.
At the heart of every great change is a great human. If you have ideas, ingenuity, and a passion for making a difference, come and be a part of our team.
Posted 3 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Tableau
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer in the Data Engineering team, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will play a crucial role in developing efficient and scalable solutions to process and analyze data.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather and understand business requirements.
- Design and develop applications using Tableau to visualize and analyze data.
- Build and maintain data pipelines to ensure efficient data processing and storage.
- Perform data quality checks and implement data cleaning and transformation techniques.
- Optimize application performance and troubleshoot any issues that arise.
- Stay up to date with industry trends and best practices in application development and data engineering.
- Provide technical guidance and mentor junior team members.
Professional & Technical Skills:
- Must-have skills: Proficiency in Tableau.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information: The candidate should have a minimum of 3 years of experience in Tableau. This position is based at our Hyderabad office. A 15 years full-time education is required.
Posted 3 weeks ago
3.0 - 7.0 years
17 - 20 Lacs
Bengaluru
Work from Office
Job Title: Industry & Function AI Data Engineer + S&C GN
Management Level: 09 - Consultant
Location: Primary - Bengaluru, Secondary - Gurugram
Must-Have Skills: Data engineering expertise; cloud platforms: AWS, Azure, GCP; proficiency in Python, SQL, PySpark, and ETL frameworks
Good-to-Have Skills: LLM architecture; containerization tools: Docker, Kubernetes; real-time data processing tools: Kafka, Flink; certifications like AWS Certified Data Analytics - Specialty, Google Professional Data Engineer, Snowflake, DBT, etc.
Job Summary: As a Data Engineer, you will play a critical role in designing, implementing, and optimizing data infrastructure to power analytics, machine learning, and enterprise decision-making. Your work will ensure high-quality, reliable data is accessible for actionable insights. This involves leveraging technical expertise, collaborating with stakeholders, and staying updated with the latest tools and technologies to deliver scalable and efficient data solutions.
Roles & Responsibilities:
- Build and maintain data infrastructure: design, implement, and optimize scalable data pipelines and systems for seamless ingestion, transformation, and storage of data.
- Collaborate with stakeholders: work closely with business teams, data analysts, and data scientists to understand data requirements and deliver actionable solutions.
- Leverage tools and technologies: utilize Python, SQL, PySpark, and ETL frameworks to manage large datasets efficiently.
- Cloud integration: develop secure, scalable, and cost-efficient solutions using cloud platforms such as Azure, AWS, and GCP.
- Ensure data quality: focus on data reliability, consistency, and quality using automation and monitoring techniques.
- Document and share best practices: create detailed documentation, share best practices, and mentor team members to promote a strong data culture.
- Continuous learning: stay updated with the latest tools and technologies in data engineering through professional development opportunities.
Professional & Technical Skills:
- Strong proficiency in programming languages such as Python, SQL, and PySpark.
- Experience with cloud platforms (AWS, Azure, GCP) and their data services.
- Familiarity with ETL frameworks and data pipeline design.
- Strong knowledge of traditional statistical methods and basic machine learning techniques.
- Knowledge of containerization tools (Docker, Kubernetes).
- Knowledge of LLM, RAG, and agentic AI architectures.
- Certification in data science or related fields (e.g., AWS Certified Data Analytics - Specialty, Google Professional Data Engineer).
Additional Information: The ideal candidate has a robust educational background in data engineering or a related field and a proven track record of building scalable, high-quality data solutions in the Consumer Goods sector. This position offers opportunities to design and implement cutting-edge data systems that drive business transformation, collaborate with global teams to solve complex data challenges, deliver measurable business outcomes, and enhance your expertise by working on innovative projects utilizing the latest technologies in cloud, data engineering, and AI.
Experience: Minimum 3-7 years in data engineering or related fields, with a focus on the Consumer Goods industry
Educational Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field
Posted 3 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Informatica Data Quality
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Be involved in the end-to-end data management process.
Roles & Responsibilities:
- Gather, process, and analyze data to generate meaningful insights that drive strategic decision-making.
- Design and implement efficient data collection systems to enhance accuracy and optimize workflow.
- Utilize statistical techniques to interpret data and create detailed reports that support business goals.
- Work closely with cross-functional teams to solve business challenges through data-driven solutions.
- Strong proficiency in SQL/SOQL for data querying and analysis.
Professional & Technical Skills:
- Must-have skills: Proficiency in Informatica Data Quality.
- Strong understanding of data modeling and data architecture.
- Experience with SQL and database management systems.
- Hands-on experience with data integration tools.
- Knowledge of data warehousing concepts.
Additional Information: The candidate should have a minimum of 5 years of experience in Informatica Data Quality. This position is based at our Hyderabad office. A 15 years full-time education is required.
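The data-quality focus of this role can be sketched with a stdlib stand-in. Informatica Data Quality configures such rules inside its own platform; this plain-Python version only illustrates the kind of checks involved (completeness, key uniqueness), and every column name is an assumption:

```python
# Stdlib stand-in for simple data-quality rules; Informatica Data Quality
# would express these as configured rules rather than hand-written code.
def quality_report(rows, key="id", required=("id", "name")):
    """Summarize completeness and key-uniqueness issues in a batch of rows."""
    incomplete = sum(
        1 for r in rows if any(r.get(c) in (None, "") for c in required)
    )
    keys = [r.get(key) for r in rows]
    duplicates = len(keys) - len(set(keys))
    return {"rows": len(rows), "incomplete": incomplete, "duplicate_keys": duplicates}
```

A pipeline would typically run a report like this as a gate before loading, failing the batch when the counts exceed an agreed threshold.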
Posted 3 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Talend ETL
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Be involved in the end-to-end data management process.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the design and implementation of data solutions.
- Optimize and troubleshoot ETL processes.
- Conduct data analysis and provide insights for decision-making.
Professional & Technical Skills:
- Must-have skills: Proficiency in Talend ETL.
- Strong understanding of data modeling and database design.
- Experience with data integration and data warehousing concepts.
- Hands-on experience with SQL and scripting languages.
- Knowledge of cloud platforms and big data technologies.
Additional Information: The candidate should have a minimum of 5 years of experience in Talend ETL. This position is based at our Hyderabad office. A 15 years full-time education is required.
Posted 3 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
Project Role : Cloud Services Engineer Project Role Description : Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure Cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends. Must have skills : Managed File Transfer Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : 15 years full time education Summary :As a Cloud Services Engineer, you will act as a liaison between the client and Accenture operations teams for support and escalations. You will communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure Cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends. Roles & Responsibilities: Expected to be an SME. Collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute on key decisions. Provide solutions to problems for their immediate team and across multiple teams. Ensure effective communication between client and operations teams. Analyze service delivery health and address performance issues. Conduct performance meetings to share data and trends. Professional & Technical Skills: Must To Have Skills:Proficiency in Managed File Transfer. Strong understanding of cloud orchestration and automation. Experience in SLA management and performance analysis. Knowledge of IT service delivery and escalation processes. Additional Information: The candidate should have a minimum of 5 years of experience in Managed File Transfer. This position is based at our Pune office. A 15 years full time education is required. 
Posted 3 weeks ago
12.0 - 17.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Talend ETL
Good-to-have skills: NA
Minimum 12 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer Lead, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. A typical day involves working on data solutions and ETL processes.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems that apply across multiple teams.
- Lead data architecture design.
- Implement data integration solutions.
- Optimize ETL processes.

Professional & Technical Skills:
- Must-have: Proficiency in Talend ETL.
- Strong understanding of data modeling.
- Experience with SQL and database management.
- Knowledge of cloud platforms such as AWS or Azure.
- Hands-on experience with data warehousing.
- Good to have: Experience with data visualization tools.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Talend ETL.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
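The ETL responsibilities described in this posting can be illustrated with a minimal, hypothetical sketch. This is plain Python with SQLite standing in for a real target database (Talend itself is a graphical integration tool, and the table and column names here are invented):

```python
import sqlite3

# Illustrative ETL sketch (plain Python, not Talend): extract rows from an
# in-memory source, transform them, and load them into a SQLite staging table.

def extract():
    # A real pipeline would read from files, APIs, or source databases.
    return [
        {"id": 1, "name": " Alice ", "amount": "120.50"},
        {"id": 2, "name": "Bob", "amount": "80.00"},
    ]

def transform(rows):
    # Clean strings and cast types so the load step receives consistent data.
    return [
        {"id": r["id"], "name": r["name"].strip(), "amount": float(r["amount"])}
        for r in rows
    ]

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS staging (id INTEGER, name TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT INTO staging (id, name, amount) VALUES (:id, :name, :amount)", rows
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM staging").fetchone()[0]
print(total)  # 200.5
```

The separation into three small functions mirrors the extract, transform, and load stages the posting names; each stage can then be tested and optimized independently.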
Posted 3 weeks ago
7.0 - 12.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Talend ETL
Good-to-have skills: NA
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems.

Roles & Responsibilities:
- Act as an SME with deep knowledge and experience.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems that apply across multiple teams.
- Create data pipelines to extract, transform, and load data across systems.
- Implement ETL processes to migrate and deploy data across systems.
- Ensure data quality and integrity throughout the data lifecycle.

Professional & Technical Skills:
- Required skill: Expert proficiency in Talend Big Data.
- Strong understanding of data engineering principles and best practices.
- Experience with data integration and data warehousing concepts.
- Experience with data migration and deployment.
- Proficiency in SQL and database management.
- Knowledge of data modeling and optimization techniques.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Talend Big Data.
- 15 years of full-time education is required.
Posted 3 weeks ago
5.0 - 10.0 years
10 - 12 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: SAP HCM On Premise ABAP
Good-to-have skills: SAP HCM Organizational Management
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems, and be involved in the end-to-end data management process.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Develop and maintain data pipelines.
- Ensure data quality and integrity.
- Implement ETL processes for data migration and deployment.

Professional & Technical Skills:
- Must-have: Proficiency in SAP HCM On Premise ABAP.
- Good to have: Experience with SAP HCM Organizational Management.
- Strong understanding of data management principles.
- Experience in designing and implementing data solutions.
- Proficient in ETL processes and data migration techniques.

Additional Information:
- The candidate should have a minimum of 5 years of experience in SAP HCM On Premise ABAP.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 3 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: SAP HCM On Premise ABAP
Good-to-have skills: SAP HCM Organizational Management
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and analyzing data to support business decisions and drive data-driven insights.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Create data pipelines to extract, transform, and load data across systems.
- Ensure data quality and integrity throughout the data lifecycle.
- Implement ETL processes to migrate and deploy data across systems.

Professional & Technical Skills:
- Must-have: Proficiency in SAP HCM On Premise ABAP.
- Good to have: Experience with SAP HCM Organizational Management.
- Strong understanding of data engineering principles and best practices.
- Experience with data modeling and database design.
- Hands-on experience with ETL tools and processes.
- Proficient in programming languages such as ABAP, SQL, and Python.

Additional Information:
- The candidate should have a minimum of 5 years of experience in SAP HCM On Premise ABAP.
- 15 years of full-time education is required.
Posted 3 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Mumbai
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: SAP HCM On Premise ABAP
Good-to-have skills: SAP HCM Organizational Management
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and analyzing data to support business decisions and drive data-driven insights.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Create data pipelines to extract, transform, and load data across systems.
- Ensure data quality and integrity throughout the data lifecycle.
- Implement ETL processes to migrate and deploy data across systems.

Professional & Technical Skills:
- Must-have: Proficiency in SAP HCM On Premise ABAP.
- Good to have: Experience with SAP HCM Organizational Management.
- Strong understanding of data engineering principles and best practices.
- Experience with data modeling and database design.
- Hands-on experience with ETL tools and processes.
- Proficient in programming languages such as ABAP, SQL, and Python.

Additional Information:
- The candidate should have a minimum of 5 years of experience in SAP HCM On Premise ABAP.
- 15 years of full-time education is required.
Posted 3 weeks ago
5.0 - 10.0 years
35 - 60 Lacs
Bengaluru
Work from Office
Job Role: Data Scientist
Experience: 5-8 years
Location: Bangalore, India
Employment Type: Full Time, Hybrid

About Cognizant
Join a rapidly growing consulting and IT services Fortune 500 company with around 350,000 employees worldwide, a very flexible international business, customers that are leaders in their respective sectors, and a high level of commitment.

Cognizant Advanced AI Lab / Neuro AI team
The Cognizant AI Labs were created to pioneer scientific innovation and bridge the gap to commercial applications. The AI Labs collaborate with institutions, academia, and technology partners to develop groundbreaking AI solutions responsibly. The Lab's mission is to maximize human potential with decision AI: a combination of multi-agent architectures, generative AI, deep learning, evolutionary AI, and trustworthy techniques to create sophisticated decision-making systems. Through scientific publications, open-source software, AI for Good projects, and the Cognizant Neuro® AI decisioning platform and Multi-Agent Accelerator, the AI Labs support our goal of improving everyday life.

Your role:
As a data scientist and software engineer, you will develop the Neuro AI platform and use it on a variety of projects related to optimizing data-driven decision making. You are a data scientist, Python developer, and AI researcher with knowledge of multiple technologies; you are driven, curious, and passionate about your work; you are innovative, creative, and focused on excellence; and you want to be part of an ego-free work environment where we value honest, healthy interactions and collaboration.
Key responsibilities:
- Design, implement, and deploy software applications that analyze datasets, train predictive and prescriptive models, assess uncertainties, and interactively present results to end users.
- Monitor and analyze the performance of software applications and infrastructure.
- Collaborate with cross-functional teams to identify and prioritize business requirements.
- Research, design, and implement novel AI systems to support decision-making processes.
- Work with the research team to publish papers and patents.
- Communicate research findings to both technical and non-technical audiences.
- Provide guidance on our Neuro AI offering and AI best practices.
- Work in a highly collaborative, fast-paced environment with your peers on the Neuro AI platform and research teams.

Your profile:
- PhD or Master's in Data Science, Computer Science, Statistics, Mathematics, Engineering, or a related field.
- 5-8 years of experience in artificial intelligence, machine learning, data science, and software engineering.
- Strong programming skills in Python with Pandas, NumPy, TensorFlow, PyTorch, Jupyter Notebook, and GitHub.
- Experience handling large datasets, data engineering, statistical analysis, and building predictive models.
- Experience developing AI software platforms and tools.
- Knowledge of data visualization tools (e.g., Matplotlib, Tableau).
- Knowledge of Generative AI and LLMs is a plus.
- Ability to use cloud platforms for data processing and analytics, and to optimize cloud-based solutions for performance, cost, and scalability.
- Strong problem-solving and analytical skills.
- Strong attention to detail and ability to work independently.
- Ability to leverage design thinking, business process optimization, and stakeholder management skills.
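The "train predictive models" responsibility above can be illustrated by the smallest possible example: closed-form simple linear regression. Production work would use the libraries named in the profile (NumPy, TensorFlow, PyTorch); the data points below are made up:

```python
# Minimal predictive-model sketch: fit y ≈ slope*x + intercept with the
# closed-form least-squares solution. Purely illustrative, hypothetical data.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares estimates: slope = cov(x, y) / var(x), intercept from the means.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # 1.96 0.15
```

The same fit-then-predict shape scales up to the deep-learning frameworks listed in the profile; only the optimizer and model family change.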
Posted 3 weeks ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

#KeyResponsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

#MustHaveSkills
- Experience: 6+ years in Data Engineering
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
- Agile, SDLC, containerization (Docker), clean coding practices

#GoodToHaveSkills
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and competitive programming background

#ContractDetails
- Role: Senior Data Engineer
- Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
- Duration: 6 Months
- Email to apply: navaneeta@suzva.com
- Contact: 9032956160
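One pattern implied by the ADF/Databricks orchestration bullets above is the watermark-based incremental load. A hedged, plain-Python sketch of the idea follows (the record layout and watermark value are hypothetical; a real pipeline would run this in PySpark against ADLS):

```python
from datetime import datetime

# Watermark-based incremental load, sketched in plain Python so it stays
# self-contained. Only rows modified after the last successful run are
# picked up, and the watermark advances to the newest timestamp seen.

source = [
    {"id": 1, "updated": datetime(2024, 1, 1)},
    {"id": 2, "updated": datetime(2024, 1, 5)},
    {"id": 3, "updated": datetime(2024, 1, 9)},
]

def incremental_load(rows, watermark):
    fresh = [r for r in rows if r["updated"] > watermark]
    new_watermark = max((r["updated"] for r in fresh), default=watermark)
    return fresh, new_watermark

fresh, wm = incremental_load(source, datetime(2024, 1, 3))
print(len(fresh), wm.date())  # 2 2024-01-09
```

Persisting the watermark between runs (ADF typically stores it in a control table) is what makes repeated executions idempotent and cheap on large tables.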
Posted 3 weeks ago
1.0 - 4.0 years
14 - 19 Lacs
Mumbai
Work from Office
Overview
We're looking for a Senior AI Engineer to join our AI team, where you'll design and develop highly scalable, robust systems that drive our AI initiatives and data operations. Success in this role depends on strong software engineering skills, familiarity with large-scale distributed systems, and expertise in AI technologies. Our ideal candidate has a proven ability to build reliable platforms (rather than standalone applications) and to iterate effectively over multiple release cycles. While AI experience is required, you should also be enthusiastic about software engineering and building scalable platforms and cloud services.

A significant aspect of the job involves collaborating with cross-functional teams to translate complex requirements into scalable, efficient code, with responsibilities including the implementation and maintenance of software infrastructure for our AI platform. You'll regularly work alongside AI specialists, domain leads, and other engineering teams. Throughout your work, you'll apply best practices in software engineering and system architecture. If you're passionate about delivering high-impact AI solutions in a dynamic environment, this is an exciting opportunity to have a substantial influence on our AI capabilities through your expertise.

Responsibilities
- Design and implement scalable distributed systems.
- Architect solutions that can handle large volumes of data for real-time and batch processing.
- Design and develop efficient AI pipelines with automation and reliability across the platform.
- Integrate agentic workflows and AI agents into data extraction processes, enabling systems to perform multi-step reasoning and tool usage to improve the accuracy and efficiency of data extraction.
- Deploy, monitor, and maintain the LLM-based extraction systems in production, ensuring reliability and scalability.
- Set up appropriate monitoring, logging, and evaluation metrics to track performance, and perform continual tuning and improvements based on human-in-the-loop feedback.
- Conduct applied research and experimentation with the latest generative AI models and techniques to enhance extraction capabilities.
- Prototype new approaches and iterate quickly to integrate successful methods into the production pipeline.
- Collaborate with cross-functional teams (data engineers, product managers, domain experts) to gather requirements and align AI solutions with business needs.

Qualifications
- 4-5 years of experience in applied AI or machine learning engineering, with a track record of building and deploying AI solutions (especially in NLP).
- Hands-on experience with Generative AI models and APIs/frameworks (e.g., OpenAI GPT-4, Google Gemini).
- Ability to build agentic AI systems where LLMs interact with tools or perform multi-step workflows.
- Proficiency in Python (preferred) and experience deploying machine learning models or pipelines at scale.
- Good understanding of embeddings and LLM models, and experience with retrieval-augmented generation (RAG) workflows to incorporate external knowledge into LLM-based systems.
- Knowledge of LLMOps and cloud services (Azure, GCP, or similar) for deploying and managing AI solutions.
- Experience with containerization, orchestration, and monitoring of ML models in a production cloud environment.
- Excellent collaboration and communication skills, with the ability to work effectively in a team, translate complex technical concepts to non-technical stakeholders, and document work clearly.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
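The retrieval-augmented generation (RAG) workflow named in this posting's qualifications can be sketched at toy scale: rank candidate documents against a query, then pass the best match to the generator as context. Real systems use embeddings and a vector index such as FAISS; this pure-Python bag-of-words version only illustrates the retrieve-then-generate shape, and the documents are invented:

```python
import math
from collections import Counter

# Toy retrieval step of a RAG workflow: rank documents by cosine similarity
# of bag-of-words vectors, then hand the top hit to the LLM as context.

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

docs = [
    "Quarterly revenue grew five percent year over year",
    "The office cafeteria menu changes on Mondays",
]

def retrieve(query, docs):
    q = vectorize(query)
    return max(docs, key=lambda d: cosine(q, vectorize(d)))

context = retrieve("how did revenue grow", docs)
print(context)  # Quarterly revenue grew five percent year over year
```

In a production pipeline the retrieved context would be interpolated into the LLM prompt, which is what lets the model answer from external knowledge it was never trained on.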
Posted 3 weeks ago
2.0 - 4.0 years
3 - 7 Lacs
Pune
Work from Office
Essential Skills:
- Machine Learning & Deep Learning: Solid understanding of ML concepts and hands-on experience with deep learning (especially neural networks and Transformers).
- Python Programming: Strong Python coding skills with familiarity in frameworks like TensorFlow, PyTorch, or Keras.
- Natural Language Processing (NLP): Experience with tokenization, embeddings, and language model fine-tuning.
- Data Engineering: Ability to clean, process, and manage large datasets with a focus on data quality.
- Math & Statistics: Good knowledge of linear algebra, calculus, probability, and statistics.

Preferred Additional Skills:
- Model Deployment (MLOps): Exposure to deploying models using Docker, APIs, CI/CD, and tools like MLflow or Hugging Face Hub.
- RAG & LLM Integration: Hands-on experience with FAISS, LangChain, embedding models, and large language models.
- Prompt Engineering: Skilled in designing prompts and evaluating AI output quality.
- System Scalability: Understanding of GPU optimization and AI system performance.
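The tokenization and embedding concepts listed under NLP can be shown in a few lines. Real systems use subword tokenizers (BPE/WordPiece) and trained embedding matrices; the vocabulary and vectors below are made up for illustration:

```python
import random

# Minimal illustration of tokenization and an embedding lookup: map words to
# integer ids, then map ids to dense vectors. The random vectors stand in for
# a trained embedding table.

random.seed(0)
EMB_DIM = 4
vocab = {"<unk>": 0, "deep": 1, "learning": 2, "models": 3}
embeddings = {i: [random.random() for _ in range(EMB_DIM)] for i in vocab.values()}

def tokenize(text):
    # Whitespace tokenization with an out-of-vocabulary fallback id.
    return [vocab.get(tok, vocab["<unk>"]) for tok in text.lower().split()]

ids = tokenize("Deep learning models rock")
vectors = [embeddings[i] for i in ids]
print(ids)  # [1, 2, 3, 0]
```

The `<unk>` fallback is the same idea production tokenizers solve more gracefully with subword units, which is why "rock" maps to id 0 here rather than failing.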
Posted 3 weeks ago
4.0 - 8.0 years
10 - 30 Lacs
Hyderabad, Pune, Greater Noida
Work from Office
Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using Python and SQL.
- Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs.
- Develop high-quality code that is efficient, scalable, and easy to maintain.
- Troubleshoot issues related to data processing, storage, and retrieval.

Job Requirements:
- 4-8 years of experience in data engineering or a related field.
- Strong proficiency in the Python programming language.
- Experience working with relational databases (SQL) for querying and manipulating large datasets.
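The "querying and manipulating large datasets" requirement often comes down to batched retrieval, so the full result set never sits in memory at once. A minimal sketch with SQLite standing in for a production database (the table name and batch size are illustrative):

```python
import sqlite3

# Batched retrieval: stream a large query result in fixed-size chunks via
# fetchmany() instead of loading every row with fetchall().

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany(
    "INSERT INTO events (value) VALUES (?)", [(i * 0.5,) for i in range(10)]
)

def fetch_in_batches(conn, batch_size=3):
    cur = conn.execute("SELECT id, value FROM events ORDER BY id")
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        yield rows

batches = list(fetch_in_batches(conn))
print([len(b) for b in batches])  # [3, 3, 3, 1]
```

Because `fetch_in_batches` is a generator, downstream transformation code can process each chunk and release it, keeping memory flat regardless of table size.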
Posted 3 weeks ago