1933 Data Engineering Jobs - Page 16

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-Have Skills: Snowflake Data Warehouse
Good-to-Have Skills: Data Build Tool (dbt), Python (Programming Language)
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Your day will involve working on data solutions and collaborating with teams to optimize data processes.

Roles & Responsibilities:
- Act as a subject matter expert (SME)
- Collaborate with and manage the team to perform
- Be responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Develop and maintain data pipelines
- Ensure data quality and integrity
- Implement ETL processes (an illustrative load sketch follows this listing)

Professional & Technical Skills:
- Must-have: proficiency in Snowflake Data Warehouse
- Good-to-have: experience with Data Build Tool (dbt)
- Strong understanding of data architecture
- Proficiency in SQL and database management
- Experience with cloud data platforms
- Knowledge of data modeling

Additional Information:
- The candidate should have a minimum of 5 years of experience with Snowflake Data Warehouse
- This position is based at our Bengaluru office
- 15 years of full-time education is required
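For illustration only, here is a minimal sketch of the kind of Snowflake load step this listing describes, assuming the snowflake-connector-python package; the account, credentials, and table names are hypothetical placeholders, not from the listing:

```python
# Minimal sketch of a Snowflake ETL load step (all identifiers hypothetical).
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
try:
    # Land a local extract in the table's internal stage, then bulk-load it.
    cur.execute("CREATE TABLE IF NOT EXISTS orders (id INT, amount NUMBER(10,2), ts TIMESTAMP)")
    cur.execute("PUT file:///tmp/orders.csv @%orders")  # table stage
    cur.execute("COPY INTO orders FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
    # Basic data-quality gate: fail the job if the load produced no rows.
    cur.execute("SELECT COUNT(*) FROM orders")
    assert cur.fetchone()[0] > 0, "load produced no rows"
finally:
    cur.close()
    conn.close()
```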

Posted 6 days ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-Have Skills: Data Engineering
Good-to-Have Skills: Oracle Procedural Language Extensions to SQL (PL/SQL), AWS Redshift, Tableau
Minimum Experience: 12 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that applications are developed according to the specified requirements and standards and delivered on time and within budget. Your typical day will involve collaborating with the team to design and develop applications, configuring and customizing applications based on business needs, and troubleshooting and resolving issues that arise during development. You will also be involved in testing and deploying applications, as well as providing support and maintenance for existing applications.

Roles & Responsibilities:
- Excellent SQL skills, with experience building and interpreting complex queries, creating logical and physical data models, and advanced SQL programming
- Advanced working SQL knowledge and experience with relational databases and query authoring
- Experience designing, coding, testing, and analyzing applications on RDBMS platforms (Redshift, MySQL, and MS SQL Server)
- Assist the Delivery and Operations teams with customization requests and technical-feasibility responses to clients
- Expert experience with performance tuning, optimization, and stored procedures
- Work directly with BRMs on planning, solutioning, assessment, urgent issues, and consultation; represent the BRM in meetings in case of a time conflict or unavailability
- Resolve all blockers so that offshore operations run smoothly during offshore hours
- Give the offshore team a clearer understanding of requirements, resolve conflicts and understanding gaps, and bridge cultural differences to ease communication
- Take initiative, drive continuous improvement, and promote best practices that have worked well in the past
- Build bridges beyond the project boundary, helping other vendors such as PwC, DK, and Beghou work together to achieve client deliverables
- Build standard operations processes and continuous improvements to help EISAI IT and the business make decisions

Professional & Technical Skills:
- Experience in Data Engineering, Data Quality, AWS Redshift, SQL, Tableau, Enterprise Data Warehouse, Jira, ServiceNow, Confluence, UNIX shell scripting, and Python
- Must-have: proficiency in Data Engineering
- Good-to-have: experience with Oracle PL/SQL, AWS Redshift, and Tableau
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity

Functional/Industry Skills:
- Life sciences/pharma commercial datasets

Additional Information:
- The candidate should have a minimum of 12 years of experience in Data Engineering
- This position is based at our Pune office
- 15 years of full-time education is required

Posted 6 days ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: Microsoft Azure Databricks, PySpark, Core Java
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop innovative solutions and streamline processes.

Roles & Responsibilities:
- Act as a subject matter expert (SME)
- Collaborate with and manage the team to perform
- Be responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the development and implementation of new applications
- Conduct code reviews and ensure coding standards are met
- Stay updated on industry trends and best practices

Professional & Technical Skills:
- Must-have: proficiency in the Databricks Unified Data Analytics Platform
- Good-to-have: experience with PySpark
- Strong understanding of data engineering concepts
- Experience building and optimizing data pipelines (a pipeline sketch follows this listing)
- Knowledge of cloud platforms such as Microsoft Azure
- Familiarity with data governance and security practices

Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform
- This position is based at our Bengaluru office
- 15 years of full-time education is required
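As an illustration of the pipeline work named above, here is a hedged PySpark sketch of a small ingest-validate-publish job on Databricks; the mount path and table name are hypothetical:

```python
# Minimal sketch of a Databricks/PySpark batch pipeline (paths hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest raw CSV, apply basic quality rules, and write a curated Delta table.
raw = spark.read.option("header", True).csv("/mnt/raw/orders/")
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").cast("double") > 0)   # simple validity check
       .withColumn("ingest_date", F.current_date())
)
clean.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
```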

Posted 1 week ago

Apply

10.0 - 14.0 years

10 - 15 Lacs

Hyderabad

Work from Office

Proven expert at writing SQL code, with at least 10 years of experience. Must have 5+ years of experience working with large data, with transactions on the order of 5-10M records; 5+ years of experience modeling loosely coupled relational databases that can store terabytes or petabytes of data; and 3+ years of proven expertise working with large data warehouses. Expert at ETL transformations using SSIS.

Posted 1 week ago

Apply

5.0 - 7.0 years

12 - 15 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

We are looking for a skilled Data Engineer with expertise in SSIS, Tableau, SQL, and ETL processes. The ideal candidate should have experience in data modeling, data pipelines, and Agile methodologies. Responsibilities include designing and maintaining data pipelines, implementing ETL processes using SSIS, optimizing data models for reporting, and developing advanced dashboards in Tableau. The role requires proficiency in SQL for complex data transformations, troubleshooting data workflows, and ensuring data integrity and compliance. Strong problem-solving skills, Agile collaboration experience, and the ability to work independently in a remote setup are essential.

Location: Remote, Delhi NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 19 Lacs

Chennai

Work from Office

Project Role: Technology Architect
Project Role Description: Review and integrate all application requirements, including functional, security, integration, performance, quality, and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software, and security.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: Cloud Data Architecture
Minimum Experience: 5 years
Educational Qualification: BE or MCA

Summary: As a Technology Architect, you will be responsible for reviewing and integrating all application requirements, including functional, security, integration, performance, quality, and operations requirements. Your typical day will involve reviewing and integrating technical architecture requirements and providing input into final decisions regarding hardware, network products, system software, and security.

Roles & Responsibilities:
- A minimum of 8 years of experience with the Databricks Unified Data Analytics Platform
- Good experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform
- Strong educational background in technology and information architectures, along with a proven track record of delivering impactful data-driven solutions
- Strong requirement-analysis and technical-solutioning skills in data and analytics
- Client-facing role: running solution workshops and client visits, handling large RFP pursuits, and managing multiple stakeholders

Technical Experience:
- 6 or more years of experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform
- 2 or more years of experience using Python, PySpark, or Scala
- Experience with Databricks on cloud (any of AWS, Azure, or GCP), covering ETL, data engineering, data cleansing, and insertion into a data warehouse
- Must-have skills: Databricks, Cloud Data Architecture, Python (Programming Language), Data Engineering

Professional Attributes:
- Excellent writing, communication, and presentation skills
- Eagerness to learn and develop oneself on an ongoing basis
- Excellent client-facing and interpersonal skills

Qualification: BE or MCA

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer / Data Platform Engineer
Project Role Description: Design, build, and configure applications to meet business process and application requirements. Assist with the data platform blueprint and design, encompassing the relevant data platform components, and collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: No Technology Specialization
Minimum Experience: 3 years
Educational Qualification: Graduate

Key Responsibilities:
1. Strong development skills in PySpark and Databricks for building complex data pipelines
2. Able to deliver assigned development tasks independently or with minimal help
3. Able to participate in daily status calls, with good communication skills to manage day-to-day work

Technical Experience:
1. More than 5 years of experience in IT
2. More than 2 years of experience with technologies such as Databricks and PySpark
3. Able to build end-to-end pipelines using PySpark, with good knowledge of Delta Lake (an upsert sketch follows this listing)
4. Good knowledge of Azure services such as Azure Data Factory, Azure storage solutions (ADLS), Delta Lake, and Azure AD

Professional Attributes:
1. Involvement in data engineering projects from the requirements phase through delivery
2. Good communication skills to interact with clients and understand requirements
3. Able to work independently and guide the team

Additional Information: Skill flex for PySpark; Bengaluru only; should be flexible to work from the client office.
Qualification: Graduate
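For context on the Delta Lake skill named above, here is a hypothetical sketch of a Delta Lake upsert (MERGE) step on Databricks with PySpark; the storage path, table, and join key are placeholders:

```python
# Hypothetical Delta Lake upsert keeping a curated table in sync with extracts.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("parquet").load(
    "abfss://raw@account.dfs.core.windows.net/customers/"  # placeholder ADLS path
)
target = DeltaTable.forName(spark, "curated.customers")

# MERGE updates existing rows and inserts new ones in a single atomic step.
(target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```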

Posted 1 week ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Patna

Work from Office

Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field

Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, using tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Use collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up to date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.

Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark); experience integrating them with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports is a plus.
- SAP HANA: Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.

Functional Skills:
- Strong analytical and problem-solving skills, with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to communicate effectively with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.

Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.

Posted 1 week ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Surat

Work from Office

Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field

Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, using tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Use collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up to date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.

Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark); experience integrating them with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports is a plus.
- SAP HANA: Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.

Functional Skills:
- Strong analytical and problem-solving skills, with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to communicate effectively with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.

Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.

Posted 1 week ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Duration: 6 months
Timings: General IST
Notice Period: within 15 days, or immediate joiner

About the Role:
As a Data Engineer for the Data Science team, you will play a pivotal role in enriching and maintaining the organization's central repository of datasets. This repository serves as the backbone for advanced data analytics and machine learning applications, enabling actionable insights from financial and market data. You will work closely with cross-functional teams to design and implement robust ETL pipelines that automate data updates and ensure accessibility across the organization. This is a critical role requiring technical expertise in building scalable data pipelines, ensuring data quality, and supporting the data analytics and reporting infrastructure for business growth.

Note: Must be ready for a face-to-face interview in Bangalore (last round). Should be working with Azure as the cloud technology.

Key Responsibilities:

ETL Development:
- Design, develop, and maintain efficient ETL processes for handling multi-scale datasets.
- Implement and optimize data transformation and validation processes to ensure data accuracy and consistency.
- Collaborate with cross-functional teams to gather data requirements and translate business logic into ETL workflows.

Data Pipeline Architecture:
- Architect, build, and maintain scalable, high-performance data pipelines to enable seamless data flow.
- Evaluate and implement modern technologies to enhance the efficiency and reliability of data pipelines.
- Build pipelines for extracting data via web scraping to source sector-specific datasets on an ad hoc basis (see the sketch after this listing).

Data Modeling:
- Design and implement data models to support analytics and reporting needs across teams.
- Optimize database structures to enhance performance and scalability.

Data Quality and Governance:
- Develop and implement data quality checks and governance processes to ensure data integrity.
- Collaborate with stakeholders to define and enforce data quality standards across the organization.

Documentation and Communication:
- Maintain detailed documentation of ETL processes, data models, and other key workflows.
- Communicate complex technical concepts effectively to non-technical stakeholders and business users.

Cross-Functional Collaboration:
- Work closely with the Quant team and developers to design and optimize data pipelines.
- Collaborate with external stakeholders to understand business requirements and translate them into technical solutions.

Essential Requirements:

Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Familiarity with big data technologies like Hadoop, Spark, and Kafka.
- Experience with data modeling tools and techniques.
- Excellent problem-solving, analytical, and communication skills.
- Proven experience as a Data Engineer with expertise in ETL techniques (minimum years).
- 3-6 years of strong programming experience in languages such as Python, Java, or Scala.
- Hands-on experience in web scraping to extract and transform data from publicly available web sources.
- Proficiency with cloud-based data platforms such as AWS, Azure, or GCP.
- Strong knowledge of SQL and experience with relational and non-relational databases.
- Deep understanding of data warehousing concepts and architectures.

Preferred Qualifications:
- Master's degree in Computer Science or Data Science.
- Knowledge of data streaming and real-time processing frameworks.
- Familiarity with data governance and security best practices.
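As a hedged illustration of the scrape-and-load work this listing mentions (not from the listing itself), here is a small ETL sketch using requests and BeautifulSoup; the URL, CSS selectors, and staging table are hypothetical:

```python
# Hypothetical scrape-and-load ETL step: extract, validate, stage.
import sqlite3
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/sector-prices", timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

# Extract (ticker, price) pairs from a hypothetical table, with basic validation.
rows = []
for tr in soup.select("table#prices tr")[1:]:
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) == 2 and cells[1].replace(".", "", 1).isdigit():
        rows.append((cells[0], float(cells[1])))

# Load into a local staging table; a production pipeline would target the warehouse.
with sqlite3.connect("staging.db") as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS prices (ticker TEXT, price REAL)")
    conn.executemany("INSERT INTO prices VALUES (?, ?)", rows)
```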

Posted 1 week ago

Apply

2.0 - 6.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Job Responsibilities:
The Data Scientists will leverage expertise in advanced statistical and modelling techniques to design, prototype, and build the next-generation analytics engines and services. They will work closely with the analytics and business teams to derive actionable insights, helping the organization achieve its strategic goals. Their work will involve high levels of interaction with the integrated analytics team, including data engineers, translators, and more senior data scientists.

- Has expertise in implementing complex statistical analyses for data processing, exploration, model building, and implementation
- Leads teams of 2-3 associate data scientists in the use-case building and delivery process
- Can communicate complex technical concepts to both technical and non-technical audiences
- Plays a key role in driving ideation around the modelling process and developing models; can conceptualize and drive re-iteration and fine-tuning of models
- Contributes to knowledge building and sharing by researching best practices, documenting solutions, and continuously iterating on new ways to solve problems; mentors junior team members to do the same

Required Education and Experience:
- Master's degree in Computer Science, Statistics, Math, Operations Research, Economics, or a related field
- Advanced programming skills in at least one coding language (R/Python/Scala)
- Practical experience developing advanced statistical and machine learning models
- At least 2 years of relevant analytics experience
- Experience using large database systems preferred
- Niche expertise in at least one functional domain

Required Skills:
- Ability to work well in agile environments in diverse teams with multiple stakeholders
- Experience leading small teams
- Able to solve complex problems and break them down into simpler parts
- Ability to effectively communicate complex analytical and technical content
- High-energy, passionate individual who works closely with other team members
- Strong entrepreneurial drive to test new, out-of-the-box techniques
- Able to prioritize workstreams and adopt an agile approach
- Willing to adopt an iterative approach; experimental mindset to drive innovation

Location: Bangalore

What's in it for you?
- Disruptive projects: Work on 'breakthrough' digital-and-analytics projects to enable UPL's vision of building a future-ready organization. This involves deploying solutions to help us increase our sales, sustain our profitability, improve our speed to market, supercharge our R&D efforts, and support the way we work internally, ensuring we have access to the best business insights our data analysis can offer.
- Cross-functional leadership exposure: Work directly under the guidance of functional leadership at UPL on the most critical business problems for the organization (and the industry) today. You will gain exposure to a large cross-functional team (e.g., spanning manufacturing, procurement, commercial, quality, and IT/OT experts), allowing multi-functional learning in D&A deployment.
- An environment fostering professional and personal development: Strengthen professional learning in a highly impact-oriented, meritocratic environment focused on delivering disproportionate business value through innovative solutions, supported by on-the-job coaching from experienced domain experts and continuous feedback from a highly motivated and capable set of peers. Comprehensive training programs for continuous development through UPL's D&A academy will help accelerate growth opportunities.

Come join us on this transformational journey! Let's collectively change the game with Digital & Analytics!

Posted 1 week ago

Apply

10.0 - 20.0 years

15 - 30 Lacs

Pune

Work from Office

Role & Responsibilities

Python Architect at Cybage:
1. Part of the central architect pool, assisting existing customers and providing technical solutions for new prospects and customers.
2. R&D on new technologies, frameworks, and tools in the Python ecosystem.
3. Providing architectural services for turnkey projects and new clients with large-scale requirements.
4. Onsite travel on a need basis for new customer engagements and during the discovery phase to transfer knowledge.

Python Architect - Web/API/Application:
- Designing, building, and maintaining scalable and secure services and REST APIs in at least one Python framework such as Django, Flask, or FastAPI, or in Python with gRPC.
- Expertise in at least one RDBMS such as Postgres, MySQL, or Oracle, and one NoSQL database such as MongoDB or Redis.
- Familiarity with different caching strategies and use of at least one caching solution such as Redis or Memcached to implement them (a sketch follows this listing).
- Designing for distributed/asynchronous jobs and familiarity with tools such as Celery, Redis Queue, or Kafka to implement them.
- Building these services on at least one cloud platform such as AWS, Azure, or GCP, with an eye on scalability, performance, and high availability.
- Experienced in building these using containerization (Docker, etc.) and orchestration (Kubernetes, etc.) techniques.
- Ensuring code quality with at least one tool/linter such as pylint, black, or flake8.
- Using automated tests to validate the services and APIs built, in order to allow for continuous delivery.
- Effectively monitoring and observing these services by tracking application logs through at least one logging tool such as Splunk, Datadog Logs, or AWS CloudWatch, and application metrics such as latency and concurrency through at least one monitoring tool such as Datadog APM/RUM or AWS CloudWatch.
- Expertise in identifying the right thresholds and alerts on these services and wiring them into alerting tools such as PagerDuty, OpsGenie, or AWS CloudWatch alarms, in order to respond to incidents quickly and effectively.

Python Architect - Data:
- Designing, building, and maintaining effective and scalable data solutions using Python.
- Creating and maintaining data integration processes, ETL (extract, transform, load) workflows, and data pipelines (Airflow, etc.) to seamlessly transport data between systems.
- Expertise in parallel processing of massive datasets, using Spark, Hadoop, etc. to do so.
- Experienced in working with datasets hosted in at least one data warehouse such as Snowflake or Amazon Redshift.
- Familiarity with reporting on datasets using at least one BI tool such as Looker, Tableau, Power BI, or QuickSight.
- Expertise in at least one RDBMS such as Postgres, MySQL, or Oracle, and one NoSQL database such as MongoDB or Redis.
- Building these on at least one cloud platform such as AWS, Azure, or GCP, with an eye on scalability, performance, and high availability.
- Experienced in building these using containerization (Docker, etc.) and orchestration (Kubernetes, etc.) techniques.
- Ensuring code quality with at least one tool/linter such as pylint, black, or flake8.
- Using automated tests to validate the services built, in order to allow for continuous delivery.
- Effectively monitoring and observing these services by tracking service logs through at least one logging tool such as Splunk, Datadog Logs, or AWS CloudWatch, and service metrics such as latency and concurrency through at least one monitoring tool such as Datadog APM or AWS CloudWatch.
- Expertise in identifying the right thresholds and alerts on these services and wiring them into alerting tools such as PagerDuty, OpsGenie, or AWS CloudWatch alarms, in order to respond to incidents quickly and effectively.
- Passion for maintaining software configurations in code and familiarity with at least one of Ansible, Terraform, or Helm to do so.

Company Profile:
Founded in 1995, Cybage Software Pvt. Ltd is a technology consulting organization specializing in outsourced product engineering services. As a leader in the technology and product engineering space, Cybage works with some of the world's best independent software vendors. Our solutions are focused on modern technologies and are enabled by a scientific, data-driven system called Decision Mines for Digital Excellence. This unique model de-risks our approach, provides better predictability, and ensures better value per unit cost to our clients. An ISO 27001-certified company based in Pune, India, Cybage is partnered with more than 200 global software houses of fine repute. The array of services includes Product Engineering (OPD), Enterprise Business Solutions, Value Added Services, and Idea Incubation Services. Cybage specializes in the implementation of the Offshore Development Center (ODC) model. You will get the opportunity to be part of a highly skilled talent pool of more than 7,500 employees. Apart from Pune, we have operations hubs in GNR and Hyderabad, and we have also marked our presence in North America, Canada, the UK, Europe, Japan, Australia, and Singapore. We provide seamless services and dependable deliveries to clients from diverse industry verticals such as Media and Advertising, Travel and Hospitality, Digital Retail, Healthcare, SCM, and Hi-Tech. For more information, log on to www.cybage.com
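To illustrate the caching pattern named in the listing above, here is a hedged sketch of a FastAPI endpoint with a Redis read-through cache; the endpoint, key scheme, and TTL are illustrative assumptions, not from the listing:

```python
# Hypothetical FastAPI endpoint with a Redis read-through cache.
import json
import redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_product_from_db(product_id: int) -> dict:
    # Stand-in for a real database lookup.
    return {"id": product_id, "name": f"product-{product_id}"}

@app.get("/products/{product_id}")
def get_product(product_id: int) -> dict:
    key = f"product:{product_id}"
    if (hit := cache.get(key)) is not None:
        return json.loads(hit)                       # cache hit
    product = fetch_product_from_db(product_id)
    cache.set(key, json.dumps(product), ex=300)      # read-through, 5-minute TTL
    return product
```

The read-through shape keeps the database authoritative while the TTL bounds staleness; a production design would also consider cache invalidation on writes.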

Posted 1 week ago

Apply

1.0 - 5.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Job Responsibilities:
The Data Scientists will leverage expertise in advanced statistical and modelling techniques to design, prototype, and build the next-generation analytics engines and services. They will work closely with the analytics and business teams to derive actionable insights, helping the organization achieve its strategic goals. Their work will involve high levels of interaction with the integrated analytics team, including data engineers, translators, and more senior data scientists.

- Has expertise in implementing complex statistical analyses for data processing, exploration, model building, and implementation
- Leads teams of 2-3 associate data scientists in the use-case building and delivery process
- Can communicate complex technical concepts to both technical and non-technical audiences
- Plays a key role in driving ideation around the modelling process and developing models; can conceptualize and drive re-iteration and fine-tuning of models
- Contributes to knowledge building and sharing by researching best practices, documenting solutions, and continuously iterating on new ways to solve problems; mentors junior team members to do the same

Required Education and Experience:
- Master's degree in Computer Science, Statistics, Math, Operations Research, Economics, or a related field
- Advanced programming skills in at least one coding language (R/Python/Scala)
- Practical experience developing advanced statistical and machine learning models
- At least 2 years of relevant analytics experience
- Experience using large database systems preferred
- Niche expertise in at least one functional domain

Required Skills:
- Ability to work well in agile environments in diverse teams with multiple stakeholders
- Experience leading small teams
- Able to solve complex problems and break them down into simpler parts
- Ability to effectively communicate complex analytical and technical content
- High-energy, passionate individual who works closely with other team members
- Strong entrepreneurial drive to test new, out-of-the-box techniques
- Able to prioritize workstreams and adopt an agile approach
- Willing to adopt an iterative approach; experimental mindset to drive innovation

Posted 1 week ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Job Location: Pune, Bengaluru, or Gurugram. Available to join immediately or within a notice period of up to 30 days.

Mandatory skills: Python, Spark, Airflow, SQL, Snowflake

- Over 5 years of overall experience in the data engineering and analytics industry.
- 3+ years of hands-on experience with Python, Apache Spark, and Apache Airflow for building scalable data pipelines and ETL workflows (a DAG sketch follows this listing).
- Proficient in SQL, with strong knowledge of data querying and transformation; experience with Snowflake is a plus.
- Solid experience with both relational (e.g., PostgreSQL, MySQL) and non-relational databases (e.g., MongoDB, Cassandra).
- Strong understanding of data modeling principles and the design of both batch and real-time data pipelines.
- Proven track record of developing robust, scalable solutions in cloud environments such as AWS, Azure, or GCP.
- Well versed in DevOps practices, including CI/CD, infrastructure as code, and containerization.
- Experienced in Agile development methodologies, with active participation in sprint planning, standups, and retrospectives.

For more information, please share your updated CV at admin@spearheadps.com or contact me via call/WhatsApp at 9899080360.
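For illustration, a minimal sketch of the Python/Spark/Airflow combination this listing names, assuming Airflow 2 with the Apache Spark provider installed; the DAG id, paths, and Spark script are hypothetical:

```python
# Hypothetical daily Airflow DAG wiring an extract step to a Spark job.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

def extract(**_):
    # Stand-in for pulling a source extract to a landing path.
    print("extracted /data/landing/orders.csv")

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = SparkSubmitOperator(
        task_id="transform",
        application="/jobs/transform_orders.py",  # hypothetical PySpark script
    )
    extract_task >> transform_task  # extract must finish before the Spark job runs
```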

Posted 1 week ago

Apply

4.0 - 6.0 years

0 - 2 Lacs

Pune

Hybrid

Must-Have Skills:
- 4+ years of experience in data engineering, with a focus on big data technologies (e.g., Spark, Kafka)
- 2+ years of Databricks experience (required)
- Strong understanding of data architecture, ETL processes, and data warehousing
- Proficiency in programming languages such as Python or Java
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data tools
- Excellent communication, interpersonal, and leadership skills
- Ability to work in a fast-paced environment and manage multiple priorities

Preferred Candidate Profile:
- Solid written, verbal, and presentation communication skills
- Strong team and individual player
- Maintains composure in all types of situations and is collaborative by nature
- High standards of professionalism, consistently producing high-quality results
- Self-sufficient and independent, requiring very little supervision or intervention
- Demonstrates flexibility and openness in bringing creative solutions to issues

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 18 Lacs

Hyderabad

Hybrid

About the Role:
We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (extract, transform, load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.

Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively with Google Cloud Platform (GCP) services such as:
  - Dataflow for real-time and batch data processing
  - Cloud Functions for lightweight serverless compute
  - BigQuery for data warehousing and analytics (a load sketch follows this listing)
  - Cloud Composer for orchestration of data workflows (based on Apache Airflow)
  - Google Cloud Storage (GCS) for managing data at scale
  - IAM for access control and security
  - Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.

Required Skills:
- 4-6 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding of and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).

Good to Have (Optional Skills):
- Experience with the Snowflake cloud data platform.
- Hands-on knowledge of Databricks for big data processing and analytics.
- Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
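As a hedged illustration of the BigQuery work referenced above, here is a small GCS-to-BigQuery load using the google-cloud-bigquery client; the project, bucket, and table names are placeholders:

```python
# Hypothetical GCS-to-BigQuery batch load with a post-load validation gate.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")        # hypothetical project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/orders/*.csv",                # hypothetical GCS path
    "my-project.analytics.orders",
    job_config=job_config,
)
load_job.result()  # block until the load completes

# Post-load validation: fail fast if the table is unexpectedly empty.
table = client.get_table("my-project.analytics.orders")
assert table.num_rows > 0, "load produced no rows"
```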

Posted 1 week ago

Apply

8.0 - 12.0 years

5 - 10 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Experience migrating from Teradata to Snowflake and Databricks on Azure Cloud; experience with data migration projects, including complex migrations to Databricks; strong expertise in ETL pipeline design and optimization, particularly for cloud environments and large-scale data migration.

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Required Qualifications:
- B.Tech or M.Tech in Computer Science / Data Science from a reputed educational institute.
- 5 to 8 years of proven relevant experience in data architecture and AI/ML at a good MNC.
- Work experience in, or knowledge of, the healthcare domain is highly desirable.

Responsibilities:
- Apply machine learning, deep learning, and Gen AI tools, focusing on feasibility studies, developing proof-of-concept implementations, and establishing robust solution blueprints for subsequent implementation by engineering teams.
- Keep track of the latest emerging technologies, identify data trends, and perform complex data analysis and analytics using AI/ML and Gen AI.
- Use Generative AI (Gen AI) and predictive and prescriptive analytics to generate innovative ideas and solutions.
- Use NLP and computer vision techniques to extract insights from unstructured clinical/healthcare data.
- Apply core data modeling and data engineering principles and practices, leveraging large datasets to design AI-powered solutions.
- Perform data analysis and data visualization to deliver data-driven insights.
- Develop architecture blueprints and strategies for building robust AI-powered solutions leveraging large datasets.
- Define best practices and standards for developing and deploying AI models to production environments using MLOps.
- Provide technical guidance and support to data engineers, business users, and software application team members.
- Maintain documentation and ensure compliance with relevant AI governance, regulations, and standards.
- Communicate effectively with both technical and non-technical stakeholders.
- Collaborate with cross-functional teams, including business stakeholders, data scientists, data engineers, and IT teams, to define the future data management architecture, aligning with value-creation opportunities across the business.

Competencies:
- Good understanding of strategic and emerging technology trends and the practical application of existing and emerging technologies to new and evolving business and operating models, especially AI/ML and Gen AI.
- Ability to develop architecture blueprints and strategies.
- Ability to communicate and present effectively to senior-level executives and technical audiences.
- Proficiency in architecting solutions using a cloud-native approach.
- Well versed in data modelling techniques and tools: star schema (de-normalized data model) and transactional model (normalized data model).
- Proven skills in building data systems that are reliable, fault-tolerant, and performant, while remaining equally economical from a cost perspective.
- Listens to the ideas and concerns of stakeholders, develops an understanding of how their ideas relate to others', and acts to address concerns.
- Identifies risks, develops risk management plans and processes, and successfully implements and operates them.
- Possesses a deep personal motivation to develop new, efficient, effective, and valuable ways to accomplish team tasks.
- Demonstrates critical, out-of-the-box thinking and the ability to look at problems from different points of view, seeking solutions that meet the needs of those outside the team, including staff and members.
- Proactively experiments with new digital technologies and acts as a technology evangelist, sharing key learnings with colleagues.

Posted 1 week ago

Apply

5.0 - 8.0 years

25 - 30 Lacs

Pune, Gurugram, Bengaluru

Work from Office

NYU Manager: Owais; UR Delivery Manager: Laxmi

Title: Senior Data Developer with strong MS/Oracle SQL and Python skills and critical thinking

Description:
The EDA team seeks a dedicated and detail-oriented Senior Developer I to join our dynamic team. The successful candidate will handle repetitive technical tasks, such as Healthy Planet MS SQL file loads into a data warehouse, monitor Airflow DAGs, manage alerts, and rerun failed processes. Additionally, the role requires monitoring various daily and weekly jobs, which may include generating revenue-cycle reports and delivering data to external vendors. The ideal candidate will have robust experience with MS/Oracle SQL, Python, Epic health systems, and other relevant technologies.

Overview:
As a Senior Developer I on the NYU EDA team, you will play a vital role in improving the operation of our data load and management processes. Your primary responsibilities will be to ensure the accuracy and timeliness of data loads, maintain the health of data pipelines, and verify that all scheduled jobs complete successfully. You will collaborate with cross-functional teams to identify and resolve issues, improve processes, and maintain a high standard of data integrity.

Responsibilities:
- Manage and perform Healthy Planet file loads into a data warehouse.
- Monitor Airflow DAGs for successful completion, manage alerts, and rerun failed tasks as necessary (a rerun-automation sketch follows this listing).
- Monitor and oversee other daily and weekly jobs, including FGP cash reports and external reports.
- Collaborate with the data engineering team to streamline data processing workflows.
- Develop automation scripts in SQL and Python to reduce manual intervention in repetitive tasks.
- Ensure all data-related tasks are performed accurately and on time.
- Investigate and resolve data discrepancies and processing issues.
- Prepare and maintain documentation for processes and workflows.
- Conduct periodic data audits to ensure data integrity and compliance with defined standards.

Skillset Requirements:
- MS/Oracle SQL
- Python
- Data warehousing and ETL processes
- Monitoring tools such as Apache Airflow
- Data quality and integrity assurance
- Strong analytical and problem-solving abilities
- Excellent written and verbal communication

Additional Skillset:
- Familiarity with monitoring and managing Apache Airflow DAGs.

Experience:
- Minimum of 5 years' experience in a similar role, with a focus on data management and process automation.
- Proven track record of successfully managing complex data processes and meeting deadlines.

Education:
- Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.

Certifications:
- Certifications in Epic Cogito, MS/Oracle SQL, Python, or data management are a plus.
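As a hedged sketch of the monitor-and-rerun automation this listing describes, the snippet below assumes Airflow 2's stable REST API is enabled; the host, credentials, and DAG id are placeholders:

```python
# Hypothetical automation: find failed DAG runs and clear them for retry.
import requests

BASE = "http://airflow.example.com/api/v1"   # hypothetical Airflow webserver
AUTH = ("monitor_user", "***")               # placeholder credentials

# Find failed runs of a hypothetical load DAG.
failed = requests.get(
    f"{BASE}/dags/healthy_planet_load/dagRuns",
    params={"state": "failed"},
    auth=AUTH,
    timeout=30,
).json()["dag_runs"]

# Clear each failed run's task instances so the scheduler re-executes them.
for run in failed:
    requests.post(
        f"{BASE}/dags/healthy_planet_load/clearTaskInstances",
        json={"dag_run_id": run["dag_run_id"], "dry_run": False},
        auth=AUTH,
        timeout=30,
    ).raise_for_status()
```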

Posted 1 week ago

Apply

7.0 - 11.0 years

9 - 12 Lacs

Hyderabad

Work from Office

Data Engineer with 7+ years of hands-on experience in data engineering, specializing in MS SQL Server, T-SQL, and ETL processes. The role involves developing and managing robust data pipelines, designing complex queries, stored procedures, and functions, and implementing scalable ETL solutions using ADF and SSIS. The candidate will work on Azure Cloud DB, CI/CD processes, and version control systems like Git or TFS in a collaborative team environment. The ideal candidate must have excellent communication skills and the ability to work with stakeholders across all levels. This role is on-site in Hyderabad with overlap in India and US timings.

Posted 1 week ago

Apply

10.0 - 14.0 years

12 - 16 Lacs

Hyderabad

Work from Office

Proven expert at writing SQL code, with at least 10 years of experience. Must have 5+ years of experience working with large data, with transactions on the order of 5-10M records; 5+ years of experience modeling loosely coupled relational databases that can store terabytes or petabytes of data; and 3+ years of proven expertise working with large data warehouses. Expert at ETL transformations using SSIS.

Posted 1 week ago

Apply

5.0 - 7.0 years

15 - 18 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled GCP Data Engineer with experience in designing and developing data ingestion frameworks, real-time processing solutions, and data transformation frameworks using open-source tools. The role involves operationalizing open-source data-analytic tools for enterprise use, ensuring adherence to data governance policies, and performing root-cause analysis on data-related issues. The ideal candidate should have a strong understanding of cloud platforms, especially GCP, with hands-on expertise in tools such as Kafka, Apache Spark, Python, Hadoop, and Hive. Experience with data governance and DevOps practices, along with GCP certifications, is preferred.
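To illustrate the Kafka-plus-Spark real-time ingestion this listing names, here is a hedged Spark Structured Streaming sketch; the broker, topic, and checkpoint path are hypothetical:

```python
# Hypothetical real-time ingestion: Spark Structured Streaming over Kafka.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Kafka delivers bytes; cast the value and derive a simple per-minute count.
counts = (
    events.selectExpr("CAST(value AS STRING) AS body", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

query = (
    counts.writeStream.outputMode("complete")
          .format("console")                               # sink is illustrative
          .option("checkpointLocation", "/tmp/chk/events") # enables recovery
          .start()
)
query.awaitTermination()
```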

Posted 1 week ago

Apply

8.0 - 13.0 years

85 - 90 Lacs

Noida

Work from Office

About the Role:
We are looking for a Staff Engineer - Real-time Data Processing to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.

A Day in the Life:
- Architect, build, and maintain a large-scale real-time data processing platform.
- Collaborate with data scientists, product managers, and engineering teams to define system architecture and design.
- Optimize systems for scalability, reliability, and low-latency performance.
- Implement robust monitoring, alerting, and failover mechanisms to ensure high availability.
- Evaluate and integrate open-source and third-party streaming frameworks.
- Contribute to the overall engineering strategy and promote best practices for stream and event processing.
- Mentor junior engineers and lead technical initiatives.

What You Need:
- 8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms.
- Hands-on experience with stream processing frameworks like Apache Flink, Apache Kafka Streams, or Apache Spark Streaming (a consumer-loop sketch follows this listing).
- Proficiency in Java, Scala, Python, or Go for building high-performance services.
- Strong understanding of distributed systems, event-driven architecture, and microservices.
- Experience with Kafka, Pulsar, or other distributed messaging systems.
- Working knowledge of containerization tools like Docker and orchestration tools like Kubernetes.
- Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry.
- Experience with cloud-native architectures and services (AWS, GCP, or Azure).
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
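As a minimal illustration of the event-driven consumption underlying such platforms (a sketch, not this team's stack), here is an at-least-once Kafka consumer loop using the kafka-python package; the broker, topic, and group id are placeholders:

```python
# Hypothetical at-least-once Kafka consumer loop (kafka-python).
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "patient-events",                         # hypothetical topic
    bootstrap_servers="broker:9092",
    group_id="rt-processor",
    enable_auto_commit=False,                 # commit only after processing
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Stand-in for the actual low-latency processing step.
    print(f"partition={message.partition} offset={message.offset} event={event}")
    consumer.commit()  # commit after processing: at-least-once delivery
```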

Posted 1 week ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are hiring a Data Engineer to build and maintain data pipelines for our analytics platform. Perfect for engineers focused on data processing and scalability.

Key Responsibilities:
- Design and implement ETL processes
- Manage data warehouses and ensure data quality
- Collaborate with data scientists to provide necessary data
- Optimize data workflows for performance

Required Skills & Qualifications:
- Proficiency in SQL and Python
- Experience with data pipeline tools like Apache Airflow
- Familiarity with big data technologies (Spark, Hadoop)
- Bonus: knowledge of cloud data services (AWS Redshift, Google BigQuery)

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies

Posted 1 week ago

Apply

5.0 - 15.0 years

0 - 28 Lacs

Bengaluru

Work from Office

Key Skills: Python, PySpark, AWS Glue, Redshift, and Spark Streaming

Job Description:
- 6+ years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in PySpark for distributed data processing and transformation (a Glue job sketch follows this listing).
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.
- The requirement covers broad knowledge of Python, PySpark, Glue jobs, Lambda, Step Functions, and SQL.

Client expectations for the engagement:
1. Process these events and save data in trusted and refined bucket schemas.
2. Bring six tables of historical data into the raw bucket; populate historical data in trusted and refined bucket schemas.
3. Publish raw, trusted, and refined bucket data from #2 and #3 to the corresponding buckets in the CCB data lake; develop an analytics pipeline to publish data to Snowflake.
4. Integrate TDQ/BDQ in the Glue pipeline.
5. Develop observability dashboards for these jobs.
6. Implement reliability measures wherever needed to prevent data loss.
7. Configure data archival policies and periodic cleanup.
8. Perform end-to-end testing of the implementation.
9. Implement all of the above in production.
10. Reconcile data across SORs, the Auth Data Lake, and the CCB Data Lake.
11. Success criteria: all 50 Kafka events are ingested into the CCB data lake, and the existing 16 Tableau dashboards are populated using this data.
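As a hedged illustration of the raw-to-trusted Glue work described above, here is a small AWS Glue PySpark job sketch; the catalog database, table, and S3 paths are hypothetical placeholders:

```python
# Hypothetical AWS Glue job: read raw events, filter malformed records,
# and write a trusted Parquet layer to S3.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw events registered in the Glue Data Catalog.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)

# Trivial data-quality step: drop records lacking an event id.
trusted = raw.filter(lambda rec: rec["event_id"] is not None)

# Write the trusted layer back to S3 as partitioned Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=trusted,
    connection_type="s3",
    connection_options={"path": "s3://trusted-bucket/events/", "partitionKeys": ["dt"]},
    format="parquet",
)
job.commit()
```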

Posted 1 week ago

Apply