5.0 - 10.0 years
25 - 30 Lacs
Bengaluru
Remote
Senior Data Developer with Strong MS/Oracle SQL, Python Skills and Critical Thinking

Description: The EDA team seeks a dedicated and detail-oriented Senior Developer I to join our dynamic team. The successful candidate will handle recurring technical tasks such as Healthy Planet MS SQL file loads into a data warehouse, monitor Airflow DAGs, manage alerts, and rerun failed processes. The role also involves monitoring various daily and weekly jobs, which may include generation of revenue cycle reports and data delivery to external vendors. The ideal candidate will have robust experience with MS/Oracle SQL, Python, Epic Health Systems, and other relevant technologies.

Overview: As a Senior Developer I at Useready, you will play a vital role in improving our data load and management processes. Your primary responsibilities will be to ensure the accuracy and timeliness of data loads, maintain the health of data pipelines, and verify that all scheduled jobs complete successfully. You will collaborate with cross-functional teams to identify and resolve issues, improve processes, and maintain a high standard of data integrity.

Responsibilities:
- Manage and perform Healthy Planet file loads into a data warehouse.
- Monitor Airflow DAGs for successful completion, manage alerts, and rerun failed tasks as necessary (see the sketch below).
- Monitor and oversee other daily and weekly jobs, including FGP cash reports and external reports.
- Collaborate with the data engineering team to streamline data processing workflows.
- Develop automation scripts in SQL and Python to reduce manual intervention in repetitive tasks.
- Ensure all data-related tasks are performed accurately and on time.
- Investigate and resolve data discrepancies and processing issues.
- Prepare and maintain documentation for processes and workflows.
- Conduct periodic data audits to ensure data integrity and compliance with defined standards.

Skillset Requirements:
- MS/Oracle SQL
- Python
- Data warehousing and ETL processes
- Monitoring tools such as Apache Airflow
- Data quality and integrity assurance
- Strong analytical and problem-solving abilities
- Excellent written and verbal communication

Additional Skillset:
- Familiarity with monitoring and managing Apache Airflow DAGs.

Experience: Minimum of 5 years' experience in a similar role, with a focus on data management and process automation. Proven track record of successfully managing complex data processes and meeting deadlines.

Education: Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.

Certifications: Epic Cogito, MS/Oracle SQL, Python, or data management certifications are a plus.

Notice period: 0-30 days
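For illustration, a minimal sketch of the kind of Airflow DAG these monitoring-and-rerun duties refer to; the DAG id, load callable, and alert address are hypothetical stand-ins, not the team's actual pipeline:

```python
# Minimal Airflow DAG sketch: a daily file load with automatic retries and
# failure alerts. All names here (DAG id, callable, email) are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_healthy_planet_file():
    """Placeholder for the Healthy Planet file load into the warehouse."""
    pass


default_args = {
    "retries": 2,                          # rerun failed tasks automatically
    "retry_delay": timedelta(minutes=10),
    "email_on_failure": True,              # route alerts to the monitoring inbox
    "email": ["data-alerts@example.com"],  # hypothetical address
}

with DAG(
    dag_id="healthy_planet_file_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_file_to_warehouse",
        python_callable=load_healthy_planet_file,
    )
```

Runs that still fail after the automatic retries can then be re-triggered from the Airflow UI, or cleared for rerun with the airflow tasks clear CLI command.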
Posted 2 weeks ago
15.0 - 20.0 years
18 - 22 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Data Platform Architect
Project Role Description: Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with the team to develop and implement solutions that align with the organization's goals and objectives. You will utilize your expertise in Databricks Unified Data Analytics Platform to create efficient and effective applications that enhance business processes and drive innovation.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Collaborate with stakeholders to gather requirements and understand business needs.
- Design and develop applications using Databricks Unified Data Analytics Platform.
- Configure and customize applications to meet specific business process requirements.
- Perform code reviews and ensure adherence to coding standards.
- Provide technical guidance and mentorship to junior team members.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with Microsoft Azure Data Services, Microsoft Azure Analytics Services.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
1.0 - 3.0 years
20 - 25 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
[{"Salary":"12-24 Lakhs Per Annum" , "Remote_Job":false , "Posting_Title":"Data Engineer" , "Is_Locked":false , "City":"Mumbai" , "Industry":"Financial Services","Job_Description":" Build and maintain Sav\u2019s data infrastructure, pipelines, and storage systems for structured and unstructured data Ingest and normalize 350+ user-level data points across banking APIs, behavioral events, transaction data, and 3rd party services Architect a data mesh that allows distributed ownership but unified insight Set up and maintain modern ELT/ETL pipelines with performance, cost, and scale in mind Integrate LLM tooling (Ollama, Gemini, OpenAI) into data workflows for real-time inference and analysis Enable self-service data access and build visualization layers using tools like Looker, Metabase, or Superset Collaborate with data scientists and ML engineers to build model-ready datasets and ensure feature consistency Own data quality, governance, security, and documentation Continuously improve latency, observability, and cost-e\uFB03ciency of data systems Requirements 4\u20137 years of hands-on experience as a Data Engineer or in a similar data infrastructure role Strong programming skills in Python, SQL, and experience with dbt or similar tools Experience building data pipelines using modern stack: Airflow, Kafka, Spark, or equivalent Familiarity with cloud data platforms (GCP preferred), BigQuery, Firestore, or similar Experience handling large-scale transactional and behavioral data Bonus: Prior experience integrating LLMs/AI APIs (Ollama, OpenAI, Gemini) into analytics or automation workflows Good understanding of data warehousing, governance, and privacy-by-design principles Self-starter, comfortable in 0 \u2192 1 environments, and excited to work cross-functionally with tech and non-tech teams Clear communication and documentation skills Benefits Be part of a mission-driven fintech scaling across the GCC. Work alongside a passionate and visionary leadership team. A flat, fast-paced, no-drama culture with high ownership and high trust Work alongside a globally distributed team of experienced builders Compensation: INR 12-24LPA + ESOPs + Employee benefits ","Work_Experience":"5+ years","Job_Type":"Full time","Job_Opening_Name":"Data Engineer" , "State":"Maharashtra" , "Country":"India" , "Zip_Code":"400058" , "id":"781666000000580021" , "Publish":true , "Date_Opened":"2025-06-11" , "Keep_on_Career_Site":false}]
Posted 2 weeks ago
3.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Work from Office
As a Software Engineer III at JPMorgan Chase within the Consumer and Community Banking - JP Wealth Management team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities:
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Recommends improvements and modifications to new and existing data and ETL pipelines
- Creates data analytics for data scientists to innovate, build, and optimize our ecosystem
- Assembles large, complex data sets that meet functional and non-functional business requirements
- Ingests data into our data lake and provides frameworks and services for operating on that data, including the use of Spark (see the sketch below)

Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 3+ years applied experience
- Hands-on practical experience in system design, application development, testing, and operational stability
- Proficiency in coding in one or more languages
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Overall knowledge of the Software Development Life Cycle
- Ability to analyze, debug, and correct issues with data pipelines
- Ability to operate on or build solutions required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, AWS, and Spark technologies
- Collaboration with product teams to resolve data quality issues promptly

Preferred qualifications, capabilities, and skills:
- Experience implementing data pipelines, preferably with Hadoop or Spark, in Java or Python
- Exceptional coding and design skills in Java or Python and PySpark
- Experience with API calls for data extraction
- Good knowledge of SQL and analytical queries
- Experience with Java, PySpark, and AWS implementations
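As a rough illustration of the Spark-based lake ingestion this role describes — a minimal PySpark sketch assuming a hypothetical S3 bucket, file layout, and column names, not JPMorgan's actual pipeline:

```python
# Sketch: read a raw extract, normalize it, and land it in the lake as
# partitioned Parquet. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("wealth_data_ingest").getOrCreate()

# Read a raw CSV extract from a hypothetical landing path.
raw = spark.read.option("header", True).csv("s3://example-bucket/landing/positions/")

# Normalize column names and drop rows missing the key column.
clean = (
    raw.toDF(*[c.strip().lower().replace(" ", "_") for c in raw.columns])
       .filter(F.col("account_id").isNotNull())
)

# Land the cleansed data as date-partitioned Parquet for downstream analytics.
clean.write.mode("overwrite").partitionBy("as_of_date").parquet(
    "s3://example-bucket/lake/positions/"
)
```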
Posted 2 weeks ago
3.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Work from Office
As a Software Engineer III at JPMorgan Chase within the Risk Central development team, you will have the opportunity to work at the intersection of critical front office and product control facing data analytics and big data technologies. You will be part of a high-energy team, helping us build our next-generation business analytics platform by landing financial data from various front office and product control platforms. This role offers a chance to be the primary subject matter expert for data in our analytics platform, develop data quality rules and controls, and analyze and solve query performance bottlenecks in cloud-based warehouses like Redshift and AWS Glue.

Job Responsibilities:
- Land data from various firm sources into the big data warehouse
- Investigate data issues and provide support on them
- Develop automation for data extraction
- Design and tune schemas for data landed on the platform
- Partner with information modelling teams on firmwide logical data models
- Act as the primary subject matter expert (SME) for data in the analytics platform
- Develop data quality rules and controls for data (see the sketch below)
- Analyze and solve query performance bottlenecks in cloud-based warehouses like Redshift and AWS Glue

Required qualifications, capabilities and skills:
- Formal training or certification on Computer Science concepts and 3+ years applied experience
- Strong hands-on coding in Python, Java, Apache Spark, and SQL
- Strong CS fundamentals (data structures, algorithms) with a good understanding of big data
- Experience with AWS application development, including services such as Lambda, Glue, and ECS/EKS
- Excellent communication skills are a must for this position
- Experience with Unix/Linux and shell scripting

Preferred qualifications, capabilities and skills:
- Good understanding of data modelling challenges with big data
- Good understanding of financial data, especially front office investment banking, is a major plus
- Ability to code in Apache Spark using Scala is an added advantage
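To make "data quality rules and controls" concrete, a minimal PySpark sketch; the dataset path, columns, and rules are hypothetical stand-ins for whatever controls the platform actually runs:

```python
# Sketch of simple data quality controls over landed data.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("risk_dq_checks").getOrCreate()
trades = spark.read.parquet("s3://example-bucket/lake/trades/")  # hypothetical path

# Each rule maps a name to a predicate that flags violating rows.
rules = {
    "trade_id_not_null": F.col("trade_id").isNull(),
    "notional_non_negative": F.col("notional") < 0,
    "currency_known": ~F.col("currency").isin("USD", "EUR", "GBP"),
}

failures = {name: trades.filter(cond).count() for name, cond in rules.items()}
for name, count in failures.items():
    print(f"{name}: {count} violating rows")

# A production control would alert on, or fail the load for, any non-zero count.
if any(failures.values()):
    raise ValueError(f"Data quality checks failed: {failures}")
```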
Posted 2 weeks ago
10.0 - 15.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Job Title: Senior AWS Data Engineer

About the Role: We are seeking an experienced and highly skilled Senior AWS Data Engineer with over 10 years of experience to join our dynamic team. The ideal candidate will have a deep understanding of data engineering principles, extensive experience with AWS services, and a proven track record of designing and implementing scalable data solutions.

Key Responsibilities:
- Design and implement robust, scalable, and efficient data pipelines and architectures on AWS.
- Develop data models and schemas to support business intelligence and analytics requirements.
- Utilize AWS services such as S3, Redshift, EMR, Glue, Lambda, and Kinesis to build and optimize data solutions.
- Implement data security and compliance measures using AWS IAM, KMS, and other security services.
- Design and develop ETL processes to ingest, transform, and load data from various sources into data warehouses and lakes.
- Ensure data quality and integrity through validation, cleansing, and transformation processes.
- Optimize data storage and retrieval performance through indexing, partitioning, and other techniques (see the sketch below).
- Monitor and troubleshoot data pipelines to ensure high availability and reliability.
- Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions.
- Provide technical leadership and mentorship to junior data engineers and team members.
- Identify opportunities to automate and streamline data processes for increased efficiency.
- Participate in on-call rotations to provide 24/7 support for critical systems and services.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Over 10 years of experience in data engineering, with a strong focus on AWS services.
- Proficiency in programming languages such as Python, Java, or Scala.
- Extensive experience with SQL and NoSQL databases.
- Strong understanding of data warehousing concepts and technologies.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
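As one concrete example of the partitioning technique mentioned under storage optimization — a minimal pandas/pyarrow sketch with hypothetical paths and columns:

```python
# Sketch: write Parquet partitioned by date columns so queries that filter
# on year/month scan only the matching files. Requires pandas + pyarrow.
import pandas as pd

events = pd.DataFrame(
    {
        "event_id": [1, 2, 3],
        "amount": [10.0, 20.5, 7.25],
        "year": [2024, 2024, 2025],
        "month": [11, 12, 1],
    }
)

# Hive-style layout on disk: warehouse/events/year=2024/month=11/...
events.to_parquet("warehouse/events", partition_cols=["year", "month"])
```

Query engines such as Athena, Redshift Spectrum, or Spark can then prune partitions on year/month filters instead of scanning the full dataset.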
Posted 2 weeks ago
5.0 - 9.0 years
3 - 7 Lacs
Bengaluru
Work from Office
We are looking for a meticulous and experienced BI Reporting QA professional to lead the team and ensure the accuracy and reliability of our Business Intelligence (BI) reports and data visualizations. The BI Reporting QA plays a critical role in validating and verifying the quality of reports and dashboards, ensuring they provide dependable insights for decision-making.

Responsibilities:
- Design and develop the QA strategy for BI by identifying all relevant tests and validations.
- Develop and execute comprehensive test plans and test cases for BI reports, dashboards, and data visualizations to identify defects and discrepancies.
- Verify data accuracy by comparing data sources to report outputs, identifying inconsistencies, anomalies, and data quality issues (see the sketch below).
- Evaluate the performance and responsiveness of reports and dashboards, ensuring they load efficiently and meet performance expectations.
- Perform regression testing to confirm that report modifications or updates do not introduce new issues or affect existing functionality.
- Collaborate with end-users and stakeholders to conduct UAT and ensure that reports meet business requirements and user expectations.
- Document and track defects and issues, working closely with BI developers to ensure timely resolution.
- Create and maintain documentation of test cases, results, and validation procedures for reference and reporting.
- Ensure that BI reports adhere to data governance principles, including data accuracy, data quality, and data security.
- Manage and maintain test environments, including data sets and configurations, to support effective testing.

Required Skills:
- Proven experience in Power BI reporting quality assurance
- Proficiency in designing and writing SQL statements for data querying and QA validation
- Travel industry experience is essential
- Strong understanding of BI reporting tools and platforms
- Proficiency in data validation, data comparison, and data quality assessment
- Expertise in implementing automation in QA processes

Preferences:
- Relevant BI reporting tool certifications (Microsoft Certified: Power BI)
- Relevant quality assurance certifications (e.g., ISTQB Certified Tester)

Qualifications: Graduate
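To illustrate the source-versus-report validation described above, a minimal sketch; sqlite3 stands in for the real warehouse connection, and the table, filter, and report figure are hypothetical:

```python
# Sketch of a source-vs-report reconciliation check used in BI QA.
import sqlite3

conn = sqlite3.connect("warehouse.db")  # hypothetical database
source_total = conn.execute(
    "SELECT SUM(amount) FROM bookings WHERE booking_date >= '2024-01-01'"
).fetchone()[0]

report_total = 1_234_567.89  # figure read from the published Power BI report

# Flag any discrepancy beyond a small rounding tolerance.
tolerance = 0.01
if abs(source_total - report_total) > tolerance:
    raise AssertionError(
        f"Report/source mismatch: report={report_total}, source={source_total}"
    )
print("Report total reconciles with the source data.")
```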
Posted 2 weeks ago
3.0 - 4.0 years
5 - 6 Lacs
Bengaluru
Work from Office
About the role:
- Support SFRM management in adhering to various governance standards (e.g. EUA guidelines) and managing operational resilience.
- Oversee the various aspects of market data licensing, facilitate cross-departmental discussions, and manage relationships with various stakeholders.
- Identify and initiate projects and processes that aid in improving the management of audits and data reviews.
- Facilitate various SFRM-wide initiatives (e.g. process documentation) and track them to closure.
- Manage effective communication channels within the team to facilitate easier audits, control reviews, etc.

About the team: The SFRM Governance and Data Management team within Solvency and Financial Risk Management (SFRM) oversees and reports on various governance topics. Adherence of departmental activities to data quality standards, adherence to End User Application guidelines, and managing market data licensing agreements are a few examples. Additionally, the team manages master data in the core IT systems to facilitate seamless reporting across multiple dimensions. We rely on strong communication skills and IT capabilities for an efficient implementation of our mandate.

About you:
- Good academic track record in Finance or an MBA.
- 3-4 years of experience in the financial services or insurance sector with a good track record of management reporting.
- Excellent communication skills and strong management presence.
- Ability to effectively collaborate with a broad range of people and roles in a diverse cultural environment.
- Strong project management skills with a consulting, can-do mindset.
- Attention to detail and a drive to understand concepts from start to finish.

About Swiss Re: Swiss Re is one of the world's leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience.

Reference Code: 134590
Posted 2 weeks ago
3.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: GCP Dataflow
Good to have skills: Google BigQuery
Minimum 3 year(s) of experience is required
Educational Qualification: Any Graduate

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and optimize data workflows, ensuring that the data infrastructure supports the organization's analytical needs and business objectives.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and maintain robust data pipelines to support data processing and analytics (see the sketch below).
- Monitor and troubleshoot data workflows to ensure optimal performance and reliability.

Professional & Technical Skills:
- Must Have Skills: Proficiency in GCP Dataflow.
- Good To Have Skills: Experience with Google BigQuery.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud computing concepts and services.

Additional Information:
- The candidate should have a minimum of 3 years of experience in GCP Dataflow.
- This position is based at our Bengaluru office.
- Any Graduate qualification is required.

Qualification: Any Graduate
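For a flavor of the pipelines this role involves, a minimal Apache Beam sketch (Beam is the SDK behind GCP Dataflow); the file names and record layout are hypothetical, and the runner option is what you would swap to DataflowRunner on GCP:

```python
# Sketch: read CSV lines, keep valid records, and write the cleaned output.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse(line: str) -> dict:
    user_id, amount = line.split(",")
    return {"user_id": user_id, "amount": float(amount)}


options = PipelineOptions(runner="DirectRunner")  # DataflowRunner on GCP

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("events.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse)
        | "KeepPositive" >> beam.Filter(lambda r: r["amount"] > 0)
        | "Format" >> beam.Map(lambda r: f"{r['user_id']},{r['amount']}")
        | "Write" >> beam.io.WriteToText("clean_events")
    )
```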
Posted 2 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Informatica Data Quality
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly to support business operations. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking ways to improve processes and deliver high-quality solutions.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in code reviews to ensure adherence to best practices and standards.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Informatica Data Quality.
- Strong understanding of data profiling and cleansing techniques (see the sketch below).
- Experience with data integration and transformation processes.
- Familiarity with ETL processes and data warehousing concepts.
- Ability to troubleshoot and resolve data quality issues efficiently.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Informatica Data Quality.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
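For illustration only, profiling and cleansing steps conceptually similar to what an Informatica Data Quality workflow does, sketched here in pandas rather than Informatica's own tooling; the input file and columns are hypothetical:

```python
# Sketch: profile a dataset, then apply simple cleansing rules.
import pandas as pd

customers = pd.read_csv("customers.csv")  # hypothetical input

# Profile: null counts and distinct values per column, like a DQ scorecard.
profile = pd.DataFrame(
    {"nulls": customers.isna().sum(), "distinct": customers.nunique()}
)
print(profile)

# Cleanse: trim whitespace, standardize case, drop duplicate keys.
customers["email"] = customers["email"].str.strip().str.lower()
customers = customers.drop_duplicates(subset=["customer_id"])
```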
Posted 2 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Chennai
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will collaborate with teams, make team decisions, and provide solutions to problems. Your typical day will involve designing and developing applications, collaborating with multiple teams, and contributing to key decisions.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Design and develop applications
- Collaborate with cross-functional teams
- Contribute to key decisions

Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform
- This position is based at our Bengaluru office
- A 15 years full time education is required

Qualification: 15 years full time education
Posted 2 weeks ago
3.0 - 8.0 years
4 - 8 Lacs
Chennai
Work from Office
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Snowflake Data Warehouse, Core Banking
Good to have skills: AWS BigData
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the optimization of data processing workflows to enhance efficiency.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Snowflake Data Warehouse, Core Banking.
- Good To Have Skills: Experience with AWS BigData.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with data governance and data quality frameworks.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Snowflake Data Warehouse.
- This position is based at our Chennai office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
2.0 - 7.0 years
10 - 14 Lacs
Pune
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Lead the design, development, and implementation of applications.
- Collaborate with cross-functional teams to gather and analyze requirements.
- Ensure the applications meet quality standards and are delivered on time.
- Provide technical guidance and mentorship to junior team members.
- Stay updated with the latest industry trends and technologies.
- Identify and resolve any issues or bottlenecks in the application development process.

Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity (see the sketch below).

Additional Information:
- The candidate should have a minimum of 2 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.

Qualification: 15 years full time education
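As a small illustration of the data munging skills listed above — a minimal PySpark sketch with a hypothetical dataset and columns:

```python
# Sketch: deduplicate, cast, fill, and min-max normalize a numeric column.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("munging_demo").getOrCreate()
df = spark.read.parquet("lake/raw_orders/")  # hypothetical dataset

clean = (
    df.dropDuplicates(["order_id"])
      .withColumn("amount", F.col("amount").cast("double"))
      .fillna({"channel": "unknown"})
)

# Min-max normalization of the amount column.
stats = clean.agg(F.min("amount").alias("lo"), F.max("amount").alias("hi")).first()
normalized = clean.withColumn(
    "amount_norm", (F.col("amount") - stats.lo) / (stats.hi - stats.lo)
)
normalized.show(5)
```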
Posted 2 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Talend ETL
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems, and be involved in the end-to-end data management process.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the design and implementation of data solutions.
- Optimize and troubleshoot ETL processes.
- Conduct data analysis and provide insights for decision-making.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Talend ETL.
- Strong understanding of data modeling and database design.
- Experience with data integration and data warehousing concepts.
- Hands-on experience with SQL and scripting languages.
- Knowledge of cloud platforms and big data technologies.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Talend ETL.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
15.0 - 20.0 years
4 - 8 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Snowflake Data Warehouse
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to optimize performance.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Snowflake Data Warehouse.
- Good To Have Skills: Experience with data modeling and database design.
- Strong understanding of ETL processes and data integration techniques (see the sketch below).
- Familiarity with cloud data warehousing solutions.
- Experience in performance tuning and optimization of data queries.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
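As a rough sketch of a Snowflake bulk load using the snowflake-connector-python package; the account, credentials, and table names are all hypothetical:

```python
# Sketch: stage a file, bulk-load it, then run a simple ELT transform.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",       # hypothetical account
    user="ETL_USER",
    password="...",          # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()

# Upload the file to the table's internal stage, then bulk-load it.
cur.execute("PUT file://daily_sales.csv @%DAILY_SALES_RAW")
cur.execute("COPY INTO DAILY_SALES_RAW FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")

# Transform into a reporting table.
cur.execute(
    """
    INSERT INTO DAILY_SALES_BY_REGION
    SELECT region, SUM(amount) FROM DAILY_SALES_RAW GROUP BY region
    """
)
conn.close()
```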
Posted 2 weeks ago
8.0 - 10.0 years
20 - 27 Lacs
Gurugram
Work from Office
Job Description

Job Summary: We are seeking a highly skilled Global Data Engineer at the Senior Associate level to support our domestic Digital Finance Transformation team. The ideal candidate will possess advanced expertise in SQL, Alteryx, Power BI, and Python, along with strong experience in current finance use cases and leading practices related to artificial intelligence (AI) and machine learning (ML). This role will focus on building robust data pipelines, developing analytics and reporting solutions, and integrating AI/ML models to drive business insights and process automation.

Key Responsibilities:
- Collaborate closely with the stateside Digital/Finance Transformation team to understand business requirements and deliver data-driven solutions.
- Design, develop, and maintain ETL/ELT pipelines using SQL and Alteryx for data ingestion, transformation, and integration from multiple sources.
- Cleanse, structure, and transform large data sets from various sources (e.g., ERPs, APIs, cloud data lakes).
- Build and optimize data models and data warehouses to support analytics, reporting, and machine learning workflows.
- Develop, publish, and maintain interactive dashboards and reports in Power BI for finance and business stakeholders.
- Integrate machine learning models into data pipelines for real-time or batch inference.
- Embed data quality, integrity, and security concepts across all data assets and processes.
- Automate data processes and workflows to improve efficiency and scalability.
- Document data engineering processes, data flows, and AI/ML model integration.
- Translate business requirements into technical solutions and deliverables.
- Follow best practices in data security, data governance, privacy, and compliance.

Qualifications, Knowledge, Skills and Abilities

Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field (preferred).

Experience & Skills:
- Advanced proficiency in SQL for data extraction, transformation, and analysis.
- Hands-on experience with Alteryx for ETL/ELT workflows and data automation.
- Strong expertise in Power BI for data visualization, dashboard development, and report publishing.
- Strong expertise in Python, including data integration and analysis (preferred).
- Solid understanding of AI and machine learning concepts and leading practices.
- Experience with machine learning frameworks, data modeling, warehousing, and relational databases.
- Excellent problem-solving, analytical, and communication skills; comfortable working across time zones in a collaborative, fast-paced virtual environment.
Posted 2 weeks ago
2.0 - 9.0 years
15 - 17 Lacs
Mumbai
Work from Office
Job Description

Are You Ready to Make It Happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It With Pride.

You will provide technical contributions to the data science process. In this role, you are the internally recognized expert in data, building infrastructure and data pipelines/retrieval mechanisms to support our data needs.

How you will contribute
You will:
- Operationalize and automate activities for efficiency and timely production of data visuals
- Assist in providing accessibility, retrievability, security, and protection of data in an ethical manner
- Search for ways to get new data sources and assess their accuracy
- Build and maintain the transports/data pipelines and retrieve applicable data sets for specific use cases
- Understand data and metadata to support consistency of information retrieval, combination, analysis, pattern recognition, and interpretation
- Validate information from multiple sources and assess issues that might prevent the organization from making maximum use of its information assets

What you will bring
A desire to drive your future and accelerate your career, and the following experience and knowledge:
- Extensive experience in data engineering in a large, complex business with multiple systems (such as SAP, internal and external data, etc.) and experience setting up, testing, and maintaining new systems
- Experience with a wide variety of languages and tools (e.g. script languages) to retrieve, merge, and combine data
- Ability to simplify complex problems and communicate to a broad audience

In This Role
As a Senior Data Engineer, you will have the opportunity to design and build scalable, secure, and cost-effective cloud-based data solutions. You will develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes, ensuring data quality and validation processes to maintain data accuracy and integrity. You will ensure efficient data storage and retrieval for optimal performance, and collaborate closely with data teams, product owners, and other stakeholders to stay updated with the latest cloud technologies and best practices.

Role & Responsibilities:
- Design and Build: Develop and implement scalable, secure, and cost-effective cloud-based data solutions.
- Manage Data Pipelines: Develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes.
- Ensure Data Quality: Implement data quality and validation processes to ensure data accuracy and integrity.
- Optimize Data Storage: Ensure efficient data storage and retrieval for optimal performance.
- Collaborate and Innovate: Work closely with data teams and product owners, and stay updated with the latest cloud technologies and best practices.

Technical Requirements:
- Programming: Python, PySpark, Go/Java
- Database: SQL, PL/SQL
- ETL & Integration: DBT, Databricks + DLT, AecorSoft, Talend, Informatica/Pentaho/Ab-Initio, Fivetran
- Data Warehousing: SCD, Schema Types, Data Mart
- Visualization: Databricks Notebook, PowerBI (optional), Tableau (optional), Looker
- GCP Cloud Services: BigQuery, GCS, Cloud Function, PubSub, Dataflow, DataProc, Dataplex
- AWS Cloud Services: S3, Redshift, Lambda, Glue, CloudWatch, EMR, SNS, Kinesis
- Azure Cloud Services: Azure Datalake Gen2, Azure Databricks, Azure Synapse Analytics, Azure Data Factory, Azure Stream Analytics
- Supporting Technologies: Graph Database/Neo4j, Erwin, Collibra, Ataccama DQ, Kafka, Airflow

Soft Skills:
- Problem-Solving: The ability to identify and solve complex data-related challenges.
- Communication: Effective communication skills to collaborate with Product Owners, analysts, and stakeholders.
- Analytical Thinking: The capacity to analyze data and draw meaningful insights.
- Attention to Detail: Meticulousness in data preparation and pipeline development.
- Adaptability: The ability to stay updated with emerging technologies and trends in the data engineering field.

Within Country Relocation support is available, and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy.

Job Type: Regular
Data Science Analytics & Data Science
Posted 2 weeks ago
5.0 - 8.0 years
5 - 8 Lacs
Chennai, Bengaluru
Work from Office
Job Summary
This role will design, develop, and implement machine learning models and algorithms to solve complex problems. You will work closely with data scientists, software engineers, and product teams to enhance services through innovative AI/ML solutions. Your role will involve building scalable ML pipelines, ensuring data quality, and deploying models into production environments to drive business insights and improve customer experiences.

Job Description

Essential Responsibilities:
- Develop and optimize machine learning models for various applications.
- Preprocess and analyze large datasets to extract meaningful insights.
- Deploy ML solutions into production environments using appropriate tools and frameworks.
- Collaborate with cross-functional teams to integrate ML models into products and services.
- Monitor and evaluate the performance of deployed models.

Minimum Qualifications:
- Minimum of 5 years of relevant work experience and a Bachelor's degree or equivalent experience.
- Experience with ML frameworks like TensorFlow, PyTorch, or scikit-learn.
- Familiarity with cloud platforms (AWS, Azure, GCP) and tools for data processing and model deployment.
- Several years of experience in designing, implementing, and deploying machine learning models.

Preferred Qualifications:
- MSc or equivalent experience in a quantitative field (Computer Science, Mathematics, Engineering, Artificial Intelligence, etc.), or a bachelor's degree in engineering, science, statistics, or mathematics with a strong technical background in machine learning.
- Hands-on experience with Python or Java, along with relevant technologies such as Spark, Hadoop, BigQuery, and SQL.
- A comprehensive understanding of machine learning algorithms and explainable AI techniques, plus expertise in at least one of the following specialized areas: Computer Vision, Graph Mining, Natural Language Processing (NLP), or Generative AI (GenAI).
- Experience with cloud frameworks such as GCP and AWS is preferred.
- Experience developing machine learning models at scale, from inception to business impact.
- Experience designing ML pipelines, including model versioning, model deployment, model testing, and monitoring.
- Experience mentoring and supporting junior data scientists or engineers.
- Experience working in a multi-cultural and multi-location organization is an advantage.
- Team player; responsible, delivery-oriented, detail-oriented, with outstanding communication skills.

Subsidiary: PayPal
Travel Percent: 20

For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.

Our Benefits
We have great benefits including a flexible work environment, employee share options, health and life insurance and more. To learn more about our benefits please visit https://www.paypalbenefits.com.

Who We Are
Click Here to learn more about our culture and community.

Commitment to Diversity and Inclusion
PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at talentaccommodations@paypal.com.

Belonging at PayPal
Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal.

For general consideration of your skills, please join our Talent Community. We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply.
Posted 2 weeks ago
1.0 - 4.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Job Summary
This role will design, develop, and implement machine learning models and algorithms to solve complex problems. You will work closely with data scientists, software engineers, and product teams to enhance services through innovative AI/ML solutions. Your role will involve building scalable ML pipelines, ensuring data quality, and deploying models into production environments to drive business insights and improve customer experiences (a training-and-deployment sketch follows this listing).

Job Description

Essential Responsibilities:
- Develop and optimize machine learning models for various applications.
- Preprocess and analyze large datasets to extract meaningful insights.
- Deploy ML solutions into production environments using appropriate tools and frameworks.
- Collaborate with cross-functional teams to integrate ML models into products and services.
- Monitor and evaluate the performance of deployed models.

Minimum Qualifications:
- Minimum of 5 years of relevant work experience and a Bachelor's degree or equivalent experience.
- Experience with ML frameworks like TensorFlow, PyTorch, or scikit-learn.
- Familiarity with cloud platforms (AWS, Azure, GCP) and tools for data processing and model deployment.
- Several years of experience in designing, implementing, and deploying machine learning models.

Preferred Qualifications:
- Experience in risk management (Sanctions, AML, Fraud, Credit, Compliance, CIP/KYC, etc.)
- Experience in the payments industry

Subsidiary: PayPal
Travel Percent: 0

For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.

Our Benefits
We have great benefits including a flexible work environment, employee share options, health and life insurance and more. To learn more about our benefits please visit https://www.paypalbenefits.com.

Who We Are
Click Here to learn more about our culture and community.

Commitment to Diversity and Inclusion
PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at talentaccommodations@paypal.com.

Belonging at PayPal
Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal.

For general consideration of your skills, please join our Talent Community. We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply.
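As a sketch of the model lifecycle this listing describes (train, evaluate, persist for deployment), using scikit-learn on synthetic data; nothing here reflects PayPal's actual models or features:

```python
# Sketch: train a classifier, report a hold-out metric, and persist the
# artifact a serving layer would load for batch or real-time inference.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real risk/fraud model would use real features.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")

# Persist the model artifact; monitoring would then track the deployed
# model's outputs for drift and performance degradation.
joblib.dump(model, "risk_model.joblib")
```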
Posted 2 weeks ago
1.0 - 4.0 years
5 - 8 Lacs
Chennai, Bengaluru
Work from Office
Job Summary This job will design, develop, and implement machine learning models and algorithms to solve complex problems. You will work closely with data scientists, software engineers, and product teams to enhance services through innovative AI/ML solutions. Your role will involve building scalable ML pipelines, ensuring data quality, and deploying models into production environments to drive business insights and improve customer experiences. Job Description Essential Responsibilities Develop and optimize machine learning models for various applications. Preprocess and analyze large datasets to extract meaningful insights. Deploy ML solutions into production environments using appropriate tools and frameworks. Collaborate with cross-functional teams to integrate ML models into products and services. Monitor and evaluate the performance of deployed models. Minimum Qualifications Minimum of 5 years of relevant work experience and a Bachelors degree or equivalent experience. Experience with ML frameworks like TensorFlow, PyTorch, or scikit-learn. Familiarity with cloud platforms (AWS, Azure, GCP) and tools for data processing and model deployment. Several years of experience in designing, implementing, and deploying machine learning models. Preferred Qualification MSc or equivalent experience in a quantitative field (Computer Science, Mathematics, Engineering, Artificial Intelligence, etc.) or a bachelors degree in engineering, science, statistics or mathematics with a strong technical background in machine learning. Hands-on experience with Python or Java, along with relevant technologies such as Spark, Hadoop, Big-Query, SQL, is required. Candidates must possess a comprehensive understanding of machine learning algorithms and explainable AI techniques. Additionally, expertise in at least one of the following specialized areas is required Computer Vision, Graph Mining, Natural Language Processing (NLP), or Generative AI (GenAI). Experience with Cloud frameworks such as GCP, AWS is preferred. Experience with developing machine learning models at scale from inception to business impact Experience in designing ML pipelines, including model versioning, model deployment, model testing, and monitoring Experience in mentoring and supporting junior data scientists or engineers. Experience working in a multi-cultural and multi-location organization - an advantage. Team player, responsible, delivery-oriented, details-oriented, outstanding communication skills. Subsidiary PayPal Travel Percent 0 For the majority of employees, PayPals balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations. Our Benefits We have great benefits including a flexible work environment, employee shares options, health and life insurance and more. To learn more about our benefits please visit https//www.paypalbenefits.com . Who We Are Click Here to learn more about our culture and community. 
Posted 2 weeks ago
2.0 - 4.0 years
3 - 7 Lacs
Chennai
Work from Office
Lennox (NYSE: LII)
Driven by a 130-year legacy of HVAC and refrigeration success, Lennox provides our residential and commercial customers with industry-leading climate-control solutions. At Lennox, we win as a team, aiming for excellence and delivering innovative, sustainable products and services. Our culture guides us and creates a workplace where all employees feel heard and welcomed. Lennox is a global community that values each team member's contributions and offers a supportive environment for career development. Come, stay, and grow with us.
Job Description
In this role, you will be responsible for performing day-to-day operations while maintaining SLAs and solving queries related to the Customer, Vendor, or Material domains. This role reports to the manager of MDA. You will work with a cross-functional team and will maintain, analyze, and execute requests with the utmost quality and an eye for detail, drawing on solid experience in SAP MDM for customer, vendor, or material master data. You should be open to working in any shift as per business requirements. This role works closely with the Vendor Maintenance team in LITC as well as with team members in Richardson, TX. Ideally, this role is best filled by an independent thinker who can look at the big picture and ask questions for clarification or make suggestions on how to improve the processes.
Responsibilities:
Follow defined standard operating processes and procedures for all deliverables.
Take ownership of performance against the Vendor Master data processes and key categories; be accountable for quality and accuracy.
Plan and prioritize the workload to ensure that all assignments are delivered accurately within specified timelines.
Ensure timely and accurate updates of all Vendor master data records, delivering all tasks on time, every time.
Maintain the SLA for each period and the delivery performance; ensure that all reports are maintained and cleared on time.
Maintain Vendor master data in SAP as per business input and SOPs.
Apply a basic understanding of the SAP MM module (P2P cycle), commercial structures, and Category/Product hierarchies.
Participate in idea generation and improvement activities.
Ensure all Data Quality KPIs (Key Performance Indicators) are met.
Identify areas for data quality and process improvements, and help resolve data quality problems through the appropriate choice of error detection and correction, process control and improvement, or process design strategies (a minimal example of such a check is sketched after this posting).
Qualifications
Any graduate / bachelor's degree.
2 to 4 years of experience in master data management.
Analytical skills and attention to detail.
Excellent communication (verbal and written).
Understanding of SAP ERP or relevant experience.
Intermediate knowledge of MS Office tools; working knowledge of SharePoint and Teams.
Passion for quality data; asks the question "why".
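For illustration, the error-detection work described in this posting might look like the following minimal sketch of automated vendor-master checks. The column names and rules are hypothetical, not actual SAP master-data fields:

```python
# Minimal sketch of vendor-master data-quality checks (illustrative only;
# column names are hypothetical, not actual SAP field names).
import pandas as pd

vendors = pd.DataFrame({
    "vendor_id": ["V001", "V002", "V002", "V003"],
    "name":      ["Acme Corp", "Globex", "Globex", None],
    "country":   ["US", "DE", "DE", "IN"],
    "tax_id":    ["12-345", None, None, "AAACL1234F"],
})

# Each rule flags records that would be routed to an analyst for correction.
issues = {
    "duplicate_vendor_id": vendors[vendors.duplicated("vendor_id", keep=False)],
    "missing_name":        vendors[vendors["name"].isna()],
    "missing_tax_id":      vendors[vendors["tax_id"].isna()],
}

for rule, rows in issues.items():
    print(f"{rule}: {len(rows)} record(s)")
```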
Posted 2 weeks ago
4.0 - 10.0 years
13 - 14 Lacs
Chennai, Bengaluru, Thiruvananthapuram
Work from Office
Senior Clinical Data Science Programmer - Office based - Bangalore/Chennai/Trivandrum
We are currently seeking a Senior Clinical Data Science Programmer to join our diverse and dynamic team. As a Senior Clinical Data Science Programmer at ICON, you will play a pivotal role in developing and implementing programming solutions to support the analysis and reporting of clinical trial data. You will contribute to the advancement of innovative treatments and therapies by ensuring the accuracy and efficiency of data processing through your programming expertise.
What You Will Be Doing:
Developing, validating, and maintaining programming solutions for data analysis and reporting in clinical trials.
Collaborating with clinical data scientists and biostatisticians to ensure the integration of programming solutions into the overall data management process.
Overseeing the generation of statistical datasets, tables, listings, and figures to support regulatory submissions and study reports (see the sketch after this posting).
Providing guidance on programming best practices, coding standards, and data quality control measures.
Staying updated on advancements in programming languages and data management tools to enhance operational efficiencies.
Your Profile:
Advanced degree in a relevant field such as computer science, statistics, or life sciences.
Extensive experience in programming for clinical trials, with proficiency in languages such as Rave+ Custom Function, SAS, R, or Python.
Strong problem-solving skills and the ability to work collaboratively in a fast-paced, cross-functional environment.
Excellent attention to detail and organizational skills, with a commitment to delivering high-quality results.
Strong communication and interpersonal skills, with the ability to effectively collaborate with diverse teams and influence outcomes.
Visit our careers site to read more about the benefits ICON offers.
At ICON, inclusion and belonging are fundamental to our culture and values. We're dedicated to providing an inclusive and accessible environment for all candidates. ICON is committed to providing a workplace free of discrimination and harassment. All qualified applicants will receive equal consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.
If, because of a medical condition or disability, you need a reasonable accommodation for any part of the application process, or in order to perform the essential functions of a position, please let us know or submit a request here.
Interested in the role, but unsure if you meet all of the requirements? We would encourage you to apply regardless - there's every chance you're exactly what we're looking for here at ICON, whether it is for this or other roles.
Are you a current ICON employee? Please click here to apply.
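For illustration, generating a statistical table typically means summarizing an analysis dataset by treatment group. A minimal sketch, assuming CDISC ADaM-style variable names (USUBJID, ARM, AGE) purely for the example; the data and grouping are not ICON's actual process:

```python
# Minimal sketch: a demographics summary from an ADaM-style subject-level
# dataset. Variable names follow CDISC conventions; the data is illustrative.
import pandas as pd

adsl = pd.DataFrame({
    "USUBJID": ["01-001", "01-002", "01-003", "01-004"],
    "ARM":     ["Placebo", "Drug A", "Placebo", "Drug A"],
    "AGE":     [54, 61, 47, 58],
})

# Summarize age by treatment arm, as one block of a demographics table.
summary = adsl.groupby("ARM")["AGE"].agg(n="count", mean="mean", sd="std")
print(summary.round(1))
```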
Posted 2 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Coimbatore
Work from Office
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must have skills: SAP ABAP Cloud
Good to have skills: SAP HCM Personnel Administration
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems, staying involved in the end-to-end data management process (a minimal ETL sketch follows this posting).
Roles & Responsibilities:
- Expected to be an SME
- Collaborate and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute on key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Develop and maintain data pipelines
- Ensure data quality and integrity
- Implement ETL processes
Professional & Technical Skills:
- Must To Have Skills: Proficiency in SAP HCM On Premise ABAP
- Strong understanding of data management principles
- Experience with SAP HCM Organizational Management
- Knowledge of data modeling and database design
- Proficient in SQL and database querying
- Hands-on experience with data integration tools
Additional Information:
- The candidate should have a minimum of 5 years of experience in SAP HCM On Premise ABAP
- This position is based at our Coimbatore office
- A 15 years full time education is required
Qualification 15 years full time education
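For illustration, here is a minimal, self-contained ETL sketch. The standard-library sqlite3 module and the table schema stand in for real SAP extracts and an enterprise warehouse, which are assumptions made for brevity:

```python
# Minimal ETL sketch: extract rows, transform them, load into a target
# table. sqlite3 stands in for real source/target systems (an assumption).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a source table with raw employee records (hypothetical schema).
cur.execute("CREATE TABLE src_employees (id INTEGER, name TEXT, dept TEXT)")
cur.executemany(
    "INSERT INTO src_employees VALUES (?, ?, ?)",
    [(1, " Asha ", "hr"), (2, "Ravi", None), (3, "Mei", "it")],
)

# Transform: trim names, default missing departments, uppercase dept codes.
rows = cur.execute("SELECT id, name, dept FROM src_employees").fetchall()
cleaned = [(i, name.strip(), (dept or "unknown").upper()) for i, name, dept in rows]

# Load: write the cleaned rows into the target table.
cur.execute("CREATE TABLE tgt_employees (id INTEGER, name TEXT, dept TEXT)")
cur.executemany("INSERT INTO tgt_employees VALUES (?, ?, ?)", cleaned)
conn.commit()

print(cur.execute("SELECT * FROM tgt_employees").fetchall())
```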
Posted 2 weeks ago
5.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Tech Delivery Subject Matter Expert
Project Role Description: Drive innovative practices into delivery and bring depth of expertise to a delivery engagement. Sought out as experts, enhance Accenture's marketplace reputation. Bring emerging ideas to life by shaping Accenture and client strategy. Use deep technical expertise, business acumen, and fluid communication skills to work directly with a client in a trusted-advisor relationship, gathering requirements to analyze, design, and/or implement technology best-practice business changes.
Must have skills: SAP ABAP Cloud
Good to have skills: SAP Data Services Development
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Tech Delivery Subject Matter Expert, you will drive innovative practices into delivery and bring depth of expertise to a delivery engagement. Sought out as an expert, you will enhance the organization's marketplace reputation and bring emerging ideas to life by shaping organization and client strategy. You will use deep technical expertise, business acumen, and fluid communication skills to work directly with a client in a trusted-advisor relationship, gathering requirements to analyze, design, and/or implement technology best-practice business changes.
Roles & Responsibilities:
- Expected to be an SME
- Collaborate and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute on key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the team in implementing SAP HCM On Premise ABAP solutions
- Provide technical guidance and support to team members
- Participate in client meetings to understand requirements and provide technical insights
Professional & Technical Skills:
- Must To Have Skills: Proficiency in SAP HCM On Premise ABAP
- Strong understanding of SAP Data Migration
- Experience in ABAP development for SAP HCM modules
- Knowledge of SAP HR processes and configurations
- Ability to troubleshoot and resolve technical issues in SAP HCM modules
Additional Information:
- The candidate should have a minimum of 5 years of experience in SAP HCM On Premise ABAP
- This position is based at our Bengaluru office
- A 15 years full time education is required
Qualification 15 years full time education
Posted 2 weeks ago
7.0 - 12.0 years
6 - 10 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Tech Delivery Subject Matter Expert
Project Role Description: Drive innovative practices into delivery and bring depth of expertise to a delivery engagement. Sought out as experts, enhance Accenture's marketplace reputation. Bring emerging ideas to life by shaping Accenture and client strategy. Use deep technical expertise, business acumen, and fluid communication skills to work directly with a client in a trusted-advisor relationship, gathering requirements to analyze, design, and/or implement technology best-practice business changes.
Must have skills: SAP ABAP Cloud
Good to have skills: SAP Data Services Development
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Tech Delivery Subject Matter Expert, you will drive innovative practices into delivery and bring depth of expertise to a delivery engagement. Sought out as an expert, you will enhance the organization's marketplace reputation and bring emerging ideas to life by shaping organization and client strategy. You will use deep technical expertise, business acumen, and fluid communication skills to work directly with a client in a trusted-advisor relationship, gathering requirements to analyze, design, and/or implement technology best-practice business changes.
Roles & Responsibilities:
- Expected to be an SME
- Collaborate and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute on key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead technical discussions with clients
- Develop and implement technical solutions
- Provide guidance and mentorship to junior team members
Professional & Technical Skills:
- Must To Have Skills: Proficiency in SAP HCM On Premise ABAP
- Good To Have Skills: Experience with SAP Data Migration
- Strong understanding of SAP HCM On Premise ABAP
- Experience in SAP ABAP development and customization
- Knowledge of SAP HCM modules and integration with other SAP modules
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in SAP HCM On Premise ABAP
- This position is based at our Bengaluru office
- A 15 years full time education is required
Qualification 15 years full time education
Posted 2 weeks ago
6091 Jobs | Paris,France