
5704 Databricks Jobs - Page 26

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

India

Remote

🌟 Title: Data Scientist
📍 Location: Remote or Gurgaon (Gurugram)
🚀 Start Date: Immediate (need someone who can join within 30 days or less)
💰 Compensation: 35-40 LPA (100% fixed) + benefits

Benefits include: 11 national holidays and 15 PTO days; health insurance of 30 Lakh for employee, spouse, two dependent children, and two parents/in-laws; accident & disability cover of 50 Lakh for employee; free doctor consultations; 12% of basic salary contributed to PF.

Key Requirements: Must be ready to join immediately, within 15-30 days.

Must Haves: 5+ years of experience as a data scientist; statistics or mathematics background/degree; strong technical experience with Snowflake, Azure, Databricks, Python, and SQL.

Pluses: Med device, healthcare, or pharmaceutical background.

Description: An employer is seeking an experienced Data Scientist based in Gurgaon, India to join an analytics organization for a global medical device client. This individual will support a major compliance initiative and serve as lead data scientist overseeing algorithm buildouts. They will collaborate with the data engineering team to source and clean the data, work closely with stakeholders to understand the specific requirements, and create algorithms based on those specifications. The technology landscape includes Databricks, Python, SQL, Snowflake, and Azure.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Develop analytic reports, web applications, and dashboards. Support the Capitation department at all levels for planning and insight into monthly Capitation payment, eligibility and growth of providers and membership Maintain web applications, dashboards, and reports Perform high level and detailed analysis on all levels of Capitation data across UnitedHealthcare and Optum Learn and utilize healthcare industry knowledge necessary to extract, analyze and work with the data to return the requested reports at various levels for both internal and external clients Analyze Capitation from different data sources and identify data integrity issues related to submissions, reconciliation and provider detail Work with staff and customers to develop new and modify existing processes, documentation and templates to drive more insightful and accurate reporting solutions. These are 2 new positions for the Ireland team in the NICE Reporting area. Ireland leadership would like to hire for these positions in India. The resource will be responsible for NICE reporting. From a BI standpoint, the team is looking to hire for Power BI, and on the Data Engineering side, for SQL and Snowflake. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications Associate or Bachelor’s degree in computer science or related fields 3+ years of relevant experience in web development and must have deployed web applications to a production environment Intermediate level of proficiency with Python with relevant experience of 3+ years Intermediate to advanced proficiency with at least one web application development tool, including but not limited to: Flask, HTML, Django, AngularJS, Streamlit, Microsoft Power Platform Intermediate level of proficiency with SQL Excellent communication, time/project management, problem solving, organizational, and analytical skills Preferred Qualifications Health care industry experience Experience with Databricks and Azure Data Factory (ADF) Intermediate level of proficiency with Excel, Excel Macros Beginner to Intermediate level of proficiency with Microsoft Power BI At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a skilled professional, you will be responsible for developing innovative solutions on a cutting-edge cloud-based platform for efficiently managing and analyzing large datasets. Your primary duties will include designing, developing, and deploying digital solutions while ensuring adherence to the software development life cycle within an agile environment. Additionally, you will be tasked with creating comprehensive technical documentation, analyzing and breaking down business requirements into technical functionalities, and writing clean and optimized code that aligns with business specifications. Furthermore, you will be expected to utilize Scala or Python to design and implement Notebooks, pipelines, and workflows for data ingestion, processing, and delivery on our platform. Your active involvement in the continuous enhancement of Nordex's development processes through participation in retrospectives and suggesting process optimizations will be crucial for the team's success. To excel in this role, you should possess a technical degree in computer science, data analytics, electrical engineering, automation technology, or a related field. Additionally, having experience or certification in tools such as Databricks, Azure Data Lakes, and SQL Data Warehousing will be advantageous. Proficiency in security protocols and devices such as stateful firewalls, IPS, VPN, IPSec, TLS, and L2-L4 security is highly desirable. Ideally, you should have 2-3 years of relevant experience in a similar job profile to effectively contribute to the team's objectives.
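As an illustration of the notebook-and-pipeline work this posting describes, here is a minimal PySpark ingestion sketch; the paths, table, and column names are hypothetical assumptions and would differ on the actual platform.

```python
from pyspark.sql import SparkSession, functions as F

# A minimal sketch of a Databricks-style ingestion notebook:
# read raw files, apply light cleansing, and deliver a curated table.
spark = SparkSession.builder.appName("turbine_ingestion").getOrCreate()

# Hypothetical landing-zone path; real sources would come from Azure Data Lake.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/landing/turbine_telemetry/"))

curated = (raw
           .dropDuplicates(["turbine_id", "event_time"])       # basic de-duplication
           .withColumn("event_date", F.to_date("event_time"))  # derive a partition column
           .filter(F.col("power_kw").isNotNull()))             # drop unusable rows

# Deliver as a partitioned table for downstream processing and analytics.
(curated.write
 .mode("overwrite")
 .partitionBy("event_date")
 .saveAsTable("analytics.turbine_telemetry_curated"))
```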

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As the MLOps Engineering Director on the Horizontal Data Science Enablement Team within SSO Data Science, you will play a crucial role in managing the Databricks platform for the entire organization and leading best practices in MLOps. Your responsibilities will include overseeing the administration, configuration, and maintenance of Databricks clusters and workspaces. You will continuously monitor the clusters for high workloads or excessive usage costs, ensuring the overall health of the clusters and addressing any issues promptly. Implementing and managing security protocols to safeguard sensitive information and facilitating the integration of various data sources into Databricks will be key aspects of your role. Collaborating closely with data engineers, data scientists, and stakeholders, you will provide support for data processing and analytics needs. Maintaining comprehensive documentation of Databricks configurations, processes, and best practices, as well as leading participation in security and architecture reviews, will be part of your responsibilities. Additionally, you will bring MLOps expertise to the table by focusing on areas such as model monitoring, feature catalog/store, model lineage maintenance, and CI/CD pipelines. To excel in this role, you should possess a Master's degree in computer science or a related field, along with strong experience in Databricks management, cloud technologies, and MLOps solutions like MLflow. Your background should include hands-on experience with industry-standard CI/CD tools, data governance processes, and coding proficiency in languages such as Python, Java, and C++. A systematic problem-solving approach, excellent communication skills, and a sense of ownership and drive are essential qualities for success in this position. Moreover, your ability to set yourself apart will be demonstrated through your experience in SQL tuning, automation, data observability, and supporting highly scalable systems. Operating in a 24x7 environment, self-motivation, creativity in solving software problems, and ensuring system availability across global time zones will further enhance your profile for this role. In alignment with Mastercard's corporate security responsibility, you will be expected to adhere to security policies and practices, maintain the confidentiality and integrity of accessed information, report any security violations, and complete mandatory security trainings. By taking on this role, you will contribute to ensuring the efficiency and security of Mastercard's data science operations.
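Since the role calls for MLOps solutions such as MLflow, CI/CD pipelines, and model lineage, the sketch below shows a typical MLflow tracking and registration flow; the experiment path, model, and metric are illustrative assumptions, not details from the posting.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative data and model; a real pipeline would pull curated features from the platform.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/fraud-model-demo")  # hypothetical experiment path

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)   # records lineage of the training configuration
    mlflow.log_metric("accuracy", acc)      # metric that monitoring dashboards can track
    # Registering the model gives downstream CI/CD a versioned artifact to promote.
    mlflow.sklearn.log_model(model, "model", registered_model_name="fraud_model_demo")
```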

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

The Data Engineer (Databricks, PySpark) position is available in Bangalore, Pune, Gurgaon, and Navi Mumbai with a hybrid work mode. As a Data Engineer with 4-5 years of experience, you will be responsible for designing, building, and optimizing data pipelines, primarily using Databricks and PySpark. You will collaborate with cross-functional teams to deliver effective data solutions and optimize ETL processes for enhanced performance and reliability. Additionally, you will ensure adherence to data quality and governance best practices while deploying and managing data solutions in cloud environments such as Azure or AWS. Requirements for this role include proven experience as a Data Engineer with a focus on Databricks and PySpark, strong proficiency in Python and SQL, experience with cloud platforms like Azure or AWS, and familiarity with data warehousing and integration technologies. Joining our team will offer you learning and certification opportunities for career advancement, comprehensive medical coverage and life insurance for peace of mind, a flexible work environment with a 5-day work week, and a fun, collaborative, and innovative workplace culture.
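A hedged sketch of the kind of ETL performance tuning this posting mentions, using partition pruning and a broadcast join in PySpark; the table and column names are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_optimization_demo").getOrCreate()

# Hypothetical tables: a large fact table partitioned by date and a small dimension table.
orders = spark.table("raw.orders").filter(F.col("order_date") >= "2024-01-01")  # partition pruning
stores = spark.table("raw.stores")  # small lookup table

# Broadcasting the small dimension avoids an expensive shuffle join on the large fact table.
enriched = orders.join(F.broadcast(stores), on="store_id", how="left")

daily_sales = (enriched
               .groupBy("order_date", "region")
               .agg(F.sum("amount").alias("total_sales"),
                    F.countDistinct("order_id").alias("order_count")))

# Writing with a sensible partition column keeps downstream reads cheap and reliable.
(daily_sales.write
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("analytics.daily_sales"))
```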

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

At Medtronic, you can embark on a rewarding career dedicated to exploration and innovation, all while contributing to the advancement of healthcare access and equity for all. As a Digital Engineer at our new Minimed India Hub, you will play a crucial role in leveraging technology to enhance healthcare solutions on a global scale. Specifically, as a PySpark Data Engineer, you will be tasked with designing, developing, and maintaining data pipelines using PySpark. Your collaboration with data scientists, analysts, and stakeholders will be essential in ensuring the efficient processing and analysis of large datasets, as well as handling complex transformations and aggregations. This role offers an exciting opportunity to work within Medtronic's Diabetes business. As the Diabetes division prepares for separation to foster future growth and innovation, you will have the chance to operate with increased speed and agility. By working as a separate entity, there will be a focus on driving meaningful innovation and enhancing the impact on patient care. Your responsibilities will include designing, developing, and maintaining scalable and efficient ETL pipelines using PySpark, working with structured and unstructured data from various sources, optimizing PySpark applications for performance and scalability, collaborating with data scientists and analysts to understand data requirements, implementing data quality checks, monitoring and troubleshooting data pipeline issues, documenting technical specifications, and staying updated on the latest trends and technologies in big data and distributed computing. To excel in this role, you should possess a Bachelor's degree in computer science, engineering, or a related field, along with 4-5 years of experience in data engineering focusing on PySpark. Proficiency in Python and Spark, strong coding and debugging skills, knowledge of SQL and relational databases, hands-on experience with cloud platforms, familiarity with data warehousing solutions, experience with big data technologies, problem-solving abilities, and effective communication and collaboration skills are essential. Preferred skills include experience with Databricks, orchestration tools like Apache Airflow, knowledge of machine learning workflows, understanding of data security and governance best practices, familiarity with streaming data platforms, and knowledge of CI/CD pipelines and version control systems. Medtronic offers a competitive salary and flexible benefits package, along with a commitment to recognizing and supporting employees at every stage of their career and life. As part of the Medtronic team, you will contribute to the mission of alleviating pain, restoring health, and extending life by tackling the most challenging health problems facing humanity. Join us in engineering solutions that make a real difference in people's lives.
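To illustrate the data-quality checks and aggregations this role describes, here is a small PySpark sketch; the dataset, thresholds, and column names are assumptions for demonstration only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("quality_checks_demo").getOrCreate()

# Hypothetical device-readings table; a real pipeline would read from the curated layer.
readings = spark.table("curated.device_readings")

# Simple rule-based quality checks before aggregation.
total = readings.count()
null_ids = readings.filter(F.col("device_id").isNull()).count()

if total == 0 or null_ids / total > 0.01:
    raise ValueError(f"Quality gate failed: {null_ids} of {total} rows have a null device_id")

# Flag (rather than drop) out-of-range values so analysts can inspect them later.
validated = readings.withColumn(
    "is_out_of_range", ~F.col("glucose_mg_dl").between(20, 600))

daily_stats = (validated
               .groupBy("device_id", F.to_date("reading_time").alias("reading_date"))
               .agg(F.avg("glucose_mg_dl").alias("avg_glucose"),
                    F.sum(F.col("is_out_of_range").cast("int")).alias("flagged_readings")))

daily_stats.write.mode("overwrite").saveAsTable("analytics.daily_device_stats")
```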

Posted 1 week ago

Apply

7.0 - 9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space with over 16,700 stores in 31 countries, serving more than 9 million customers each day. At Circle K, we are building a best-in-class global data engineering practice to support intelligent business decision-making and drive value across our retail ecosystem. As we scale our engineering capabilities, we’re seeking a Lead Data Engineer to serve as both a technical leader and people coach for our India-based Data Enablement pod. This role will oversee the design, delivery, and maintenance of critical cross-functional datasets and reusable data assets while also managing a group of talented engineers in India. This position plays a dual role: contributing hands-on to engineering execution while mentoring and developing engineers in their technical careers. About The Role The ideal candidate combines deep technical acumen, stakeholder awareness, and a people-first leadership mindset. You’ll collaborate with global tech leads, managers, platform teams, and business analysts to build trusted, performant data pipelines that serve use cases beyond traditional data domains. Responsibilities Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms Lead the technical execution of non-domain specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines) Architect data models and reusable layers consumed by multiple downstream pods Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks Mentor and coach team members Partner with product and platform leaders to ensure engineering consistency and delivery excellence Act as an L3 escalation point for operational data issues impacting foundational pipelines Own engineering best practices, sprint planning, and quality across the Enablement pod Contribute to platform discussions and architectural decisions across regions Job Requirements Education Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field Relevant Experience 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse Knowledge And Preferred Skills Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices. Solid grasp of data governance, metadata tagging, and role-based access control. Proven ability to mentor and grow engineers in a matrixed or global environment. Strong verbal and written communication skills, with the ability to operate cross-functionally. Certifications in Azure, Databricks, or Snowflake are a plus. Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management). Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools. Strong Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance). Hands-on experience in databases (Azure SQL DB, Snowflake, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting. ADF, Databricks and Azure certification is a plus.
Technologies we use : Databricks, Azure SQL DW/Synapse, Snowflake, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (Powershell, Bash), Git, Terraform, Power BI
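As a sketch of the parameterization and auditability patterns mentioned in this posting, the snippet below shows a Databricks-style notebook that takes runtime parameters via widgets and writes a simple audit record; the widget names, tables, and audit schema are hypothetical, and dbutils is only available inside a Databricks notebook.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# dbutils.widgets is provided by the Databricks notebook runtime; parameters are passed in
# from ADF or the job scheduler so the same notebook can serve many pipelines.
dbutils.widgets.text("source_table", "raw.tlog_sales")      # hypothetical defaults
dbutils.widgets.text("target_table", "enablement.tlog_std")
dbutils.widgets.text("run_date", "2024-01-01")

source_table = dbutils.widgets.get("source_table")
target_table = dbutils.widgets.get("target_table")
run_date = dbutils.widgets.get("run_date")

df = spark.table(source_table).filter(F.col("business_date") == run_date)
df.write.mode("overwrite").saveAsTable(target_table)

# Minimal audit record so downstream teams can verify what ran, when, and how many rows moved.
audit = spark.createDataFrame(
    [(source_table, target_table, run_date, df.count())],
    ["source_table", "target_table", "run_date", "row_count"],
).withColumn("logged_at", F.current_timestamp())
audit.write.mode("append").saveAsTable("enablement.pipeline_audit")
```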

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space with over 17,000 stores in 31 countries, serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams to discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long term success. About The Role We are looking for a Senior Data Engineer with a collaborative, “can-do” attitude who is committed & strives with determination and motivation to make their team successful. A Sr. Data Engineer who has experience architecting and implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands on sourcing, manipulation, and delivery of data from enterprise business systems to data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot and support ETL pipelines and the cloud infrastructure involved in the process, and will support the visualization team. Roles and Responsibilities Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals. Demonstrate deep technical and domain knowledge of relational and non-relational databases, Data Warehouses, Data Lakes among other structured and unstructured storage options. Determine solutions that are best suited to develop a pipeline for a particular data source. Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Efficient in ETL/ELT development using Azure cloud services and Snowflake, including testing and operation/support processes (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance). Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery. Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders. Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability). Stay current with and adopt new tools and applications to ensure high quality and efficient solutions. Build cross-platform data strategy to aggregate multiple sources and process development datasets. Proactive in stakeholder communication, mentor/guide junior resources by doing regular KT/reverse KT and help them in identifying production bugs/issues if needed and provide resolution recommendations. Job Requirements Bachelor’s Degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred. 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment.
5+ years of experience with setting up and operating data pipelines using Python or SQL 5+ years of advanced SQL Programming: PL/SQL, T-SQL 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization. Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads. 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data. 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions. 5+ years of experience in defining and enabling data quality standards for auditing and monitoring. Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts. Understanding of REST and good API design. Experience working with Apache Iceberg, Delta tables and distributed computing frameworks. Strong collaboration and teamwork skills & excellent written and verbal communications skills. Self-starter and motivated with ability to work in a fast-paced development environment. Agile experience highly desirable. Proficiency in the development environment, including IDE, database server, GIT, Continuous Integration, unit-testing tool, and defect management tools. Knowledge Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management). Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques. Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks. Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools. Strong Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance). Hands-on experience in databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting. ADF, Databricks and Azure certification is a plus. Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (Powershell, Bash), Git, Terraform, Power BI, Snowflake
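As an illustration of pipelines that deliver data into Snowflake from Databricks, here is a hedged sketch using the Spark Snowflake connector; the connection options, credentials handling, and table names are placeholders and assume the connector is installed on the cluster.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake_delivery_demo").getOrCreate()

# Placeholder connection options; in practice secrets come from a vault or Databricks secret scope.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_service_user",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "TRANSFORM_WH",
}

# Read a curated Databricks table and deliver it to Snowflake for downstream reporting.
curated = spark.table("analytics.daily_sales")

(curated.write
 .format("snowflake")          # short name registered by the Snowflake Spark connector on Databricks;
                               # use "net.snowflake.spark.snowflake" elsewhere
 .options(**sf_options)
 .option("dbtable", "DAILY_SALES")
 .mode("overwrite")
 .save())
```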

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space with over 17,000 stores in 31 countries, serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams to discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long term success. About The Role We are looking for a Data Engineer with a collaborative, “can-do” attitude who is committed & strives with determination and motivation to make their team successful. A Data Engineer who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands on sourcing, manipulation, and delivery of data from enterprise business systems to data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by transforming data to achieve actionable business outcomes. Roles and Responsibilities Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals Demonstrate technical and domain knowledge of relational and non-relational databases, Data Warehouses, Data Lakes among other structured and unstructured storage options Determine solutions that are best suited to develop a pipeline for a particular data source Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development Efficient in ELT/ETL development using Azure cloud services and Snowflake, including Testing and operational support (RCA, Monitoring, Maintenance) Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability) Stay current with and adopt new tools and applications to ensure high quality and efficient solutions Build cross-platform data strategy to aggregate multiple sources and process development datasets Proactive in stakeholder communication, mentor/guide junior resources by doing regular KT/reverse KT and help them in identifying production bugs/issues if needed and provide resolution recommendation Job Requirements Bachelor’s degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment 3+ years of experience with setting up and operating data pipelines using Python or SQL 3+ years of advanced SQL Programming: PL/SQL, T-SQL 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads 3+ years of
strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions 3+ years of experience in defining and enabling data quality standards for auditing and monitoring. Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts Understanding of REST and good API design Experience working with Apache Iceberg, Delta tables and distributed computing frameworks Strong collaboration, teamwork skills, excellent written and verbal communications skills Self-starter and motivated with ability to work in a fast-paced development environment Agile experience highly desirable Proficiency in the development environment, including IDE, database server, GIT, Continuous Integration, unit-testing tool, and defect management tools Preferred Skills Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management) Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools Strong Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance) Hands-on experience in databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting ADF, Databricks and Azure certification is a plus Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (Powershell, Bash), Git, Terraform, Power BI, Snowflake

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a results-driven Data Project Manager (PM) responsible for leading data initiatives within a regulated banking environment, focusing on leveraging Databricks and Confluent Kafka. Your role involves overseeing the successful end-to-end delivery of complex data transformation projects aligned with business and regulatory requirements. In this position, you will be required to lead the planning, execution, and delivery of enterprise data projects using Databricks and Confluent. This includes developing detailed project plans, delivery roadmaps, and work breakdown structures, as well as ensuring resource allocation, budgeting, and adherence to timelines and quality standards. Collaboration with data engineers, architects, business analysts, and platform teams is essential to align on project goals. You will act as the primary liaison between business units, technology teams, and vendors, facilitating regular updates, steering committee meetings, and issue/risk escalations. Your technical oversight responsibilities include managing solution delivery on Databricks for data processing, ML pipelines, and analytics, as well as overseeing real-time data streaming pipelines via Confluent Kafka. Ensuring alignment with data governance, security, and regulatory frameworks such as GDPR, CBUAE, and BCBS 239 is crucial. Risk and compliance management are key aspects of your role, involving ensuring regulatory reporting data flows comply with local and international financial standards and managing controls and audit requirements in collaboration with Compliance and Risk teams. The required skills and experience for this role include 7+ years of Project Management experience within the banking or financial services sector, proven experience in leading data platform projects, a strong understanding of data architecture, pipelines, and streaming technologies, experience in managing cross-functional teams, and proficiency in Agile/Scrum and Waterfall methodologies. Technical exposure to Databricks (Delta Lake, MLflow, Spark), Confluent Kafka (Kafka Connect, kSQL, Schema Registry), Azure or AWS Cloud Platforms, integration tools, CI/CD pipelines, and Oracle ERP Implementation is expected. Preferred qualifications include PMP/Prince2/Scrum Master certification, familiarity with regulatory frameworks, and a strong understanding of data governance principles. The ideal candidate will hold a Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field. Key performance indicators for this role include on-time, on-budget delivery of data initiatives, uptime and SLAs of data pipelines, user satisfaction, and compliance with regulatory milestones.
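To ground the Databricks-plus-Confluent stack this role oversees, here is a minimal Spark Structured Streaming sketch that reads from a Kafka topic and lands the data in a Delta table; the broker address, topic, and paths are placeholders, it assumes the Kafka and Delta connectors available on Databricks, and a production pipeline would add schema parsing, governance controls, and monitoring.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_to_delta_demo").getOrCreate()

# Placeholder Confluent/Kafka connection details.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker.example.com:9092")
          .option("subscribe", "payments.transactions")
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers bytes; cast to strings here, with real schemas handled via a Schema Registry.
events = stream.select(
    F.col("key").cast("string").alias("event_key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_time"))

# Write to a Delta table with checkpointing so the pipeline can recover after failures.
query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/payments_transactions")
         .outputMode("append")
         .toTable("bronze.payments_transactions"))

query.awaitTermination()
```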

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role : Application Designer Project Role Description : Assist in defining requirements and designing applications to meet business process and application requirements. Must have skills : Microsoft Azure Databricks Good to have skills : Microsoft Azure Architecture Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Designer, you will assist in defining requirements and designing applications to meet business process and application requirements. A typical day involves collaborating with cross-functional teams to gather insights, analyzing user needs, and translating them into functional specifications. You will engage in discussions to refine application designs and ensure alignment with business objectives, while also addressing any challenges that arise during the development process. Your role will be pivotal in ensuring that the applications developed are user-friendly and effectively meet the needs of the organization. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Collaborate with stakeholders to gather and analyze requirements for application design. - Develop and document application specifications and design documents. Professional & Technical Skills: - Must Have Skills: Proficiency in Microsoft Azure Databricks. - Good to Have Skills: Experience with Microsoft Azure Architecture. - Strong understanding of cloud computing concepts and services. - Experience in application design and development methodologies. - Familiarity with agile development practices and tools. Additional Information: - The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks. - This position is based at our Hyderabad office. - A 15 years full time education is required.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Summary Are you passionate about the intersection of data, technology and science, and excited by the potential of Real-World Data (RWD) and AI? Do you thrive in collaborative environments and aspire to contribute to the discovery of groundbreaking medical insights? If so, join the data42 team at Novartis! At Novartis, we reimagine medicine by leveraging state-of-the-art analytics and our extensive internal and external data resources. Our data42 platform grants access to high-quality, multi-modal preclinical and clinical data, along with RWD, creating the optimal environment for developing advanced AI/ML models and generating health insights. Our global team of data scientists and engineers utilizes this platform to uncover novel insights and guide drug development decisions. As an RWD SME / RWE Execution Data Scientist, you will focus on executing innovative methodologies and AI models to mine RWD on the data42 platform. You will be the go-to authority for leveraging diverse RWD modalities and the patterns crucial to understanding patient populations, biomarkers, and drug targets, accelerating the development of life-changing medicines. About The Role Duties and Responsibilities: Collaborate with R&D stakeholders to co-create and implement innovative, repeatable, scalable and automated data and technology solutions in line with data42 strategy. Be a data Subject Matter Expert (SME), understand Real World Data (RWD) of different modalities, vocabularies (LOINC, ICD, HCPCS etc.), non-traditional RWD (Patient reported outcomes, Wearables and Mobile Health Data) and where and how they can be used, including in conjunction with clinical data, omics data, pre-clinical data, and commercial data. Contribute to data strategy implementation such as Federated Learning, tokenization, data quality frameworks, regulatory requirements (submission data to HL7 FHIR formats conversion, Sentinel initiative), conversion to common data models and standards (OMOP, FHIR, SEND etc.), FAIR principles and integration with enterprise catalog Define and execute advanced integrated and scalable analytical approaches and research methodologies (including industry trends) in support of exploratory and regulatory use of AI models for RWD analysis across the Research-Development-Commercial continuum by facilitating research questions. Stay current with emerging applications and trends, driving the development of advanced analytic capabilities for data42 across the Real-world evidence generation lifecycle, from ideation to study design and execution. Demonstrate high agility working across various cross-located and cross-functional associates across business domains (Commercial, Development, Biomedical Research) or Therapeutic Area divisions for our priority disease areas to execute complex and critical business problems with quantified business impact/ROI. Ideal Candidate Profile PhD or MSc in a quantitative discipline (e.g., but not restricted to, Computer Science, Physics, Statistics, Epidemiology) with proven expertise in Artificial Intelligence / Machine Learning. 8+ years of relevant experience in Data Science (or 4+ years post-qualification in case of PhD). Extensive experience in Statistical and Machine Learning techniques: Regression, Classification, Clustering, Design of Experiments, Monte Carlo Simulations, Statistical Inference, Feature Engineering, Time Series Forecasting, Text Mining, Natural Language Processing, LLMs, and multi-modal Generative AI.
Good to have skills: Stochastic models, Bayesian models, Markov chains, optimization techniques including Dynamic Programming; Deep Learning techniques on structured and unstructured data; Recommender Systems. Proficiency in tools and packages: Python, R (optional), SQL; exposure to dashboard or web-app building using PowerBI, R-Shiny, Flask, open source or proprietary software and packages is an advantage. Knowledge of data standards, e.g. OHDSI OMOP and other data standards, FHIR HL7 for regulatory use, and best practices. Good to have: Foundry, big data programming, working knowledge of executing data science on AWS, Databricks, or Snowflake. Strong in matrix collaboration environments with good communication and collaboration skills with country/regional/global stakeholders in an individual contributor capacity. Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve. Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be joining the Digital Engineering Solutions team at CDM Smith as an Artificial Intelligence/Machine Learning Engineer. In this role, you will work within the Data Technology group to support strategic Architecture, Engineering, and Construction (AEC) initiatives by leveraging cutting-edge data technologies and analytics. Your responsibilities will include contributing to advanced analytics and implementing AI/ML solutions to address complex business challenges within the AEC domain. You will utilize state-of-the-art algorithms and techniques such as deep learning, NLP, computer vision, and time-series analysis for domain-specific use cases. Your role will involve analyzing large datasets to identify patterns and trends and ensuring AI models perform in line with business requirements. As an AI/ML Engineer, you will collaborate with Data Engineers, Data Scientists, and other stakeholders to design and implement end-to-end AI/ML solutions. You will also be involved in optimizing AI/ML workflows by implementing MLOps practices, including CI/CD pipelines, model retraining, and version control. Additionally, staying updated on the latest developments in technology and best practices will be crucial to your role. To succeed in this position, you should have a good understanding of the software development life cycle and experience with building and deploying machine learning models using frameworks like TensorFlow, PyTorch, or Scikit-learn. Knowledge of cloud-based AI/ML services, programming languages (e.g., R, Python, Scala), and MLOps practices is essential. You must also be familiar with data privacy, security, and ethical AI principles to ensure compliance with relevant standards. Strong problem-solving, critical thinking, and communication skills are required to effectively address technical challenges and engage with both technical and non-technical audiences. Minimum qualifications for this role include a Bachelor's degree and 1-2 years of related experience. Equivalent additional experience will be considered in lieu of a degree. The position does not require any travel. Background checks, including criminal history, employment verification, and drug testing, may be conducted as per CDM Smith's policies for certain positions. As an AI/ML Engineer at CDM Smith, you will play a key role in driving innovation and delivering impactful solutions for AEC professionals and client outcomes. Your contributions to the Data Technology group will help shape the future of AI/ML in the architecture, engineering, and construction industry.
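To make the model-building work described here concrete, below is a small scikit-learn sketch of an NLP-style classifier wrapped in a single pipeline, which keeps preprocessing and the model together for MLOps deployment; the example texts, labels, and split are purely illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Tiny illustrative dataset; a real AEC use case would pull labeled text from project records.
texts = ["crack observed in bridge deck", "schedule updated for phase two",
         "corrosion found on steel girder", "meeting notes for design review",
         "spalling concrete near support pier", "budget approved for site survey"]
labels = ["defect", "admin", "defect", "admin", "defect", "admin"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=42, stratify=labels)

# A single Pipeline bundles the vectorizer and classifier into one deployable artifact.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```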

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You should have 5-7 years of relevant experience to be considered for this position. Key Responsibilities: Data Management: You will be responsible for collecting, analyzing, and interpreting large datasets to provide actionable insights. Data Solutions: You will develop and manage Advanced Analytics solutions, including BI Dashboards design, Reports, and digital solutions. Stakeholder Management: Collaborate with stakeholders to understand business needs and translate them into technical requirements. Project Management: Lead and manage Data and Advanced Analytics projects, ensuring timely delivery and alignment with business goals. Documentation: Create and maintain documentation, including design specifications and user manuals. Continuous Improvement: Identify opportunities for process improvements and recommend digital solutions. Experience Requirements: - Must have experience with Databricks for big data processing and analytics. - Strong skills in SQL for database querying, Data Modeling, and DWH (Data Warehousing) are essential. - Develop design documentation by translating business requirements into source-to-target mappings. - Experience in Power BI and Qlik Sense is a must; a development background is an advantage. - Experience with Azure services for data storage, processing, and analytics. - Knowledge of data fabric architecture and implementation. - Azure Data Factory (ADF): Expertise in data integration and orchestration using ADF is an advantage. - Power Platform: Proficiency in using Power Apps, Power Automate, and Power Virtual Agents to create and manage digital solutions. - AI Technologies: Knowledge of AI tools and frameworks to develop predictive models and automate data analysis. Must-Haves: - AI: 4 out of 5 - Data Warehousing (DWH): 4 out of 5 - Data Management: 3 out of 5

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Lead Data Engineer at Mastercard, you will be a key player in the Mastercard Services Technology team, responsible for driving the mission to unlock the potential of data assets by innovating, managing big data assets, ensuring accessibility of data, and enforcing standards and principles in the Big Data space. Your role will involve designing and building scalable, cloud-native data platforms using PySpark, Python, and modern data engineering practices. You will mentor and guide other engineers, foster a culture of curiosity and continuous improvement, and create robust ETL/ELT pipelines that integrate with various systems. Your responsibilities will include decomposing complex problems into scalable components aligned with platform goals, championing best practices in data engineering, collaborating across teams, supporting data governance and quality efforts, and optimizing cloud infrastructure components related to data engineering workflows. You will actively participate in architectural discussions, iteration planning, and feature sizing meetings while adhering to Agile processes. To excel in this role, you should have at least 5 years of hands-on experience in data engineering with strong PySpark and Python skills. You must possess solid experience in designing and implementing data models, pipelines, and batch/stream processing systems. Additionally, a strong foundation in data modeling, database design, and performance optimization is required. Experience working with cloud platforms like AWS, Azure, or GCP and knowledge of modern data architectures and data lifecycle management are essential. Furthermore, familiarity with CI/CD practices, version control, and automated testing is crucial. You should demonstrate the ability to mentor junior engineers effectively, possess excellent communication and collaboration skills, and hold a Bachelor's degree in computer science, Engineering, or a related field. Comfort with Agile/Scrum development environments, curiosity, adaptability, problem-solving skills, and a drive for continuous improvement are key traits for success in this role. Experience with integrating heterogeneous systems, building resilient data pipelines across cloud environments, orchestration tools, data governance practices, containerization, infrastructure automation, and exposure to machine learning data pipelines or MLOps will be advantageous. Holding a Master's degree, relevant certifications, or contributions to open-source/data engineering communities will be a bonus.

Posted 1 week ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Do you want to be on the leading edge of using big data and help drive engineering and product decisions for the biggest productivity software on the planet? Office Product Group (OPG) has embarked on a mission to delight our customers by using data-informed engineering to develop compelling products and services. OPG is looking for an experienced professional with a passion for delivering business value with data insights and analytics to join our team as a Data & Applied Scientist. We are looking for a strong Senior Data Scientist with a proven track record of solving large, complex data analysis problems in a real-world software product development setting. Ideal candidates should be able to take a business or engineering problem from a Product Manager or Engineering leader and translate it to a data problem. This includes all the steps to identify and deeply understand potential data sources, conduct the appropriate analysis to reveal actionable insights, and then operationalize the metrics or solution into PowerBI dashboards. You will be delivering results through innovation and persistence when similar candidates have given up. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Responsibilities Dashboard Development and Maintenance: Design, build, and maintain interactive dashboards and reports in PowerBI to visualize key business metrics and insights. Work closely with stakeholders to understand their data visualization needs and translate business requirements into technical specifications. Data Extraction And Analysis Perform ad-hoc data extraction and analysis from various data sources, including SQL databases, cloud-based data storage solutions, and external APIs. Ensure data accuracy and integrity in reporting and analysis. Deliver high impact analysis to diagnose and drive business critical insights to guide product and business development. Metric Development And Tracking Be the SME who understands the landscape of what data (telemetry) is and should be captured. Advise feature teams on telemetry best practices to ensure business needs for data are met. Collaborate with product owners and other stakeholders to define and track key performance indicators (KPIs) and other relevant metrics for business performance. Identify trends and insights in the data to support decision-making processes. User Journey And Funnel Analysis Assist product owners in mapping out user journeys and funnels to understand user behavior and identify opportunities for feature improvement. Develop and implement ML models to analyze user journeys and funnels. Utilize a variety of techniques to uncover patterns in user behavior that can help improve the product. Forecasting And Growth Analysis Support the forecasting of key results (KRs) and growth metrics through data analysis and predictive modeling. Provide insights and recommendations to help drive strategic planning and execution.
Qualifications Required Qualifications: Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 1+ year(s) data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Master's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 3+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Bachelor's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 5+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR equivalent experience. 2+ years customer-facing, project-delivery experience, professional services, and/or consulting experience. Preferred Qualifications 7+ years of experience involving programming with languages such as Python/R and hands-on experience using technologies such as SQL, Kusto, Databricks, Spark etc. 7+ years of experience working with data exploration and data visualization tools like PowerBI or similar. Candidate must be able to communicate complex ideas and concepts to leadership and deliver results. Candidate must be comfortable in manipulating and analyzing complex, high dimensional data from varying sources to solve difficult problems. Bachelor's or higher degrees in Computer Science, Statistics, Mathematics, Physics, Engineering, or related disciplines. Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
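As a sketch of the user-journey and funnel analysis responsibilities listed for this role, the snippet below computes step-by-step conversion rates with pandas; the funnel steps and counts are invented purely for illustration.

```python
import pandas as pd

# Hypothetical counts of users reaching each step of an in-product funnel.
funnel = pd.DataFrame({
    "step": ["opened_app", "created_document", "used_feature", "shared_document"],
    "users": [100_000, 62_000, 31_000, 9_500],
})

# Conversion from the previous step and from the top of the funnel.
funnel["step_conversion"] = funnel["users"] / funnel["users"].shift(1)
funnel["overall_conversion"] = funnel["users"] / funnel["users"].iloc[0]

# The biggest relative drop-off points to where feature improvements may matter most.
funnel["drop_off"] = 1 - funnel["step_conversion"]
print(funnel.round(3))
```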

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

The multifamily real estate industry is undergoing a massive transformation, and Accolade is at the forefront. We are building the industry's first AI-native Operations Centralization Platform, designed to streamline complex processes, unlock new efficiencies, and create better experiences for residents and operators alike. Our platform uses advanced AI to automate routine tasks, capture critical data, and intelligently manage communications. We're on a mission to help our clients achieve what we call "Full Potential Centralization," and we need a visionary engineer to scale our frontend engineering practice and deliver a world-class user experience. As a Senior Frontend Engineer at Accolade, you will be a critical leader in our technology organization. You will take ownership of our entire frontend architecture, strategy, and team. This is a strategic, hands-on leadership role for an engineer with expert product sense who is passionate about building not just high-quality applications, but also high-performing teams. You will be responsible for setting the technical direction for our user-facing products, mentoring a growing team of talented engineers, and collaborating across the company to build a landmark enterprise application from the ground up. In this role, you will lead and mentor a team of frontend engineers, fostering a culture of excellence, collaboration, and continuous improvement. You will set the technical vision and strategy for our frontend ecosystem, defining the architectural vision, driving the adoption of best practices, and dictating technical standards. You will mastermind our release and experimentation strategy using LaunchDarkly, lead the team in leveraging its capabilities for feature flagging and product analytics, and drive design and product decisions in partnership with Product and Design. To excel in this role, you should have 6+ years of professional frontend development experience, 1+ years of direct people management experience, deep hands-on experience with LaunchDarkly, expert product sense, and expertise in TypeScript and React. You should be a strategic thinker with experience in architectural design and demonstrate strong collaboration and communication skills. Accolade's Technology Ecosystem includes TypeScript, React, Redux for frontend, LaunchDarkly for feature management, gRPC, REST for APIs & Services, Java, Python, Kafka, Postgres, AWS, Databricks for backend & platform. If you have experience building complex, data-intensive applications and are looking to shape the future of real estate technology, Accolade offers you the opportunity to have executive impact, build and scale a world-class frontend engineering team, work on a greenfield opportunity, and thrive in a modern, quality-driven culture. We offer competitive compensation including an attractive executive salary, equity, and benefits package. Our interview process is collaborative and in-depth, consisting of stages like an initial conversation, technical & architectural deep dive, collaborative system design & machine coding, and a final fit interview. If you are ready to contribute to the future of real estate tech, we look forward to receiving your application.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Maharashtra

On-site

Are you ready to make it happen at Mondelēz International? Join our mission to lead the future of snacking and make it with pride. Together with analytics team leaders, you will support our business by providing excellent data models to uncover trends that can drive long-term business results. In this role, you will work closely with the business leadership team to execute the analytics agenda. Your responsibilities will include identifying and incubating best-in-class external partners to drive delivery on strategic projects, developing custom models/algorithms to uncover signals/patterns and trends that drive long-term business performance, and executing the business analytics program agenda using a methodical approach that conveys to stakeholders what business analytics will deliver. To excel in this role, you should possess a desire to drive your future and accelerate your career. You should have experience in using data analysis to make recommendations to senior leaders, technical experience in roles focusing on best-in-class analytics practices, experience deploying new analytical approaches in a complex and highly matrixed organization, and be savvy in using analytics techniques to create business impacts. As part of the Global MSC (Mondelēz Supply Chain) Data & Analytics team, you will play a key technical leadership role in developing the cutting-edge Supply Chain Data Product ecosystem. Your responsibilities will involve designing, building, and automating data ingestion, harmonization, and transformation processes to drive advanced analytics, reporting, and insights for optimizing Supply Chain performance across the organization. You should bring expertise in SAP data, including hands-on experience in extracting, transforming, and modeling data from SAP ECC/S4HANA and SAP BW/HANA. Proficiency in cloud data engineering with a focus on Google Cloud Platform (GCP) is essential, along with experience in data pipeline development, BI & Analytics enablement, and ideally hands-on experience with Databricks and system monitoring & optimization. Additionally, you should have excellent communication and collaboration skills, experience delegating work and guiding team members through technical challenges, the ability to thrive in a fast-paced setting, strong problem-solving skills within the Supply Chain domain, and experience working in Agile development environments with a Product mindset. To qualify for this role, you should hold a Bachelor's degree in Information Systems/Technology, Computer Science, Analytics, Engineering, or a related field, along with at least 6 years of hands-on experience in data engineering, data warehousing, or a similar technical role, preferably in CPG or manufacturing with a focus on Supply Chain data. If you are looking for a challenging yet rewarding opportunity to drive innovation in the snacking industry, this role at Mondelēz International might be the perfect fit for you. Join our diverse community of makers and bakers who are energized for growth and committed to empowering people to snack right with the right snack, for the right moment, made the right way.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer, you will be responsible for designing, developing, and maintaining robust ETL pipelines using Azure Data Factory (ADF) to support complex insurance data workflows. Your role will involve integrating and extracting data from various Guidewire modules such as PolicyCenter, BillingCenter, and ClaimCenter, ensuring data quality, integrity, and consistency. You will be tasked with building reusable components for data ingestion, transformation, and orchestration across Guidewire and Azure ecosystems. Optimizing ADF pipelines for performance, scalability, and cost-efficiency while following industry-standard DevOps and CI/CD practices will be a key part of your responsibilities.

Collaboration with solution architects, data modelers, and Guidewire functional teams to translate business requirements into scalable ETL solutions is essential. You will conduct thorough unit testing, data validation, and error handling across all data transformation steps. Additionally, your involvement will span end-to-end data lifecycle management, from requirement gathering through deployment and post-deployment support. Providing technical documentation, pipeline monitoring dashboards, and ensuring production readiness will be crucial. You will also support data migration projects involving legacy platforms to Azure cloud environments. You will need to follow Agile/Scrum practices, contribute to sprint planning, retrospectives, and stand-ups with a strong ownership of deliverables.

Your mandatory skills should include 6+ years of experience in data engineering with expertise in Azure Data Factory, Azure SQL, and related Azure services. Hands-on experience in building ADF pipelines that integrate with Guidewire Insurance Suite is a must. Proficiency in data transformation using SQL, Stored Procedures, and Data Flows is required, along with experience working on Guidewire data models and understanding PC/Billing/Claim schema and business entities. A solid understanding of cloud-based data warehousing concepts, data lake patterns, and data governance best practices is expected. You should also have experience in integrating Guidewire systems with downstream reporting and analytics platforms. Excellent debugging skills will be necessary to resolve complex data transformation and pipeline performance issues.

Preferred skills include prior experience in the Insurance (P&C preferred) domain or implementing Guidewire DataHub and/or InfoCenter. Familiarity with tools like Power BI, Databricks, or Synapse Analytics is a plus.

In terms of work mode, this position requires 100% onsite presence at the Hyderabad office with no remote or hybrid flexibility. Strong interpersonal and communication skills are essential as you will be working with cross-functional teams and client stakeholders. A self-starter mindset with a high sense of ownership is crucial, as you must thrive under pressure and tight deadlines.
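
As a rough illustration of the ADF-centric workflow described above, here is a minimal Python sketch that triggers a pipeline run and polls its status using the Azure SDK for Python. The subscription, resource group, factory, and pipeline names are placeholders, and the idea of a pipeline ingesting Guidewire ClaimCenter extracts is assumed for the example.

    import time
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    # Placeholder identifiers; replace with real subscription and resource names
    subscription_id = "<subscription-id>"
    resource_group = "rg-insurance-data"
    factory_name = "adf-guidewire-ingest"
    pipeline_name = "pl_claimcenter_to_lake"

    client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

    # Kick off a parameterized run (the parameter name is hypothetical)
    run = client.pipelines.create_run(
        resource_group, factory_name, pipeline_name,
        parameters={"load_date": "2025-07-01"},
    )

    # Poll until the run leaves the queued/in-progress states, then report the outcome
    while True:
        status = client.pipeline_runs.get(resource_group, factory_name, run.run_id)
        if status.status not in ("Queued", "InProgress"):
            break
        time.sleep(30)

    print(f"Pipeline {pipeline_name} finished with status: {status.status}")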

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a UX Lead, you will be responsible for leading end-to-end user experience design processes including research, wireframing, prototyping, and final UI development for both web and mobile platforms. You will collaborate closely with product, engineering, and data teams to seamlessly integrate user experience with AWS/Azure and Databricks insights, ensuring a cohesive and impactful user interface. Conducting usability testing, iterating designs, and ensuring accessibility and feature performance will be key parts of your role.

In this position, mentoring junior designers and advocating for UX best practices within agile squads will be essential. You will also play a crucial role in enforcing secure, compliant, and data-governed design standards to maintain high-quality user experiences.

To excel in this role, you should possess a strong proficiency in UX tools such as Figma or Sketch, along with expertise in prototyping, user research, and responsive design. Familiarity with AWS/Azure UX patterns and the ability to embed Databricks analytics into workflows are mandatory requirements. Additionally, you should demonstrate skill in conducting usability studies, leveraging UX analytics, and ensuring effective collaboration through excellent stakeholder communication and facilitation. Being agile-savvy, delivery-focused, and having experience in mentorship will further complement your fit for this position. A solid foundation in visual design principles, including information architecture, hierarchy, and interface consistency, will be valuable in driving successful user experiences.

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote

Hi folks, please check the JD and share your updated resume to my email naresh@sapphiresoftwaresolutions.com and ping me on WhatsApp (+91 970-529-6474) along with your resume.

Position: Data Scientist
Duration: 6-month contract + extensions/conversion
Location: Remote
Shift: Day time / regular IST hours

We need someone with a degree in Statistics or Mathematics.

Must Haves:
5+ years of experience as a Data Scientist
Degree in Statistics or Mathematics
Hands-on experience with Python and SQL
Experience with Databricks, Snowflake, MS Azure preferred
Med device, healthcare, or pharmaceutical background preferred

Description: We are looking for a Data Scientist based in India to join the analytics organization at a global medical device client. This candidate will support a compliance initiative and will be the lead data scientist overseeing algorithm buildouts. They will collaborate with the data engineering team to source and clean the data, and will work closely with stakeholders to understand the specific requirements and create algorithms based on those specifications. The technology landscape includes Databricks, Python, SQL, Snowflake, and Azure. The ideal candidate will have experience or a background in Statistics.
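
For a sense of the day-to-day stack mentioned in the description, here is a minimal sketch of pulling a table from Snowflake into a Databricks notebook with the built-in Snowflake connector and fitting a simple baseline model in Python. The connection options, table name, feature columns, and the compliance-scoring framing are all assumptions for illustration, not details from the posting.

    from pyspark.sql import SparkSession
    from sklearn.linear_model import LogisticRegression

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical Snowflake connection options (credentials would normally come from a secret scope)
    sf_options = {
        "sfUrl": "<account>.snowflakecomputing.com",
        "sfUser": "<user>",
        "sfPassword": "<password>",
        "sfDatabase": "COMPLIANCE",
        "sfSchema": "ANALYTICS",
        "sfWarehouse": "ANALYTICS_WH",
    }

    # Read a curated feature table prepared by the data engineering team (name assumed)
    df = (
        spark.read.format("snowflake")
        .options(**sf_options)
        .option("dbtable", "DEVICE_COMPLAINT_FEATURES")
        .load()
        .toPandas()
    )

    # Fit a simple baseline classifier on assumed numeric feature and label columns
    X = df[["complaint_rate", "days_since_audit", "region_code"]]
    y = df["requires_review"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("Baseline training accuracy:", model.score(X, y))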

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process. Your role will be pivotal in driving the success of application initiatives and fostering a collaborative environment.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with data governance and compliance standards.
- Ability to analyze and interpret complex data sets.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

8.0 - 13.0 years

11 - 16 Lacs

Noida, Greater Noida, Delhi / NCR

Work from Office

Core Responsibilities
A seasoned Databricks developer with 8+ years of experience is expected to design, build, and maintain scalable data pipelines and workflows using Azure Databricks. This includes:
- Developing ETL pipelines using Python, SQL, and Delta Live Tables
- Managing job execution through Databricks Jobs, including scheduling and parameterisation
- Implementing data transformation using built-in Databricks features
- Collaborating with data architects and analysts to ensure data models align with business needs
- Implementing security and governance using features like encryption, access control, and data lineage tracking
- Managing data performance using built-in Databricks features (e.g. concurrency for better query execution)
- Must be an expert in writing complex SQL queries.

Technical Skills
The following technical proficiencies are commonly required:
- Languages: Python, SQL
- Databricks Features: Delta Lake, Delta Live Tables, MLflow, Databricks Jobs, Secrets Management, encryption, access control, etc.
- Azure Integration: Azure Key Vault, Azure Functions, Azure Data Lake, Azure DevOps
- Deployment Pipelines: Experience with CI/CD for Databricks and integration with Snowflake schemas (DEV, PRE-PROD, PROD)
- Monitoring & Optimisation: Cost analysis of clusters, performance tuning, and job concurrency management

Desirable Experience
- Working knowledge of Control-M for orchestrating Databricks jobs
- Familiarity with Power BI integration and semantic modelling
- Exposure to geospatial or time-series data processing using Databricks
- Basic knowledge of data modelling
- Exposure to the Agile methodology of software development is good to have

Share your resume at Aarushi.Shukla@coforge.com if you are an early or immediate joiner.

Soft Skills & Qualifications
- Strong problem-solving, communication, and interpersonal skills.
- Ability to work collaboratively across teams (e.g. HR, Architecture, Data Engineering).
- A degree in Computer Science, Data Engineering, or a related field is typically preferred.
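
To illustrate the first responsibility above, here is a minimal Delta Live Tables sketch in Python: a raw ingestion table fed by Auto Loader and a cleansed table with a data-quality expectation. The landing path, table names, and quality rule are assumptions; the code is meant to run inside a Databricks DLT pipeline, where `spark` and the `dlt` module are provided by the runtime.

    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw events ingested from cloud storage via Auto Loader")
    def raw_events():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/landing/events/")  # hypothetical landing path
        )

    @dlt.table(comment="Cleansed events with a basic quality rule applied")
    @dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
    def clean_events():
        return (
            dlt.read_stream("raw_events")
            .withColumn("ingest_date", F.current_date())
        )

Scheduling and parameterisation of such a pipeline would then typically be handled by a Databricks Job that runs it on a schedule, with values such as the landing path supplied as pipeline or job parameters.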

Posted 1 week ago

Apply

6.0 years

0 Lacs

Greater Lucknow Area

On-site

Kyndryl Data Science – Bengaluru, Karnataka; Gurugram, Haryana; Hyderabad, Telangana; Chennai, Tamil Nadu; Pune, Maharashtra; Sector 135, Noida, Uttar Pradesh, India. Posted on Jul 14, 2025.

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.

As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.

Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset – a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful.

So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills And Experience
- 6 years of experience as a Data Engineer
- 2 to 3 years of relevant experience in ELK
- Expertise in data mining, data storage and Extract-Transform-Load (ETL) processes
- Experience in data pipeline development and tooling, e.g., Glue, Databricks, Synapse, or Dataproc
- Experience with both relational and NoSQL databases, e.g., PostgreSQL, DB2, MongoDB
- Excellent problem-solving, analytical, and critical thinking skills
- Ability to manage multiple projects simultaneously, while maintaining a high level of attention to detail
- Communication skills: must be able to communicate with both technical and non-technical colleagues, to derive technical requirements from business needs and problems

Preferred Skills And Experience
- Experience working as a Data Engineer and/or in cloud modernization
- Experience in Data Modelling, to create a conceptual model of how data is connected and how it will be used in business processes
- Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization
- Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate
- Understanding of social coding and Integrated Development Environments, e.g., GitHub and Visual Studio
- Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
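
As a concrete flavor of the ETL work listed above, here is a small PySpark sketch that extracts from a relational source over JDBC, applies a simple cleansing transformation, and loads the result into a Delta dataset on a platform such as Databricks. The connection details, table names, and cleansing rule are assumptions for illustration only.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read a source table from PostgreSQL over JDBC (connection details assumed)
    orders = (
        spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://example-host:5432/sales")
        .option("dbtable", "public.orders")
        .option("user", "etl_user")
        .option("password", "<from-secret-store>")
        .load()
    )

    # Transform: normalize status values and drop obviously bad rows
    cleaned = (
        orders
        .withColumn("status", F.lower(F.trim("status")))
        .filter(F.col("order_id").isNotNull())
    )

    # Load: write a refined Delta dataset for analysts and downstream models
    cleaned.write.format("delta").mode("overwrite").save("/mnt/refined/orders")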

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: Data Engineer Sr. Analyst (ACS Song)
Management Level: Level 10 - Sr. Analyst
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Python/Scala, Pyspark/Pytorch
Good to have skills: Redshift

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
- Solving complex data problems to deliver insights that help our business achieve its goals
- Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format
- Creating data products for analytics team members to improve productivity
- Calling AI services such as vision and translation to generate an outcome that can be used in further steps along the pipeline
- Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions
- Preparing data to create a unified database and building tracking solutions ensuring data quality
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD

Professional And Technical Skills
- Expert in at least two of Python, Scala, Pyspark, Pytorch, and Javascript
- Extensive experience in data analysis (big data / Apache Spark environments), data libraries (e.g. Pandas, SciPy, Tensorflow, Keras, etc.), and SQL, with 2-3 years of hands-on experience working on these technologies
- Experience in one of the many BI tools such as Tableau, Power BI, or Looker
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs
- Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse

Additional Information
- Experience working in cloud data warehouses like Redshift or Synapse
- Certification in any one of the following or equivalent: AWS Certified Data Analytics – Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core / Data Engineer; Databricks Data Engineering

About Our Company | Accenture

Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation
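
One of the responsibilities above, calling an AI service such as translation mid-pipeline, could look roughly like the following Python sketch, here assuming the Azure AI Translator v3 REST endpoint. The subscription key, region, and the idea of translating free-text comments before analysis are illustrative assumptions rather than details from the posting.

    import requests
    import pandas as pd

    ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
    HEADERS = {
        "Ocp-Apim-Subscription-Key": "<translator-key>",   # assumed secret
        "Ocp-Apim-Subscription-Region": "<region>",
        "Content-Type": "application/json",
    }

    def translate_to_english(texts):
        """Translate a list of strings to English via the Translator REST API."""
        body = [{"text": t} for t in texts]
        resp = requests.post(
            ENDPOINT,
            params={"api-version": "3.0", "to": "en"},
            headers=HEADERS,
            json=body,
            timeout=30,
        )
        resp.raise_for_status()
        return [item["translations"][0]["text"] for item in resp.json()]

    # Hypothetical unstructured touchpoint data: free-text comments in mixed languages
    comments = pd.DataFrame({"comment": ["Sehr gutes Produkt", "Livraison trop lente"]})
    comments["comment_en"] = translate_to_english(comments["comment"].tolist())
    print(comments)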

Posted 1 week ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies