
1401 Databricks Jobs - Page 46

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

12.0 - 18.0 years

50 - 80 Lacs

Hyderabad

Work from Office

Executive Director - Data Management

Company Overview: Accordion is a global private equity-focused financial consulting firm specializing in driving value creation through services rooted in Data & Analytics and powered by technology. Accordion works at the intersection of Private Equity sponsors and portfolio companies' management teams across every stage of the investment lifecycle. We provide hands-on, execution-oriented support, driving value through the office of the CFO by building data and analytics capabilities and identifying and implementing strategic initiatives rooted in data and analytics. Accordion is headquartered in New York City with 10 offices worldwide. Join us and make your mark on our company.

Data & Analytics (Accordion | Data & Analytics): Accordion's Data & Analytics (D&A) practice in India delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. D&A team members deliver data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets spanning Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Working at Accordion in India means joining 800+ analytics, data science, finance, and technology experts in a high-growth, agile, and entrepreneurial environment to transform how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Join us and experience a better way to work!
Location: Hyderabad, Telangana

Role Overview: Accordion is looking for an experienced Enterprise Data Architect to lead the strategy, design, and implementation of data architectures across all its data management projects. He/she will be part of the technology team and will possess in-depth knowledge of distinct types of data architectures and frameworks, including distributed large-scale implementations. He/she will collaborate closely with the client partnership team to design and recommend robust, scalable data architectures to clients and will work with engineering teams to implement them in on-premises or cloud-based environments. He/she will be a data evangelist and will conduct knowledge-sharing sessions in the company on various data management topics to spread awareness of data architecture principles and improve the overall capabilities of the team. The Enterprise Data Architect will also conduct design review sessions to validate and verify implementations, emphasize and implement best practices, and produce exhaustive documentation in line with the design philosophy. He/she will have excellent communication skills and will possess industry-standard certification in data architecture.

What you will do:
• Partner with clients to understand their business and create comprehensive requirements to enable development of optimal data architecture.
• Translate business requirements into logical and physical designs of databases, data warehouses, and data streams.
• Analyze, plan, and define the data architecture framework, including security, reference data, metadata, and master data.
• Create elaborate data management processes and procedures and consult with senior management to share the knowledge.
• Collaborate with client and internal project teams to devise and implement data strategies, build models, and assess stakeholder needs and goals.
• Develop application programming interfaces (APIs) to extract and store data in the most optimal manner.
• Align business requirements with technical architecture and collaborate with the technical teams for implementation and tracking purposes.
• Research and track the latest developments in the field to maintain expertise in the latest best practices and techniques within the industry.

Ideally, you have:
• An undergraduate degree (B.E./B.Tech.); tier-1/tier-2 colleges are preferred.
• 12+ years of experience in a related field.
• Experience in designing logical and physical data architectures in various RDBMS (SQL Server, Oracle, MySQL, etc.), non-RDBMS (MongoDB, Cassandra, etc.), and data warehouse (Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.) environments.
• Deep knowledge of and implementation experience with modern data warehouse principles using Kimball and Inmon models or Data Vault, including their application based on data quality requirements.
• In-depth knowledge of at least one cloud infrastructure (AWS, Azure, Google Cloud) for solution design, development, and delivery (mandatory).
• Proven ability to take initiative, be innovative, and drive work through to completion.
• An analytical mind with a strong problem-solving attitude.
• Excellent communication skills, both written and verbal.
• Any Enterprise Data Architect certification is an added advantage.

Why explore a career at Accordion:
• High-growth environment: Semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
• Cross-domain exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
• Entrepreneurial environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
• Fun culture and peer group: A non-bureaucratic, fun working environment with a strong peer group that will challenge you and accelerate your learning curve.
Other benefits for full-time employees:
• Health and wellness programs, including employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps, discounted health services (including vision and dental) for employees and family members, free doctor's consultations, counsellors, etc.
• Corporate meal card options for ease of use and tax benefits.
• Team lunches and company-sponsored team outings and celebrations.
• Robust leave policy to support work-life balance, including a specially designed leave structure to support women employees for maternity and related needs.
• Reward and recognition platform to celebrate professional and personal milestones.
• A positive and transparent work environment, with various employee engagement and benefit initiatives to support personal and professional learning and development.

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Chennai

Hybrid

Roles & Responsibilities:
• We are looking for a strong Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines.
• Integrate data from multiple sources and vendors to provide holistic insights from the data.
• Build and manage Data Lake and Data Warehouse solutions, design data models, create ETL processes, and implement data quality mechanisms.
• Perform exploratory data analysis (EDA) to troubleshoot data-related issues and assist in their resolution.
• Experience in client interaction, both oral and written, is required.
• Experience in mentoring juniors and providing the required guidance to the team.

Required Technical Skills
• Extensive experience in languages such as Python, PySpark, and SQL (basic and advanced).
• Strong experience in data warehousing, ETL, data modelling, building ETL pipelines, and data architecture.
• Must be proficient in Redshift, Azure Data Factory, Snowflake, etc.
• Hands-on experience with cloud services such as AWS S3, Glue, Lambda, CloudWatch, and Athena.
• Knowledge of Dataiku and Big Data technologies, along with basic knowledge of BI tools such as Power BI and Tableau, is a plus.
• Sound knowledge of data management, data operations, data quality, and data governance.
• Knowledge of SFDC and Waterfall/Agile methodology.
• Strong knowledge of the Pharma domain / life sciences commercial data operations.

Qualifications
• Bachelor's or Master's in Engineering/MCA or an equivalent degree.
• 4-6 years of relevant industry experience as a Data Engineer.
• Experience working with Pharma syndicated data such as IQVIA, Veeva, Symphony; Claims, CRM, Sales, Open Data, etc.
• High motivation, good work ethic, maturity, self-organization, and personal initiative.
• Ability to work collaboratively and provide support to the team.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving skills.

Location
• Chennai, India
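The extract-transform-load cycle this role centers on can be sketched in miniature. The snippet below is illustrative only: the table name, columns, and cleaning rule are assumptions, not taken from the posting, and the Python standard library stands in for PySpark/Glue so it stays self-contained:

```python
import sqlite3

# Extract: raw vendor records, as they might arrive from multiple sources
raw_records = [
    {"vendor": "A", "sku": "X1", "units": "10", "price": "2.50"},
    {"vendor": "B", "sku": "X1", "units": "n/a", "price": "2.40"},  # malformed row
    {"vendor": "A", "sku": "X2", "units": "5", "price": "7.00"},
]

def transform(records):
    """Transform: cast types and drop rows that fail a basic quality check."""
    clean = []
    for r in records:
        try:
            clean.append((r["vendor"], r["sku"], int(r["units"]), float(r["price"])))
        except ValueError:
            continue  # a real pipeline would quarantine and log the row instead
    return clean

# Load: write the cleaned rows into a warehouse-style table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (vendor TEXT, sku TEXT, units INT, price REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", transform(raw_records))

total_units = conn.execute("SELECT SUM(units) FROM sales").fetchone()[0]
print(total_units)  # 15 -- the malformed row was filtered out before load
```

The same shape scales up directly: in a PySpark job the transform becomes a DataFrame operation and the load targets Redshift or Snowflake, but the extract/clean/load separation stays the same.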

Posted 1 month ago

Apply

1.0 - 5.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Role: Data Scientist
Location: Bangalore
Timings: Full time (as per company timings)
Notice Period: Immediate joiners only
Experience: 5 years

We are looking for a highly motivated and skilled Data Scientist to join our growing team. The ideal candidate should possess a robust background in data science, machine learning, and statistical analysis, with a passion for uncovering insights from complex datasets. This role demands hands-on experience in Python and various ML libraries, strong business acumen, and effective communication skills for translating data insights into strategic decisions.

Key Responsibilities
Develop, implement, and optimize machine learning models for predictive analytics and business decision-making. Work with both structured and unstructured data to extract valuable insights and patterns. Leverage Python and standard ML libraries (NumPy, Pandas, SciPy, Scikit-Learn, TensorFlow, PyTorch, Keras, Matplotlib) for data modeling and analysis. Design and build data pipelines for streamlined data processing and integration. Conduct exploratory data analysis (EDA) to identify trends, anomalies, and business opportunities. Partner with cross-functional teams to embed data-driven strategies into core business operations. Create compelling data stories through visualization techniques to convey findings to non-technical stakeholders. Stay abreast of the latest ML/AI innovations and industry best practices.

Required Skills & Qualifications
5 years of proven experience as a Data Scientist working in machine learning. Proficiency in Python and key data science libraries. Experience with ML frameworks such as TensorFlow, Keras, or PyTorch. Strong understanding of SQL and relational databases. Solid grounding in statistical analysis, hypothesis testing, and feature engineering. Familiarity with data visualization tools like Matplotlib, Seaborn, or Plotly. Demonstrated ability to work with large datasets and solve complex analytical problems.
Excellent communication and data storytelling skills. Knowledge of Marketing Mix Modeling is a plus.

Preferred Skills
Hands-on experience with cloud platforms like AWS, Azure, or GCP. Exposure to big data technologies such as Hadoop, Spark, or Databricks. Familiarity with NLP, computer vision, or deep learning. Understanding of A/B testing and experimental design methodologies.
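The EDA responsibility above (identifying trends and anomalies) can be illustrated with a tiny, self-contained sketch. The data and the 2-sigma threshold are made up for demonstration; in practice this would be done with Pandas over a real dataset:

```python
import statistics

# Toy daily-sales series with one obvious anomaly (values are illustrative)
daily_sales = [102, 98, 105, 99, 101, 97, 300, 103, 100, 96]

mean = statistics.mean(daily_sales)
stdev = statistics.stdev(daily_sales)

# Flag points more than 2 standard deviations from the mean
anomalies = [x for x in daily_sales if abs(x - mean) > 2 * stdev]
print(anomalies)  # [300]
```

A simple z-score cut like this is only a starting point; a real EDA pass would also look at seasonality, distribution shape, and missingness before labeling a point anomalous.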

Posted 1 month ago

Apply

1.0 - 5.0 years

10 - 14 Lacs

Pune

Work from Office

Technical Project Manager

Company Overview: At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.

Responsibilities: Lead and manage end-to-end data and analytics projects, ensuring timely delivery and alignment with business objectives. Collaborate with cross-functional teams, including data scientists, analysts, engineers, and business stakeholders, to define project scope, goals, and deliverables. Develop detailed project plans, including timelines, milestones, resource allocation, and risk management strategies. Monitor project progress, identify potential issues, and implement corrective actions to ensure project success. Facilitate effective communication and collaboration among team members and stakeholders. Ensure data quality, integrity, and security throughout the project lifecycle. Stay updated with the latest trends and technologies in data and analytics to drive continuous improvement and innovation. Provide regular project updates and reports to senior management and stakeholders. Effective leadership, interpersonal, and communication skills are required, along with the ability to stay calm and composed and deliver under pressure. Strategic thinkers with adequate cost control/management experience would be a plus. Strong knowledge of change, risk, and resource management is required, as is a thorough understanding of project/program management techniques and methods from initiation to closure, working knowledge of program/project management tools like JIRA, Azure DevOps Boards, Basecamp, and MS Project, and excellent communication skills and clarity of thought.
Excellent problem-solving ability, with escalation-handling experience.

Qualifications: Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field; a Master's degree is a plus. Proven experience as a Technical Project Manager, preferably in data and analytics projects. Strong understanding of data management, analytics, and visualization tools and technologies. Excellent project management skills, including the ability to manage multiple projects simultaneously. Proficiency in project management software (e.g., JIRA, MS Project, ADO). Strong analytical and problem-solving skills. Excellent communication and interpersonal skills. Ability to work effectively in a fast-paced, dynamic environment.

Preferred Skills: Experience with big data technologies (e.g., Hadoop, Spark, Azure, Databricks). Knowledge of machine learning and artificial intelligence. Certification in project management (e.g., PMP, PRINCE2).

Work location: Remote / Pune
Work timings: 2.30 pm - 11.30 pm

Posted 1 month ago

Apply

5.0 - 9.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities As a Tech Lead, the candidate should be able to work as an individual contributor as well as a people manager Be able to work on data pipelines and databases Be able to work on data-intensive applications or systems Be able to lead the team and have the soft skills for the same Be able to review code and designs and mentor team members Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Graduate degree or equivalent experience Experience working on Databricks Well versed with Apache Spark, Azure, SQL, PySpark, Airflow, Hadoop, UNIX, etc. Proven ability to work on a big data technology stack on cloud and on-prem Proven ability to communicate effectively with the team Proven ability to lead and mentor the team Proven soft skills for people management

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Lead the migration of ETLs from the on-premises SQL Server-based data warehouse to Azure Cloud, Databricks and Snowflake Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark) Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL Implement DevOps practices and CI/CD pipelines using GitActions Collaborate with cross-functional teams to ensure seamless integration and data flow Optimize and troubleshoot data pipelines and workflows Ensure data security and compliance with industry standards Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications 6+ years of experience as a Cloud Data Engineer Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks Solid experience in ETL development using on-premises databases and ETL technologies Experience with Python or other scripting languages for data processing Experience with Agile methodologies Proficiency in DevOps and CI/CD practices using GitActions Proven excellent problem-solving skills and ability to work independently Proven solid communication and collaboration skills Proven solid analytical skills and attention to detail Proven ability to adapt to new technologies and learn quickly Preferred Qualifications Certification in Azure or Databricks Experience with data modeling and database design Experience with development in Snowflake for data engineering and analytics workloads Knowledge of data governance and data quality best practices Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
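A pattern central to migrations like the one described above is incremental (watermark-based) extraction: each run pulls only rows changed since the last successful load. The sketch below is a hypothetical, plain-Python stand-in for what an ADF pipeline or Databricks job would do; the `orders` table, column names, and dates are invented for illustration:

```python
import sqlite3

# Hypothetical source table standing in for the on-premises warehouse
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INT, amount REAL, updated_at TEXT)")
src.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, 10.0, "2024-01-01"),
    (2, 20.0, "2024-01-02"),
    (3, 30.0, "2024-01-03"),
])

def incremental_extract(conn, watermark):
    """Pull only rows changed since the last successful load (the watermark),
    and return the new watermark to persist for the next run."""
    rows = conn.execute(
        "SELECT id, amount, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (watermark,),
    ).fetchall()
    new_watermark = rows[-1][2] if rows else watermark
    return rows, new_watermark

# First run after loading everything up to 2024-01-01
rows, wm = incremental_extract(src, "2024-01-01")
print(len(rows), wm)  # 2 2024-01-03
```

In ADF this corresponds to the documented "incremental copy with a watermark" pattern, with the watermark stored in a control table rather than a return value.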

Posted 1 month ago

Apply

5.0 - 9.0 years

13 - 18 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are seeking a highly skilled and experienced Technical Delivery Lead to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for managing and leading the migration of an on-premises Enterprise Data Warehouse (SQLServer) to a modern cloud-based data platform utilizing Azure Cloud data tools and Snowflake. This platform will enable offshore (non-US) resources to build and develop Reporting, Analytics, and Data Science solutions. 
Primary Responsibilities Manage and lead the migration of the on-premises SQLServer Enterprise Data Warehouse to Azure Cloud and Snowflake Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, Databricks, and Snowflake Manage and guide the development of cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake Implement and oversee DevOps practices and CI/CD pipelines using GitActions Collaborate with cross-functional teams to ensure seamless integration and data flow Optimize and troubleshoot data pipelines and workflows Ensure data security and compliance with industry standards Provide technical leadership and mentorship to the engineering team Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications 8+ years of experience in a Cloud Data Engineering role, with 3+ years in a leadership or technical delivery role Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), Databricks, and Snowflake Experience with Python or other scripting languages for data processing Experience with Agile methodologies and project management tools Solid experience in developing cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake Proficiency in DevOps and CI/CD practices using GitActions Proven excellent problem-solving skills and ability to work independently Proven solid communication and collaboration skills. Solid analytical skills and attention to detail Proven track record of successful project delivery in a cloud environment Preferred Qualifications Certification in Azure or Snowflake Experience working with automated ETL conversion tools used during cloud migrations (SnowConvert, BladeBridge, etc.) Experience with data modeling and database design Knowledge of data governance and data quality best practices Familiarity with other cloud platforms (e.g., AWS, Google Cloud)

Posted 1 month ago

Apply

4.0 - 7.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Data Pipeline Management: Oversee the design, deployment, and maintenance of data pipelines to ensure they are optimized and highly available Data Collection and Storage: Build and maintain systems for data collection, storage, and processing ETL Processes: Develop and manage ETL (Extract, Transform, Load) processes to convert raw data into usable formats Collaboration: Work closely with data analysts, data scientists, and other stakeholders to gather technical requirements and ensure data quality System Monitoring: Monitor existing metrics, analyze data, and identify opportunities for system and process improvements Data Governance: Ensure data compliance and security needs are met in system construction Mentorship: Oversee and mentor junior data engineers, ensuring proper execution of their duties Reporting: Develop queries for ad hoc business projects and ongoing reporting Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Bachelor's degree in engineering or equivalent experience Minimum 3-4 years of experience in SQL (joins, stored procedures, performance tuning), Azure, PySpark, Databricks, and the Big Data ecosystem Flexibility to work in different shift timings Flexibility to work as a DevOps Engineer At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
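The data governance responsibility in this listing (ensuring compliance and quality before data reaches downstream systems) typically takes the form of automated quality gates. This is a hypothetical sketch: the field names, rules, and sample rows are assumptions for illustration, not from the posting:

```python
# Illustrative data-quality gate run before loading rows into a pipeline.
def quality_report(rows, required=("member_id", "claim_amount")):
    """Count rows failing null and range checks."""
    failures = {"missing_field": 0, "negative_amount": 0}
    for row in rows:
        if any(row.get(f) is None for f in required):
            failures["missing_field"] += 1
        elif row["claim_amount"] < 0:
            failures["negative_amount"] += 1
    return failures

sample = [
    {"member_id": "M1", "claim_amount": 120.0},
    {"member_id": None, "claim_amount": 50.0},   # fails null check
    {"member_id": "M3", "claim_amount": -5.0},   # fails range check
]
print(quality_report(sample))  # {'missing_field': 1, 'negative_amount': 1}
```

In a Databricks/PySpark pipeline the same checks would be expressed as DataFrame filters (or a framework such as Great Expectations), with failing rows quarantined and counted rather than silently dropped.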

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 15 Lacs

Gurugram

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Design and develop client-side and server-side architecture Build the front-end of applications through appealing visual design Develop and manage well-functioning databases and applications Write effective APIs and integrate third-party services Ensure cross-platform optimization for mobile devices Collaborate with graphic designers to implement web design features Troubleshoot, debug, and upgrade software Write technical documentation and maintain code quality Stay updated with emerging technologies and industry trends Work with data scientists and analysts to improve software Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Bachelor's degree in Computer Science, Engineering, or a related field 4+ years of proven experience as a Full Stack Engineer or similar role Work experience in front-end technologies Experience with back-end languages (Python, PySpark; Java is good to have) Experience with cloud services (AWS, Azure, Google Cloud) Knowledge of DevOps practices and CI/CD pipelines Familiarity with containerization technologies (Docker, Kubernetes) Familiarity with databases (MySQL, MongoDB, Databricks, Hive) Proven excellent problem-solving skills and attention to detail Demonstrated ability to work independently and as part of a team At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Chennai

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities

Technical Leadership
Technical Guidance: Provide technical direction and guidance to the development team, ensuring that best practices are followed in coding standards, architecture, and design patterns
Architecture Design: Design and oversee the architecture of software solutions to ensure they are scalable, reliable, and performant
Technology Stack: Make informed decisions on the technology stack (.Net for backend services, React for frontend development) to ensure it aligns with project requirements
Code Reviews: Conduct regular code reviews to maintain code quality and provide constructive feedback to team members
Hands-on Development: Engage in hands-on coding and development tasks, particularly in complex or critical areas of the project

Project Management
Task Planning: Break down project requirements into manageable tasks and assign them to team members while tracking progress
Milestone Tracking: Monitor project milestones and deliverables to ensure timely completion of projects

Data Pipeline & ETL Management
Data Pipeline Design: Design robust data pipelines that can handle large volumes of data efficiently using appropriate technologies (e.g., Apache Kafka)
ETL Processes: Develop efficient ETL processes to extract, transform, and load data from various sources into the analytics platform

Product Development
Feature Development: Lead the development of new features from concept through implementation while ensuring they meet user requirements
Integration Testing: Ensure thorough testing (unit tests, integration tests) is conducted for all features before deployment

Collaboration
Cross-functional Collaboration: Collaborate closely with product managers, UX/UI designers, QA engineers, and other stakeholders to deliver high-quality products
Stakeholder Communication: Communicate effectively with stakeholders regarding project status updates, technical challenges, and proposed solutions

Quality Assurance
Performance Optimization: Identify performance bottlenecks within applications or data pipelines and implement optimizations
Bug Resolution: Triage bugs reported by users or QA teams promptly and ensure timely resolution

Innovation & Continuous Improvement
Stay Updated with Trends: Keep abreast of emerging technologies in .Net, React, and data pipeline/ETL tools (like Apache Kafka or Azure Data Factory) that could benefit the product
Process Improvement: Continuously seek ways to improve engineering processes for increased efficiency and productivity within the team

Mentorship & Team Development
Mentorship: Mentor junior developers by providing guidance on their technical growth as well as career development opportunities
Team Building Activities: Foster a positive team environment through regular meetings (stand-ups) and brainstorming sessions/workshops focused on problem-solving techniques related to our tech stack (.Net/React/data pipelines)

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so These responsibilities collectively ensure that the Lead Software Engineer not only contributes technically but also plays a crucial role in guiding their team towards successful project delivery for advanced data analytics products utilizing modern technologies such as .Net backend services combined seamlessly alongside frontend interfaces built using React coupled together via robustly engineered pipelines facilitating efficient ETL processes necessary powering insightful analytical outcomes beneficial end-users alike! Required Qualifications Bachelor’s DegreeA Bachelor’s degree in Computer Science, Software Engineering, Information Technology, or a related field Professional Experience8+ years of experience in software development with significant time spent on both backend (.Net) and frontend (React) technologies Leadership ExperienceProven experience in a technical leadership role where you have led projects or teams Technical Expertise: Extensive experience with .Net framework (C#) for backend development Proficiency with React for frontend development Solid knowledge and hands-on experience with data pipeline technologies (e.g., Apache Kafka) Solid understanding of ETL processes and tools such as DataBricks, ADF, Scala/Spark Technical Skills Architectural KnowledgeExperience designing scalable and high-performance architectures Cloud ServicesExperience with cloud platforms such as Azure, AWS or Google Cloud Platform Software Development LifecycleComprehensive understanding of the software development lifecycle (SDLC), including Agile methodologies Database ManagementProficiency with SQL and NoSQL databases (e.g., SQL Server, MongoDB) Leadership AbilitiesProven solid leadership skills with the ability to inspire and motivate teams Communication Skills: Proven superior verbal and written 
communication skills for effective collaboration with cross-functional teams and stakeholders Problem-Solving AbilitiesProven solid analytical and problem-solving skills Preferred Qualification Advanced Degree (Optional)A Master’s degree in a relevant field

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Gurugram

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a highly skilled and experienced Senior Cloud Data Engineer to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for migrating our on-premises Enterprise Data Warehouse (SQL Server) to a modern cloud-based data platform built on Azure Cloud data tools, Delta Lake and Snowflake.

Primary Responsibilities
- Lead the migration of ETLs from the on-premises SQL Server based data warehouse to Azure Cloud, Databricks and Snowflake
- Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark)
- Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL
- Implement DevOps practices and CI/CD pipelines using GitHub Actions
- Collaborate with cross-functional teams to ensure seamless integration and data flow
- Optimize and troubleshoot data pipelines and workflows
- Ensure data security and compliance with industry standards
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- 6+ years of experience as a Cloud Data Engineer
- Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks
- Solid experience in ETL development using on-premises databases and ETL technologies
- Experience with Python or other scripting languages for data processing
- Experience with Agile methodologies
- Proficiency in DevOps and CI/CD practices using GitHub Actions
- Proven excellent problem-solving skills and ability to work independently
- Solid communication and collaboration skills
- Solid analytical skills and attention to detail
- Ability to adapt to new technologies and learn quickly

Preferred Qualifications
- Certification in Azure or Databricks
- Experience with data modeling and database design
- Experience with development in Snowflake for data engineering and analytics workloads
- Knowledge of data governance and data quality best practices
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
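The migration pattern this role centers on — extract from a relational source, transform, load into a target — can be sketched in miniature with plain Python and SQLite. This is an illustrative sketch only: the table and column names are hypothetical, and a real Databricks/ADF pipeline would use PySpark and cloud storage rather than SQLite.

```python
import sqlite3

def extract(conn):
    # Extract: pull raw rows from the (hypothetical) legacy source table
    return conn.execute("SELECT id, amount_cents, region FROM raw_claims").fetchall()

def transform(rows):
    # Transform: convert cents to currency units and normalize region codes
    return [(rid, cents / 100.0, region.strip().upper()) for rid, cents, region in rows]

def load(conn, rows):
    # Load: write cleaned rows into the target table
    conn.executemany("INSERT INTO claims(id, amount, region) VALUES (?, ?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(conn, transform(extract(conn)))

# Demo with an in-memory database standing in for the on-premises warehouse
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_claims(id INTEGER, amount_cents INTEGER, region TEXT)")
conn.execute("CREATE TABLE claims(id INTEGER, amount REAL, region TEXT)")
conn.executemany("INSERT INTO raw_claims VALUES (?, ?, ?)",
                 [(1, 12550, " east "), (2, 990, "west")])
run_pipeline(conn)
print(conn.execute("SELECT * FROM claims ORDER BY id").fetchall())
```

The three-stage split mirrors how an ADF-orchestrated Databricks job would separate ingestion, transformation, and the write to Delta Lake or Snowflake.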

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 15 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Test Planning & Automation Lead - Cloud Data Modernization

Position Overview: We are seeking a highly skilled and experienced Test Planning & Automation Lead to join our team for a Cloud Data Modernization project. This role involves leading the data validation testing effort for the migration of an on-premises Enterprise Data Warehouse (SQL Server) to a target cloud tech stack comprising Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage, etc.) and Snowflake. The primary goal is to ensure data consistency between the on-premises and cloud environments.

Primary Responsibilities
- Lead Data Validation Testing: Oversee and manage the data validation testing process to ensure data consistency between the on-premises SQL Server and the target cloud environment
- Tool Identification and Automation: Identify and implement appropriate tools to automate the testing process, reducing reliance on manual methods such as Excel or manual file comparisons
- Testing Plan Development: Define and develop a comprehensive testing plan that addresses validations for all data within the data warehouse
- Collaboration: Work closely with data engineers, cloud architects, and other stakeholders to ensure seamless integration and validation of data
- Quality Assurance: Establish and maintain quality assurance standards and best practices for data validation and testing
- Reporting: Generate detailed reports on testing outcomes, data inconsistencies, and corrective actions
- Continuous Improvement: Continuously evaluate and improve testing processes and tools to enhance efficiency and effectiveness
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Bachelor's degree or higher
- Leadership Experience: 6+ years as a testing lead in Data Warehousing or Cloud Data Migration projects
- Automation Tools: Experience with data validation through custom-built Python frameworks and testing automation tools
- Testing Methodologies: Proficiency in defining and implementing testing methodologies and frameworks for data validation
- Technical Expertise: Solid knowledge of Python, SQL Server, Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), Databricks, and Snowflake
- Analytical Skills: Proven excellent analytical and problem-solving skills to identify and resolve data inconsistencies
- Communication: Proven solid communication skills to collaborate effectively with cross-functional teams
- Project Management: Demonstrated ability to manage multiple tasks and projects simultaneously, ensuring timely delivery of testing outcomes

Preferred Qualifications
- Experience in leading data validation testing efforts in cloud migration projects
- Familiarity with Agile methodologies and project management tools

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Support the full data engineering lifecycle including research, proof of concepts, design, development, testing, deployment, and maintenance of data management solutions
- Utilize knowledge of various data management technologies to drive data engineering projects
- Lead data acquisition efforts to gather data from various structured or semi-structured source systems of record to hydrate the client data warehouse and power analytics across numerous health care domains
- Leverage a combination of ETL/ELT methodologies to pull complex relational and dimensional data to support loading of data marts and reporting aggregates
- Eliminate unwarranted complexity and unneeded interdependencies
- Detect data quality issues, identify root causes, implement fixes, and manage data audits to mitigate data challenges
- Implement, modify, and maintain data integration efforts that improve data efficiency, reliability, and value
- Leverage and facilitate the evolution of best practices for data acquisition, transformation, storage, and aggregation that solve current challenges and reduce the risk of future challenges
- Effectively create data transformations that address business requirements and other constraints
- Partner with the broader analytics organization to make recommendations for changes to data systems and the architecture of data platforms
- Support the implementation of a modern data framework that facilitates business intelligence reporting and advanced analytics
- Prepare high-level design documents and detailed technical design documents with best practices to enable efficient data ingestion, transformation and data movement
- Leverage DevOps tools to enable code versioning and code deployment
- Leverage data pipeline monitoring tools to detect data integrity issues before they result in user-visible outages or data quality issues
- Leverage processes and diagnostic tools to troubleshoot, maintain and optimize solutions and respond to customer and production issues
- Continuously support technical debt reduction, process transformation, and overall optimization
- Leverage and contribute to the evolution of standards for high-quality documentation of data definitions, transformations, and processes to ensure data transparency, governance, and security
- Ensure that all solutions meet the business needs and requirements for security, scalability, and reliability
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Bachelor's Degree (preferably in information technology, engineering, math, computer science, analytics, or another related field)
- 3+ years of experience with Microsoft Azure Cloud, Azure Data Factory, Databricks, Spark, Scala/Python, and ADO
- 5+ years of combined experience in data engineering, ingestion, normalization, transformation, aggregation, structuring, and storage
- 5+ years of combined experience working with industry-standard relational, dimensional or non-relational data storage systems
- 5+ years of experience in designing ETL/ELT solutions using tools like Informatica, DataStage, SSIS, PL/SQL, T-SQL, etc.
- 5+ years of experience in managing data assets using SQL, Python, Scala, VB.NET or another similar querying/coding language
- 3+ years of experience working with healthcare data or data to support healthcare organizations

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
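As an illustration of the "detect data quality issues" and "manage data audits" responsibilities above, a minimal rule-based audit might flag missing keys, duplicate keys, and out-of-range values before data reaches downstream marts. All field names and thresholds here are hypothetical, chosen only to show the pattern.

```python
def audit(rows, key, required, ranges):
    """Flag missing values, duplicate keys, and out-of-range numbers.

    Returns (row_index, description) findings for each violation."""
    findings = []
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                findings.append((i, f"missing {col}"))
        k = row.get(key)
        if k in seen:
            findings.append((i, f"duplicate {key}={k}"))
        seen.add(k)
        for col, (lo, hi) in ranges.items():
            v = row.get(col)
            if v is not None and not (lo <= v <= hi):
                findings.append((i, f"{col}={v} outside [{lo}, {hi}]"))
    return findings

rows = [
    {"member_id": "A1", "age": 42},
    {"member_id": "A1", "age": 35},   # duplicate key
    {"member_id": "", "age": 150},    # missing id, implausible age
]
print(audit(rows, key="member_id", required=["member_id"], ranges={"age": (0, 120)}))
```

Attaching the row index to each finding is what makes root-cause analysis practical: the audit report can point straight back to the offending source records.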

Posted 1 month ago

Apply

2.0 - 6.0 years

10 - 15 Lacs

Gurugram

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Leverage AI/ML technologies to transform and improve current Optum systems, working with large-scale computing frameworks and data analysis systems
- Produce innovative solutions driven by exploratory data analysis of unstructured, diverse datasets, applying knowledge of statistics, machine learning, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries leading to prototype development and product improvement
- Collaborate closely with business and product teams to grasp requirements and translate vague concepts into tangible AI solutions; bring solid problem-solving and implementation skills to realize these solutions and work with engineering teams to build scalable, flexible product pipelines
- Adapt to an agile, fast-paced environment, and maintain a passion for exploring and integrating cutting-edge AI technologies to support future projects
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Bachelor's/Master's degree in Computer Science, Math/Statistics, Electrical Engineering, Industrial Engineering, Bioinformatics, or another technical field that provides mathematical training
- 5+ years of Python development
- 3+ years of experience with AI/ML technologies and working with predictive analytics, image processing, Generative AI, or related technologies
- 3+ years of experience with AI cloud services such as Azure ML, Databricks, Azure OpenAI, or similar services in GCP/AWS
- 1+ years of experience with LLMs (Large Language Models) in the cloud (Azure, GCP, AWS) and RAG (Retrieval-Augmented Generation)
- Experience in building enterprise-grade AI-driven solutions
- Experience in designing and implementing effective prompts for LLMs to ensure optimal performance and accuracy
- Experience in data ingestion, transformation, and management processes
- Familiarity with frameworks/libraries such as TensorFlow, PyMuPDF, Tesseract
- Demonstrated skills in data analytics and the ability to work with large datasets to extract meaningful insights

Preferred Qualifications
- Proven ability to find AI solutions for undefined problems

Soft Skills
- Ability to be a fast learner and self-driven
- Ability to work independently on complex AI projects

Posted 1 month ago

Apply

7.0 - 12.0 years

18 - 22 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. We are looking for a talented and hands-on Azure Engineer to join our team. The ideal candidate will have significant experience working on Azure, as well as a solid background in cloud data engineering, data pipelines, and analytics solutions. You will be responsible for designing, building, and managing scalable data architectures, enabling seamless data integration, and leveraging advanced analytics capabilities to drive business insights. 
Primary Responsibilities

Azure Platform Implementation:
- Develop, manage, and optimize data pipelines using the AML workspace on Azure
- Design and implement end-to-end data processing workflows, leveraging Databricks notebooks and jobs for data transformation, modeling, and analysis
- Build and maintain scalable data models in Databricks using Apache Spark for big data processing
- Integrate Databricks with other Azure services, including Azure Data Lake, Azure Synapse, and Azure Blob Storage

Data Engineering & ETL Development:
- Design and implement robust ETL/ELT pipelines to ingest, transform, and load large volumes of data
- Optimize data processing jobs for performance, reliability, and scalability
- Use Apache Spark and other Databricks features to process structured, semi-structured, and unstructured data efficiently

Azure Cloud Architecture:
- Work with Azure cloud services to design and deploy cloud-based data solutions
- Architect and implement data lakes, data warehouses, and analytics solutions within the Azure ecosystem
- Ensure security, compliance, and governance best practices for cloud-based data solutions

Collaboration & Analytics:
- Collaborate with data scientists, analysts, and business stakeholders to deliver actionable insights
- Build advanced analytics models and solutions using Databricks, leveraging Python, SQL, and Spark-based technologies
- Provide guidance and technical expertise to other teams on best practices for working with Databricks and Azure

Performance Optimization & Monitoring:
- Monitor and optimize the performance of data pipelines and Databricks jobs
- Troubleshoot and resolve performance and reliability issues within the data engineering pipelines
- Ensure high availability, fault tolerance, and efficient resource utilization on Databricks

Continuous Improvement:
- Stay up to date with the latest features of Databricks, Azure, and related technologies
- Continuously improve data architectures, pipelines, and processes for better performance and scalability
- Propose and implement innovative solutions to meet evolving business needs

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- 10+ years of hands-on experience with the Azure ecosystem
- Solid experience with cloud-based data engineering, particularly with Azure services (Azure Data Lake, Azure Synapse, Azure Blob Storage, etc.)
- Experience with Databricks notebooks and managing Databricks environments
- Hands-on experience with data storage technologies (Data Lake, Data Warehouse, Blob Storage)
- Solid knowledge of SQL and Python for data processing and transformation
- Familiarity with cloud infrastructure management on Azure and using Azure DevOps for CI/CD
- Solid understanding of data modeling, data warehousing, and data lake architectures
- Expertise in building and managing ETL/ELT pipelines using Apache Spark, Databricks, and other related technologies
- Proficiency in Apache Spark (PySpark, Scala, SQL)
- Proven solid problem-solving skills with a proactive approach to identifying and addressing issues
- Proven ability to communicate complex technical concepts to non-technical stakeholders
- Proven excellent collaboration skills to work effectively with cross-functional teams

Preferred Qualifications
- Certifications in Azure (Azure Data Engineer, Azure Solutions Architect)
- Experience with advanced analytics techniques, including machine learning and AI, using Databricks
- Experience with other big data processing frameworks or platforms
- Experience with data governance and security best practices in cloud environments
- Knowledge of DevOps practices and CI/CD pipelines for cloud environments

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
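Much of the Spark performance tuning mentioned in this posting comes down to how evenly data is spread across partitions. The toy sketch below (pure Python, not Spark) mimics hash partitioning and computes a simple skew ratio; the key names and the 4-partition setup are illustrative, and in a real Databricks job you would inspect partition sizes and salt or repartition hot keys instead.

```python
import zlib

def hash_partition(rows, key, num_partitions):
    """Assign each row to a bucket by hashing its key, roughly as a Spark shuffle does."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        # crc32 gives a stable hash across runs, unlike Python's salted hash()
        bucket = zlib.crc32(str(row[key]).encode()) % num_partitions
        partitions[bucket].append(row)
    return partitions

def skew_ratio(partitions):
    """Largest partition divided by the average size; 1.0 means perfectly balanced."""
    sizes = [len(p) for p in partitions]
    return max(sizes) / (sum(sizes) / len(sizes))

# A hot key ("east") dominates, so one partition ends up far larger than the rest
rows = [{"region": r} for r in ["east"] * 8 + ["west", "north"]]
parts = hash_partition(rows, "region", 4)
print(round(skew_ratio(parts), 2))
```

A skew ratio well above 1 means one task does most of the work while the others idle, which is why partition-key choice dominates shuffle-heavy job performance.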

Posted 1 month ago

Apply

5.0 - 9.0 years

14 - 19 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Work as an individual contributor
- Work on data pipelines and databases
- Work on data-intensive applications or systems
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Experience working on Databricks
- Well versed in Apache Spark, Azure, SQL, PySpark, Airflow, Hadoop, UNIX, etc.
- Demonstrated ability to work on a big data technology stack in the cloud and on-prem
- Demonstrated ability to communicate effectively with the team

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

3.0 - 7.0 years

7 - 11 Lacs

Mysuru

Work from Office

As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.

Your primary responsibilities include:
- Proficient Software Development with Microsoft Technologies: Demonstrate expertise in software development using Microsoft technologies, ensuring high-quality code and efficient application performance.
- Collaborative Problem-Solving and Stakeholder Engagement: Collaborate effectively with stakeholders to understand product requirements and challenges, proactively addressing issues through analytical problem-solving and strategic software solutions.
- Agile Learning and Technology Integration: Stay updated with the latest Microsoft technologies, eagerly embracing continuous learning and integrating newfound knowledge to enhance software development processes and product features.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: SQL, ADF, Azure Databricks
Preferred technical and professional experience: PostgreSQL, MSSQL, Eureka, Hystrix, Zuul/API gateway, in-memory storage

Posted 1 month ago

Apply

1.0 - 4.0 years

10 - 14 Lacs

Pune

Work from Office

Overview
- Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark/Databricks
- Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark optimization
- Collaborate with cross-functional teams to resolve technical issues and gather requirements

Responsibilities
- Ensure data quality and integrity through data validation and cleansing processes
- Analyze existing SQL queries, functions, and stored procedures for performance improvements
- Develop database routines such as procedures, functions, and views
- Participate in data migration projects and understand technologies like Delta Lake/warehouse
- Debug and solve complex problems in data pipelines and processes

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- Strong understanding of distributed data processing platforms like Databricks and BigQuery
- Proficiency in Python, PySpark, and SQL programming languages
- Experience with performance optimization for large datasets
- Strong debugging and problem-solving skills
- Fundamental knowledge of cloud services, preferably Azure or GCP
- Excellent communication and teamwork skills

Nice to Have
- Experience in data migration projects
- Understanding of technologies like Delta Lake/warehouse

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles. We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum. At MSCI we are passionate about what we do, and we are inspired by our purpose - to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law.
MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
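The data-quality responsibility above (validation and cleansing before release) can be sketched in plain Python. The field names and rules here are illustrative assumptions, not taken from any real pipeline:

```python
# Minimal sketch of a row-level validation/cleansing gate of the kind the
# role describes. REQUIRED_FIELDS and the rules are hypothetical examples.

REQUIRED_FIELDS = ("id", "amount", "currency")

def clean_row(row):
    """Normalize a raw record: trim strings, upper-case currency codes."""
    cleaned = {k: (v.strip() if isinstance(v, str) else v) for k, v in row.items()}
    if isinstance(cleaned.get("currency"), str):
        cleaned["currency"] = cleaned["currency"].upper()
    return cleaned

def validate_row(row):
    """Return a list of validation errors (empty list means the row passes)."""
    errors = []
    for field in REQUIRED_FIELDS:
        if row.get(field) in (None, ""):
            errors.append("missing " + field)
    amount = row.get("amount")
    if amount is not None and not isinstance(amount, (int, float)):
        errors.append("amount is not numeric")
    return errors

def run_quality_gate(rows):
    """Split records into accepted (cleaned) and rejected (cleaned + reasons)."""
    accepted, rejected = [], []
    for row in rows:
        cleaned = clean_row(row)
        errors = validate_row(cleaned)
        if errors:
            rejected.append((cleaned, errors))
        else:
            accepted.append(cleaned)
    return accepted, rejected
```

In a Databricks pipeline the same split would typically be expressed as PySpark filters, with rejected rows routed to a quarantine table for review.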

Posted 1 month ago

Apply

3.0 - 7.0 years

0 - 3 Lacs

Pune

Work from Office

We are urgently hiring a Sr. Data Engineer for our Pune location (work from office). Requirements: 4+ years of experience, IIT graduates only, and hands-on experience with Azure, Databricks, PySpark, and Python. Immediate joiners preferred.

Posted 1 month ago

Apply

2.0 - 5.0 years

15 - 19 Lacs

Mumbai

Work from Office

Overview
The Data Technology team at MSCI is responsible for meeting the data requirements across various business areas, including Index, Analytics, and Sustainability. Our team collates data from multiple sources such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data can be in structured or semi-structured formats. We normalize the data, perform quality checks, assign internal identifiers, and release it to downstream applications.

Responsibilities
As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. Our systems perform quality checks, match data across various sources, and release it in multiple formats. We leverage the latest technologies, sources, and tools to process the data. Some of the exciting technologies we work with include Snowflake, Databricks, and Apache Spark.

Qualifications
- Core Java, Spring Boot, Apache Spark, Spring Batch, Python.
- Exposure to SQL databases such as Oracle, MySQL, or Microsoft SQL Server is a must.
- Experience, knowledge, or certification in cloud technology, preferably Microsoft Azure or Google Cloud Platform, is good to have.
- Exposure to NoSQL databases such as Neo4j or document databases is also good to have.

What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients. 
Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles. We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women’s Leadership Forum. At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You’ll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. 
MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
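The "match data across various sources, assign internal identifiers" step described above can be sketched in plain Python. The join key (ISIN) and identifier format are illustrative assumptions:

```python
# Sketch of a match-and-assign step: records from two vendors are matched
# on a shared external key (ISIN here, an illustrative assumption) and
# each matched entity receives a stable, sequential internal identifier.

def match_sources(vendor_a, vendor_b, key="isin"):
    """Merge records sharing `key`; assign a sequential internal id."""
    merged = {}
    for record in vendor_a + vendor_b:
        # Later sources fill in fields the earlier ones lacked.
        merged.setdefault(record[key], {}).update(record)
    out = []
    # Sort by the external key so internal ids are deterministic.
    for internal_id, (_, record) in enumerate(sorted(merged.items()), start=1):
        record["internal_id"] = "SEC%06d" % internal_id
        out.append(record)
    return out
```

A production system would of course use fuzzy matching, survivorship rules, and a persistent identifier registry rather than a sort order, but the shape of the operation is the same.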

Posted 1 month ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Years of Experience: 5-12 years. Location: PAN India.

OFSAA Data Modeler
- Experience in designing, building, and customizing the OFSAA data model, and in validating the data model.
- Excellent knowledge of data model guidelines for Staging, Processing, and Reporting tables.
- Knowledge of data model support for configuring UDPs and subtype/supertype relationship enhancements.
- Experience on the OFSAA platform (OFSAAI) with one or more of the following OFSAA modules:
  o OFSAA Financial Solution Data Foundation (preferred)
  o OFSAA Data Integration Hub (optional)
- Good in SQL and PL/SQL.
- Strong in data warehouse principles and ETL/data flow tools.
- Excellent analytical and communication skills.

OFSAA Integration SME - DIH/Batch run framework
- Experience in the ETL process; familiar with OFSAA.
- DIH setup in EDS, EDD, T2T, etc.
- Familiar with different seeded tables, SCD, DIM, hierarchies, lookups, etc.
- Worked with FSDF, knowing the STG, CSA, and FACT table structures.
- Experience working with different APIs, out-of-the-box connectors, etc.
- Familiar with Oracle patching and SRs.
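The SCD handling this role calls for usually means Type 2 slowly changing dimensions: when a tracked attribute changes, the current version is closed out and a new one appended. A minimal sketch in plain Python, with a hypothetical record layout (key, attribute, valid_from/valid_to):

```python
# Sketch of Type 2 slowly-changing-dimension logic. The dimension layout
# (key, attribute column, valid_from/valid_to dates) is an illustrative
# assumption; real OFSAA SCD batches operate on staging/dimension tables.

def scd2_apply(dimension, key, attribute, value, as_of):
    """Return a new dimension history with `value` effective from `as_of`."""
    out, current = [], None
    for row in dimension:
        if row["key"] == key and row["valid_to"] is None:
            current = dict(row)          # the open (current) version
        else:
            out.append(row)              # closed history passes through
    if current is not None:
        if current[attribute] == value:  # no change: keep the open row as-is
            out.append(current)
            return out
        current["valid_to"] = as_of      # close the current version
        out.append(current)
    out.append({"key": key, attribute: value, "valid_from": as_of, "valid_to": None})
    return out
```

In SQL this is the familiar "update current row's end date, insert new row" pattern that an OFSAA SCD batch automates per mapped column.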

Posted 1 month ago

Apply

5.0 - 8.0 years

16 - 27 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

ML Engineer (MLOps) - Chennai / Bangalore / Hyderabad

Curious about the role? What would your typical day look like?
We are looking for a Machine Learning Engineer/Sr. MLE who will work on a broad range of cutting-edge data analytics and machine learning problems across a variety of industries. More specifically, you will:
- Engage with clients to understand their business context.
- Translate business problems and technical constraints into technical requirements for the desired analytics solution.
- Collaborate with a team of data scientists and engineers to embed AI and analytics into the business decision processes.

What do we expect?
- 6+ years of experience, with at least 4+ years of relevant MLOps experience.
- Proficiency in structured Python (mandatory).
- Proficiency in at least one cloud technology (AWS/Azure/GCP) is mandatory.
- Proficiency in Azure Databricks.
- Follows good software engineering practices and has an interest in building reliable and robust software.
- Good understanding of DS concepts and the DS model lifecycle.
- Working knowledge of Linux or Unix environments, ideally in a cloud environment.
- Working knowledge of Spark/PySpark is desirable.
- Model deployment/model monitoring experience is desirable.
- CI/CD pipeline creation is good to have.
- Excellent written and verbal communication skills.
- B.Tech from a Tier-1 college / M.S. or M.Tech is preferred.

You are important to us, let's stay connected!
Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Note: The designation will be commensurate with expertise and experience. 
Compensation packages are among the best in the industry. Additional benefits: health insurance (self & family), virtual wellness platform, and knowledge communities.
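Model monitoring, one of the desirable skills this role lists, often starts with a simple drift check: compare live feature statistics against a training-time baseline. A sketch in plain Python, where the threshold and the mean-shift metric are illustrative assumptions (production setups more often use PSI or KS tests):

```python
# Minimal sketch of a feature-drift monitor: flag features whose live mean
# has shifted from the training baseline by more than `threshold` baseline
# standard deviations. Metric and threshold are illustrative assumptions.
from statistics import mean, stdev

def drift_report(baseline, live, threshold=3.0):
    """Return {feature: z_shift} for features drifting beyond `threshold`."""
    drifted = {}
    for feature, base_values in baseline.items():
        live_values = live.get(feature)
        if not live_values:
            continue                            # feature absent in live data
        base_std = stdev(base_values) or 1e-9   # guard against zero variance
        z_shift = abs(mean(live_values) - mean(base_values)) / base_std
        if z_shift > threshold:
            drifted[feature] = round(z_shift, 2)
    return drifted
```

Wired into a scheduled job, a non-empty report would typically trigger an alert or a retraining pipeline.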

Posted 1 month ago

Apply

7.0 - 12.0 years

10 - 18 Lacs

Bengaluru

Hybrid

Job Goals
Design and implement resilient data pipelines to ensure data reliability, accuracy, and performance. Collaborate with cross-functional teams to maintain the quality of production services and smoothly integrate data processes. Oversee the implementation of common data models and data transformation pipelines, ensuring alignment with standards. Drive continuous improvement in internal data frameworks and support the hiring process for new Data Engineers. Regularly engage with collaborators to discuss considerations and manage the impact of changes. Support architects in shaping the future of the data platform and help land new capabilities into business-as-usual operations. Identify relevant emerging trends and build compelling cases for adoption, such as tool selection.

Ideal Skills & Capabilities
- A minimum of 6 years of experience in a comparable Data Engineer position is required.
- Data Engineering Expertise: Proficiency in designing and implementing resilient data pipelines, ensuring data reliability, accuracy, and performance, with practical knowledge of modern cloud data technology stacks (Azure).
- Technical Proficiency: Experience with Azure Data Factory and Databricks, and skilled in Python, Apache Spark, or other distributed data programming frameworks.
- Operational Knowledge: In-depth understanding of data concepts, data structures, modelling techniques, and provisioning data to support varying consumption needs, along with accomplished ETL/ELT engineering skills.
- Automation & DevOps: Experience using DevOps toolchains for managing CI/CD, and an automation-first mindset in building solutions, including self-healing and fault-tolerant methods.
- Data Management Principles: Practical application of data management principles such as security and data privacy, with experience handling sensitive data through techniques like anonymisation/tokenisation/pseudo-anonymisation.
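The "self-healing and fault-tolerant methods" this posting alludes to usually begin with retrying transient failures before surfacing them. A minimal sketch of retry-with-exponential-backoff in plain Python; the delays, attempt count, and the broad exception catch are illustrative assumptions:

```python
# Sketch of a self-healing pattern: retry a flaky pipeline step with
# exponential backoff before giving up. Attempt count, delay, and the
# catch-all exception handling are illustrative assumptions; production
# code would catch specific transient errors only.
import time
from functools import wraps

def with_retries(max_attempts=3, base_delay=0.01):
    """Retry the wrapped callable on exception, doubling the delay each time."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise                     # exhausted: surface the failure
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator
```

Orchestrators like Azure Data Factory and Databricks Workflows offer the same behavior declaratively via per-activity/per-task retry policies, which is usually preferable to hand-rolled loops.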

Posted 1 month ago

Apply

5.0 - 10.0 years

9 - 18 Lacs

Coimbatore, Bengaluru

Hybrid

Job Title: SQL Developer
Location: Bangalore / Coimbatore
Job Type: Full Time, Hybrid (Night Shift)
Experience: 5+ years
Shift Timings: US Shift, 6:30 pm to 3:30 am
Preferred: Immediate joiner or 15 days' notice

About LogixHealth: At LogixHealth we provide expert coding and billing services that allow physicians to focus on providing great clinical care. LogixHealth was founded in the 1990s by physicians to service their own practices and has grown to become the nation's leading provider of unsurpassed software-enabled revenue cycle management services, offering a complete range of solutions, including coding and claims management and the latest business intelligence reporting dashboards for clients in 40 states. Since our first day, we have had a clear vision of a better healthcare system and have continually evolved to get there. In addition to providing expert revenue cycle services, we utilize proprietary software to provide valuable financial, clinical, and other data insights that directly improve the quality and efficiency of patient care. At LogixHealth, we’re committed to making intelligence matter through our pillars of Physician-Inspired Knowledge, Unrivaled Technology and Impeccable Service. To learn more about us, visit our website https://www.logixhealth.com

What we offer: At LogixHealth, we value our people and are committed to their growth and well-being. We strive to create an environment where innovation is rewarded and careers flourish. Join us and be part of something meaningful. 
- Opportunity to shape the future of digital healthcare products
- Collaborative and inclusive company culture
- Competitive salary and performance-based incentives
- Professional development and training opportunities
- Flexible work environment with hybrid options

Job Description
As a SQL Developer at LogixHealth, you will work closely with a collaborative team to understand complex business logic and develop robust, high-performance SQL solutions that support critical business operations. You’ll play a key role in peer reviews, provide best practice recommendations, and contribute to process improvements.

Tasks and Responsibilities
- Participate in peer code reviews to ensure code quality and adherence to best practices
- Recommend and implement improvements to database structure and processes
- Provide night shift support for production systems, including monitoring and troubleshooting
- Optimize database performance through query tuning, indexing, and partitioning
- Assist in deploying database changes to production and staging environments
- Manage database access and audit user permissions in line with security policies
- Troubleshoot and maintain Azure CI/CD pipelines for database deployments
- Write and maintain PowerShell scripts for automation, deployment, and monitoring
- Design, deploy, and support SSIS packages for data integration and ETL processes

Required Skills and Knowledge
- Advanced T-SQL Development: Proficient in writing efficient queries, stored procedures, and views
- Dynamic SQL: Ability to implement customizable logic using dynamic SQL
- Performance Tuning: Expertise in analyzing execution plans, indexing, and query optimization
- Production Support: Capable of resolving SQL issues in a 24/7 environment, especially during night shifts
- Collaboration: Experience working with cross-functional teams including analysts and business users
- Code Review & Standards: Participates in code reviews and promotes clean, standardized code practices
- Security & Compliance: Familiarity with data governance, security standards, and audit practices

Preferred Skills
- Cloud Experience: Exposure to Azure SQL, Azure DevOps, Databricks and Airflow
- Scripting: Python, Bash, PowerShell
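Dynamic SQL, called out in the required skills above, is safest when identifiers are validated against a whitelist and values are passed as parameters rather than interpolated. A sketch in Python (the table and column names are hypothetical; the `?` placeholder style matches DB-API drivers such as `pyodbc` against SQL Server):

```python
# Sketch of safe dynamic SQL construction: identifiers are checked against
# a whitelist and values become bound parameters, never string-interpolated.
# Table and column names are hypothetical illustrations.

ALLOWED_COLUMNS = {"claim_id", "status", "billed_amount"}

def build_filter_query(table, filters):
    """Return (sql, params) with a WHERE clause built from `filters`."""
    if not table.isidentifier():
        raise ValueError("invalid table name: %r" % table)
    clauses, params = [], []
    for column, value in filters.items():
        if column not in ALLOWED_COLUMNS:
            raise ValueError("column not allowed: %r" % column)
        clauses.append(column + " = ?")   # placeholder, not the value itself
        params.append(value)
    where = " WHERE " + " AND ".join(clauses) if clauses else ""
    return "SELECT * FROM " + table + where, params
```

In T-SQL itself the equivalent pattern is `sp_executesql` with typed parameters, which keeps the same injection-safety property server-side.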

Posted 1 month ago

Apply

5.0 - 7.0 years

15 - 20 Lacs

Pune

Work from Office

Roles and Responsibilities:
- You are detail-oriented, reviewing and analyzing structured, semi-structured and unstructured data sources for quality, completeness, and business value.
- You design, architect, implement and test rapid prototypes that demonstrate the value of the data and present them to diverse audiences.
- You participate in early-stage design and feature definition activities.
- Responsible for implementing robust data pipelines using the Microsoft/Databricks stack.
- Responsible for creating reusable and scalable data pipelines.
- You are a team player, collaborating with team members across multiple engineering teams to support the integration of proven prototypes into core intelligence products.
- You have strong communication skills to effectively convey complex data insights to non-technical stakeholders.

Critical Skills to Possess:
- Advanced working knowledge and experience with relational and non-relational databases.
- Advanced working knowledge and experience with API data providers.
- Experience building and optimizing Big Data pipelines, architectures, and datasets.
- Strong analytic skills related to working with structured and unstructured datasets.
- Hands-on experience in Azure Databricks utilizing Spark to develop ETL pipelines.
- Strong proficiency in data analysis, manipulation, and statistical modeling using tools like Spark, Python, Scala, SQL, or similar languages.
- Strong experience in Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, Azure Synapse.
- Familiarity with several of the following technologies: Event Hub, Docker, Azure Kubernetes Service, Azure DWH, API Azure, Azure Function, Power BI, Azure Cognitive Services.
- Azure DevOps experience to deploy the data pipelines through CI/CD.

Key Skills: Azure Databricks, Azure Data Factory, Big Data Pipelines, PySpark, Azure Synapse, Azure DevOps, Azure Data Lake Storage Gen2, Event Hub, Azure DWH, API Azure. 
Experience: Minimum 5-7 years of practical experience as a Data Engineer, with in-production experience on the Azure cloud stack. Preferred Qualifications: BS degree in Computer Science or Engineering, or equivalent experience.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies