3.0 years
7 - 9 Lacs
Hyderābād
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Role Summary: Design and implement data solutions and data models using Azure Data Lake to support Data Warehouse, Data Lake, and Lakehouse architectures, ensuring seamless integration with Azure Fabric services. Develop and manage data ingestion pipelines for both batch and streaming data using Azure Data Fabric to ensure efficient and reliable data flow (a minimal PySpark sketch follows below).

Responsibilities:
- Experience with Apache Spark / PySpark for data processing and optimization within Azure Fabric environments.
- Apply data governance best practices using Azure Purview, including metadata management, data cataloging, and lineage tracking, to ensure compliance and effective data management within Azure Fabric ecosystems.
- Utilize Azure Fabric’s Toolbox and Metadata-Driven Ingestion & Processing accelerators to enhance data processing workflows and improve efficiency.

Mandatory skill sets:
- Perform data migration from legacy databases or other cloud platforms to Azure Fabric, leveraging Azure Migrate and other Azure-native migration tools.
- Collaborate with source system owners to integrate data from multiple source databases, making use of Azure Fabric’s data integration capabilities to ensure seamless data consolidation.

Preferred skill sets:
- 3+ years of experience in data engineering, with hands-on experience in Azure Fabric.
- Good understanding of Lakehouse architecture, OneLake, and Microsoft Fabric components.
- Strong expertise in Spark/PySpark, Azure SQL-based solutions, Azure Data Factory, and Azure Databricks.
- Strong experience in data migration strategies involving legacy or cloud-native data sources.
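The pipeline work described above centers on batch ingestion with Spark/PySpark into a Lakehouse. As a rough, non-authoritative illustration of that kind of task, here is a minimal PySpark batch-ingestion sketch; the paths, column names, and table names are hypothetical placeholders, not PwC's actual design.

```python
# Minimal PySpark batch-ingestion sketch for a Lakehouse-style pipeline.
# All paths, columns, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lakehouse-ingestion").getOrCreate()

# Read raw CSV landed in the lake (in practice, a OneLake / ADLS path).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/lake/raw/sales/"))

# Light cleanup: standardize a column name, stamp ingestion time, dedupe.
clean = (raw
         .withColumnRenamed("SaleAmount", "sale_amount")
         .withColumn("ingest_date", F.to_date(F.current_timestamp()))
         .dropDuplicates(["order_id"]))

# Write to a Delta table in the curated zone, partitioned by ingestion date.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("ingest_date")
      .saveAsTable("curated.sales"))
```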
Years of experience required: 5 to 10 years.
Education qualification: B.Tech / M.Tech (Computer Science, Mathematics & Scientific Computing, etc.)
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Fabric Design
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
Posted 1 week ago
8.0 years
8 - 10 Lacs
Hyderābād
On-site
FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients’ needs and exceeding their expectations.

Lead Software Engineer - Full Stack (Python & VueJS/ReactJS)

Group Description
The Data Solutions - Platforms and Environments department manages an industry-leading content delivery platform. Clients seamlessly access organized and connected content that is easily discoverable, explorable, and procured via the FactSet Marketplace. Data is delivered via a variety of technologies and formats that meet the needs of our clients’ workflows. By enabling our clients to utilize their preferred choice of industry-standard databases, programming languages, and data visualization tools, we empower them to focus on the core competencies needed to drive their business. The Data Solutions - Platforms and Environments solutions portfolio includes Standard DataFeed, Data Exploration, OnDemand (API), Views, Cornerstone, Exchange DataFeed, Benchmark Feeds, the Open:FactSet Marketplace, DataDictionary, Navigator, and other non-workstation initiatives.

Job Description
The Data Solutions - Platforms and Environments team is looking for a talented, highly motivated Lead Software Engineer (Full Stack) to join our Platforms and Environments Development team, an important part of one of FactSet’s highest-profile and most strategic areas of investment and development. As the Full Stack Lead Software Engineer, you will design and develop applications including UI, API, database frameworks, and data engineering pipelines, help implement improvements to existing pipelines and infrastructure, and provide production support. You will collaborate closely with Product Developers/Business Analysts to capture technical requirements. FactSet is happy to set up an information session with an engineer working on this product to talk about the product, the team, and the interview process.

What You’ll Do
- Architect new components and application features for client-facing applications as a full-stack developer.
- Maintain and resolve bugs in existing components.
- Contribute new features, fixes, and refactors to the existing code.
- Perform code reviews and coach engineers with respect to best practices.
- Work with other engineers in following the test-driven methodology in an agile environment.
- Collaborate with other engineers and Product Developers in a Scrum Agile environment using Jira and Confluence.
- Work as part of a geographically diverse team.
- Create and review documentation and test plans.
- Estimate task sizes and regularly communicate progress in daily standups and biweekly Scrum meetings.
- Coordinate with other teams across offices and departments.

What We’re Looking For
- Master’s or bachelor’s degree in engineering or a relevant field required.
- 8+ years of relevant experience.
- Experience in architecting distributed engineering applications and data pipelines.
- Expert-level proficiency in writing and optimizing code in Python.
- Proficient in frontend technologies such as Vue.js (preferred) or ReactJS, with experience in JavaScript, CSS, and HTML.
- Expert knowledge of REST API development, preferably with Python Flask and OpenAPI (a minimal sketch follows this listing).
- Working knowledge of relational databases, preferably MSSQL or Postgres.
- Knowledge of generative AI and vector databases is a huge plus.
- Good understanding of general database design and architecture principles.
- High-level knowledge of data lakehouses such as Snowflake and Databricks is a plus.
- A realistic, pragmatic approach; can deliver functional prototypes that can be enhanced and optimized in later phases.
- Strong written and verbal communication skills.
- Working experience with AWS services: Lambda, EC2, S3, AWS Glue, etc.
- Strong working experience with any container/PaaS technology (Docker or Heroku).
- ETL and data pipelines experience is a plus.
- Working experience with Apache Spark, Apache Airflow, or GraphQL is a plus.
- Experience developing event-driven, distributed serverless infrastructure (AWS Lambda, SNS-SQS) is a plus.
- Must be a voracious learner.

What’s In It For You
At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means:
- The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up.
- Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days.
- Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives.
- A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions.
- Career progression planning with dedicated time each month for learning and development.
- Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging.

Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications.

Company Overview
FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees’ Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn.

At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
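The listing above calls for REST API development, preferably with Python Flask. As a minimal, non-authoritative sketch of that style of service (the endpoint, payload, and in-memory store are invented for illustration and imply nothing about FactSet's actual APIs):

```python
# Minimal Flask REST API sketch; the endpoint and payload are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database (MSSQL/Postgres in the role).
FEEDS = {1: {"id": 1, "name": "Standard DataFeed", "format": "csv"}}

@app.get("/api/v1/feeds/<int:feed_id>")
def get_feed(feed_id: int):
    feed = FEEDS.get(feed_id)
    if feed is None:
        return jsonify(error="feed not found"), 404
    return jsonify(feed)

@app.post("/api/v1/feeds")
def create_feed():
    body = request.get_json(force=True)
    new_id = max(FEEDS) + 1
    FEEDS[new_id] = {"id": new_id, **body}
    return jsonify(FEEDS[new_id]), 201

if __name__ == "__main__":
    app.run(debug=True)
```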
Posted 1 week ago
8.0 - 10.0 years
6 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Principal IS Bus Sys Analyst, Neural Nexus

What you will do
Let’s do this. Let’s change the world. In this vital role you will support the delivery of emerging AI/ML capabilities within the Commercial organization as a leader in Amgen’s Neural Nexus program. We seek a technology leader with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders. Are you interested in building a team that consistently delivers business value in an agile model using technologies such as AWS, Databricks, Airflow, and Tableau? Come join our team!

Roles & Responsibilities:
- Establish an effective engagement model to collaborate with the Commercial Data & Analytics (CD&A) team to help realize business value through the application of commercial data and emerging AI/ML technologies.
- Serve as the technology product owner for the launch and growth of the Neural Nexus product teams focused on data connectivity, predictive modeling, and fast-cycle value delivery for commercial teams.
- Lead and mentor junior team members to deliver on the needs of the business.
- Interact with business clients and technology management to create technology roadmaps, build business cases, and drive DevOps to achieve the roadmaps.
- Help to mature Agile operating principles through deployment of creative and consistent practices for user story development, robust testing and quality oversight, and focus on user experience.
- Become the subject matter expert in emerging technology capabilities by researching and implementing new tools and features and internal and external methodologies.
- Build expertise and domain knowledge in a wide variety of Commercial data domains.
- Provide input for governance discussions and help prepare materials to support executive alignment on technology strategy and investment.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Master’s degree with 8 to 10 years of experience in Information Systems, OR Bachelor’s degree with 10 to 14 years of experience in Information Systems, OR Diploma with 14 to 18 years of experience in Information Systems
- Excellent problem-solving skills and a passion for tackling complex challenges in data and analytics with technology
- Experience leading data and analytics teams in a Scaled Agile Framework (SAFe)
- Good interpersonal skills, good attention to detail, and ability to influence based on data and business value
- Ability to build compelling business cases with accurate cost and effort estimations
- Experience writing user requirements and acceptance criteria in agile project management systems such as Jira
- Ability to explain sophisticated technical concepts to non-technical clients
- Good understanding of sales and incentive compensation value streams

Technical Skills:
- ETL tools: experience with ETL tools such as Databricks, Redshift, or equivalent cloud-based databases
- Big Data, Analytics, Reporting, Data Lake, and Data Integration technologies
- S3 or equivalent storage system
- AWS (or similar cloud-based platforms)
- BI tools (Tableau and Power BI preferred)

Preferred Qualifications:
- Jira Align and Confluence experience
- Experience with DevOps, Continuous Integration, and Continuous Delivery methodology
- Understanding of software systems strategy, governance, and infrastructure
- Experience in managing product features for PI planning and developing product roadmaps and user journeys
- Familiarity with low-code/no-code test automation software
- Technical thought leadership

Soft Skills:
- Able to work effectively across multiple geographies (primarily India, Portugal, and the United States) under minimal supervision
- Demonstrated proficiency in written and verbal communication in the English language
- Skilled in providing oversight and mentoring team members; demonstrated ability to effectively delegate work
- Intellectual curiosity and the ability to question partners across functions
- Ability to prioritize successfully based on business value
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully across virtual teams
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
0 years
6 - 10 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
Let’s do this. Let’s change the world. In this vital role you will play a key role in successfully leading the engagement model between Amgen’s Technology organization and Global Commercial Operations.

- Collaborate with G&A (Finance, HR, Legal, IT, etc.) business SMEs, data engineers, data scientists, and product managers to lead business analysis activities, ensuring alignment with engineering and product goals on the Data & AI product teams
- Become a G&A (Finance, HR, Legal, IT, etc.) domain authority in Data & AI technology capabilities by researching, deploying, and sustaining features built according to Amgen’s Quality System
- Lead the voice-of-the-customer assessment to define business processes and product needs
- Work with product managers and customers to define scope and value for new developments
- Collaborate with Engineering and Product Management to prioritize release scopes and refine the product backlog
- Ensure non-functional requirements are included and prioritized in the product and release backlogs
- Facilitate the breakdown of epics into features and sprint-sized user stories, and participate in backlog reviews with the development team
- Clearly express features in user stories/requirements so all team members and partners understand how they fit into the product backlog
- Ensure acceptance criteria and Definition of Done are well-defined
- Work closely with business SMEs, data scientists, and ML engineers to understand data product requirements, KPIs, etc.
- Analyze source systems and create source-to-target mapping (STTM) documents
- Develop and implement effective product demonstrations for internal and external partners
- Maintain accurate documentation of configurations, processes, and changes
- Understand end-to-end data pipeline design and dataflow
- Apply knowledge of data structures to diagnose data issues for resolution by the data engineering team

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. We are seeking a highly skilled and experienced Principal IS Business Analyst with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders, with these qualifications.

Basic Qualifications:
- 12 to 17 years of experience in G&A (Finance, HR, Legal, IT, etc.) Information Systems
- Mandatory work experience as a business analyst in DWH, data product building, and BI & analytics applications
- Experience analyzing the requirements of BI, AI & analytics applications and working with data source SMEs and data owners to identify data sources and data flows
- Experience writing user requirements and acceptance criteria
- Affinity for working in a DevOps environment and an Agile mindset
- Ability to work in a team environment, effectively interacting with others
- Ability to meet deadlines and schedules and be accountable

Preferred Qualifications:
Must-Have Skills:
- Excellent problem-solving skills and a passion for solving complex challenges for AI-driven technologies
- Experience with Agile software development methodologies (Scrum)
- Superb communication skills and the ability to work with senior leadership with confidence and clarity
- Experience writing user requirements and acceptance criteria in agile project management systems such as JIRA
- Experience in managing product features for PI planning and developing product roadmaps and user journeys

Good-to-Have Skills:
- Demonstrated expertise in data and analytics and related technology concepts
- Understanding of data and analytics software systems strategy, governance, and infrastructure
- Familiarity with low-code/no-code test automation software
- Technical thought leadership
- Able to communicate technical or complex subject matter in business terms
- Jira Align experience
- Experience with DevOps, Continuous Integration, and Continuous Delivery methodology

Soft Skills:
- Able to work under minimal supervision
- Excellent analytical and gap/fit assessment skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

Technical Skills:
- Experience with cloud-based data technologies (e.g., Databricks, Redshift, S3 buckets)
- AWS (or similar cloud-based platforms)
- Experience with design patterns, data structures, and test-driven development
- Knowledge of NLP techniques for text analysis and sentiment analysis

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
3.0 years
8 - 10 Lacs
Hyderābād
On-site
FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients’ needs and exceeding their expectations.

Your Team Impact
FactSet’s Data Solutions organization is seeking a motivated Software Engineer III to join the Usage and Consumption Pricing initiative, an exciting and growing area at the company. This team is responsible for building vital tools to understand client usage, instrumenting critical applications and infrastructure to source usage data, and building pricing calculators that create quotes based on metered usage. This individual must be adaptable to learning and working with a wide breadth of technologies, ranging from Big Data ETL and analytics to traditional on-premise Linux server programming.

What You’ll Do
- Collaborate with internal engineering groups to collect usage
- Collaborate with internal sales and strategy groups to report usage
- Apply multiple styles of testing techniques to deliver reliable software and data to our clients and stakeholders
- Follow best practices for runtime, on-call support, and deployment procedures
- Engage with cross-functional peers on a Scrum/Agile team and communicate with stakeholders regarding demos of delivered projects, status updates, challenges, and obstacles

What We’re Looking For
- Bachelor’s degree or equivalent in Computer Science or a related field
- At least 3 years of experience as a Software Engineer
- Solid understanding of systems design, data structures, and algorithms
- A realistic, pragmatic approach, encouraging prototyping and iterative development
- Relational database experience, such as MSSQL or PostgreSQL
- Experience with AWS technologies such as S3, Lambda functions, ECS Fargate, EC2, etc. (a hypothetical Lambda sketch follows this listing)
- Experience with Python, Java, and REST APIs
- Strong written and verbal communication skills

Desired Skills
- Familiarity with infrastructure-as-code (IaC), especially Terraform
- Experience with Big Data products and technologies such as Databricks, Snowflake, Athena, ETL pipelines, etc.
- Experience working with a Scrum/Agile team
- Experience with C++ and Linux

What’s In It For You
At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means:
- The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up.
- Support for your total well-being.
This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days.
- Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives.
- A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions.
- Career progression planning with dedicated time each month for learning and development.
- Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging.

Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications.

Company Overview
FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees’ Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn.

At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
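The posting above lists S3 and Lambda among the AWS technologies used to collect usage data. Purely as a hypothetical sketch of that pattern (the event shape is the standard S3 notification; the one-JSON-event-per-line log layout is an assumption, not FactSet's design):

```python
# Hypothetical AWS Lambda handler: reacts to new usage logs landing in S3.
# Bucket and key names come from the S3 event notification payload.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    records = []
    for rec in event.get("Records", []):
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Assume one JSON usage event per line (a common log layout).
        for line in body.decode("utf-8").splitlines():
            records.append(json.loads(line))
    print(f"processed {len(records)} usage events")
    return {"count": len(records)}
```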
Posted 1 week ago
5.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What you will do
Let’s do this. Let’s change the world. In this vital role, you will be a key contributor to the Clinical Trial Data & Analytics (CTDA) team, driving the development of robust data pipelines and platforms to enable advanced analytics and decision-making. Operating within a SAFe Agile product team, this role ensures system performance, minimizes downtime through automation, and supports the creation of actionable insights from clinical trial data. Collaborating with product owners, architects, and engineers, the Data Engineer will implement and enhance analytics capabilities. Ideal candidates are diligent professionals with strong technical skills, a problem-solving approach, and a passion for advancing clinical operations through data engineering and analytics.

Roles & Responsibilities:
- Proficiency in developing interactive dashboards and visualizations using Spotfire, Power BI, and Tableau to provide actionable insights.
- Expertise in creating dynamic reports and visualizations that support data-driven decision-making and meet collaborator requirements.
- Ability to analyze complex datasets and translate them into meaningful KPIs, metrics, and trends.
- Strong knowledge of data visualization standard methodologies, including user-centric design, accessibility, and responsiveness.
- Experience in integrating data from multiple sources (databases, APIs, data warehouses) into visualizations.
- Skilled in performance tuning of dashboards and reports to optimize responsiveness and usability.
- Ability to work with end users to define reporting requirements, develop prototypes, and implement final solutions.
- Familiarity with integrating real-time and predictive analytics within dashboards to enhance forecasting capabilities.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master’s degree / Bachelor’s degree and 5 to 9 years of experience in Computer Science/IT or a related field

Must-Have Skills:
- Proven hands-on experience with cloud platforms such as AWS, Azure, and GCP.
- Proficiency in Python, PySpark, and SQL, with practical experience in ETL performance tuning (a brief PySpark tuning sketch follows this listing).
- Development knowledge in Databricks.
- Strong analytical and problem-solving skills to tackle complex data challenges, with expertise in analytical tools such as Spotfire, Power BI, and Tableau.
Preferred Qualifications:
Good-to-Have Skills:
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Familiarity with SQL/NoSQL databases and vector databases for large language models
- Familiarity with prompt engineering and model fine-tuning

Professional Certifications:
- AWS Certified Data Engineer (preferred)
- Databricks Certification (preferred)
- Any SAFe Agile certification (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
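On the ETL performance tuning skill named above: one common PySpark technique is broadcasting a small dimension table so a join avoids shuffling the large side. A self-contained sketch with toy data (not Amgen's pipelines; all names are illustrative):

```python
# PySpark broadcast-join sketch: a common ETL tuning technique.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("etl-tuning").getOrCreate()

# Toy stand-ins: a large fact table and a small dimension table.
trials = spark.createDataFrame(
    [(1, "S1", 10.0), (2, "S2", 12.5), (3, "S1", 9.9)],
    ["event_id", "site_id", "value"])
sites = spark.createDataFrame(
    [("S1", "US"), ("S2", "IN")], ["site_id", "site_country"])

# Broadcasting ships the small table to every executor, so the join
# avoids a full shuffle of the large side.
enriched = trials.join(broadcast(sites), on="site_id", how="left")
enriched.groupBy("site_country").count().show()
```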
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Work Mode: Hybrid
Work Location: Chennai / Hyderabad / Bangalore / Pune / Mumbai / Gurgaon
Work Timing: 2 PM to 11 PM
Primary: AWS Data Engineer - AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks

Job Description
- Examine business needs to determine the testing technique for automation testing.
- Maintain existing regression suites and test scripts.
- Attend agile meetings for backlog refinement, sprint planning, and daily scrum meetings.
- Execute regression suites and provide results to developers, project managers, stakeholders, and manual testers.

Responsibilities (AWS Data Engineer)
- Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3 (see the sketch after this list).
- Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
- Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
- Automate repetitive tasks and build reusable frameworks to improve efficiency.
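As a rough illustration of the Glue pipeline work above: an AWS Glue PySpark job typically follows a standard skeleton (resolve job arguments, read from the Glue Data Catalog, transform, write out, commit). The sketch below shows that skeleton only; the database, table, and S3 path are placeholders, and the script runs inside the Glue job runtime rather than locally.

```python
# Skeleton of an AWS Glue PySpark job: read from the Glue Data Catalog,
# filter, and write curated Parquet back to S3. Names are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table previously crawled into the Glue Data Catalog (hypothetical).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders")

# Drop obviously bad rows before loading downstream (e.g., into Redshift).
df = dyf.toDF().filter("order_id IS NOT NULL")

df.write.mode("overwrite").parquet("s3://example-curated/orders/")
job.commit()
```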
Posted 1 week ago
0 years
3 - 9 Lacs
Hyderābād
On-site
We are seeking a skilled Agentic AI Developer to design and implement intelligent agent systems powered by Large Language Models (LLMs). This role involves developing LLM-based pipelines that can ingest transcripts, documents, or business narratives and generate structured artifacts such as workflows, decision trees, action plans, or contextual recommendations. You will collaborate with cross-functional teams to deploy autonomous AI agents capable of reasoning, planning, memory, and tool usage in enterprise environments — primarily within the Microsoft ecosystem (Azure, Power Platform, Copilot, and M365 integrations).

Key Responsibilities
- Build and deploy autonomous agent systems using frameworks such as LangChain, AutoGen, CrewAI, or Semantic Kernel.
- Develop pipelines to process natural language input and generate structured outputs tailored to business needs (a framework-agnostic sketch follows this listing).
- Implement agentic features such as task orchestration, memory storage, tool integration, and feedback loops.
- Fine-tune LLMs or apply prompt engineering to optimize accuracy, explainability, and responsiveness.
- Integrate agents with Microsoft 365 services (Teams, Outlook, SharePoint) and Power Platform components (Dataverse, Power Automate).
- Collaborate with business and product teams to define use cases, test scenarios, and performance benchmarks.
- Participate in scenario-based UAT testing, risk evaluation, and continuous optimization.

Must-Have Skills
- Proficiency in Python and hands-on experience with ML/AI libraries and frameworks (Transformers, PyTorch, LangChain).
- Strong understanding of LLMs (e.g., GPT, Claude, LLaMA, Mistral) and prompt engineering principles.
- Experience developing agent workflows using ReAct, AutoGen, CrewAI, or OpenAI function calling.
- Familiarity with vector databases (FAISS, Pinecone, Qdrant) and RAG-based architectures.
- Skills in Natural Language Processing (NLP): summarization, entity recognition, intent classification.
- Integration experience with APIs, SDKs, and enterprise tools (preferably the Microsoft stack).

Preferred Certifications (candidates with the following certifications will have a strong advantage):
✅ Microsoft Certified: Azure AI Engineer Associate (AI-102)
✅ Microsoft Certified: Power Platform App Maker (PL-100)
✅ Microsoft 365 Certified: Developer Associate (MS-600)
✅ OpenAI Developer Certifications or Prompt Engineering Badge
✅ Google Cloud Certified: Professional Machine Learning Engineer
✅ NVIDIA Deep Learning Institute Certifications
✅ Databricks Generative AI Pathway (optional)
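The pipeline responsibility above (natural language in, grounded structured output out) is commonly built as retrieval-augmented generation. The following framework-agnostic sketch shows only the shape of that step; every helper in it (embed, vector_search, call_llm) is a hypothetical stand-in for a real embedding model, vector database (FAISS/Pinecone/Qdrant), and LLM client.

```python
# Framework-agnostic RAG-step sketch. All three helpers are hypothetical
# stand-ins; wire them to real services before use.
from typing import List

def embed(text: str) -> List[float]:
    raise NotImplementedError("replace with a real embedding model call")

def vector_search(query_vec: List[float], k: int = 4) -> List[str]:
    raise NotImplementedError("replace with a FAISS/Pinecone/Qdrant lookup")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real LLM client call")

def answer_with_context(question: str) -> str:
    """Retrieve supporting passages, then ground the LLM's answer in them."""
    passages = vector_search(embed(question))
    context = "\n\n".join(passages)
    prompt = (
        "Answer using only the context below. If the context is "
        f"insufficient, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```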
Posted 1 week ago
6.0 years
0 Lacs
Thiruvananthapuram
On-site
Job Requirements
Job Title: AWS Data Engineer
Company: Quest Global
Job Category: Multi Industry - MedTech

Quest Global is seeking a highly skilled AWS Data Engineer with 6+ years of experience in AWS, Databricks, PySpark, and S3. As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines, optimizing data workflows, and ensuring data quality and integrity within the MedTech industry.

Key Responsibilities:
- Designing and implementing scalable data solutions on the AWS cloud platform
- Developing and maintaining data pipelines using Databricks and PySpark
- Collaborating with cross-functional teams to understand data requirements and deliver effective solutions
- Ensuring data quality and integrity through data validation and testing processes
- Optimizing data workflows for improved performance and efficiency

Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 6+ years of experience working as a Data Engineer with expertise in AWS, Databricks, PySpark, and S3
- Strong understanding of data architecture, data modeling, and data warehousing concepts
- Experience with ETL processes, data integration, and data transformation
- Excellent problem-solving skills and ability to work in a fast-paced environment

If you are a talented AWS Data Engineer looking to make a significant impact in the MedTech industry, we encourage you to apply for this exciting opportunity at Quest Global.

Work Experience
- Experience in implementing cloud-based analytics solutions in Databricks (AWS), S3, etc.
- Scripting experience in building data processing pipelines in Databricks (PySpark).
- Strong knowledge of Data Platform and Cloud (AWS) ecosystems, in developing, designing, and operating application development platforms.
- Working experience with AWS native services – DynamoDB, Glue, MSK, S3, Athena, CloudWatch, Lambda, IAM, etc.
- ETL development and analytics applications development.
- Experience in migration of data from Excel, flat files, etc.
- Exposure to all stages of the SDLC, primarily in database architecture and logical and physical database modelling.
- Strong SQL development skills, including query tuning and performance optimization.
- Strong development skills in Python, PySpark, etc.
- Experience in writing unit test cases using PyTest or other tools (a PyTest sketch follows this listing).
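On the PyTest expectation above: a unit test for a PySpark transformation typically spins up a local SparkSession as a fixture. A minimal sketch, with a made-up transformation under test (requires pyspark and pytest installed):

```python
# Minimal PyTest sketch for a PySpark transformation.
import pytest
from pyspark.sql import SparkSession

def dedupe_by_id(df):
    """Transformation under test (hypothetical): keep one row per id."""
    return df.dropDuplicates(["id"])

@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps tests fast and hermetic.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_by_id(spark):
    df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "b")], ["id", "val"])
    assert dedupe_by_id(df).count() == 2
```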
Posted 1 week ago
6.0 years
3 - 8 Lacs
Cochin
On-site
Responsibilities:
- Identify automation opportunities within the system and lead discussions with stakeholders and clients.
- Conduct multiple Proofs of Concept (PoCs) on Tosca’s capabilities and present demos to stakeholders and clients.
- Review software requirements and prepare test scenarios.
- Collaborate with QA Engineers to develop effective strategies, test plans, and test cases.
- Develop and execute both automated and manual test cases and scripts.
- Analyze test results for database impacts, errors, bugs, and usability issues.
- Report bugs and errors to development teams.
- Assist in troubleshooting issues.
- Work with cross-functional teams to ensure quality throughout the software development lifecycle.
- Regularly update and maintain test documentation, ensuring accuracy and completeness for all testing phases.
- Monitor, track, and report on testing activities, progress, and outcomes to stakeholders, providing insights and recommendations for improvements.
- Provide expertise and guidance in quality assurance best practices, continually improving testing methodologies and processes.

Qualifications:
- Bachelor’s degree or equivalent experience.
- 6-8 years of professional experience in software quality assurance with a focus on Tosca automation.
- 2-3+ years of experience as an Automation Lead.
- Proficiency in the Tosca tool suite, including Tosca Commander and DEX.
- Strong leadership and communication skills to effectively lead a team and collaborate with stakeholders.
- Experience in creating and maintaining test automation frameworks and scripts using Tosca.
- Solid understanding of agile methodologies and the software development lifecycle (SDLC).
- Ability to analyze and interpret complex technical information and communicate it effectively to non-technical stakeholders.
- Experience in mobile, mainframe, API, and desktop-based automation.
- Test plan and test strategy creation.
- Mandatory certification: Automation Specialist (AS1, AS2, AE1, TDS1).
- Experience integrating Tosca with CI/CD pipelines and the DEX Server for continuous testing.
- Familiarity with other automation testing tools and frameworks.
- Up-to-date knowledge of software test design and testing methodologies.
- Familiarity with agile frameworks.
- Ability to develop test strategies and write test cases.
- Ability to document and troubleshoot errors.
- Working knowledge of test management software.
- Industry experience with a healthcare insurance company.
- Experience testing both web and mobile applications.
- Excellent communication and critical thinking skills.
- Good organizational skills and a detail-oriented mindset.
- Analytical mind and problem-solving aptitude.

Technical Skills:
- ETL testing tools: Tableau, Cognos, Informatica, DataStage, MuleSoft, Power BI, Databricks.
- Test management tools: Spira, qTest, TestRail, HP ALM, and JIRA.
- Automation testing tools: Tosca and Selenium.
- Programming languages: Java, Python.
- Databases: Oracle, AWS Redshift.

IQVIA is a leading global provider of clinical research services, commercial insights, and healthcare intelligence to the life sciences and healthcare industries. We create intelligent connections to accelerate the development and commercialization of innovative medical treatments to help improve patient outcomes and population health worldwide. Learn more at https://jobs.iqvia.com
Posted 1 week ago
0 years
4 - 5 Lacs
Gurgaon
On-site
Overview
We are seeking an experienced Data Modeller with expertise in designing and implementing data models for modern data platforms. This role requires deep knowledge of data modeling techniques, healthcare data structures, and experience with the Databricks Lakehouse architecture. The ideal candidate will have a proven track record of translating complex business requirements into efficient, scalable data models that support analytics and reporting needs.

About the Role
As a Data Modeller, you will be responsible for designing and implementing data models for our Databricks-based Modern Data Platform. You will work closely with business stakeholders, data architects, and data engineers to create logical and physical data models that support the migration from legacy systems to the Databricks Lakehouse architecture, ensuring data integrity, performance, and compliance with healthcare industry standards.

Key Responsibilities
- Design and implement logical and physical data models for Databricks Lakehouse implementations (a minimal star-schema sketch follows this listing)
- Translate business requirements into efficient, scalable data models
- Create and maintain data dictionaries, entity relationship diagrams, and model documentation
- Develop dimensional models, data vault models, and other modeling approaches as appropriate
- Support the migration of data models from legacy systems to the Databricks platform
- Collaborate with data architects to ensure alignment with the overall data architecture
- Work with data engineers to implement and optimize data models
- Ensure data models comply with healthcare industry regulations and standards
- Implement data modeling best practices and standards
- Provide guidance on data modeling approaches and techniques
- Participate in data governance initiatives and data quality assessments
- Stay current with evolving data modeling techniques and industry trends

Qualifications
- Extensive experience in data modeling for analytics and reporting systems
- Strong knowledge of dimensional modeling, data vault, and other modeling methodologies
- Experience with the Databricks platform and Delta Lake architecture
- Expertise in healthcare data modeling and industry standards
- Experience migrating data models from legacy systems to modern platforms
- Strong SQL skills and experience with data definition languages
- Understanding of data governance principles and practices
- Experience with data modeling tools and technologies
- Knowledge of performance optimization techniques for data models
- Bachelor’s degree in Computer Science, Information Systems, or a related field; advanced degree preferred
- Professional certifications in data modeling or related areas

Technical Skills
- Data modeling methodologies (dimensional, data vault, etc.)
- Databricks platform and Delta Lake
- SQL and data definition languages
- Data modeling tools (erwin, ER/Studio, etc.)
- Data warehousing concepts and principles
- ETL/ELT processes and data integration
- Performance tuning for data models
- Metadata management and data cataloging
- Cloud platforms (AWS, Azure, GCP)
- Big data technologies and distributed computing

Healthcare Industry Knowledge
- Healthcare data structures and relationships
- Healthcare terminology and coding systems (ICD, CPT, SNOMED, etc.)
- Healthcare data standards (HL7, FHIR, etc.)
- Healthcare analytics use cases and requirements
Optionally:
- Healthcare regulatory requirements (HIPAA, HITECH, etc.)
- Clinical and operational data modeling challenges
- Population health and value-based care data needs

Personal Attributes
- Strong analytical and problem-solving skills
- Excellent attention to detail and data quality focus
- Ability to translate complex business requirements into technical solutions
- Effective communication skills with both technical and non-technical stakeholders
- Collaborative approach to working with cross-functional teams
- Self-motivated with the ability to work independently
- Continuous learner who stays current with industry trends

What We Offer
- Opportunity to design data models for cutting-edge healthcare analytics
- Collaborative and innovative work environment
- Competitive compensation package
- Professional development opportunities
- Work with leading technologies in the data space

This position requires a unique combination of data modeling expertise, technical knowledge, and healthcare industry understanding. The ideal candidate will have demonstrated success in designing efficient, scalable data models and a passion for creating data structures that enable powerful analytics and insights.
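Since the role above centers on dimensional models implemented on Databricks Delta Lake, here is a minimal star-schema sketch expressed as Spark SQL DDL from Python. The tables and columns are illustrative only, not a real healthcare model, and USING DELTA assumes a Delta-enabled environment such as a Databricks cluster.

```python
# Minimal star-schema sketch on Delta Lake: one dimension, one fact.
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_patient (
        patient_key BIGINT,
        patient_id  STRING,
        birth_date  DATE,
        gender      STRING
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_encounter (
        encounter_key  BIGINT,
        patient_key    BIGINT,   -- foreign key to dim_patient
        encounter_date DATE,
        icd10_code     STRING,
        charge_amount  DECIMAL(12, 2)
    ) USING DELTA
    PARTITIONED BY (encounter_date)
""")
```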
Posted 1 week ago
3.0 years
3 - 6 Lacs
Noida
On-site
Engineering at Innovaccer
With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we’re shaping the future and making a meaningful impact on the world.

About the Role
The technology that once promised to simplify patient care has brought more issues than anyone ever anticipated. At Innovaccer, we defeat this beast by making full use of all the data healthcare has worked so hard to collect, and replacing long-standing problems with ideal solutions. Data is our bread and butter for innovation. We are looking for a Software Development Engineer II (AI) who understands healthcare data and can leverage that data to build algorithms that personalize treatments based on the clinical and behavioral history of patients. The purpose of this role is to build agents for the platform, automating areas around support engineering and data engineering. We are looking for a superstar who will define and build the next generation of predictive analytics tools in healthcare.

A Day in the Life
- Design and lead the development of various artificial intelligence initiatives to help improve the health and wellness of patients
- Work with business leaders and customers to understand their pain points and build large-scale solutions for them
- Break down complex business problems into machine learning problems and design solution workflows
- Work with our data platform team to help them successfully integrate agent capabilities or algorithms into their products/workflows
- Work with development teams to build tools for repeatable data tasks that will accelerate and automate the development cycle

What You Need
- 3+ years of experience in software engineering, with experience in building APIs, an understanding of how APIs function, and the ability to effectively develop applications (a minimal FastAPI sketch follows this listing)
- Familiarity with prompt engineering and working experience in fine-tuning LLMs
- Hands-on experience working with multi-agent systems and frameworks like CrewAI/Langchain/Autogen (one of them is a must) and prompt engineering
- Hands-on experience with vector databases such as ChromaDB, FAISS, etc.
- Experience in reinforcement learning (RL), especially for autonomous agents
- Working experience with embedding models and RAG design
- Strong hands-on experience in Python - building enterprise applications with optimization techniques, along with API integrations (FastAPI/Django)
- Hands-on experience with at least one ML platform from among Databricks, Azure ML, and SageMaker
- Good to have: comfortable with Docker, Kubernetes, AWS cloud technologies, and Snowflake, plus some experience in healthcare

Preferred Skills
- Python - building highly scalable and performant applications
- LLMs - deep experience in working with and fine-tuning LLM models
- Reinforcement learning and multi-agent systems
- Vector databases

Here’s What We Offer
- Generous Leaves: Enjoy generous leave benefits of up to 40 days.
- Parental Leave: Leverage one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health Insurance: We offer comprehensive health insurance to support you and your family, covering medical expenses related to illness, disease, or injury.
Extending support to the family members who matter most.
- Care Program: Whether it’s a celebration or a time of need, we’ve got you covered with care vouchers to mark major life events. Through our Care Vouchers program, employees receive thoughtful gestures for significant personal milestones and moments of need.
- Financial Assistance: Life happens, and when it does, we’re here to help. Our financial assistance policy offers support through salary advances and personal loans for genuine personal needs, ensuring help is there when you need it most.

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

About Innovaccer
Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure, extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, Instagram, and the Web.
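On the API-building requirement above, a minimal FastAPI sketch of exposing an agent behind an endpoint might look like the following; run_agent is a hypothetical stand-in for a real CrewAI/LangChain/AutoGen invocation, not Innovaccer's actual service.

```python
# Hypothetical FastAPI sketch: expose an agent behind a POST endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    question: str

def run_agent(question: str) -> str:
    # Stand-in for a CrewAI/LangChain/AutoGen agent invocation.
    return f"(agent answer for: {question})"

@app.post("/agent/ask")
def ask(query: Query) -> dict:
    return {"answer": run_agent(query.question)}

# Run locally with, e.g.:  uvicorn app:app --reload
```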
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Position
Chevron invites applications for the role of Lead Data Scientist in India. This position is integral to designing and developing AI/ML models that significantly accelerate the delivery of business value. The successful candidate will work with business stakeholders to prototype and deliver innovative data science applications that add value to Chevron’s business. This position will provide broad exposure to the application of technology to enable the business, with many opportunities for growth and professional development.

Key Responsibilities:
- Combine expertise in mathematics, statistics, computer science, and domain knowledge to create AI/ML models that solve various Chevron business challenges
- Collaborate closely with the AI Technical Manager, GCC petro-technical professionals, and data engineers to integrate models into the business framework
- Identify and frame opportunities to apply advanced analytics, modeling, and related technologies to data to help Chevron businesses gain insight and improve decision-making, workflow, and automation
- Understand and communicate the value of a proposed opportunity with team members and other stakeholders
- Identify the data and technology needed to solve identified business challenges
- Clean data, and develop and test models
- Establish the life cycle management process for models
- Provide technical mentoring in modelling and analytics technologies, the specifics of the modelling process, and general consulting skills
- Drive innovation in AI/ML to enhance Chevron’s capabilities in data-driven decision-making
- Align with the team on shared goals and outcomes, recognize others’ contributions, work collaboratively, and seek diverse perspectives
- Take actions to develop self and others beyond the existing skillset
- Encourage innovative ideas and adapt to change and changing technologies
- Understand and communicate data, insights, and model behaviours to stakeholders with varying levels of technical expertise

Required Qualifications:
- Minimum 5 years of experience in designing and developing AI/ML models and/or various optimization algorithms; 10-15 years of total experience
- Solid foundation in mathematics, probability, and statistics, with demonstrated depth of knowledge and experience in advanced analytics and data science methodologies (e.g., supervised and unsupervised learning, statistics, data science model development); a generic supervised-learning sketch follows this listing
- Proficiency in Python and working knowledge of cloud AI/ML services (Azure Machine Learning and Databricks preferred)
- Domain knowledge relevant to the energy sector and working knowledge of the oil and gas value chain (e.g., upstream, midstream, or downstream) and associated business workflows
- Proven ability to frame data science opportunities, leverage standard foundational tools and Azure services to perform exploratory data analysis (for purposes of data cleaning and discovery), visualize data, and identify actions to reach needed results
- Ability to quickly assess the current state and apply technical concepts across cross-functional business workflows
- Experience with driving successful execution, deliverables, and accountabilities to meet quality and schedule goals
- Ability to translate complex data into actionable insights that drive business value
- Demonstrated ability to engage and establish collaborative relationships both inside and outside the immediate workgroup, at various organizational levels and across functional and geographic boundaries, to achieve desired outcomes
Demonstrated ability to adjust behaviour based on feedback and provide feedback to others. Team-oriented mindset with effective communication skills and the ability to work collaboratively.
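By way of illustration, here is a minimal sketch of the kind of supervised-learning workflow this posting describes, using Python and scikit-learn; the input file, feature columns, and target are hypothetical placeholders rather than anything Chevron-specific.

```python
# Minimal supervised-learning sketch; dataset and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("well_production_history.csv")   # hypothetical cleaned dataset
X, y = df.drop(columns=["target_rate"]), df["target_rate"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scale features, then fit a gradient-boosted regressor.
model = Pipeline([
    ("scale", StandardScaler()),
    ("gbr", GradientBoostingRegressor(random_state=42)),
])
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```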
Posted 1 week ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Role and Responsibilities: Develop, execute, and maintain automated test scripts using Selenium, Python, or Java. Conduct thorough testing of various modules, including Reporting, Chart of Accounts, General Ledger, Accounting Model, IFRS17 and Actuarial, Insurance, Re-insurance, Investment, Payroll, Procurement, Expense Allocation, and Consolidation. Ensure seamless integration with our Azure Databricks data repository. Collaborate with Agile teams to ensure comprehensive test coverage and timely delivery. Identify, document, and track defects and issues. Perform regression testing to ensure existing functionality is not affected by new changes. Work closely with developers, business analysts, and other stakeholders to understand requirements and provide feedback. Continuously improve test processes and methodologies.
Qualifications: Bachelor's degree in Computer Science, Information Technology, Finance, or a related field. Proven experience as a Test Engineer, preferably in the finance or insurance industry. Strong knowledge of test automation tools and frameworks, particularly Selenium, Python, and Java. Familiarity with Oracle Fusion and its modules. Experience with Azure Databricks and data integration processes. Strong analytical and problem-solving abilities. Excellent communication and interpersonal skills. Ability to work under pressure and manage multiple priorities.
Preferred Skills: Certification in software testing (e.g., ISTQB). Experience with IFRS17 and actuarial processes. Knowledge of insurance and re-insurance operations. Familiarity with investment, payroll, procurement, and expense allocation processes.
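As a hedged illustration of the automated test scripts this role calls for, the sketch below pairs Selenium with pytest; the URL and the expected heading are hypothetical placeholders.

```python
# Minimal Selenium + pytest sketch; URL and locator are hypothetical.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    drv = webdriver.Chrome()   # assumes a local ChromeDriver is available
    yield drv
    drv.quit()

def test_general_ledger_page_loads(driver):
    driver.get("https://erp.example.internal/general-ledger")  # hypothetical URL
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "General Ledger" in heading.text
```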
Posted 1 week ago
5.0 - 8.0 years
6 - 9 Lacs
Ahmedabad
On-site
Senior Software Engineer
Ahmedabad, India; Hyderabad, India; Islamabad, Pakistan | Information Technology | 317835
Job Description
About The Role: Grade Level (for internal use): 10
The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams.
The Impact: Enterprise Data Organization is seeking a Software Developer for the design, development, and maintenance of data processing applications. This person would be part of a development team that manages and supports the internal and external applications that support the business portfolio. This role expects the candidate to handle data processing and big data application development. We have teams made up of people who learn how to work effectively together while working with the larger group of developers on our platform.
What's in it for you: Opportunity to contribute to the development of a world-class Platform Engineering team. Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement. Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack. Contribute to the development and support of Tier-1, business-critical applications that are central to operations. Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks. Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.
Responsibilities: Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation. Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions. Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Python, Scala, NiFi, SQL). Build data models, perform performance tuning, and apply data architecture concepts. Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies. Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality. Provide operations support to resolve issues proactively and with utmost urgency. Effectively manage time and multiple tasks. Communicate effectively, especially in writing, with the business and other technical groups.
Basic Qualifications: Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent. Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native, and MS SQL Server backend development. Proficiency with object-oriented programming. Nice-to-have knowledge of Grafana, Kibana, big data, GitHub, EMR, Terraform, and AI/ML. Advanced SQL programming skills. A skill set in Databricks and Scala technologies is highly recommended.
Understanding of database performance tuning for large datasets. Ability to manage multiple priorities efficiently and effectively within specific timeframes. Excellent logical, analytical, and communication skills are essential, with strong verbal and writing proficiency. Knowledge of Fundamentals or the financial industry is highly preferred. Experience in conducting application design and code reviews.
Proficiency with the following technologies: object-oriented programming; programming languages (C#, .NET Core); cloud computing; database systems (SQL, MS SQL). Nice to have: NoSQL (Databricks, Scala, Python); scripting (Bash, Scala, Perl, PowerShell).
Preferred Qualifications: Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP). Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing.
What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings (Strategic Workforce Planning)
Job ID: 317835
Posted On: 2025-07-09
Location: Ahmedabad, Gujarat, India
Posted 1 week ago
7.0 years
4 - 18 Lacs
Coimbatore, Tamil Nadu
On-site
Urgent Hiring: Data Engineer - Supply Chain Applications
Location: Coimbatore, India
NovintiX is on the lookout for experienced Data Engineers (x4) to strengthen our Supply Chain Analytics team. If you're passionate about building smart data solutions that power real-world logistics, procurement, and demand planning, this role is built for you.
Key Responsibilities:
Data Pipelines & Integration: Build and optimize scalable pipelines for inventory, shipping, and procurement data. Integrate data from ERP, PLM, and third-party sources. Develop APIs to enable seamless data exchange.
Data Warehouses & Lakes: Design, develop, and maintain enterprise-grade data lakes and warehouses. Ensure high standards of data quality, integrity, and security.
Analytics, BI & Visualization: Collaborate with stakeholders to build reporting dashboards using Power BI, Tableau, or QlikSense. Support supply chain decision-making through data-driven insights.
Advanced Analytics & Prediction: Build data models and algorithms for demand forecasting and logistics optimization. Work with ML libraries and concepts to support predictive analysis.
Cross-Functional Collaboration: Coordinate with supply chain, logistics, and IT teams. Translate complex technical solutions into business language.
Governance and Compliance: Implement strong data governance frameworks. Ensure data compliance and audit readiness.
Qualifications & Tech Skills: 7+ years in data engineering. Bachelor's in Computer Science/IT or a related field. Expertise in: Python, Java, SQL, Spark SQL; Hadoop, PySpark, NoSQL; Power BI, Tableau, QlikSense; Azure Data Factory, Azure Databricks, AWS. Strong collaboration and communication skills. Exposure to fast-paced, agile environments.
Send your resume to: shanmathi.saravanan@novintix.com
Job Type: Full-time
Pay: ₹470,797.12 - ₹1,819,955.11 per year
Ability to commute/relocate: Coimbatore, Tamil Nadu: Reliably commute or plan to relocate before starting work (Preferred)
Work Location: In person
Application Deadline: 13/07/2025
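For a sense of the pipeline work described above, here is a minimal PySpark sketch that aggregates shipping data into weekly demand; the storage paths and column names are hypothetical.

```python
# Minimal PySpark sketch of a supply-chain aggregation; paths and columns
# are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("weekly-demand").getOrCreate()

shipments = spark.read.parquet("s3://acme-lake/raw/shipments/")  # hypothetical path
weekly_demand = (
    shipments
    .withColumn("week", F.date_trunc("week", F.col("ship_date")))
    .groupBy("week", "sku")
    .agg(F.sum("quantity").alias("units_shipped"))
)
weekly_demand.write.mode("overwrite").parquet("s3://acme-lake/curated/weekly_demand/")
```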
Posted 1 week ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions, including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.
Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools. Influence and improve customer satisfaction through effective data solutions.
Measures of Outcomes: Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. # of defects post-delivery. # of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times). Average time to detect, respond to, and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches.
Outputs Expected:
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, and test cases and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.
Skill Examples: Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components.
Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, and Azure ADF and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering.
Additional Comments: UST is seeking a highly skilled and motivated Lead Data Engineer to join our Telecommunications vertical, leading impactful data engineering initiatives for US-based Telco clients. The ideal candidate will have 6-8 years of experience in designing and developing scalable data pipelines using Snowflake, Azure Data Factory, and Azure Databricks. Proficiency in Python, PySpark, and advanced SQL is essential, with a strong focus on query optimization, performance tuning, and cost-effective architecture. A solid understanding of data integration, real-time and batch processing, and metadata management is required, along with experience in building robust ETL/ELT workflows. Candidates should demonstrate a strong commitment to data quality, validation, and consistency; working knowledge of data governance, RBAC, encryption, and compliance frameworks is considered a plus. Familiarity with Power BI or similar BI tools is also advantageous, enabling effective data visualization and storytelling.
The role demands the ability to work in a dynamic, fast-paced environment, collaborating closely with stakeholders and cross-functional teams while also being capable of working independently. Strong communication skills and the ability to coordinate across multiple teams and stakeholders are critical for success. In addition to technical expertise, the candidate should bring experience in solution design and architecture planning, contributing to scalable and future-ready data platforms. A proactive mindset, eagerness to learn, and adaptability to the rapidly evolving data engineering landscape—including AI integration into data workflows—are highly valued. This is a leadership role that involves mentoring junior engineers, fostering innovation, and driving continuous improvement in data engineering practices. Skills Azure Databricks,Snowflake,python,Data Engineering
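Since the profile above calls out SQL windowing functions, here is a minimal PySpark sketch of a common use: keeping only the latest record per key. Table and column names are hypothetical.

```python
# Minimal windowing sketch: latest record per customer; names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
events = spark.read.table("raw.customer_events")   # hypothetical source table

latest_first = Window.partitionBy("customer_id").orderBy(F.col("event_ts").desc())
current = (
    events
    .withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
)
current.write.mode("overwrite").saveAsTable("curated.customer_current")
```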
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Azure ML and Python Developer - Senior 1/2
EY GDS Consulting digital engineering is seeking an experienced Azure ML and Python Developer. Your responsibilities will include designing and implementing data pre-processing, feature engineering, and model training pipelines, collaborating closely with data scientists, and ensuring model performance and reliability in production environments. Proficiency in Azure ML services, Python programming, and a strong background in machine learning are key requirements for this role.
The opportunity: We are seeking a seasoned Azure ML and Python Developer with 6-8 years of experience to join our innovative team. This role presents an exciting opportunity for individuals with a strong background in machine learning and cloud computing to lead and contribute to cutting-edge projects on the Azure platform.
Your Key Responsibilities: Develop and deploy machine learning models on the Azure cloud platform using the Python programming language and Azure ML services. Design and implement scalable and efficient data pipelines for model training and inference. Collaborate with data scientists and business stakeholders to understand requirements and translate them into technical solutions. Implement best practices for ML development, including version control, testing, and documentation. Develop and maintain APIs on the Azure cloud platform for model deployment and integration with applications. Monitor and optimize model performance, ensuring scalability, reliability, and security. Participate in code reviews, troubleshooting, and optimization to improve performance and maintainability. Stay updated with industry trends and best practices in machine learning, cloud computing, and API development. Mentor junior team members, sharing expertise and guiding them through technical challenges. Collaborate with cross-functional teams, including data engineers, data scientists, and business analysts, to deliver end-to-end solutions. Conduct regular evaluations and assessments of ML models and APIs, identifying areas for improvement and optimization. Lead and contribute to innovation initiatives, exploring new technologies and approaches to enhance our ML and cloud capabilities.
Skills and Experience: Bachelor's degree in computer science, data science, or a related field. 6-8 years of experience in machine learning, data engineering, and cloud computing, with a strong focus on Azure services. Proficiency in the Python programming language for ML model development and deployment. Experience with Azure ML services, Azure data services (e.g., Azure Data Lake, Azure SQL Database), and Azure cloud infrastructure. Strong understanding of machine learning concepts, algorithms, and techniques, including supervised and unsupervised learning, deep learning, and natural language processing. Expertise in designing and implementing data pipelines for ML model training and inference, using tools like Azure Databricks, Azure Data Factory, and Azure Synapse Analytics. Familiarity with cloud-native technologies, containerization (e.g., Docker, Kubernetes), and DevOps practices for ML model deployment and automation. Experience with version control systems (e.g., Git/GitHub), CI/CD pipelines, and software engineering best practices for ML projects.
Strong analytical and problem-solving skills, with the ability to analyze complex data sets, identify patterns, and derive meaningful insights. Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams and communicate technical concepts to non-technical stakeholders. Proactive mindset, self-driven, and eager to learn new technologies and methodologies in the ML and cloud computing space. Ability to work in a fast-paced environment, manage multiple priorities, and deliver high-quality results under tight deadlines. To qualify for the role, you must have A bachelor's or master's degree in computer science, data science, or related field. 6-8 years of hands-on experience in machine learning, cloud computing, and API development, with a focus on Azure services. Proficiency in Python programming language for ML model development and deployment, along with experience using Azure ML services and Azure cloud infrastructure. Strong communication skills, both verbal and written, with the ability to articulate technical concepts effectively to diverse audiences. Proven track record of successfully delivering ML projects, designing scalable data pipelines, and collaborating in cross-functional teams. Certifications such as Microsoft Certified: Azure AI Engineer Associate, Microsoft Certified: Azure Data Scientist Associate, or equivalent, are highly desirable. Experience with other cloud platforms (e.g., AWS, Google Cloud Platform) and additional programming languages (e.g., R, Java) is a plus. Ideally, you’ll also have Analytical ability to manage multiple ML projects simultaneously, prioritize tasks effectively, and make data-driven decisions. Experience operating independently or with minimal supervision, demonstrating strong problem-solving skills and initiative. Familiarity with advanced ML techniques and frameworks (e.g., TensorFlow, PyTorch) for deep learning and computer vision applications. Knowledge of cloud security principles, data governance, and regulatory compliance frameworks relevant to ML and cloud environments. Experience with Big Data technologies (e.g., Apache Spark, Hadoop) for processing large-scale datasets and building ML models at scale. What Working At EY Offers At EY, we’re dedicated to helping our clients, from start–ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. 
EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
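To ground the Azure ML work described above, here is a minimal sketch of submitting a training job with the Azure ML Python SDK (v2); the subscription, workspace, compute, and environment names are hypothetical and would need to match your own workspace.

```python
# Minimal Azure ML v2 job-submission sketch; all resource names are hypothetical.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",     # hypothetical
    resource_group_name="<resource-group>",  # hypothetical
    workspace_name="<workspace>",            # hypothetical
)

job = command(
    code="./src",                  # folder assumed to contain train.py
    command="python train.py",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",  # curated env; verify availability in your workspace
    compute="cpu-cluster",         # hypothetical compute target
    display_name="train-demo",
)
ml_client.jobs.create_or_update(job)
```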
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The future is our choice
At Atos, as the global leader in secure and decarbonized digital, our purpose is to help design the future of the information space. Together we bring the diversity of our people's skills and backgrounds to make the right choices with our clients, for our company, and for our own futures.
Roles & Responsibilities: Should have solid experience with Databricks. Overall 6+ years of experience.
Requirements: Good experience with the medallion architecture. Very good knowledge of PySpark, Python, and SQL. Lakeflow connectors, DLT, Delta Sharing. Data extraction from various source applications such as SFDC, SAP, and cloud apps. Data integration from Databricks to various downstream systems. Good understanding of, and hands-on experience with, data warehousing concepts.
Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs and work-life balance, with integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.
Here at Atos, diversity and inclusion are embedded in our DNA. Read more about our commitment to a fair work environment for all. Atos is a recognized leader in its industry across Environment, Social and Governance (ESG) criteria. Find out more on our CSR commitment. Choose your future. Choose Atos.
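As a hedged sketch of the medallion architecture this posting mentions, the PySpark snippet below promotes a raw bronze Delta table to a cleaned silver table; the mount paths and columns are hypothetical.

```python
# Minimal bronze -> silver medallion step; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")  # raw ingest
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("order_ts").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))   # derive a clean date column
)
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")
```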
Posted 1 week ago
0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
About the Role
This role is accountable for running day-to-day operations of a data platform on Azure/AWS Databricks. The Data Engineer is accountable for ongoing development and enhancement support, maintaining data availability and data quality, and ensuring performance and stability of the system.
1. Designing and implementing data ingestion pipelines from multiple sources using Azure Databricks
2. Ensuring data pipelines run smoothly and efficiently
3. Adhering to security, regulatory, and audit control guidelines
4. Driving optimization, continuous improvement, and efficiency
Essential for this role: Minimum 5 years of experience in the data analytics field. Experience with Azure/AWS Databricks. Experience in building and optimizing data pipelines, architectures, and data sets. Excellent experience in Scala or Python, PySpark, and SQL. Ability to troubleshoot and optimize complex queries on the Spark platform. Knowledgeable about structured and unstructured data design/modelling, data access, and data storage techniques. Expertise in designing and deploying data applications on cloud solutions such as Azure or AWS. Hands-on experience in performance tuning and optimizing code running in a Databricks environment. Demonstrated analytical and problem-solving skills, particularly those that apply to a big data environment.
Technical / Professional Skills: Azure/AWS Databricks. Python / Scala / Spark / PySpark. Hive / HBase / Impala / Parquet. Sqoop, Kafka, Flume. SQL and RDBMS. Airflow. Jenkins / Bamboo. GitHub / Bitbucket. Nexus.
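For illustration, a minimal sketch of the kind of Databricks ingestion pipeline this role operates, using Auto Loader (the cloudFiles source) for incremental file ingestion; the landing, schema, and checkpoint paths are hypothetical.

```python
# Minimal Auto Loader ingestion sketch on Databricks; paths are hypothetical.
# `spark` is provided by the Databricks runtime.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/events")
    .load("/mnt/landing/events/")
)

(
    stream.writeStream
    .option("checkpointLocation", "/mnt/lake/_checkpoints/events")
    .trigger(availableNow=True)   # process all available files, then stop
    .toTable("bronze.events")
)
```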
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
AWS Data Engineer
Work Mode: Hybrid
Work Location: Chennai / Hyderabad / Bangalore / Pune / Mumbai / Gurgaon
Work Timing: 2 PM to 11 PM
Primary: Data Engineer
AWS Data Engineer - AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks
JD: Examine the business needs to determine the testing technique for automation testing. Maintain the present regression suites and test scripts. Attend agile meetings for backlog refinement, sprint planning, and daily scrum meetings. Execute regression suites for better results. Provide results to developers, project managers, stakeholders, and manual testers.
Responsibilities (AWS Data Engineer): Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3. Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes. Create and manage applications using Python, SQL, Databricks, and various AWS technologies. Automate repetitive tasks and build reusable frameworks to improve efficiency.
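A minimal sketch of the Glue-based ETL described above, reading a cataloged table and writing Parquet to S3; the database, table, and bucket names are hypothetical.

```python
# Minimal AWS Glue ETL sketch; catalog and S3 names are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog and land curated Parquet in S3.
orders = glue.create_dynamic_frame.from_catalog(database="sales", table_name="orders")
glue.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```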
Posted 1 week ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Required Skills: Proven experience with Azure Databricks and Azure Data Factory. Strong programming skills in PySpark, SQL, and Python. Experience with semi-structured data and building generic frameworks for data pipelines. Familiarity with Azure cloud fundamentals and best practices. Experience with CI/CD pipelines for deploying ADF and ADB artifacts using Azure DevOps. Preferred knowledge of streaming technologies such as Kafka. Excellent problem-solving skills and attention to detail. Willingness to provide on-call support, including during non-office hours, for critical systems.
Nice to have: Knowledge of Cosmos DB is a plus. Experience with Python FastAPI for API development. Experience with real-time data processing and streaming. Familiarity with data governance and compliance standards.
Required Experience: 7+ years
Location: Noida and Bangalore
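Given the preferred Kafka experience noted above, here is a minimal Structured Streaming sketch that lands a Kafka topic into a Delta table; the broker address, topic, and paths are hypothetical.

```python
# Minimal Kafka -> Delta streaming sketch; broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # hypothetical broker
    .option("subscribe", "device-events")                # hypothetical topic
    .load()
)

# Kafka values arrive as bytes; cast to string before downstream parsing.
parsed = raw.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/chk/device-events")
    .start("/mnt/bronze/device_events")
)
```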
Posted 1 week ago
3.0 - 8.0 years
0 - 3 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Need Immediate Joiners
Responsibilities: Define, design, develop, and test software components/applications using AWS (Databricks on AWS, AWS Glue, Amazon S3, AWS Lambda, Amazon Redshift, AWS Secrets Manager). Strong SQL skills with experience. Experience handling structured and unstructured datasets. Experience in data modeling and advanced SQL techniques. Experience implementing AWS Glue, Airflow, or any other data orchestration tool using the latest technologies and techniques. Good exposure to application development. The candidate should work independently with minimal supervision.
Must Have: Hands-on experience with a distributed computing framework like Databricks or the Spark ecosystem (Spark Core, PySpark, Spark Streaming, Spark SQL). Willingness to work with product teams to best optimize product features/functions. Experience with batch workloads and real-time streaming with high-volume data frequency. Performance optimization of Spark workloads. Environment setup, user management, authentication, and cluster management on Databricks. Professional curiosity and the ability to enable yourself in new technologies and tasks. Good understanding of SQL and a good grasp of relational and analytical database management theory and practice.
Key Skills: Python, SQL, and PySpark. Big data ecosystem (Hadoop, Hive, Sqoop, HDFS, HBase). Spark ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks. AWS (AWS Glue, Databricks on AWS, Lambda, Amazon Redshift, Amazon S3, AWS Secrets Manager). Data modelling, ETL methodology.
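To illustrate the Spark performance-optimization point above, a minimal sketch of one common technique: broadcasting a small dimension table to avoid a shuffle join. Paths and columns are hypothetical.

```python
# Minimal broadcast-join sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

facts = spark.read.parquet("s3://acme-lake/facts/")          # large fact table
regions = spark.read.parquet("s3://acme-lake/dim_region/")   # small lookup table

# Broadcasting the small side ships it to every executor, skipping the shuffle.
joined = facts.join(broadcast(regions), on="region_id", how="left")
joined.write.mode("overwrite").partitionBy("region_id").parquet("s3://acme-lake/out/")
```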
Posted 1 week ago
4.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Senior/Lead DevOps Engineer – Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures. Required Skills & Experience • 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks. • Hands-on experience with Azure networking, VNET integration, and firewall rules. • Strong knowledge of Databricks cluster management, job scheduling, and optimization. • Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions. • Experience with Databricks Asset Bundles (DAB) for packaging and deployment. • Proficiency in RBAC, Unity Catalog, and workspace access control. • Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep. • Strong scripting skills in Python, Bash, or PowerShell. • Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog). Preferred Qualifications • Databricks Certified Associate/Professional Administrator or equivalent certification. • Experience with AWS or GCP Databricks in addition to Azure. • Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow. • Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment. Roles & Responsibilities Key Responsibilities 1. Databricks Infrastructure Setup & Management • Configure and manage Azure Databricks workspaces, networking, and security. • Set up networking components like VNET integration, private endpoints, and firewall configurations. • Implement scalability strategies for efficient resource utilization. • Ensure high availability, resilience, and security of Databricks infrastructure. 2. Cluster & Capacity Management • Manage Databricks clusters, including autoscaling, instance selection, and performance tuning. • Optimize compute resources to minimize costs while maintaining performance. • Implement cluster policies and governance controls. 3. User & Access Management • Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services. • Manage Databricks Unity Catalog and enforce workspace-level access controls. • Define and enforce security policies across Databricks workspaces. 4. CI/CD Automation for Databricks & ML Models • Develop and manage CI/CD pipelines for Databricks Notebooks, Jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins. • Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep. • Implement automated testing, version control, and rollback strategies for Databricks workloads. • Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments. 5. Databricks Asset Bundle Management • Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently. • Automate workspace configuration, job definitions, and dependencies using DAB. • Ensure traceability, rollback, and version control of deployed assets. • Integrate DAB with CI/CD pipelines for seamless deployment. 6. ML Model Deployment & Monitoring • Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS). 
• Optimize model performance and enable real-time inference. • Implement model monitoring, drift detection, and automated retraining pipelines. 7. Monitoring, Troubleshooting & Performance Optimization • Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus. • Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads. • Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
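As a hedged example of the scripted cluster management this role covers, the sketch below uses the Databricks SDK for Python to create an autoscaling cluster; the runtime version and node type are hypothetical and environment-specific.

```python
# Minimal Databricks SDK sketch for cluster creation; runtime and node type
# are hypothetical and environment-specific.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import AutoScale

w = WorkspaceClient()  # credentials resolved from the environment or a config profile

cluster = w.clusters.create(
    cluster_name="etl-autoscale",
    spark_version="15.4.x-scala2.12",   # hypothetical LTS runtime
    node_type_id="Standard_DS3_v2",     # hypothetical Azure node type
    autoscale=AutoScale(min_workers=2, max_workers=8),
    autotermination_minutes=30,
).result()                               # block until the cluster is running
print(cluster.cluster_id)
```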
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: AWS Data Engineer
Location: Pune, Jaipur, Bengaluru, Hyderabad, Noida
Duration: Full-time
Positions: Multiple
Responsibilities: • Define, design, develop, and test software components/applications using AWS (Databricks on AWS, AWS Glue, Amazon S3, AWS Lambda, Amazon Redshift, AWS Secrets Manager) • Strong SQL skills with experience • Experience handling structured and unstructured datasets • Experience in data modeling and advanced SQL techniques • Experience implementing AWS Glue, Airflow, or any other data orchestration tool using the latest technologies and techniques • Good exposure to application development • The candidate should work independently with minimal supervision
Must Have: • Hands-on experience with a distributed computing framework like Databricks or the Spark ecosystem (Spark Core, PySpark, Spark Streaming, Spark SQL) • Willingness to work with product teams to best optimize product features/functions • Experience with batch workloads and real-time streaming with high-volume data frequency • Performance optimization of Spark workloads • Environment setup, user management, authentication, and cluster management on Databricks • Professional curiosity and the ability to enable yourself in new technologies and tasks • Good understanding of SQL and a good grasp of relational and analytical database management theory and practice
Good To Have: • Hands-on experience with a distributed computing framework like Databricks • Experience with Databricks migration from on-premise to cloud or cloud to cloud • Migration of ETL workloads from Apache Spark implementations to Databricks • Experience with Databricks ML will be a plus • Migration from Spark 2.0 to Spark 3.5
Key Skills: • Python, SQL, and PySpark • Big data ecosystem (Hadoop, Hive, Sqoop, HDFS, HBase) • Spark ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks • AWS (AWS Glue, Databricks on AWS, Lambda, Amazon Redshift, Amazon S3, AWS Secrets Manager) • Data modelling, ETL methodology
Posted 1 week ago