8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Data Scientist
Experience: 8+ years
Location: Ahmedabad / Hyderabad

We are seeking a Data Scientist to develop and implement advanced predictive models and optimization solutions. The ideal candidate will have expertise in predictive modeling, integer programming, Python development, and cloud-based data processing. This role involves working with large datasets, solving complex optimization problems, and managing cloud infrastructure for scalable solutions.

Key Responsibilities:
Develop and implement predictive models using statistical methods (e.g., Bayesian models).
Solve optimization problems such as bin packing, TSP, and clustering using integer programming.
Develop Python-based solutions using Git/Poetry for code and library management.
Work with data processing libraries such as Pandas, Polars, and others.
Deploy and manage data pipelines on Databricks and Azure Blob Storage.
Monitor and troubleshoot pipelines, logs, and cloud resources.
Implement DevOps best practices (nice-to-have) for automation and CI/CD workflows.
Utilize Power Apps/Power Automate for workflow automation and business process improvement.
Ensure cloud cost optimization and performance tuning for scalable architectures.

Required Skills & Qualifications:
Strong experience in predictive modeling and statistical techniques (Bayesian modeling preferred).
Hands-on experience with integer programming and clustering methods.
Proficiency in Python, including experience with Git/Poetry for code and dependency management.
Expertise in data processing libraries such as Pandas, Polars, or equivalent.
Familiarity with Azure cloud services, Databricks, and Azure Blob Storage.
Ability to read and analyze logs for debugging and performance monitoring.
Experience with cloud management and resource optimization.
Knowledge of pipeline monitoring and issue troubleshooting.
Strong problem-solving skills and ability to work with large-scale datasets.

Preferred Qualifications:
Exposure to DevOps practices, including CI/CD pipelines and automation.
Familiarity with Power Apps/Power Automate for process automation.
Strong background in cloud cost management and performance tuning.
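For illustration only, a minimal sketch of the kind of integer-programming formulation this role references (bin packing). The posting does not name a solver; PuLP is used here as one open-source option, and the item sizes and capacity are made up.

```python
# Hypothetical bin-packing ILP using PuLP: pack items into the fewest bins of fixed capacity.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

sizes = [4, 8, 1, 4, 2, 1]   # hypothetical item sizes
capacity = 10                # hypothetical bin capacity
n = len(sizes)               # at most n bins are ever needed

prob = LpProblem("bin_packing", LpMinimize)
use = [LpVariable(f"use_{b}", cat=LpBinary) for b in range(n)]                       # is bin b opened?
put = [[LpVariable(f"put_{i}_{b}", cat=LpBinary) for b in range(n)] for i in range(n)]  # item i -> bin b?

prob += lpSum(use)                                       # objective: minimise number of open bins
for i in range(n):                                       # each item is placed in exactly one bin
    prob += lpSum(put[i][b] for b in range(n)) == 1
for b in range(n):                                       # respect the capacity of each open bin
    prob += lpSum(sizes[i] * put[i][b] for i in range(n)) <= capacity * use[b]

prob.solve(PULP_CBC_CMD(msg=False))
print("bins used:", int(sum(v.value() for v in use)))
for b in range(n):
    items = [i for i in range(n) if put[i][b].value() == 1]
    if items:
        print(f"bin {b}: items {items}")
```

The same pattern (binary decision variables, linear constraints, a linear objective) extends to TSP and constrained clustering formulations mentioned above.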
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
India
Remote
Data Engineer - I/II - IN (Operations/Support)
Work Timings: 24x7 (IST)
Work Location: Remote
Experience: 2-4 years

Job Description Summary
The Data Engineer is responsible for managing and operating Tableau, Tableau Bridge server, Databricks, dbt, SQL, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI. The engineer will work closely with the customer and team lead to manage and operate the cloud data platform.

JOB COMPLEXITY: This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research across sources such as Databricks, AWS, and Tableau documentation may be required to identify and resolve issues. Must have the ability to prioritize issues and multi-task.

SUPERVISION: Works under moderate supervision.

EXPERIENCE/EDUCATION: Requires a Bachelor’s degree in computer science or a related field plus 2-4 years of hands-on experience in configuring and managing Tableau/Databricks and SQL-based data analytics solutions. Experience with Tableau/Databricks and SQL data warehouse environments is desired.

PHYSICAL DEMANDS: General office environment. No special physical demands required. Schedule flexibility to include regularly working a weekend day and holidays as required by the business for 24/7 operations. Occasional travel, less than 10%.

POLICY COMPLIANCE: Responsible for adhering to company security policies and procedures and any other relevant policies and standards.

Knowledge/Skills
Good hands-on experience with Tableau, Tableau Bridge server, Databricks, SSRS/SSIS, AWS DWS, AWS AppFlow, and Power BI.
Ability to read and write SQL and stored procedures.
Experience on AWS.
Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills.
Excellent written and verbal communication skills. Ability to communicate technical information and ideas so others will understand.
Ability to successfully work and promote inclusiveness in small groups.

Job Responsibilities
Troubleshooting incidents/problems, including collecting logs, cross-checking against known issues, and investigating common root causes (for example failed batches, or infrastructure-related items such as connectivity to source and network issues).
Knowledge Management: Create/update runbooks as needed / Entitlements.
Governance: Watch all configuration changes to batches and infrastructure (cloud platform), map them to proper documentation, and align resources.
Communication: Lead and act as a POC for the customer from off-site, handling communication and escalation, isolating issues, and coordinating with off-site resources while level-setting expectations across stakeholders.
Change Management: Align resources for on-demand changes and coordinate with stakeholders as required.
Request Management: Handle user requests; if a request is not runbook-based, create a new KB article or update the runbook accordingly.
Incident Management and Problem Management: root cause analysis, and coming up with preventive measures and recommendations such as enhanced monitoring or systematic changes as needed.

About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world’s leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future.
Named a best place to work, year after year according to Fortune, Forbes and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers and deliver the future.

More on Rackspace Technology
Though we’re all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.
Posted 1 week ago
5.0 - 7.0 years
7 - 9 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Work Mode: Remote
Contract Duration: 6 Months to 1 Year
Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote (open to candidates across India)

Job Overview: We are seeking a highly skilled Technical Data Analyst for a remote contract position (6 to 12 months) to help build a single source of truth for our high-volume direct-to-consumer accounting and financial data warehouse. You will work closely with Finance & Accounting teams and play a pivotal role in dashboard creation, data transformation, and migration from Snowflake to Databricks.

Key Responsibilities:
1. Data Analysis & Reporting
Develop month-end accounting and tax dashboards using SQL in Snowflake (Snowsight)
Migrate and transition reports/dashboards to Databricks
Gather, analyze, and transform business requirements from finance/accounting stakeholders into data products
2. Data Transformation & Aggregation
Build transformation pipelines in Databricks to support balance sheet look-forward views
Maintain data accuracy and consistency throughout the Snowflake-to-Databricks migration
Partner with Data Engineering to optimize pipeline performance
3. ERP & Data Integration
Support integration of financial data with NetSuite ERP
Validate transformed data to ensure correct ingestion and mapping into ERP systems
4. Ingestion & Data Ops
Work with Fivetran for ingestion and resolve any pipeline or data accuracy issues
Monitor data workflows and collaborate with engineering teams on troubleshooting

Required Skills & Qualifications:
5+ years of experience as a Data Analyst (preferably in the Finance/Accounting domain)
Strong in SQL, with proven experience in Snowflake and Databricks
Experience in building financial dashboards (month-end close, tax reporting, balance sheets)
Understanding of financial/accounting data: GL, journal entries, balance sheet, income statements
Familiarity with Fivetran or similar data ingestion tools
Experience with data transformation in a cloud environment
Strong communication and stakeholder management skills
Nice to have: Experience working with NetSuite ERP

Apply Now: Please share your updated resume with the following details:
Full Name
Total Experience
Relevant Experience in SQL, Snowflake, Databricks
Experience in Finance or Accounting domain
Current Location
Availability (Notice Period)
Current and Expected Rate
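For context, a hedged sketch of the kind of month-end aggregation such a dashboard could sit on once the workload moves to Databricks. The table and column names (finance.gl_journal_entries, account_id, posting_date, amount) are hypothetical — the posting gives no schema — and a PySpark session is assumed.

```python
# Illustrative month-end balance rollup on Databricks; names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
journal = spark.table("finance.gl_journal_entries")   # hypothetical journal-entry table

running = Window.partitionBy("account_id").orderBy("period")

month_end = (
    journal
    .withColumn("period", F.date_trunc("month", F.col("posting_date")))
    .groupBy("period", "account_id")
    .agg(F.sum("amount").alias("net_movement"),
         F.count("*").alias("entry_count"))
    # cumulative balance per account gives a simple balance-sheet "look-forward" view
    .withColumn("closing_balance", F.sum("net_movement").over(running))
)

month_end.write.mode("overwrite").saveAsTable("finance.month_end_balances")  # hypothetical target
```

The equivalent Snowflake (Snowsight) dashboard query would be a GROUP BY with a windowed running SUM; the migration work described above is largely porting such logic between the two SQL dialects.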
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
We are hiring a Data Engineer to design and manage data pipelines from factory floors to the Azure cloud, supporting our central data lakehouse architecture. You'll work closely with OT engineers, architects, and AI teams to move data from edge devices into curated layers (Bronze → Silver → Gold), ensuring high data quality, security, and performance. Your work will directly enable advanced analytics and AI in production and operations.

Key Job Functions
1) Build data ingestion and transformation pipelines using Azure Data Factory, IoT Hub, and Databricks
2) Integrate OT sensor data using protocols like OPC-UA and MQTT
3) Design Medallion architecture flows with Delta Lake and Synapse
4) Monitor and optimize data performance and reliability
5) Implement data quality, observability, and lineage practices (e.g., with Purview or Unity Catalog)
6) Collaborate with OT and IT teams to ensure contextualized, usable data
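As a rough sketch of the Bronze → Silver promotion step in the Medallion flow described above: raw OT events are de-duplicated, filtered with a simple quality rule, and written to a curated Delta table. The storage paths, column names, and threshold values are hypothetical; PySpark with Delta Lake (e.g., on Azure Databricks) is assumed.

```python
# Hypothetical Bronze -> Silver promotion for OT sensor events with Delta Lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load(
    "abfss://lake@account.dfs.core.windows.net/bronze/sensors"   # hypothetical ADLS path
)

silver = (
    bronze
    .dropDuplicates(["device_id", "event_ts"])            # de-duplicate raw edge events
    .filter(F.col("temperature_c").between(-50, 150))     # crude data-quality gate
    .withColumn("event_date", F.to_date("event_ts"))      # partition-friendly column
)

(silver.write
    .format("delta")
    .mode("append")
    .partitionBy("event_date")
    .save("abfss://lake@account.dfs.core.windows.net/silver/sensors"))  # hypothetical path
```

A Gold layer would then aggregate Silver into analysis-ready tables (e.g., per-line hourly KPIs) consumed by Synapse or AI workloads.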
Posted 1 week ago
15.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Key Responsibilities: Platform Stabilization & Operational Excellence: Accountable for stable, reliable, and secure operations across all Datawarehouse applications, ensuring adherence to defined SLAs and KPIs. Assess the current data platform architecture, identify bottlenecks, and implement solutions to ensure high availability, reliability, performance, and scalability. Establish robust monitoring, alerting, and incident management processes for all data pipelines and infrastructure. Drive initiatives to improve data quality, consistency, and trustworthiness across the platform. Oversee the operational health and day-to-day management of existing data systems during the transition period. Manage relationships with strategic vendors across the enterprise applications landscape, ensuring strong performance, innovation contributions, and commercial value. Platform Modernization & Architecture: Define and execute a strategic roadmap for modernizing PerkinElmer's data platform, leveraging cloud-native technologies (AWS, Azure, or GCP) and modern data stack components (e.g., data lakes/lakehouses, Data Fabric/Mesh architectures, streaming platforms like Kafka/Kinesis, orchestration tools like Airflow, ELT/ETL tools, containerization). Lead the design and implementation of a scalable, resilient, and cost-effective data architecture that meets current and future business needs. (DaaS) Champion and implement DataOps principles, including CI/CD, automated testing, and infrastructure-as-code, to improve development velocity and reliability. Stay abreast of emerging technologies and industry trends, evaluating and recommending new tools and techniques to enhance the platform. Leadership & Strategy: Build, mentor, and lead a world-class data engineering team, fostering a culture of innovation, collaboration, and continuous improvement. Develop and manage the data engineering budget, resources, and vendor relationships. Define the overall data engineering vision, strategy, and multi-year roadmap in alignment with PerkinElmer's business objectives. Effectively communicate strategy, progress, and challenges to executive leadership and key stakeholders across the organization. Drive cross-functional collaboration with IT, Security, Enterprise Apps, R&D, and Business Units. Data Monetization Enablement: Partner closely with business leaders, enterprise app teams, and other business teams to understand data needs and identify opportunities for data monetization. Architect data solutions, APIs, and data products that enable the creation of new revenue streams or significant internal efficiencies derived from data assets. Ensure robust data governance, security, and privacy controls are embedded within the platform design and data products, adhering to relevant regulations (e.g., GDPR, HIPAA where applicable). Build the foundational data infrastructure required to support advanced analytics, machine learning, and AI initiatives. Basic Qualifications Required Qualifications & Experience Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related quantitative field. 15+ years of experience in data engineering, data architecture and/or data warehousing. 5+ years of experience in a leadership role, managing data engineering teams and driving large-scale data initiatives. Proven track record of successfully leading the stabilization, modernization, and scaling of complex data platforms. 
Deep expertise in modern data architecture patterns (Data Lakes, Data Warehouses, Lakehouses, Lambda/Kappa architectures). Extensive hands-on experience with cloud data platforms (AWS, Azure, or GCP) and their associated data services (e.g., S3/ADLS/GCS, Redshift/Synapse/BigQuery, EMR/Dataproc/Databricks, Kinesis/Kafka/Event Hubs, Glue/Data Factory/Dataflow). Strong experience with big data technologies (e.g., Spark, Hadoop ecosystem) and data processing frameworks. Proficiency with data pipeline orchestration tools (e.g., Airflow, Prefect, Dagster). Solid understanding of SQL and NoSQL databases, data modeling techniques, and ETL/ELT development. Experience with programming languages commonly used in data engineering (e.g., Python, Scala, Java). Excellent understanding of data governance, data security, and data privacy principles and best practices. Exceptional leadership, communication, stakeholder management, and strategic thinking skills. Demonstrated ability to translate business requirements into technical solutions.
Posted 1 week ago
3.0 years
0 Lacs
Greater Nashik Area
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Title: Data Scientist
Location: Bangalore
Reporting to: Manager - Analytics / Senior Manager - Analytics

PURPOSE OF THE ROLE
Contributing to the Data Science efforts of AB InBevʼs global non-commercial analytics capability for Supply Analytics. The candidate will be required to contribute, and may also need to guide, the DS team staffed on the area and assess the effort required to scale and standardize the use of Data Science across multiple ABI markets.

KEY TASKS AND ACCOUNTABILITIES
Understand the business problem and translate it into an analytical problem; participate in the solution design process.
Manage the full AI/ML lifecycle, including data preprocessing, feature engineering, model training, validation, deployment, and monitoring.
Develop reusable and modular Python code adhering to OOP (Object-Oriented Programming) principles.
Design, develop, and deploy machine learning models into production environments on Azure.
Collaborate with data scientists, software engineers, and other stakeholders to meet business needs.
Communicate findings clearly to both technical and business stakeholders.

Qualifications, Experience, Skills
Level of educational attainment required (1 or more of the following): B.Tech/BE/Master's in CS/IS/AI/ML.
Previous work experience required: minimum 3 years of relevant experience.

Technical Skills Required
Must Have
Strong expertise in Python, including advanced knowledge of OOP concepts.
Exposure to AI/ML methodologies, with prior hands-on experience in ML concepts such as forecasting, clustering, regression, classification, and optimization using Python.
Azure tech stack, Databricks, and MLflow on any cloud platform.
Airflow for orchestrating and automating workflows.
MLOps concepts and containerization tools like Docker.
Experience with version control tools such as Git.
Consistently display an intent for problem solving.
Strong communication skills (vocal and written). Ability to effectively communicate and present information at various levels of an organization.

Good To Have
Preferred industry exposure in the Manufacturing domain.
Product building experience.

Other Skills Required
Passion for solving problems using data.
Detail oriented, analytical and inquisitive.
Ability to learn on the go.
Ability to work independently and with others.

We dream big to create a future with more cheers!
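As a hedged illustration of the "train, validate, track" part of the AI/ML lifecycle this role describes, using the MLflow tracking API on any of the platforms listed above. The dataset is synthetic and the experiment path is hypothetical, not part of the posting.

```python
# Minimal model-training run logged to MLflow; dataset and experiment name are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("/Shared/supply-analytics-demo")   # hypothetical experiment path

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)                                        # reproducibility
    mlflow.log_metric("f1", f1_score(y_test, model.predict(X_test))) # validation metric
    mlflow.sklearn.log_model(model, artifact_path="model")           # artifact for later deployment
```

Deployment and monitoring (the MLOps portion) would then register this logged model and serve it, typically via Databricks model serving or a containerized endpoint orchestrated with Airflow.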
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Data Engineer - I/II - IN (Operations/Support)
Work Timings: 24x7 (IST)
Work Location: Remote
Experience: 2-4 years

Job Description Summary
The Data Engineer is responsible for managing and operating Tableau, Tableau Bridge server, Databricks, dbt, SQL, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI. The engineer will work closely with the customer and team lead to manage and operate the cloud data platform.

JOB COMPLEXITY: This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research across sources such as Databricks, AWS, and Tableau documentation may be required to identify and resolve issues. Must have the ability to prioritize issues and multi-task.

SUPERVISION: Works under moderate supervision.

EXPERIENCE/EDUCATION: Requires a Bachelor’s degree in computer science or a related field plus 2-4 years of hands-on experience in configuring and managing Tableau/Databricks and SQL-based data analytics solutions. Experience with Tableau/Databricks and SQL data warehouse environments is desired.

PHYSICAL DEMANDS: General office environment. No special physical demands required. Schedule flexibility to include regularly working a weekend day and holidays as required by the business for 24/7 operations. Occasional travel, less than 10%.

POLICY COMPLIANCE: Responsible for adhering to company security policies and procedures and any other relevant policies and standards.

Knowledge/Skills
Good hands-on experience with Tableau, Tableau Bridge server, Databricks, SSRS/SSIS, AWS DWS, AWS AppFlow, and Power BI.
Ability to read and write SQL and stored procedures.
Experience on AWS.
Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills.
Excellent written and verbal communication skills. Ability to communicate technical information and ideas so others will understand.
Ability to successfully work and promote inclusiveness in small groups.

Job Responsibilities
Troubleshooting incidents/problems, including collecting logs, cross-checking against known issues, and investigating common root causes (for example failed batches, or infrastructure-related items such as connectivity to source and network issues).
Knowledge Management: Create/update runbooks as needed / Entitlements.
Governance: Watch all configuration changes to batches and infrastructure (cloud platform), map them to proper documentation, and align resources.
Communication: Lead and act as a POC for the customer from off-site, handling communication and escalation, isolating issues, and coordinating with off-site resources while level-setting expectations across stakeholders.
Change Management: Align resources for on-demand changes and coordinate with stakeholders as required.
Request Management: Handle user requests; if a request is not runbook-based, create a new KB article or update the runbook accordingly.
Incident Management and Problem Management: root cause analysis, and coming up with preventive measures and recommendations such as enhanced monitoring or systematic changes as needed.

About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world’s leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future.
Named a best place to work, year after year according to Fortune, Forbes and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers and deliver the future.

More on Rackspace Technology
Though we’re all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.
Posted 1 week ago
0 years
0 Lacs
Sadar, Uttar Pradesh, India
On-site
Summary: We are seeking a talented and motivated AI Engineer to join our team and focus on building cutting-edge Generative AI applications. The ideal candidate will possess a strong background in data science, machine learning, and deep learning, with specific experience in developing and fine-tuning Large Language Models (LLMs) and Small Language Models (SLMs). You should be comfortable managing the full lifecycle of AI projects, from initial design and data handling to deployment and production monitoring. A foundational understanding of software engineering principles is also required to collaborate effectively with engineering teams and ensure robust deployments.

Responsibilities:
Design, develop, and implement Generative AI solutions, including applications leveraging Retrieval-Augmented Generation (RAG) techniques.
Fine-tune existing Large Language Models (LLMs) and potentially develop smaller, specialized language models (SLMs) for specific tasks.
Manage the end-to-end lifecycle of AI model development, including data curation, feature extraction, model training, validation, deployment, and monitoring.
Research and experiment with state-of-the-art AI/ML/DL techniques to enhance model performance and capabilities.
Build and maintain scalable production pipelines for AI models.
Collaborate with data engineering and IT teams to define deployment roadmaps and integrate AI solutions into existing systems.
Develop AI-powered tools to solve business problems, such as summarization, chatbots, recommendation systems, or code assistance.
Stay updated with the latest advancements in Generative AI, machine learning, and deep learning.

Qualifications:
Proven experience as a Data Scientist, Machine Learning Engineer, or AI Engineer with a focus on LLMs and Generative AI.
Strong experience with Generative AI techniques and frameworks (e.g., RAG, fine-tuning, LangChain, LlamaIndex, PEFT, LoRA).
Solid foundation in machine learning (e.g., regression, classification, clustering, XGBoost, SVM) and deep learning (e.g., ANN, LSTM, RNN, CNN) concepts and applications.
Proficiency in Python and relevant libraries (e.g., Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch).
Experience with data science principles, including statistics, hypothesis testing, and A/B testing.
Experience deploying and managing models in production environments (e.g., using platforms like AWS, Databricks, MLflow).
Familiarity with data handling and processing tools (e.g., SQL, Spark/PySpark).
Basic understanding of software engineering practices, including version control (Git) and containerization (Docker).
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, Engineering, or a related quantitative field.

Preferred Skills:
Experience building RAG-based chatbots or similar applications.
Experience developing custom SLMs.
Experience with MLOps principles and tools (e.g., MLflow, Airflow).
Experience migrating ML workflows between cloud platforms.
Familiarity with vector databases and indexing techniques.
Experience with Python web frameworks (e.g., Django, Flask).
Experience building and integrating APIs (e.g., RESTful APIs).
Basic experience with front-end development or UI building for showcasing AI applications.
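A minimal skeleton of the Retrieval-Augmented Generation pattern this role centers on: retrieve the passages most relevant to a question, then splice them into a prompt for an LLM. TF-IDF retrieval is used here only so the sketch runs without external services; a production system would typically use dense embeddings, a vector database, and a hosted or fine-tuned model instead. All document text is made up.

```python
# Illustrative RAG retrieval + prompt assembly; the final LLM call is intentionally omitted.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [                                   # hypothetical knowledge-base chunks
    "Invoices above 10,000 require two approvals.",
    "Quarterly summaries are generated on the 5th business day.",
    "Chat transcripts are retained for 90 days.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_matrix = vectorizer.transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long are chat transcripts kept?"))
# The assembled prompt would then be sent to an LLM (hosted, fine-tuned, or an SLM).
```

Frameworks such as LangChain or LlamaIndex package these same steps (chunking, embedding, retrieval, prompt construction) behind higher-level abstractions.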
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis.

Grade - T5

Please note that the job will close at 12am on the posting close date, so please submit your application prior to the close date.

Accountabilities
What your main responsibilities are:
Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data.
Data Quality Management - Cleanse the data and improve data quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies and enforce best practices to scale data analysis across platforms.
Data Transformation - Process data by cleansing it and transforming it into the proper storage structure for querying and analysis using ETL and ELT processes.
Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations.

Qualifications & Specifications
Master's/Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent.
Strong programming skills in Python/PySpark/SAS.
Proven experience with large data sets and related technologies – Hadoop, Hive, distributed computing systems, Spark optimization.
Experience on cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS storage, Azure DevOps.
Hands-on experience with Databricks, Delta Lake, Workflows.
Knowledge of DevOps processes and tools such as Docker, CI/CD, Kubernetes, Terraform, Octopus.
Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs.
Experience with a BI tool like Power BI (good to have).
Cloud migration experience (good to have).
Cloud and data engineering certification (good to have).
Experience working in an Agile environment.
4-6 years of relevant work experience is required.
Experience with stakeholder management is an added advantage.

What We Are Looking For
Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or similar discipline. Master's degree or PhD preferred.
Knowledge, Skills And Abilities
Fluency in English
Analytical Skills
Accuracy & Attention to Detail
Numerical Skills
Planning & Organizing Skills
Presentation Skills
Data Modeling and Database Design
ETL (Extract, Transform, Load) Skills
Programming Skills

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. Our Culture Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970’s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities: · Design, develop, and maintain scalable data pipelines using Azure data services such as Azure Data Factory and Apache Spark. · Implement efficient Extract, Transform, Load (ETL) processes to move and transform data across various sources. · Design, develop, and maintain data solutions using Azure Synapse Analytics. · Implement data ingestion, transformation, and extraction processes using Azure Synapse Pipelines. · Knowledge about data warehousing concepts · Utilize Azure SQL Database, Azure Blob Storage, Azure Data Lake Storage, and other Azure data services to store and retrieve data. · Performance optimization and troubleshooting capabilities · Advanced SQL knowledge, capable to write optimized queries for faster data workflows. · Proven work experience in Spark, Python, SQL, Any RDBMS. · Experience in designing solutions for multiple large data warehouses with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS · Must be extremely well versed with handling large volume data and work using different tools to derive the required solution. Mandatory skill sets: Azure Databricks, Azure Data Factory (ADF), or Azure Synapse Analytics, along with Python and SQL expertise Preferred skill sets: · Experienced in Delta Lake, Power BI, or Azure DevOps. · Knowledge of Databricks will be a plus · Knowledge of Spark, Scala, or other distributed processing frameworks. · Exposure to BI tools like Power BI, Tableau, or Looker. · Familiarity with data security and compliance in the cloud. · Experience in leading a development team. 
Years of experience required: 4 – 7 yrs
Education qualification: B.Tech/MBA/MCA
Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology
Required Skills: Azure Synapse Analytics, Databricks Platform
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Posted 1 week ago
11.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
As a Principal Software Engineer for Data, you will lead the design and implementation of scalable, secure, and high-performance data pipelines for healthcare clinical data, using modern big data and cloud technologies (Azure, Databricks, and Spark), and ensuring alignment with UnitedHealth Group’s data governance standards. This role requires a hands-on leader who can write and review code, mentor teams, and collaborate across business and technical stakeholders to drive data strategy and innovation. You will also need to take up AI and AIOps as part of your work, support the data science teams with ideas, and review their work.

Primary Responsibilities
Architecture: Design and lead the implementation of robust, scalable, and secure data architectures for clinical and healthcare data, covering both batch and real-time pipelines.
Architect end-to-end data pipelines using big data and cloud-native technologies (e.g., Spark, Databricks, Azure Data Factory).
Ensure data solutions meet performance, scalability, and compliance requirements, including HIPAA and internal governance policies.
Design, evolve, and review database schemas, including schema management for unstructured data, structured data, relational models, and star schemas.
Design and manage semantic data elements (metadata, configuration, master data), and build automated pipelines to keep them up to date from upstream sources.
Build and optimize data ingestion, transformation, and storage pipelines for structured and unstructured clinical data; guide the teams doing this work and ensure support for incremental data processing.
Ensure data quality and lineage are embedded in all solutions.
Lead code reviews, proofs of concept, and performance tuning for large-scale data systems.
Collaborate with data governance teams to ensure adherence to UHG and healthcare data standards, lineage, certification, data use rights, and data privacy; contribute to the maturity of data governance domains and participate in governance councils and working groups.
Design, build, and monitor MLOps pipelines, model inference, and robust pipelines for running AI operations on data.

Secondary Responsibilities
Mentor data engineers and analysts, fostering a culture of technical excellence and continuous learning.
Collaborate with product managers, data scientists, and business stakeholders to translate requirements into data solutions.
Influence architectural decisions across teams and contribute to enterprise-wide data strategy.
Stay current with emerging technologies in cloud, big data, and AI/ML, and evaluate their applicability to healthcare data.
Promote the use of generative AI tools (e.g., GitHub Copilot) to enhance development productivity and innovation.
Drive adoption of DevOps and DataOps practices, including CI/CD, IaC, and automated testing for data pipelines.

Required Skills & Qualifications
Technical skills
Ideally 11+ years of experience in data architecture, data engineering, or related roles, with a focus on healthcare or clinical data preferred.
Proven track record of designing and delivering large-scale data solutions in cloud environments.
Cloud Platforms: Strong experience with Azure (preferred), AWS, or GCP.
Big Data Technologies: Proficient in Apache Spark, Databricks, Delta Lake, and distributed data processing.
Data Engineering: Expertise in building ETL/ELT pipelines, data lakes, and real-time streaming architectures using Python, Scala, or other comparable technologies.
Data Modeling: Deep understanding of dimensional modeling, canonical models, and healthcare data standards (e.g., HL7, FHIR).
Programming: Proficiency in Python, SQL, and optionally Scala or Java.
DevOps/DataOps: Familiarity with CI/CD and IaC (Terraform, ARM).

Soft Skills
Strong leadership, communication, and stakeholder management skills.
Ability to mentor and influence across teams and levels.
Strategic thinker with a passion for data-driven innovation.
Ability to get into the details whenever required and spend time understanding and solving problems.

Preferred Skills
Experience with healthcare data interoperability standards (FHIR, HL7, CCD).
Familiarity with MLOps and integrating data pipelines with ML workflows.
Contributions to open-source projects or publications in data architecture or healthcare analytics.
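To illustrate the incremental-processing pattern called out in the responsibilities above, a hedged sketch of a watermark-filtered upsert into a curated Delta table. The table paths, key column (event_id), and watermark column (ingest_ts) are hypothetical; Databricks/PySpark with delta-spark is assumed.

```python
# Illustrative incremental upsert (MERGE) of new clinical events into a Silver Delta table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

updates = (
    spark.read.format("delta").load("/lake/bronze/clinical_events")   # hypothetical source
    .filter(F.col("ingest_ts") > F.lit("2024-01-01"))                 # watermark for the increment
)

target = DeltaTable.forPath(spark, "/lake/silver/clinical_events")    # hypothetical target

(target.alias("t")
    .merge(updates.alias("s"), "t.event_id = s.event_id")   # key match on record id
    .whenMatchedUpdateAll()                                  # changed records are updated
    .whenNotMatchedInsertAll()                                # new records are inserted
    .execute())
```

In practice the watermark would be read from pipeline state (or handled by Structured Streaming), and column-level lineage and quality checks would wrap this step per the governance requirements above.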
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
P2-C3-TSTS

AWS Data Engineer
Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
Automate repetitive tasks and build reusable frameworks to improve efficiency.

Skill proficiency level expected: AWS Data Engineer - AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks
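As a rough sketch of the kind of Glue-based ETL step described above (read a cataloged S3 table, aggregate, write curated Parquet back to S3). The database, table, and bucket names are hypothetical, and the script only runs inside an AWS Glue job environment where the awsglue libraries are available.

```python
# Illustrative AWS Glue ETL job; catalog and S3 names are placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (hypothetical names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
).toDF()

# Transform: aggregate raw orders into daily revenue.
daily = (orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue")))

# Load: write curated Parquet to S3 (a later step could COPY this into Redshift).
daily.write.mode("overwrite").parquet("s3://example-curated-bucket/daily_revenue/")

job.commit()
```

A reusable framework, as the posting suggests, would typically parameterize the catalog table, transformations, and output location so the same job template serves many pipelines.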
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Work Mode: Hybrid
Work Location: Chennai / Hyderabad / Bangalore / Pune / Mumbai / Gurgaon
Work Timing: 2 PM to 11 PM
Primary Skill: Data Engineer

AWS Data Engineer
Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
Automate repetitive tasks and build reusable frameworks to improve efficiency.

Skill proficiency level expected: AWS Data Engineer - AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Work Mode: Hybrid
Work Location: Chennai / Hyderabad / Bangalore / Pune / Mumbai / Gurgaon
Work Timing: 2 PM to 11 PM
Primary Skill: Data Engineer

AWS Data Engineer
Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
Automate repetitive tasks and build reusable frameworks to improve efficiency.

Skill proficiency level expected: AWS Data Engineer - AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks
Posted 1 week ago
16.0 years
2 - 9 Lacs
Hyderābād
On-site
Overview: An EDAP Solution Architect/Sr Manager plays a crucial role in designing and implementing cloud solutions based on Microsoft Azure. The role involves a blend of technical expertise in cloud infrastructure, architectural design, and leadership to guide teams and stakeholders through Azure deployments. Has to lead a small team of Engineers/Associate Architects. Responsibilities: Design, develop, and implement Azure-based cloud solutions tailored to business requirements. Participate in architectural discussions and design solutions for complex cloud applications, ensuring optimal performance, scalability, and security. Lead a small team of Engineers/Architects. Oversee the migration of legacy systems to Azure, ensuring minimal downtime and seamless integration. Create architecture blueprints and documentation for stakeholders and development teams. Collaborate with development, operations, and security teams to ensure solutions meet non-functional requirements such as security, scalability, and performance. Ensure governance by implementing best practices and compliance policies, especially in industries with stringent regulatory requirements. Provide expert-level guidance on the adoption of Azure services such as IaaS, PaaS, serverless computing, containers, and microservices architecture. Engage with clients and internal stakeholders to identify business challenges and develop appropriate cloud solutions. Conduct performance tuning and optimization of Azure resources. Continuously evaluate new Azure services and stay updated with platform enhancements. Participate in R&D for exploring emerging cloud technologies to enhance architectural practices. Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree is a plus. 16 + years of experience in the data and analytics platform 10+ years of experience in designing and deploying cloud solutions with at least 7 years on the Azure platform. Technical Skills Expertise in Azure IaaS (Virtual Machines, Storage, Networking). Proficiency in PaaS (App Services, Azure SQL Database, Azure Kubernetes Service (AKS), and Azure Functions). Experience with Serverless architecture and microservices design. Strong understanding of Azure DevOps, CI/CD pipelines, and Infrastructure as Code (IaC) using tools like ARM templates, Terraform. Strong experience in designing cloud-native applications and implementing multi-tier, distributed cloud applications. Familiarity with cloud design patterns (event-driven, microservices, etc.). Knowledge of API management, load balancing, and traffic distribution in Azure. Understanding of Azure security best practices, including identity management (Azure AD), encryption, security policies, and role-based access control (RBAC). Experience in designing secure cloud architectures for highly sensitive data environments. Knowledge of Azure Virtual Network, VPNs, ExpressRoute, DNS, Azure Firewall, and Application Gateway. Proficiency in designing network architectures that support high availability, disaster recovery, and hybrid cloud models. Experience with Azure SQL, Cosmos DB, Data Lakes, Azure Databricks and Azure Storage. Knowledge of data replication, disaster recovery, and backup solutions in the cloud. Understanding of real-time data processing using Azure services like Event Hubs, Azure Data Factory, and Stream Analytics. Hands-on experience with PowerShell, Azure CLI, and automation frameworks. Ability to script and automate cloud infrastructure tasks. 
Expertise in using Azure Monitor, Application Insights, and Log Analytics for performance monitoring and troubleshooting. Knowledge of APM (Application Performance Management) tools and practices. Proficiency in Docker, Kubernetes (AKS), and container orchestration in Azure. Experience with deploying containerized applications using Azure Container Registry (ACR). Familiarity with other cloud platforms (AWS, GCP) is a plus. Non-Technical skills Fluency in English and effective communication with team members and business teams. Strong verbal and written communication skills, with the ability to articulate technical concepts to non-technical stakeholders. Ability to create and present detailed architectural designs, reports, and recommendations to clients and management. Proven experience leading cloud migration and implementation projects, guiding cross-functional teams in adopting cloud services. Ability to mentor and provide technical guidance to engineers and developers. Collaborative mindset, working closely with DevOps, security, and infrastructure teams. Ability to understand business requirements and translate them into technical solutions. Forward-thinking, with a focus on scalability, cost-effectiveness, and long-term sustainability of cloud solutions. Strong analytical skills, with the ability to troubleshoot complex architectural and platform-related issues. Ability to innovate and think creatively when faced with challenges in cloud solution design. Ability to engage with senior management, IT leaders, and business stakeholders to align technical solutions with organizational goals. Adept at gathering requirements, managing expectations, and balancing business priorities with technical feasibility.
Posted 1 week ago
7.0 - 10.0 years
6 - 8 Lacs
Hyderābād
Remote
At Accellor, we are a trusted consultant that uses best-of-breed Cloud technology to deliver superior customer engagement and business effectiveness for clients. We bring a deep understanding of the Financial, Retail, High Tech, and Healthcare verticals. We’ve created an atmosphere that encourages curiosity, constant learning, and persistence. We encourage our employees to grow and explore their interests. We cultivate an environment of collaboration, autonomy, and delegation – we know our people have a strong work ethic and a sense of pride and ownership over their work. They are passionate, eager, and motivated – focused on building the perfect solution but never losing sight of the bigger picture.

As a Lead Data Engineer specializing in Snowflake and Databricks, you will be responsible for designing, developing, and delivering data engineering solutions using modern cloud data platforms. The candidate should have strong expertise in the data lifecycle, including data ingestion, transformation, and modeling, as well as experience with distributed data processing, data security, and integration with internal and external data sources. Additionally, the candidate should be proficient in leveraging best practices in data architecture and performance optimization. The role also requires the ability to drive end-to-end project delivery aligned with business objectives and ensure the realization of data-driven value.

Responsibilities:
Demonstrated ability to have successfully completed multiple complex technical projects and to create high-level design and architecture of the solution, including class, sequence, and deployment infrastructure diagrams.
Take ownership of technical solutions from a design and architecture perspective, for projects in the presales phase as well as ongoing projects.
Experience with gathering end-user requirements and writing technical documentation.
Suggest innovative solutions based on new technologies and latest trends.
Review the architectural/technological solutions for ongoing projects and ensure the right choice of solution.
Work closely with client teams to understand their business, capture requirements, identify pain areas, propose an ideal solution accordingly, and win business.

Requirements
7-10 years of experience working with Snowflake/Databricks in a data engineering or architecture role.
Familiarity with programming languages such as Python, Java, or Scala for data processing and automation.
Strong expertise in SQL, data modeling and advanced query optimization techniques.
Hands-on experience with cloud platforms (AWS, Azure, or GCP) and their integration with Snowflake.
Proficiency in ETL/ELT tools such as ADF, Fabric, etc.
Experience with data visualization tools like Tableau, Power BI, or Looker.
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work in a fast-paced, dynamic environment.
Certification in Databricks is an added advantage.

Benefits
Exciting Projects: We focus on industries like High-Tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers.
Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes Communication skills training, Stress Management program, professional certifications, and technical and soft skill trainings. Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Personal Accident Insurance, Periodic health awareness program, extended maternity leave, annual performance bonuses, and referral bonuses. Disclaimer: Accellor is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or other applicable legally protected characteristic.
Posted 1 week ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Artificial Intelligence / Machine Learning
Work Mode: Hybrid
Work Location: Chennai / Hyderabad
Work Timing: 2 PM to 11 PM
Primary Skill: Artificial Intelligence

Job Summary
We are seeking a visionary and technically skilled AI Lead to drive the adoption and development of Artificial Intelligence solutions in the financial services domain. This role will lead the design, development, and implementation of machine learning (ML), deep learning (DL), generative AI, and Agent AI solutions that improve decision-making, enhance risk management, and deliver intelligent automation across business functions. The ideal candidate combines deep technical expertise in AI with a strong understanding of financial products, regulatory constraints, and business operations.

Responsibilities
Lead the development of AI/ML strategies aligned with business goals in areas like credit risk, fraud detection, customer insights, algorithmic trading, and compliance automation.
Identify AI opportunities within financial products and services and build a roadmap to deliver value.
Develop and deploy predictive models for credit scoring, risk analytics, customer segmentation, and forecasting.
Implement NLP-based solutions for document processing, sentiment analysis, chatbots, and regulatory reporting.
Use advanced AI techniques including deep learning, reinforcement learning, and LLMs to build next-gen financial intelligence systems.
Lead and mentor a team of data scientists and machine learning engineers.
Foster collaboration with internal stakeholders including product, compliance, and IT.
Drive innovation by staying ahead of industry trends in AI, FinTech, and RegTech.
Explore and prototype generative AI/Agent AI use cases for financial advisors, reporting automation, or intelligent document analysis.

Qualifications
Bachelor's or Master's degree in Computer Science, Statistics, Machine Learning, or a related field.
Hands-on experience in building and deploying Generative AI models and Agent AI systems.
7+ years of experience in AI/ML, with at least 2 years in a leadership role.
Deep understanding of supervised and unsupervised learning, NLP, time-series modeling, and deep learning.
Strong programming skills in Python and experience with ML libraries (TensorFlow, PyTorch, Scikit-learn, XGBoost).
Experience working with financial datasets (e.g., transaction data, market data, customer data).
Hands-on experience with MLOps and deploying models into production environments (e.g., using AWS, Azure ML, or GCP).

Secondary Skills
Experience with financial risk modeling, credit scoring systems, or fraud detection engines.
Familiarity with tools like Snowflake, Databricks, SAS, Bloomberg, or FICO.
Knowledge of financial regulatory requirements and risk frameworks.
Experience integrating AI solutions into financial workflows or trading systems.
Published work or patents in financial AI or applied machine learning.
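For illustration, a toy credit-scoring baseline of the kind referenced in the responsibilities above. The features and data are synthetic, not a production risk model; feature engineering, fairness review, and regulatory validation are out of scope for this sketch.

```python
# Synthetic-data credit-scoring baseline (logistic regression) with AUC evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(0.35, 0.2, n),   # hypothetical feature: utilisation ratio
    rng.normal(45, 12, n),      # hypothetical feature: months on book
    rng.poisson(1.0, n),        # hypothetical feature: recent delinquencies
])
# Generate a default label from a known logistic relationship (purely for the demo).
logit = -2.0 + 3.0 * X[:, 0] - 0.02 * X[:, 1] + 0.8 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = default

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```

A production version would layer on the MLOps, monitoring, and regulatory controls the posting calls out, and would more likely use gradient-boosted models (e.g., XGBoost) alongside interpretable scorecards.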
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Data Engineer Work Mode: Hybrid Work Timings: 2 PM to 11 PM Location: Chennai & Hyderabad Primary Skills: AWS Data Engineer. Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3. Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes. Create and manage applications using Python, SQL, Databricks, and various AWS technologies. Automate repetitive tasks and build reusable frameworks to improve efficiency. Skill proficiency level expected: AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks.
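As a rough illustration of the S3-to-warehouse pipeline pattern this role describes, the sketch below shows a generic PySpark job that deduplicates and cleans raw order data and stages it as partitioned Parquet for a downstream Redshift load; bucket names, paths, and column names are hypothetical.

```python
# Minimal PySpark sketch of an S3 -> transform -> staged-output step; all names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/orders/2024/")  # hypothetical raw landing zone
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
       .filter(F.col("net_amount") >= 0)
)

# Staged, partitioned output that a Redshift COPY or a Glue job could load downstream
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```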
Posted 1 week ago
3.0 years
2 - 9 Lacs
Hyderābād
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Develop analytic reports, web applications, and dashboards. Support the Capitation department at all levels for planning and insight into monthly Capitation payment, eligibility, and growth of providers and membership. Maintain web applications, dashboards, and reports. Perform high-level and detailed analysis on all levels of Capitation data across UnitedHealthcare and Optum. Learn and utilize healthcare industry knowledge necessary to extract, analyze, and work with the data to return the requested reports at various levels for both internal and external clients. Analyze Capitation from different data sources and identify data integrity issues related to submissions, reconciliation, and provider detail. Work with staff and customers to develop new and modify existing processes, documentation, and templates to drive more insightful and accurate reporting solutions. These are 2 new positions for the Ireland team in the NICE Reporting area. Ireland leadership would like to hire for these positions in India. The resource would be responsible for NICE reporting. From the BI standpoint, the team is looking to hire for Power BI; from the Data Engineering side, they are looking to hire for SQL and Snowflake. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Associate or Bachelor’s degree in computer science or related fields. 3+ years of relevant experience in web development and must have deployed web applications to a production environment. Intermediate level of proficiency with Python with relevant experience of 3+ years. Intermediate to advanced level of proficiency with at least one web application development tool, including but not limited to: Flask API, HTML, Django, AngularJS, Streamlit, Microsoft Power Platform. Intermediate level of proficiency with SQL. Excellent communication, time/project management, problem-solving, organizational, and analytical skills. Preferred Qualifications: Health care industry experience. Experience with Databricks and Azure Data Factory (ADF). Intermediate level of proficiency with Excel, Excel Macros. Beginner to Intermediate level of proficiency with Microsoft Power BI. At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. 
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
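For context on the Python web-reporting stack named in the Optum qualifications above, here is a minimal, hypothetical Flask endpoint that exposes a capitation summary from a SQL source; the database, table, and query are placeholders, not the team's actual schema.

```python
# Hedged sketch of a small reporting endpoint; schema and database are hypothetical placeholders.
from flask import Flask, jsonify
import pandas as pd
from sqlalchemy import create_engine

app = Flask(__name__)
# A local SQLite file stands in for the real SQL / Snowflake source in this sketch
engine = create_engine("sqlite:///capitation.db")

@app.route("/capitation/summary")
def capitation_summary():
    # Aggregate monthly capitation payments per provider (hypothetical table and columns)
    df = pd.read_sql(
        "SELECT provider_id, payment_month, SUM(amount) AS total_paid "
        "FROM capitation_payments GROUP BY provider_id, payment_month",
        engine,
    )
    return jsonify(df.to_dict(orient="records"))

if __name__ == "__main__":
    app.run(debug=True)
```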
Posted 1 week ago
7.0 - 9.0 years
0 Lacs
Thiruvananthapuram
On-site
7 - 9 Years 1 Opening Trivandrum Role description We are seeking a highly experienced and motivated Azure Cloud Administrator with a strong background in Windows Server infrastructure , Azure IaaS/PaaS services , and cloud networking . The ideal candidate will have over 10 years of relevant experience and will be responsible for managing and optimizing our Azure environment while ensuring high availability, scalability, and security of our infrastructure. Key Responsibilities: Administer and manage Azure Cloud infrastructure including both IaaS and PaaS services. Deploy, configure, and maintain Windows Servers (2016/2019/2022). Manage Azure resources such as Virtual Machines, Storage Accounts, SQL Managed Instances, Azure Functions, Logic Apps, App Services, Azure Monitor, Azure Key Vault, Azure Recovery Services, Databricks, ADF, Synapse, and more. Ensure security and network compliance through effective use of Azure Networking features including NSGs, Load Balancers, and VPN gateways. Monitor and troubleshoot infrastructure issues using tools such as Log Analytics, Application Insights, and Azure Metrics. Perform server health checks, patch management, upgrades, backup/restoration, and DR testing. Implement and maintain Group Policies, DNS, IIS, Active Directory, and Entra ID (formerly Azure AD). Collaborate with DevOps teams to support infrastructure automation using Terraform and Azure DevOps. Support ITIL-based processes including incident, change, and problem management. Deliver Root Cause Analysis (RCA) and post-incident reviews for high-severity issues. Provide after-hours support as required during outages or maintenance windows. Required Technical Skills: Windows Server Administration – Deep expertise in Windows Server 2016/2019/2022. Azure Administration – Strong hands-on experience with Azure IaaS/PaaS services. Azure Networking – Solid understanding of cloud networking principles and security best practices. Azure Monitoring – Familiarity with Azure Monitor, Log Analytics, Application Insights. Infrastructure Tools – Experience with Microsoft IIS, DNS, AD, Group Policy, and Entra ID Connect. Cloud Automation – Good to have working knowledge of Terraform and Azure DevOps pipelines. Troubleshooting & RCA – Proven ability to analyze, resolve, and document complex technical issues. Skills Azure,Windows, Monitoring About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
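As a small, hedged illustration of the kind of scripting an Azure administrator in this role might automate, the sketch below lists virtual machines in a subscription with the Azure SDK for Python; the subscription ID is a placeholder and the output is only a simple inventory, not a full health check.

```python
# Hypothetical VM inventory sketch using the Azure SDK for Python; subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

compute = ComputeManagementClient(credential, subscription_id)
for vm in compute.virtual_machines.list_all():
    # Print name, region, and size for a quick inventory report
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```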
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Cochin
On-site
Job Title - Data Engineer Sr. Analyst ACS Song Management Level: Level 10 - Sr. Analyst Location: Kochi, Coimbatore, Trivandrum Must have skills: Python/Scala, PySpark/PyTorch Good to have skills: Redshift Job Summary You’ll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Your responsibilities will include: Roles and Responsibilities Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help our business achieve its goals. Source data (structured and unstructured) from various touchpoints, format and organize them into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services like vision, translation, etc. to generate an outcome that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions. Preparing data to create a unified database and build tracking solutions ensuring data quality. Create production-grade analytical assets deployed using the guiding principles of CI/CD. Professional and Technical Skills Expert in Python, Scala, PySpark, PyTorch, JavaScript (at least 2). Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras, etc.), and SQL. 2-3 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, Snowflake Cloud Data Warehouse. Additional Information Experience working in cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS - AWS Certified Data Analytics - Specialty; Azure - Microsoft Certified Azure Data Scientist Associate; Snowflake - SnowPro Core Data Engineer; Databricks - Data Engineering. About Our Company | Accenture Experience: 3.5 - 5 years of experience is required Educational Qualification: Graduation
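To illustrate the kind of Spark-based pipeline work described above, here is a brief, hypothetical PySpark sketch that sessionizes raw clickstream events from ADLS and writes a curated Delta output; the storage paths and column names are assumptions, not details from the posting, and a Databricks-style runtime with Delta Lake is assumed.

```python
# Illustrative ETL step for a Databricks-style Spark environment; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("json").load(
    "abfss://raw@exampleaccount.dfs.core.windows.net/clickstream/"  # hypothetical ADLS path
)

# Group events into 30-minute windows per user to approximate sessions
sessions = (
    events.filter(F.col("event_type").isNotNull())
          .groupBy("user_id", F.window("event_ts", "30 minutes"))
          .agg(F.count("*").alias("events"), F.countDistinct("page").alias("pages"))
)

sessions.write.format("delta").mode("overwrite").save(
    "abfss://curated@exampleaccount.dfs.core.windows.net/sessions/"
)
```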
Posted 1 week ago
7.0 years
6 - 10 Lacs
Gurgaon
On-site
About US: We turn customer challenges into growth opportunities. Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe. Job Responsibilities: Design and Develop Data Pipelines: Development and optimisation of scalable data pipelines within Microsoft Fabric, leveraging Fabric-based notebooks, Dataflows Gen2, Data Pipelines, and Lakehouse architecture. Build robust pipelines using both batch and real-time processing techniques. Integrate with Azure Data Factory or Fabric-native orchestration for seamless data movement. Microsoft Fabric Architecture: Work with the Data Architecture team to implement scalable, governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform. Align models with business needs, promoting performance, security, and cost-efficiency. Data Pipeline Optimisation: Continuously monitor, enhance, and optimise Fabric pipelines, notebooks, and lakehouse artifacts for performance, reliability, and cost. Implement best practices for managing large-scale datasets and transformations in a Fabric-first ecosystem. Collaboration with Cross-functional Teams: Work closely with analysts, BI developers, and data scientists to gather requirements and deliver high-quality, consumable datasets. Enable self-service analytics via certified and reusable Power BI datasets connected to Fabric Lakehouses. Documentation and Knowledge Sharing: Maintain clear, up-to-date documentation for all data pipelines, semantic models, and data products. Share knowledge of Fabric best practices and mentor junior team members to support adoption across teams. Microsoft Fabric Platform Expertise: Use your expertise in Microsoft Fabric, including Lakehouses, Notebooks, Data Pipelines, and Direct Lake, to build scalable solutions integrated with Business Intelligence layers, Azure Synapse, and other Microsoft data services. Required Skills and Qualifications: Experience in Microsoft Fabric / Azure Ecosystem: 7+ years working with the Azure ecosystem, with relevant experience in Microsoft Fabric, including Lakehouse, OneLake, Data Engineering, and Data Pipelines components. Proficiency in Azure Data Factory and/or Dataflows Gen2 within Fabric for building and orchestrating data pipelines. Advanced Data Engineering Skills: Extensive experience in data ingestion, transformation, and ELT/ETL pipeline design. Ability to enforce data quality, testing, and monitoring standards in cloud platforms. Cloud Architecture Design: Experience designing modern data platforms using Microsoft Fabric, OneLake, and Synapse or equivalent. Strong / In-depth SQL and Data Modelling: Expertise in SQL and data modelling (e.g., star/snowflake schemas) for data integration/ETL, reporting, and analytics use cases.
Collaboration and Communication: Proven ability to work across business and technical teams, translating business requirements into scalable data solutions. Cost Optimisation: Experience tuning pipelines and cloud resources (Fabric, Databricks, ADF) for cost-performance balance. Preferred Skills: Deep understanding of Azure and the Microsoft Fabric ecosystem, including Power BI integration, Direct Lake, and Fabric-native security and governance. Familiarity with OneLake, Delta Lake, and Lakehouse architecture as part of a modern data platform strategy. Experience using Power BI with Fabric Lakehouses and DirectQuery/Direct Lake mode for enterprise reporting. Working knowledge of PySpark, strong SQL, and Python scripting within Fabric or Databricks notebooks. Understanding of Microsoft Purview, Unity Catalog, or Fabric-native governance tools for lineage, metadata, and access control. Experience with DevOps practices for Fabric or Power BI, including version control, deployment pipelines, and workspace management. Knowledge of Azure Databricks: Familiarity with building and optimising Spark-based pipelines and Delta Lake models as part of a modern data platform is an added advantage.
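As an illustrative sketch of the Fabric Lakehouse pattern referenced above (not Material's actual implementation), the snippet below shows how a Fabric notebook might curate a landed file into a managed Delta table that Power BI can then read via Direct Lake; the file and table names are hypothetical, and the `spark` session is assumed to be the one Fabric notebooks provide automatically.

```python
# Hypothetical Fabric notebook cell; assumes the notebook's built-in `spark` session and default Lakehouse.
from pyspark.sql import functions as F

orders = spark.read.format("csv").option("header", "true").load(
    "Files/landing/orders.csv"  # hypothetical path in the Lakehouse Files area
)

curated = (
    orders.withColumn("amount", F.col("amount").cast("double"))
          .withColumn("load_date", F.current_date())
)

# Writing a managed Delta table makes the data available to Power BI via Direct Lake
curated.write.mode("overwrite").format("delta").saveAsTable("curated_orders")
```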
Posted 1 week ago
0 years
4 - 6 Lacs
Gurgaon
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Assistant Vice President, Databricks Squad Delivery Lead. The Databricks Delivery Lead will oversee the end-to-end delivery of Databricks-based solutions for clients, ensuring the successful implementation, optimization, and scaling of big data and analytics solutions. This role will drive the adoption of Databricks as the preferred platform for data engineering and analytics, while managing a cross-functional team of data engineers and developers. Responsibilities Lead and manage Databricks-based project delivery, ensuring that all solutions are designed, developed, and implemented according to client requirements, best practices, and industry standards. Act as the subject matter expert (SME) on Databricks, providing guidance to teams on architecture, implementation, and optimization. Collaborate with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads. Serve as the primary point of contact for clients, ensuring alignment between business requirements and technical delivery. Maintain effective communication with stakeholders, providing regular updates on project status, risks, and achievements. Oversee the setup, deployment, and optimization of Databricks workspaces, clusters, and pipelines. Ensure that Databricks solutions are optimized for cost and performance, utilizing best practices for data storage, processing, and querying. Continuously evaluate the effectiveness of the Databricks platform and processes, suggesting improvements or new features that could enhance delivery efficiency and effectiveness. Drive innovation within the team, introducing new tools, technologies, and best practices to improve delivery quality. Qualifications we seek in you! Minimum Qualifications / Skills Bachelor’s degree in Computer Science, Engineering, or a related field (Master’s or MBA preferred). Relevant years in IT services with experience specifically in Databricks and cloud-based data engineering. Preferred Qualifications / Skills Proven experience in leading end-to-end delivery of data engineering or analytics solutions on Databricks. Strong experience in cloud technologies (AWS, Azure, GCP), data pipelines, and big data tools. Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies. Expertise in data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing. Preferred Certifications: Databricks Certified Associate or Professional. Cloud certifications (AWS Certified Solutions Architect, Azure Data Engineer, or equivalent). Certifications in data engineering, big data technologies, or project management (e.g., PMP, Scrum Master). 
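For a concrete, hedged illustration of the Databricks/MLflow stack named in the preferred qualifications, the sketch below trains a toy classifier and logs parameters, metrics, and the model to an MLflow experiment; the experiment path, data, and parameter values are hypothetical, not part of the role.

```python
# Hypothetical MLflow tracking sketch; experiment path and settings are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("/Shared/databricks-delivery-demo")  # hypothetical workspace path
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```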
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Assistant Vice President Primary Location India-Gurugram Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 14, 2025, 11:20:58 PM Unposting Date Jan 11, 2026, 3:20:58 AM Master Skills List Digital Job Category Full Time
Posted 1 week ago
5.0 years
6 - 10 Lacs
Gurgaon
On-site
Job Responsibilities: Design and Develop Data Pipelines: Develop and optimise scalable data pipelines using Microsoft Fabric, including Fabric Notebooks, Dataflows Gen2, Data Pipelines, and Lakehouse architecture. Work on both batch and real-time ingestion and transformation. Integrate with Azure Data Factory or Fabric-native orchestration for smooth data flow. Fabric Data Platform Implementation: Collaborate with data architects and engineers to implement governed Lakehouse models in Microsoft Fabric (OneLake). Ensure data solutions are performant, reusable, and aligned with business needs and compliance standards. Data Pipeline Optimisation: Monitor and improve performance of data pipelines and notebooks in Microsoft Fabric. Apply tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery across domains. Collaboration with Cross-functional Teams: Work closely with BI developers, analysts, and data scientists to gather requirements and build high-quality datasets. Support self-service BI initiatives by developing well-structured datasets and semantic models in Fabric. Documentation and Reusability: Document pipeline logic, lakehouse architecture, and semantic layers clearly. Follow development standards and contribute to internal best practices for Microsoft Fabric-based solutions. Microsoft Fabric Platform Execution: Use your experience with Lakehouses, Notebooks, Data Pipelines, and Direct Lake in Microsoft Fabric to deliver reliable, secure, and efficient data solutions that integrate with Power BI, Azure Synapse, and other Microsoft services. Required Skills and Qualifications: 5+ years of experience in data engineering within the Azure ecosystem, with relevant hands-on experience in Microsoft Fabric, including Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2. Solid experience with data ingestion, ELT/ETL development, and data transformation across structured and semi-structured sources. Strong understanding of OneLake architecture and modern data lakehouse patterns. Strong command of SQL, PySpark, and Python applied to both data integration and analytical workloads. Ability to collaborate with cross-functional teams and translate data requirements into scalable engineering solutions. Experience in optimising pipelines and managing compute resources for cost-effective data processing in Azure/Fabric. Preferred Skills: Experience working in the Microsoft Fabric ecosystem, including Direct Lake, BI integration, and Fabric-native orchestration features. Familiarity with OneLake, Delta Lake, and Lakehouse principles in the context of Microsoft’s modern data platform. Expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks. Understanding of Microsoft Purview or Unity Catalog, or Fabric-native tools for metadata, lineage, and access control. Exposure to DevOps practices for Fabric and Power BI, including Git integration, deployment pipelines, and workspace governance. Knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines is a plus.
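As a minimal sketch of the incremental-load pattern such pipelines typically implement (assuming a Spark runtime with Delta Lake available, e.g. Fabric or Databricks), the snippet below merges a staged update set into a target dimension table; the table names, keys, and paths are hypothetical.

```python
# Hedged upsert sketch using the Delta Lake merge API; names and paths are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical staged updates landed by an upstream pipeline step
updates = spark.read.format("delta").load("Files/staging/customer_updates")

target = DeltaTable.forName(spark, "dim_customer")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # update changed customer attributes
    .whenNotMatchedInsertAll()   # insert brand-new customers
    .execute()
)
```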
Posted 1 week ago
0 years
2 - 9 Lacs
Gurgaon
On-site
KEY ACCOUNTABILITIES Analyse business requirements and support/create designs for requirements. Build and deploy new data mappings, sessions, and workflows, and changes to existing ones, in the Azure Cloud Platform; the key focus area would be Azure Databricks. Develop performant code. Perform ETL routine performance tuning, troubleshooting, support, and capacity estimation. Conduct thorough testing of ETL code changes to ensure quality deliverables. Provide day-to-day support and mentoring to end users who are interacting with the data. Profile and understand large amounts of source data available, including structured and semi-structured/web activity data. Analyse defects and provide fixes. Provide release notes for deployments. Support release activities. Problem-solving attitude. Keep up to date with new skills - develop technology skills in other areas of the Platform. CUSTOMER CONDUCT FRAMEWORK Understand how FCA Conduct Rules apply to you as an employee of the company and, specifically, how they apply to this role; understand the company's Conduct Framework, how it applies to this role, and the behaviours / actions that you need to demonstrate in consistently delivering good customer outcomes. FUNCTIONAL / TECHNICAL SKILLS Skills and Experience: Experienced in ETL tools and data projects. Recent Azure experience. Strong knowledge of Azure Databricks (Python/SQL). Good knowledge of SQL & Python. Strong analytical skills. Azure DevOps knowledge. Experience with Azure Databricks and Logic Apps would be highly desirable. Experience with Python programming would be highly desirable. Experience with Azure Functions would be a plus.
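As a small, hypothetical example of the Azure Databricks performance-tuning work described above, the sketch below broadcasts a small dimension table to avoid a shuffle-heavy join before aggregating web activity; the table and column names are assumptions, not details from the posting.

```python
# Illustrative Databricks tuning sketch; table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

facts = spark.read.table("web_activity")   # hypothetical large fact table
dims = spark.read.table("customer_dim")    # hypothetical small dimension table

# Broadcasting the small dimension avoids a full shuffle of the large fact table
joined = facts.join(broadcast(dims), "customer_id", "left")

daily = joined.groupBy("activity_date", "segment").agg(F.count("*").alias("events"))
daily.write.mode("overwrite").saveAsTable("daily_activity_by_segment")
```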
Posted 1 week ago